Understanding the Problem & Identifying User Needs
The Business Problem
At the time, UserTesting was transitioning towards working with larger, enterprise-sized companies, and unfortunately, we weren't offering a product that appealed to, or addressed the needs of, their executives. So we asked ourselves:
How can we provide value to the executive?
Our Users: Executives (and Senior Leadership)
We knew our regular users well; we had the doers, influencers, and signers, but we knew very little about their bosses: the executives.
Senior leadership, VPs, and C-level executives care about five things: increasing revenue, decreasing costs, increasing new business, increasing existing business, and increasing shareholder value. They're also incredibly busy people. They're overseeing several teams, strategizing, and planning, often at a high level and most likely a year (or two) into the future. They're attempting to align their teams around a product roadmap, measure the success of (and optimize) existing products, and discover new opportunities. It's difficult for them to focus on backlogs and think about improvements with all of that on their plates. It was clear: they needed help.
Iterating & Validating
We hit the ground running— facilitating several design sprints and holding meetings to discuss and align on the problem definition. We focused on their needs and discussed our goals with the greater team on a regular basis; synthesizing our findings, iterating, testing and... repeating.
Identifying their Underserved Needs
We scheduled user interviews and usability sessions at a regular cadence. We recruited executives at Airbnb, Sephora, Target, Priceline, Chase, Verizon, and other companies. Clear trends started to emerge; they needed help doing four things:
As it stood, it was nearly impossible for executives and senior leadership to make well-informed decisions about their product's user experience. There was no tried-and-true method for understanding and measuring the overall health of their products. Net Promoter Score (NPS) and Customer Satisfaction (CSAT) could not provide tangible, actionable insights or better visibility into the overall health of their user experience.
At this point in time, we believed UserTesting was nicely positioned to help Executives address their needs.
Validating & Invalidating our Assumptions
Diving deep into Task Success
As proponents of Google’s HEART framework, we believed we were uniquely positioned to provide incredibly valuable insights around two of the five measurements: task success and user happiness. Engagement, adoption, and retention would have to come from internal metrics (at least for early releases). We knew we could thread the needle between a company's quantitative data and our qualitative data.
We began by developing a set of experience factors to score each task. For each task, we evaluated this set of criteria:
If the overall score was negative, we considered the task a disaster; if positive, a success. The average of the combined task scores acted as the UX Health measure for each month. We performed the same evaluation on each of the competition's tasks as well. Our base plan included 4 core flows for the execs to evaluate and track.
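The scoring logic described above can be sketched in a few lines. This is a hedged illustration only, not our production code: the factor names and the -1/0/+1 scale are assumptions, since the actual experience factors aren't reproduced here.

```python
from statistics import mean

# Hypothetical experience factors; the real criteria aren't listed in this
# case study. Each factor is scored -1 (negative), 0 (neutral), or +1 (positive).
FACTORS = ["completion", "confusion", "delight", "effort"]

def score_task(factor_scores: dict) -> float:
    """Average the factor scores for a single task."""
    return mean(factor_scores[f] for f in FACTORS)

def ux_health(task_scores: list) -> float:
    """Monthly UX Health: the average score across all evaluated tasks."""
    return mean(task_scores)

# Two illustrative tasks (a full evaluation would cover 4 core flows)
checkout = score_task({"completion": 1, "confusion": -1, "delight": 0, "effort": 1})
search = score_task({"completion": 1, "confusion": 1, "delight": 1, "effort": 0})
print(ux_health([checkout, search]))  # positive overall => a "success" month
```

The same functions would be run against the competition's flows to produce the comparison scores.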
Trust in the Number
After several interviews and rounds of research, we noticed that executives were excited about the numbers, but also skeptical of them. They wanted to know that these tests were statistically significant, and they wanted to know more about the methodology. It made sense: these people were naturally drawn to data, numbers, and metrics. At first, our design prototypes over-compensated for this idea of 'trusting the number' by putting too much emphasis on how we arrived at the score. We thought execs would want to dive into each task and analyze the score at a granular level. It turned out to be the exact opposite: once an exec understood how the methodology worked, they trusted both the methodology and the number. We became more transparent about the methodology on our pricing page, and the sales team was trained to answer these questions and explain it.
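To give a sense of the 'statistically significant' question execs kept asking, here is a minimal sketch of how a confidence interval around a monthly score might be reported. The normal approximation, the z-value, and the sample scores are illustrative assumptions, not the methodology UserTesting actually shipped.

```python
from math import sqrt
from statistics import mean, stdev

def score_with_ci(scores, z=1.96):
    """Mean task score with an approximate 95% confidence interval.

    Uses a normal approximation; z=1.96 corresponds to ~95% confidence.
    """
    m = mean(scores)
    half_width = z * stdev(scores) / sqrt(len(scores))
    return m, (m - half_width, m + half_width)

# Illustrative task scores from one month of usability sessions
m, (lo, hi) = score_with_ci([0.2, 0.4, 0.3, 0.5, 0.1])
print(f"UX Health: {m:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Surfacing an interval alongside the headline number is one way to answer the significance question without forcing execs into a granular, per-task analysis.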
The screenshot below is an early example of what we showed execs to gauge the level of granularity we needed. Most said they didn't need to go this deep, but thought it would be valuable for their teams.
Solving the Problem and Delivering a Valuable Product
Refining the Solution
As we brought everything together, we refined the execs' needs into the core functionality they needed help with. After iterating on several concepts, we started to polish the user interface. Below are some examples of how the solutions to their needs manifested in the application.
1. A baseline for understanding their user experience
Monthly reports served as a foundation for execs to quickly monitor at a high level and share, cross-functionally, with both the teams who reported to them and other members of senior leadership.
2. Benchmarking their UX against the competition's, over time
Benchmarking their scores over time was very important to execs; it gave them a high-level view of the competitive landscape. Still, almost every executive was more keen on learning how and where they could improve. The next section is what execs valued most.
3. Prioritizing backlogs, improvements, and new features
This set of features was hands down the most popular part of this tool. Executives loved getting their automated reports with nuggets of quick insights. The ability for executives to quickly see and hear customer insights proved invaluable.
4. Monitoring their competition's changes and optimizations
This feature was mostly requested by companies in highly competitive markets (e.g. travel, beauty, retail). Automated screenshots may seem like a simple solution, but it was a lean one that met their needs. For a highly competitive company that is optimizing every part of every flow and consistently running multivariate tests on small features, this was a valuable section.
Does this tool provide value?
I believe it does. A much more cohesive story is told when qualitative data is merged with quantitative data. We were able to provide executives with the focus and clarity they desired, helping them better prioritize, plan, and improve their products. Unlike Net Promoter Score (NPS) and Customer Satisfaction (CSAT), our solution provided actionable insights and better visibility into the overall health of their user experience over time. The guesswork was removed.
A recent McKinsey article suggests that revenue growth and shareholder returns may more than double at companies that adopt a methodical approach to design, especially those that devote as much time and resources to their user experience as they do to understanding their costs and revenue.
I think that's valuable.
About the Team
As part of the 'New Initiatives' team, we were tasked with solving some of UserTesting's most pressing problems. Collaboration was strongly valued, and most teams operated in pairs. I was partnered with another designer, and we worked closely alongside pairs of product managers, researchers, and engineers. I believe the most successful solutions are collaborative ones.
©2019 Derrick Schippert. All rights reserved.
Built using Tachyons