Investing in UX has large ROI potential, as the biggest technology companies already demonstrate. Google, for example, has developed its own framework to guide thoughtful design and evaluation within its user experience processes. The 'HEART' framework covers five main areas:
- Happiness.
- Engagement.
- Adoption.
- Retention.
- Task Success.
By separating and narrowing its focus to these key user areas, the UX design team can quantify each one and evaluate it objectively.
For each section, the team will decide on goals, signals and metrics. Goals define broad objectives, signals are the observable indicators that progress is being made towards those objectives, and metrics are the quantifiable data points used to measure success (or failure).
Whilst all five elements of the HEART framework play unique roles, certain projects or products may not require all five steps.
Goals and Signals.
Before deciding which metrics to use, we must first decide what we're measuring and how we wish to gauge success. Firstly, we'll define our goals and signals, before matching up the best-suited metrics to monitor progress.
For example, when looking at 'Happiness' we may have the goal of increasing the enjoyability of our app or website experience. To signal that the experience has improved, we may aim for higher engagement times—i.e. users will stay on our app or website longer if it's built in a pleasing way.
When choosing a metric for the above example, we may think about churn (users unsubscribing from our app) or the bounce rate (how many users quickly leave our landing pages).
UX metrics are often split between two major categories: behavioural and attitudinal. The former refers to what users do, whereas the latter is about what they say.
Understanding how people interact with your brand is critical. User research helps develop effective UX strategies built for long-term success. Task-based usability testing is therefore a standard information-gathering step.
To track behaviour, we may monitor the 'abandonment rate' of our app or website. This can be a key indicator of how user-friendly and inviting our content and UX are, with a high abandonment rate a clear sign that something problematic is happening.
Further to this, if our goal is more about quantity (rather than simply quality), we may look to target page views. This metric is a good indicator of brand awareness and overall discoverability. Low page views may call for an increase in SEO, SEM, and SMM campaigns.
If our UX leads users to perform a particular task—such as completing a signup form—we could track the time taken to complete the task, along with the percentage and number of users able to complete it successfully. Lengthy times and high numbers of failed attempts suggest the UX needs revision, while unusually low average times may indicate the task involves little thought or freedom on the user's part.
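The task measures above are simple to derive from raw attempt logs. Below is a minimal sketch; the record structure (`completed`, `seconds`) is hypothetical and would be adapted to whatever your analytics tooling exports.

```python
# Sketch of computing task-success metrics from raw attempt logs.
# The field names here are hypothetical; adapt them to your analytics export.

def task_metrics(attempts):
    """attempts: list of dicts with 'completed' (bool) and 'seconds' (float)."""
    total = len(attempts)
    completed = [a for a in attempts if a["completed"]]
    success_rate = len(completed) / total if total else 0.0
    # Average time is computed over successful attempts only.
    avg_time = (sum(a["seconds"] for a in completed) / len(completed)
                if completed else None)
    return {"success_rate": success_rate, "avg_completion_seconds": avg_time}

attempts = [
    {"completed": True, "seconds": 42.0},
    {"completed": True, "seconds": 58.0},
    {"completed": False, "seconds": 120.0},  # gave up or timed out
]
print(task_metrics(attempts))
```

Here two of three users succeed (a 67% success rate), averaging 50 seconds each; the failed attempt's long duration hints at exactly the kind of frustration discussed below.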
The last consideration we often mention here involves problems and frustrations. Designers and coders may be too close to their own brand or product, meaning they make assumptions that the target market may not share. It's important to monitor the pitfalls that users are facing, and look to solve these issues with improved UX solutions.
As an example, imagine an app that is easy to discover, with thousands of downloads each week. Yet, despite this strong discoverability, very few downloads convert into new users. The signup process may involve too many awkward steps, or a problematic hurdle such as the authentication email going straight to spam for anyone outside the brand's own email domain. The abandonment rate here would likely be high, and the frustration equally worrying, with task time far too long for a simple signup step.
Aside from measuring certain in-app/web behaviours, it can be incredibly useful to monitor user attitudes. We can aim to track and measure the quality of appearance, credibility, usability and loyalty for our creations through various metrics.
Beginning with the last of these, loyalty is often assigned a score through NPS or SUS results.
NPS stands for Net Promoter Score. This survey question hinges on how likely the user is to recommend the brand, service, product, or experience to their friends, family, or colleagues.
The ratings run from 0 to 10, with those rating 0-6 marked as 'detractors'. These users are unhappy with their experience and are unlikely to keep using your product. Scores of 7 or 8 are marked as 'passives', who could be poached by competitor services that better suit their expectations—with no real loyalty per se. Finally, the 9 or 10 scores are 'promoters', who will likely recommend your brand to other people and continue to follow and buy from you in the future.
By subtracting the percentage of detractors from the percentage of promoters, you get your NPS score.
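The calculation is easy to sketch in a few lines; the thresholds below are the standard NPS bands described above:

```python
def nps(ratings):
    """Net Promoter Score: % of promoters (9-10) minus % of detractors (0-6).

    ratings: iterable of integer responses on the 0-10 scale.
    Returns a score between -100 and +100.
    """
    ratings = list(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 3 promoters, 2 passives, 2 detractors out of 7 respondents.
print(nps([10, 9, 9, 8, 7, 6, 3]))
```

Note that passives (7-8) count towards the total but neither add to nor subtract from the score, which is why a sea of lukewarm responses drags NPS towards zero.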
Alternatively, a SUS (System Usability Scale) can be used. Again, this relies on users filling out a short questionnaire—ten statements in its standard form—and their answers are mapped on a Likert scale to assign quantitative values to their qualitative opinions.
Likert scales offer an emotive range for opinion, giving various degrees of positive or negative scoring—often ranging between values such as very dissatisfied, dissatisfied, neutral, satisfied, and very satisfied. To avoid 'unhelpful' scores, it is often suggested to remove the 'neutral' option where possible and push the user towards an opinion one way or the other.
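SUS has a well-defined scoring procedure: each of the ten items is answered on a 1-5 Likert scale, odd-numbered (positively worded) items contribute their score minus 1, even-numbered (negatively worded) items contribute 5 minus their score, and the sum is multiplied by 2.5 to give a 0-100 result. A minimal implementation of that standard formula:

```python
def sus_score(responses):
    """Standard SUS score for one respondent.

    responses: ten answers on a 1-5 scale, in questionnaire order.
    Odd-numbered items (1st, 3rd, ...) are positively worded and
    contribute (score - 1); even-numbered items are negatively worded
    and contribute (5 - score). The sum is scaled by 2.5 to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Best possible answers: 5 on every positive item, 1 on every negative one.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Individual scores are then averaged across respondents; a common rule of thumb treats results above 68 as better than average.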
For measuring appearance, it's useful to adopt Microsoft's 'product reaction cards'. These long lists of descriptive words offer users the opportunity to provide their reactive feelings to how your product (or website/app) looks, giving a good emotive description of how your brand is perceived. By comparing these results to your targeted brand personality, you can spot any potential mismatches in content and experience to be rectified.
Finally, for measuring credibility, the SUPR-Q (Standardized User Experience Percentile Rank Questionnaire) is an eight-item solution. It includes a focus on trust, value and comfort—as a lack of trust is ultimately damaging for brand perception and growth.
Whilst NPS scores are hugely popular, the rival CSAT (Customer Satisfaction Score) offers greater opportunity for customisation. This option has no strict limit on the number of questions to ask, but note that lengthier surveys are likely to be completed only by those who either hate or love your brand—missing out on a potentially large audience in between.
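A common way to compute CSAT is the 'top two box' approach: the percentage of respondents choosing one of the two most positive options (e.g. satisfied or very satisfied) on a 1-5 scale. A sketch of that convention:

```python
def csat(ratings):
    """CSAT as the percentage of 'top two box' (4 or 5) responses.

    ratings: iterable of integer responses on a 1-5 scale.
    """
    ratings = list(ratings)
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100 * satisfied / len(ratings)

print(csat([5, 4, 4, 3, 2, 1]))  # 3 of 6 satisfied -> 50.0
```

Because only the top responses count, CSAT is deliberately blunt: a score of 50 tells you half your respondents were genuinely satisfied, regardless of how the rest were distributed.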
Task Performance Indicators.
A final consideration is to use McGovern's TPI (Task Performance Indicator). This was set out to measure the "impact of changes on customer experience". The method involves asking ten to twelve task-oriented questions centred around the main tasks you wish to measure in your UX flow. These tasks should be repeatable, as retesting six or twelve months later is advisable with this process. Each user is given a task to complete and a follow-up question to answer. Then, the user is asked how confident they are in their answer. TPI scores should not change unless the UX process is changed.
The TPI score is made up of six main variables:
- Time
- Time Out
- Confidence
- Minor Wrong
- Disaster
- Give Up
If the score is 100, the user has successfully completed the task within the agreed target time. Each variable, however, captures a distinct measurable issue.
Time refers to the ideal target for the task to be completed under practice conditions, with an exceeded time affecting the target score. Time out refers to an overall maximum allocated time, recorded when a user takes longer than this limit.
Even if the task question is answered correctly, low reported confidence will negatively impact the TPI score. In an ideal scenario, users should be confident in their actions and in the overall experience you construct. If the answer given is not completely correct, a 'minor wrong' is recorded. This escalates to a 'disaster' when the user is highly confident yet achieves the wrong result—a combination that should be flagged as a serious issue. Finally, if the person simply gives up, this too should be recorded and will negatively affect the TPI score.
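To make the mechanics concrete, here is an illustrative TPI-style scorer. The outcome labels mirror the six variables above, but the penalty weights are placeholders invented for this sketch, not McGovern's published values:

```python
# Illustrative TPI-style scoring. The penalty weights below are hypothetical
# placeholders chosen to show the ranking of outcomes, not published values.
PENALTIES = {
    "success": 0,          # completed within target time, with confidence
    "time_exceeded": 10,   # correct, but over the ideal target time
    "low_confidence": 20,  # correct, but the user doubted their answer
    "minor_wrong": 40,     # answer close, but not fully correct
    "time_out": 50,        # exceeded the maximum allocated time
    "give_up": 60,         # user abandoned the task
    "disaster": 70,        # confidently wrong: the worst outcome
}

def tpi(outcomes):
    """Average score (0-100) across task attempts labelled with an outcome."""
    return sum(100 - PENALTIES[o] for o in outcomes) / len(outcomes)

print(tpi(["success", "success", "minor_wrong", "give_up"]))  # 75.0
```

Whatever the exact weights, the ordering matters: a confident wrong answer ('disaster') must cost more than an honest failure, because it means users will act on bad information without realising it.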
Applying the HEART Framework.
Based on the various metrics discussed, we can begin to solve potential problems with our HEART framework.
As an example, we may have an app that allows users to download premium ringtones.
We can start with our blank framework grid:
Then, we can fill out each section with thoughtful goals, signals and suitable metrics. Focus areas might include ease of use, particular conversion or error rate scores, a task success rate, navigation, number of visits, average time spent across key pages, or growing and tracking a particular number of visitors to your website or app.
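As a sketch of what a completed grid might look like for the ringtone app, here is one possible set of entries expressed as a simple data structure. Every goal, signal and metric below is illustrative, not prescriptive:

```python
# A hypothetical filled-in HEART grid for the premium-ringtone app example.
# All goals, signals, and metrics are illustrative placeholders.
heart_grid = {
    "Happiness":    {"goal": "Users enjoy browsing and previewing tones",
                     "signal": "Positive survey responses",
                     "metric": "CSAT / NPS score"},
    "Engagement":   {"goal": "Users preview tones regularly",
                     "signal": "Previews played per session",
                     "metric": "Average previews per weekly active user"},
    "Adoption":     {"goal": "New users complete a first download",
                     "signal": "Signups that reach a first purchase",
                     "metric": "Signup-to-first-download conversion rate"},
    "Retention":    {"goal": "Buyers come back for more tones",
                     "signal": "Repeat purchases over time",
                     "metric": "30-day repeat-purchase rate"},
    "Task Success": {"goal": "Downloading a tone is quick and error-free",
                     "signal": "Completed downloads without errors",
                     "metric": "Task completion rate and time on task"},
}

for dimension, row in heart_grid.items():
    print(f"{dimension}: {row['metric']}")
```

The value of writing the grid down like this is that every metric is traceable back through a signal to a goal—if a number has no row, it probably isn't worth tracking.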
With clearly defined goals, signals, and chosen metrics, we can now begin to shape a meaningful, impactful UX design that lasts.
User behavior can vary over time based on new technologies, trends and general expectations. Be sure to adjust your key performance indicators along the way and ensure you have the right metrics to give valuable insights into how any particular business goals can be met through intelligent UX redesign. Aim to ensure users' satisfaction can be upheld by your UX team, keeping positive and lasting interactions with your brand at the forefront of your user research methodology.
For more information on our UX services, visit our dedicated User Experience Design page.
Request a Proposal.
If you would like to #workwithmad then send us an email at firstname.lastname@example.org and let's Make It Happen.™