User experience is not magic. You don’t run a simple test that Becky the marketing intern once read a blog post about and uncover quick-fix solutions that generate huge growth. UX strategy is a science: one that has been around since long before the first computer blipped into existence and long before UX became a buzzword.
All scientific theories begin as a hypothesis: an assumption about why events are happening. You then test that hypothesis by collecting data – through user testing, for example – to validate or invalidate it. Only once it is validated does it become a theory.
A theory is a validated explanation of why something is happening. A theory is not based on bias, nor is it based on what the loudest person in the room is saying. It’s based on factual data collected through a replicable method.
Without that structure, it’s easy to run a test and fall back on confirmation bias, or data manipulation, to get the feedback you want. That’s not how this works. We don’t control the outcome. We find a means to communicate the complex nuance of user behaviour in a simple way. Sometimes the data proves us wrong, and that’s okay. The goal isn’t to always be right; it’s to uncover the facts.
User data tools like Google Analytics rely heavily on assumption. You can export records and use a service like IBM Watson to find correlating trends. However, don’t confuse data with fact. Predictive modelling and assumptions are a first step, but they don’t answer the golden question: why. Why a user is motivated to take an action is the central focus of UX.
This is the inherent problem with user experience. Everyone thinks they have all the answers. UX then becomes guided by perception bias.
Think of it this way. The sales team thinks they know what customers want to buy and the marketing team thinks they know how to convince customers they want it. Management has an approved budget based on what they assumed the teams would need a year ago and it likely didn’t include budget for UX research. Sound familiar?
Each organisation, department or employee has their own perspective on what should be done based on their own experience with customers. The problem is they’re all right. The bigger problem is that they’re all wrong too.
Organisations that fall into this perception trap often find themselves avoiding the conflict of a heated debate and try to serve everyone. The problem with trying to serve everyone is that you’re not serving anyone.
The job of user experience is to remove that bias and help the group to understand a bigger picture: the needs and expectations of the customer. So how can we reframe the conversation and make it less about opinion? Let data do the talking.
The process of collecting data is misunderstood by the vast majority of people. It does not need to be devoid of emotion, nor does it need to focus strictly on usability. What it needs to have is a purpose.
What kinds of data are you collecting and why? There are two core types of data to collect:
- Qualitative: Non-numerical, emotional feedback from participants – think first reactions or personal opinion-based feedback. What you liked and why, and descriptions instead of numbers. Qualitative = quality.
- Quantitative: This is numerical, scientific feedback – ‘Perform this action and rate the ease of completing the action on a scale of one to 10’. This is the basis for systems like Net Promoter Score (NPS). Quantitative = quantity.
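As a concrete example of quantitative data in use, the sketch below computes a Net Promoter Score from 0-10 survey ratings using the standard formula: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). The sample scores are invented for illustration.

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    towards the total but belong to neither group.
    """
    if not scores:
        raise ValueError("no responses collected")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Six responses: two promoters, three passives, one detractor
print(nps([10, 9, 8, 7, 7, 3]))  # → 17
```

Retesting the same question on the same segment months later gives you the unbiased before/after numbers described above.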
If you’re tasked with creating a baseline for customer satisfaction on member sign-up or checkout in a shopping cart, you’re going to need quantitative data. This lets you collect unbiased numbers that show a clear progression from where you began to where you ended months or years later. This is crucial in showing the importance of investing in UX within an organisation.
Many organisations will see the initial improvement and not understand the value of retesting. Seeing an increase in sign-ups or revenue, or a drop in support requests, is fantastic, but many variables could influence those results. Attribution is your friend – and the friend of the departments you’ll be working with – because it shows explicitly that the testing performed and the subsequent changes were validated.
This goes back to the scientific validation we discussed earlier. Collect the data, make the change and validate that the change was accurate. If it wasn’t, create a hypothesis as to why it wasn’t and begin again. The trick is to always try to prove something wrong.
If you’re redesigning a consumer-facing website without a long-term UX plan, it may be okay to focus on qualitative feedback: descriptions and emotions. This works well for design-centric UX such as landing pages for marketing or blogs. It does not work well for long-term strategy, because trends are fluid: what works today for a tested demographic may not work next year, so be careful.
Qualitative feedback is harder to distil into strategy because, in most cases, what users say they want and what they actually want are two completely different things. It requires a lot of foresight to know when to peel back the layers of feedback and dig deeper with follow-up questions or facilitation.
Without the context of motivation, you become trapped in a feedback loop. This tends to lead down the perception trap again. If you’re stuck without direction you will try to find meaning in the data by applying bias. Once that happens, you focus on the wrong meaning and the data becomes useless.
Finding the right meaning
Let’s take a look at another example. Tenants in a New York office building complained that, in their opinion, there was too much time between pressing the button and the elevator arriving, dinging and opening. Several tenants threatened to move out.
They wanted a faster elevator to solve the problem. This is qualitative feedback and emotional responses. Management requested a feasibility study to determine cost and effectiveness, which means hard numbers and quantitative data.
A different perspective came from someone in the psychology field, who focused on the tenants’ core needs by digging deeper than their initial feedback. They set aside the numeric feedback of the financial study, which had shown it was not cost-effective to replace the elevator and rebuild the structure to accommodate the tenants’ suggestions.
The psychologist determined that finding a way to occupy the tenants’ time would offset this frustration. They suggested installing mirrors in the landing area. Given the low cost, the manager agreed to the quick fix to see what would happen.
Miraculously, the complaints stopped. Now you see mirrors installed in hotels and office lobbies throughout the world as a cost-effective way to appease the frustration of elevator users. Your data is only as valuable as the questions you ask and to whom you ask them.
Asking the right questions
Let’s assume you are in the process of redesigning a website for a client. You’ve been asked to perform a user test to help define the direction the design needs to take. Think broad stroke details: colours, fonts, layout, sizing and so on. You propose a qualitative test.
Don’t compose a questionnaire for the survey without thinking about how a user may first respond. This is why you need to formulate your hypothesis first. It provides important direction.
If you want to collect feedback on three website homepages you could run a set of questions and repeat them for each. The repetition is important for collecting similar feedback on each website. But what would those questions be exactly?
If you test 10 participants and eight of them come back with completely different feedback it makes your job harder than it needs to be and falls back on bias to prioritise the data. Ask questions that are very pointed to get actionable feedback.
- Instead of asking ‘What do you like about the homepage?’ ask ‘Without scrolling, do you know what this website is marketing?’
- Instead of asking ‘What do you like about the menu navigation?’ ask ‘Looking at the menu, is it clear this website has information on careers?’
You also have to factor in how you propose the questions. Is the question that you’ve asked leading them toward the goal itself?
- Leading: ‘Find the careers link at the top of the page. Click on it to view information about the available careers.’
- Less leading: ‘What link at the top of the page would you click on to view information about the available careers?’
- Ideal: ‘How would you expect to locate information about the available careers on this website?’
Instead of guiding the user towards a goal, you are moving the decision-making back onto the user themselves. From this, you will get a better understanding of how that user, and their specific demographic segment, will expect to navigate the website. If you ask one of the first two questions, you lose out on all of that data. It’s not always about the question you ask, but how you ask it that defines the result.
If you are unable to compose such defined questions, you are moving too quickly through the process. Take a step back and think about the pain points of the users you’re trying to reach. Every decision you make should work toward providing an effective solution not only for the business but for its customers as well.
Organisational UX maturity
At Candorem we have a straightforward system for understanding the UX maturity of our clients. This enables us to quickly define the need for additional data collection, what type of data to collect and how quickly we can begin providing guidance. It’s also a great way of understanding the existing perception of the value of investing in UX. It breaks down into four core levels of data that clients will have available for us to start assessments with.
- Level 1 data: Google Analytics and heatmaps
- Level 2 data: Curated customer data (email, gender, location and purchase history by customer segment)
- Level 3 data: Customer survey data (likes, dislikes, ratings and interest levels), anonymous website recordings
- Level 4 data: User testing sessions, customer persona profiles and quantitative data
Businesses will have some variation of what we’ve outlined above. If they don’t, get them set up with Level 1 and allow adequate time to collect some low-level data. Mining this data is crucial to forming your own hypothesis.
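As a sketch of what mining Level 1 data can look like, the snippet below computes step-to-step continuation rates through a checkout funnel; the biggest drop-off is where a hypothesis about user motivation is most needed. The step names and counts are invented, standing in for an export from an analytics tool.

```python
# Hypothetical Level 1 export: pageview counts per funnel step.
funnel = [
    ("landing", 12000),
    ("product", 7400),
    ("cart", 2100),
    ("checkout", 900),
    ("confirmation", 610),
]

# Continuation rate between each pair of adjacent steps.
rates = {
    f"{a} -> {b}": round(n2 / n1, 2)
    for (a, n1), (b, n2) in zip(funnel, funnel[1:])
}

# The weakest link is the first candidate for a hypothesis.
worst = min(rates, key=rates.get)
print(rates)
print("biggest drop-off:", worst)  # → product -> cart
```

A number like this doesn’t tell you why users abandon their carts – that’s what the hypothesis and subsequent user testing are for – but it tells you where to look first.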
When an organisation unfamiliar with the nuance of UX defines the goals for a project without supporting data to guide them, it limits the potential outcome. Setting a goal is easy, but defining the right path takes time and experience.
Increasing revenue is not a goal; it’s an idea. Set specific goals, like increasing revenue by ten per cent for a segment of customers aged 24-35 who are shopping for a specific category of product. This gives you specific requirements that can be tested to generate a hypothesis, validated to create a theory and an initial baseline, and then retested to validate the plan for growth over time.
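A goal framed that way can be measured directly. The sketch below filters transactions to the 24-35 segment and one product category, then checks the total against a ten per cent uplift on a baseline; every field name and figure here is hypothetical.

```python
# Hypothetical transaction records (all values invented).
transactions = [
    {"age": 28, "category": "footwear", "revenue": 120.0},
    {"age": 41, "category": "footwear", "revenue": 80.0},
    {"age": 31, "category": "outerwear", "revenue": 200.0},
    {"age": 25, "category": "footwear", "revenue": 60.0},
]

def segment_revenue(rows, category, age_min=24, age_max=35):
    """Total revenue for one age segment and product category."""
    return sum(
        r["revenue"]
        for r in rows
        if age_min <= r["age"] <= age_max and r["category"] == category
    )

baseline = 150.0  # revenue measured before the change (hypothetical)
current = segment_revenue(transactions, "footwear")
goal_met = current >= baseline * 1.10  # the ten per cent target
print(current, goal_met)  # → 180.0 True
```

Because the segment and category are pinned down, retesting the same calculation later gives a like-for-like comparison rather than a vague sense that revenue went up.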
User experience is about understanding the needs and expectations of your customer and collecting the necessary data to tell the story in an unbiased way.