Success Metrics for UX

9 Jun 2011 - 11:14am
david.shaw6@gma...
2004

Hey everyone... I was just asked by someone if there are examples of success metrics that we (UX/IxDers) use to validate that our design was "successful". I had a long discussion about this because it's so subjective. How do you measure success when one person considers something successful and another may not? So what I'm really looking for is examples that people have used before to appease the biz teams. Maybe it's as simple as a checklist with criteria like "person uses product without throwing it out the window."
Love to hear thoughts!
Thanks,
David

--
"Some people are so preoccupied with whether or not they can do something, they never stop to think if they should."

http://spinjunkey.blogspot.com/

Comments

9 Jun 2011 - 12:49pm
nlguy516
2010

I'm very interested in this, as it's come to my attention that my boss and his boss both expect that I will be able to begin producing 100% successful designs for our internal and external customers (I'm the web designer/developer for a credit union and our VP of IT assigned both my boss and me the book Don't Make Me Think last summer).

With just a little formal education and, so far, no mentor, how can I not only validate but also document my level of success with regard to our projects, while I'm still trying to absorb everything I can about this area?

10 Jun 2011 - 10:05am
jbfirestone
2010

100% success in anything on the Web just isn't realistic. We're talking about content and interaction here, and people's reception of content is roughly the same as their reaction to music. What you write isn't always to people's taste; everyone reads things differently, and everyone remembers the end product differently. One person's success is another's failure, or falling short. You have to take that into account, and communicate that possibility up front to set up an understanding between you and your stakeholders. There's a reason a saying like "You can't please all of the people all of the time" exists, and it's not just blather, even if you didn't like John Lydgate or Abraham Lincoln.

Remember, if it were that easy to produce 100% successful content, mass mailings would achieve much more than the 5% average response rate among people who actually read the flyer you've distributed. Now, can you do better than 5% in our line of work? Absolutely. But 100% really isn't possible.

You're also producing interaction. You can get close to 100% success, but 100% in terms of functionality isn't really plausible either. The more complex a system is, the more difficult it is to make its use simple, and making a complex system simple takes time, effort, research, and testing to do right by your users. And there are always better ways of getting things done. After all, what is simple about a credit union? What laws are you aware of that influence usability or interaction in your end products? Do you have to follow Section 508 guidelines? Is it a popular credit union? What do people appreciate about your organization? What can you enhance to get closer to a higher approval rating?

Another expectation you have to explode is the myth that if someone finds a better way of doing things, you've somehow failed. That's not realistic either. Everyone sees things differently, and one of the best things you can do in this kind of work is to be as transparent as possible. Let people know what is research, what is industry standard (which isn't and wasn't always based on research; just because Amazon did it doesn't mean it's good or should be standard, with apologies to Amazon ;D), what's a best practice, and when your gut and professional experience (regardless of how experienced you actually are) tell you that a particular course of action is the right way to go.

Regardless of what you're doing and how you go about it, it's incredibly important to find ways of gently reinforcing the need to go through the broad steps of research, design, growth, and maintenance, and all the sub-steps of each. If you don't test with users, if you don't interview them, ask questions, and get their input, you're not really doing user-centered design. You're just designing. I'll bet your stakeholders are getting to the point where they'd agree that if you didn't take the time to investigate how some functionality is going to affect users, you're not getting the job done. Let them know that it'll take more time than it used to in order to design this way. You're adding more steps to the process, important steps that carry real risk if you skip them.

So, documentation, validation, statistics: what I would do is find a process for interaction design, usability, or Web strategy that you like. Message it. Maybe produce a PowerPoint (Edward Tufte is cringing right now) or a short one-page memo, or release an infographic about it within the organization. Find something comprehensive enough to say, "I see x number of steps in our process, and here they are." Again, encourage the recipients not to cut corners. Detail the system and the additional investment in time and resources. Set up some documentation somewhere that people can browse, where you can build a stockpile of research, statistics, and findings as you flesh out what your processes and subprocesses look like. If someone challenges you, point them toward your research and the results of your testing with users, invite them to peruse it, and discuss it with them. Stats can be as simple as: "It used to take our customers 10 clicks to do this, and now it takes 4. This saves them time and frustration. What was once a 15-minute process we've shaved down to 5, and we're still collecting all the same information we used to." Celebrate and document every little improvement. It all adds up.
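
To make that concrete, here's a minimal sketch (Python, with invented numbers) of turning those before/after figures into the kind of one-line improvement statements you can stockpile:

def improvement(before, after):
    """Percentage reduction from before to after (positive = improved)."""
    return (before - after) / before * 100

# Hypothetical before/after measurements from testing.
measurements = {
    "clicks to complete the task": (10, 4),
    "minutes to finish the process": (15, 5),
}

for name, (before, after) in measurements.items():
    print(f"{name}: {before} -> {after} ({improvement(before, after):.0f}% reduction)")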

Also, once you set something up and create an expectation around it ("this now takes 5 minutes"), make sure that if you go back and change that section in the future, your QA people test it and validate it again. Make it a requirement for regression testing in your future designs. If something's backsliding, you have to know, or it'll come back to haunt you. :)
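
As a rough illustration of that kind of regression guard (measure_task_seconds() is a placeholder for however you actually collect timings, from usability tests or analytics):

TASK_TIME_BUDGET_SECONDS = 5 * 60  # the expectation you've set: 5 minutes

def measure_task_seconds():
    # Placeholder: return the latest measured median time-on-task.
    return 280.0

def test_task_time_within_budget():
    measured = measure_task_seconds()
    assert measured <= TASK_TIME_BUDGET_SECONDS, (
        f"Backsliding: task now takes {measured:.0f}s, "
        f"over the {TASK_TIME_BUDGET_SECONDS}s budget"
    )

test_task_time_within_budget()  # run directly, or let a test runner pick it up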

I hope that helps. Let me know if there's anything else I can help you with, or talk over with you. Happy to help.

-J

9 Jun 2011 - 2:05pm
jbfirestone
2010

Hi David,

Success metrics depend quite a bit on what kind of communication you're having with the stakeholders. I would ask the clients some questions (which you may have asked already) that should give you clues as to which metrics to examine.

The first set of questions centers on the goals of the site or function. Is the site, or the set of functions you're concerned about, meeting those goals? Do a little digging to determine what defines a successfully accomplished goal in the short term and the long term. How does the functionality you're supposed to improve relate to those goals? If it's not attached to a goal, why is it being done in the first place? Put some priorities on those goals: what's most important? "Everything" and "It's all equal" are not satisfactory answers. This first set of questions details what you do, why you're doing it, and ultimately what you have to tie a statistic back to.

The second set of questions centers on the functions and tasks themselves. What are users trying to do here, and what's causing someone pain? The whole point of user experience is to help develop a better one for the user (not necessarily the developer ;D), so what would make for a better user experience? A more efficient process/workflow? A more pleasing color set? Improving the user's ability to find something? This set of questions will help you really define statistics. Some of them may be numbers-related; some are a little more esoteric. Part of your mission here is to suss out what defines success in that arena. For example: let's say your site has an exceptional number of failed logins each month, and the belief is that a better user experience could reduce them. Success could be anything from "reducing failed logins by as much as 50% in the quarter following implementation" to "7 out of 10 tested users felt that the new process was more efficient, easier to understand, and made it easier for them to get back to work." If the stakeholders would find something like that to be a successful resolution, metrics surrounding it, either from testing or from Web statistics after a change, would be a great place to start.
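
For instance, a minimal sketch of checking that hypothetical 50% target against before/after counts (the numbers here are invented; in practice they'd come from your analytics or server logs):

def failure_rate(failed, total):
    return failed / total

# Invented counts from the quarter before and the quarter after the redesign.
before = failure_rate(failed=4200, total=30000)
after = failure_rate(failed=1800, total=31000)

reduction = (before - after) / before * 100
print(f"Failed-login rate: {before:.1%} -> {after:.1%} "
      f"({reduction:.0f}% reduction; the target was 50%)")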

It's all about the ROI (return on investment), David.

Good luck. Feel free to write if you have any questions. Happy to help or point you in the right direction.
-J

10 Jun 2011 - 9:02am
Jeff Kraemer
2009

Hi all,

Richard Dalton at Vanguard gave a great presentation on this at the 2011 IA Summit (http://www.slideshare.net/mauvyrusset/a-practical-guide-to-measuring-user-experience), and fortunately he's also packaged it up as an article at UX Magazine: http://uxmag.com/strategy/ten-guidelines-for-quantitative-measurement-of-ux.

I missed the IAS11 seminar on web analytics for IAs, but here are the slides: http://www.slideshare.net/strottrot/numbers-are-our-friends-7512000.

And, on a similar note, Joshua Porter's SXSW presentation about metrics-driven design might help you: http://bokardo.com/talks/metrics-driven-design/

One more IAS11 presentation: Rob Fay and JoAnna Hunt gave a great presentation on using design principles as a measurement tool (i.e., qualitative measurement). I think there's a real trade-off in their approach to design principles (they used keywords more than specific, distinctive statements), but it really worked for them: http://www.slideshare.net/robfay/design-for-the-rudes-the-value-of-design-principles-7549638

Jeff

10 Jun 2011 - 9:05am
chrischandler
2008

David,

Great question. Frankly, I think design success is pretty much an open frontier in our profession.

Richard Dalton gave a great presentation at this year's IA Summit that I think you'll find interesting:

A Practical Guide to Measuring User Experience
http://www.slideshare.net/mauvyrusset/a-practical-guide-to-measuring-user-experience

-cc


10 Jun 2011 - 11:05am
david.shaw6@gma...
2004

This is great stuff everyone... thanks for all the replies! I knew this would open up the proverbial can of worms, but I think it's something we need to address. Especially to get a seat at the C-level table, we need to be able to translate what we do into "MBA" speak.
David

11 Jun 2011 - 3:10am
uxgrace
2010

This question tends to be posed by those who don't believe in user experience design, trying to trap the designer into an "I don't know" answer. In fact, it is the product owner's job to set the goals.

When a product manager embraces user experience design, he doesn't do it because it is a buzzword. He does it because he expects the software to become more usable and more efficient, and to help the user succeed at the tasks he wants them to complete. The product owner should set those tasks in advance and define what success would mean to him.

Then, by simply comparing the before and after situations, you get your metrics and impress the boss :)

12 Jun 2011 - 1:09am
Phillip Hunter
2006

Hi all,

A couple of basic things to add to this important topic.

First, remember that a metric means something measurable. Too often, discussions about success get lost down rabbit trails of events that aren't directly measurable, or at least not measurable on a large and reliable scale.

Second, thoroughly discuss the why behind the desired metrics. Figure out what each is and isn't going to tell you. As the Dalton article says, data can easily be misinterpreted, or made to say things it really doesn't. Avoid that by predefining it. That way, if you're wrong, at least you'll all be wrong together. ;-)

Third, agree on what the desired target value of each metric should be *before* the design is finished. Some measures, such as time to completion, can conflict with others. Know which metrics are most important and what targets are desired while you can still do something about them during the design.
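
As a rough sketch of what pinning targets down up front could look like (metric names and numbers invented for illustration):

# Targets agreed on before the design is finished.
targets = {
    "task_completion_rate": (0.85, "min"),       # want at least this
    "median_seconds_to_complete": (120, "max"),  # want at most this
    "error_rate": (0.05, "max"),
}

def meets(name, value):
    goal, kind = targets[name]
    return value >= goal if kind == "min" else value <= goal

# Invented post-design measurements; note the completion/time trade-off.
measured = {"task_completion_rate": 0.88,
            "median_seconds_to_complete": 140,
            "error_rate": 0.03}

for name, value in measured.items():
    goal, _ = targets[name]
    print(f"{name}: {value} vs. target {goal} -> {'OK' if meets(name, value) else 'MISS'}")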

To directly answer the question, one thing we look at is task completion rate: measuring end-to-end completion versus dropping out along the way. It's challenging to instrument, but very telling once in place.
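
A minimal sketch of that end-to-end computation, with hypothetical step names and counts:

# Hypothetical counts of users reaching each step of a flow.
funnel = [
    ("started task", 1000),
    ("entered details", 720),
    ("confirmed", 610),
    ("completed", 540),
]

for (step, n), (_, prev) in zip(funnel[1:], funnel):
    print(f"{step}: {n} users ({(prev - n) / prev:.0%} dropped at this step)")

print(f"End-to-end completion rate: {funnel[-1][1] / funnel[0][1]:.0%}")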

Phillip
