6 Metrics for Managing UI Design

18 Aug 2008 - 10:52pm
8 replies
1898 reads
russwilson
2005

I've been working with my team to devise a small set of metrics (don't want
overkill) to help both guide our efforts and measure our progress. Please
take a look at my post if you have any interest in this area and comment. I
would love to get feedback on metrics that others are working with.

http://blog.dexodesign.com/2008/08/18/6-metrics-for-managing-ui-design/

Russell Wilson
Vice President of Product Design, NetQoS
Blog: http://www.dexodesign.com

Comments

19 Aug 2008 - 6:28am
Phil C
2008

On #2: If you're trying to measure effort, not sure if this is detailed enough. I've seen interactive prototypes that take a day and others that take weeks. I suppose it depends on whether you have standards for what represents an "interactive prototype".

On #4: Is number of users tested as important as number of complete user testing cycles run?

Nielsen argued that 7 is the max number of test subjects needed per user testing pass. Theoretically your team could be wasting calendar time on a test of 20 or more subjects but still max out its User Testing metric. Changing it to user testing phases completed would require standards for what constitutes a user test: hallway testing, formal tests with users, unit testing of specific features vs. full regression testing with users, etc.

19 Aug 2008 - 1:33pm
Scott Berkun
2008

One quick test of any metric is to spend 5 minutes trying to hack it: what
if you were your evil twin, how could you make evil happen while still
scoring well on these metrics? Better metrics make life harder for your evil
twin. Lousy metrics make it easy.

> 1) Number of layouts delivered
> 2) Number of interactive prototypes created
> 3) Percentage of product design requests completed by commit date
> 4) Number of users tested
> 5) Number of product improvements made
> 6) Number of product insights documented

One big assumption you're making is that higher numbers mean better results.
One excellent prototype might do the work of 5 mediocre ones, but the designer
who tends to need 5 mediocre ones will score better here. Same for # of
users tested (you're rewarding people with sloppy study designs, or who
can't win basic arguments without going to the lab), etc. Volume is a very
poor measure of quality. But measuring volume is easy and popular, which
explains the dozens of organizations proud of their fancy metrics but
somehow in denial about their lousy products. I'm really not a fan of
systematic metrics - they're a favorite fuel for micromanagers.

You should also note there is nothing wrong with subjective metrics. Why
can't your team score itself 1 to 10 on team performance every month, or,
even better, ask your clients & stakeholders to rate your performance? Then
at least you have a metric that is very difficult to manipulate. So what if
it's not scientific: science is not a panacea. If the goal is to get a sense
of how you're doing and focus team energy, qualitative measures can be just
as effective as quantitative ones. RMPT can work fine with subjective
measures.
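
For illustration, a minimal Python sketch of the monthly subjective scoring
described above; the ratings and date labels are invented, not anything from
the thread:

    from statistics import mean

    # Hypothetical monthly 1-10 ratings collected from clients and
    # stakeholders; all data here is invented for illustration.
    monthly_ratings = {
        "2008-06": [7, 8, 6, 7],
        "2008-07": [8, 8, 7, 9],
        "2008-08": [6, 7, 7, 8],
    }

    for month, scores in sorted(monthly_ratings.items()):
        print(f"{month}: average {mean(scores):.1f} across {len(scores)} raters")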

Lastly, thinking like a general manager, which I was for most of my career:
the only metric I'd ever evaluate you on if I were your boss would be #5,
number of product improvements made. That's the *only* metric that earns your
team its salary. A favorite scheme I've seen used for usability engineers is
simply this: # of usability issues found, # of recommendations made, # of
recommendations approved. You might need a different set for designers, but
you get the idea.
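
For illustration, a minimal Python sketch of the found/made/approved scheme
above; the issue records and field names are invented, not a standard schema:

    from dataclasses import dataclass

    @dataclass
    class UsabilityIssue:
        # Illustrative fields only, not a standard schema.
        description: str
        recommendation_made: bool = False
        recommendation_approved: bool = False

    # Invented example findings from a single study.
    issues = [
        UsabilityIssue("Search results lack pagination", True, True),
        UsabilityIssue("Error messages use internal jargon", True, False),
        UsabilityIssue("Login link is hard to find"),
    ]

    found = len(issues)
    made = sum(i.recommendation_made for i in issues)
    approved = sum(i.recommendation_approved for i in issues)
    print(f"found: {found}, recommendations made: {made}, approved: {approved}")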

If you discover that more layouts, more prototypes, more magic spells lead to
more approved recommendations, you'll be rewarded for it. And if those
things (layouts, protos, etc.) turn out to be a waste of time, you won't have
a team of people doing those things anyway just because there is a metric
that rewards it. (But do note that this is pretty much the only way to get
people to respect metrics: they must be tied to rewards.)

And finally, I'd guess NetQoS is a metric-happy place given the business
you're in, which is fine. But creative work doesn't fit metric schemes as
well as, say, performance testing does - creative work is inherently sloppy,
messy and wasteful - so I'd seek out other creative groups (PR, Marketing,
Advertising, etc.) and see how they're fitting their creative work into
metrics. I suspect you'll get better ideas from them than from the
engineering and QA orgs.

-Scott

Scott Berkun
www.scottberkun.com

19 Aug 2008 - 1:14pm
Katie Albers
2005

Everything Scott said, and one more point: UI, Design, UX, IA and all
the associated fields are qualitative fields. They cannot -- BY
DEFINITION -- be measured. The closest you can come to measuring how
well you do your job is to measure clients'/customers'
dissatisfaction... does your help desk get fewer calls on this problem
than it used to? Are complaints lower than a similar app's (and
good luck trying to get *that* data)? Users seldom laud our work --
the closer we are to "perfect", the less they notice that it's been
done at all -- so really, you're stuck tracking the reverse.

Katie

Katie Albers
User Experience Strategy & Project Management
katie at firstthought.com

19 Aug 2008 - 3:23pm
russwilson
2005

Katie - not sure I agree that UI cannot be measured. I think
it's more difficult to come up with valid metrics, but I think it's
possible. And I would also argue that if you ever want to be taken
seriously at the corporate level, you'd *better* come up with some sort of
quantitative indicator of the value UI brings to the table... however
fragile that indicator is. And to your point, maybe "tracking the reverse"
is a method worth exploring?

Russell Wilson
Vice President of Product Design, NetQoS
Blog: http://www.dexodesign.com

19 Aug 2008 - 5:17pm
Scott Berkun
2008

One tool that fits in this conversation is usability benchmarking. It is one
quantitative way to track the total impact of a user interface effort. See
http://www.google.com/search?q=usability+benchmark. It is a baseline
measurement of the overall experience that can be compared against
periodically.
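
For illustration, a minimal Python sketch of comparing a usability benchmark
against a baseline; the tasks, rates, and times are invented:

    # Invented per-task success rates and median times (seconds), captured
    # once as a baseline and again after a redesign.
    baseline = {"create report": (0.60, 210), "share dashboard": (0.45, 340)}
    current = {"create report": (0.75, 150), "share dashboard": (0.55, 290)}

    for task, (base_rate, base_time) in baseline.items():
        rate, time_s = current[task]
        print(f"{task}: success {base_rate:.0%} -> {rate:.0%}, "
              f"median time {base_time}s -> {time_s}s")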

At the corporate level, UX folks are always at a large disadvantage because
they are rarely a primary function in the company, and are treated accordingly.
From the CEO's view, why shouldn't the minority be evaluated in the same way
as the majority? Metrics alone cannot fix that. The good news is that you
don't necessarily need metrics to be taken seriously anyway. What you do
need is the support of the senior engineers and business folks the
executives already respect. If they applaud and cheer when you ask for more
budget, and support your requests whatever they are, I doubt anyone will ask
to see your metrics. And if they did demand numbers, ROI-type arguments,
where the cost of a UI designer is translated into a 1.5x impact on the
bottom line, are a better line of data than tracking how many prototypes were
made.

But even if metrics were the only way to get executive interest, it's
critical to separate out a) the things you do as a manager to jockey for
executive support, from b) what things actually improve the quality of the
user experience. You don't want your team confusing a with b.

-Scott

20 Aug 2008 - 7:41am
Chauncey Wilson
2007

Just a niggling comment here: you note that subjective metrics are "not
scientific," but in fact there is a great deal of research into subjective
metrics like customer satisfaction. There are different definitions of
"science," and science can involve both qualitative and quantitative methods,
along with systematic data collection and analysis.

When I managed a usability group, I invited clients to give quarterly
feedback on how well the team was doing (ratings and qualitative comments
that were then coded by similarity). One issue that emerged from this
"client satisfaction" survey was that the type of report desired differed by
stakeholder. From that I learned to give clients examples of our reports
and ask for feedback on how well the format would support their needs
(requirements input, detailed UI design, high-level consistency issues,
etc.).
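
For illustration, a minimal Python sketch of aggregating that kind of
quarterly client feedback; the stakeholder types, ratings, and similarity
codes are invented:

    from collections import defaultdict
    from statistics import mean

    # Invented survey rows: (stakeholder type, 1-10 rating, similarity code).
    responses = [
        ("product manager", 8, "wants executive summary"),
        ("developer", 6, "wants detailed UI specs"),
        ("developer", 7, "wants detailed UI specs"),
        ("product manager", 9, "wants executive summary"),
    ]

    by_type = defaultdict(list)
    for stakeholder, rating, code in responses:
        by_type[stakeholder].append((rating, code))

    for stakeholder, rows in sorted(by_type.items()):
        avg = mean(r for r, _ in rows)
        themes = sorted({c for _, c in rows})
        print(f"{stakeholder}: average rating {avg:.1f}; themes: {', '.join(themes)}")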

Chauncey

> "You should also note there is nothing wrong with subjective metrics. Why
> cant your team score itself 1 to 10 on team performance every month, or
> even
> better, ask your clients & stakeholders to rate your performance. Then at
> least you have a metric that is very difficult to manipulate. So what if
> it's not scientific: science is not a panacea. If the goal is to get a
> sense
> of how you're doing and focus team energy, qualitative measures can be just
> as effective as quantitative ones. RMPT can work fine with subjective
> measures."
>

21 Aug 2008 - 10:39am
Mike Poulter
2008

We are lucky enough to be working for a corporation that values the work our Human Factors team does. We have the final say.

That said, we have been regularly taking prototypes and such out to customers as part of the Contextual Inquiry process we run. At the end, especially with long-term customers, we ask them to give the application a simple rating on a 1-10 scale. If the score has risen, why? If it has fallen, why? We track these numbers and share them with management.

This feedback gives us a small metric for how effectively the UX team is performing.

Admittedly it's not overly scientific and it's fairly subjective, but in some ways it's possibly better, since it's direct customer feedback. You know, the folks who pay the bills...
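
For illustration, a minimal Python sketch of tracking those 1-10 customer
scores across visits; the dates and scores are invented:

    # Invented 1-10 scores for one long-term customer, by visit.
    history = [("2008-02", 6), ("2008-05", 7), ("2008-08", 9)]

    for (prev_date, prev), (date, score) in zip(history, history[1:]):
        delta = score - prev
        trend = "up" if delta > 0 else ("down" if delta < 0 else "flat")
        print(f"{date}: {score}/10 ({trend} {abs(delta)} since {prev_date})")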

20 Aug 2008 - 2:05pm
netwiz
2010

Hi Russell. As others have commented, your metrics focus on numbers,
not quality.

I think you have two primary sets of people to address. The first is
the people you work with (presumably developers and people generating
business requirements/product owners). They will have their own ideas
as to what they see as success from you, which will include whether
you deliver designs on time, and whether they trust your professional
judgement. You could derive some metrics from discussions with them.

The second audience is the users of the applications you are
developing for. You can survey and talk to users to get measures of
overall satisfaction, or of satisfaction with key modules. You can
also measure, in a number of ways, whether people can successfully
complete tasks, how long it takes, and where they get stuck. You
can measure how many problems you identify with the usability of live
applications, and how many you have design solutions for.

Those are the sorts of things I'd be more interested in, as they
relate more directly to business success.
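
For illustration, a minimal Python sketch of the task-completion measures
suggested above; the task name, session logs, and field layout are invented:

    # Invented session logs: (task, completed, seconds, step where stuck or None).
    sessions = [
        ("book flight", True, 95, None),
        ("book flight", False, 240, "seat selection"),
        ("book flight", True, 120, None),
    ]

    rows = [s for s in sessions if s[0] == "book flight"]
    done = [s for s in rows if s[1]]
    rate = len(done) / len(rows)
    avg_time = sum(s[2] for s in done) / len(done)
    stuck_at = [s[3] for s in rows if s[3]]
    print(f"book flight: {rate:.0%} completed, avg {avg_time:.0f}s when completed, "
          f"stuck at: {', '.join(stuck_at) or 'nowhere'}")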

* Nick Gassman - Usability and Standards Manager - http://ba.com *
