Heuristic Evaluations - A Personal Approach

6 Mar 2009 - 10:22am
Liam Greig

Hi everyone,

Apologies in advance for the long post. The agency where I work as an
IxD recently started an employee-driven lunch-and-learn series, where
I had the opportunity to introduce the idea of heuristics to the
group. I covered the basics and background from Nielsen onward, and
it was all very well received. Following the session we opened up the
floor and had some great dialogue on the personification of software
and its application to heuristics. As a follow-up, I have been asked
to present a more 'human-friendly' version of heuristics to the team,
using personal traits as a guide for non-expert use. We will most
likely begin to pilot this idea in our process next week. I thought
to myself today, 'what a great chance to finally post an IxDA
discussion thread'. I have been reading along for a very long
time...

So I would love any feedback you have to offer. Here is my first pass
at a new take on heuristics, based largely on Nielsen's originals as
well as the ISO standard on the ergonomics of human-system interaction:

Begin each Heuristic with 'A design should be...'

1. Transparent
At all times a person should understand where they are, what actions
are available, and how those actions can be performed. Information and
objects should be made visible so that a person does not have to rely
on memory from one screen to another.

Ask Yourself:
• Where am I?
• What are my options?

2. Responsive
Whenever appropriate, useful feedback should let a person know what
is going on within a reasonable amount of time. If a person initiates
an action, they should receive a clear response.

Ask Yourself:
• What is happening right now?
• Am I getting what I need?

3. Considerate
A person should understand the language, words, terminology and
phrases presented to them. Error messages should be expressed in
plain language, precisely indicate the problem and constructively
suggest a solution. Predictable contextual needs and commonly
accepted conventions should be followed.

Ask Yourself:
• Does this make sense to me?

4. Supportive
A person should feel supported in the effective and efficient
completion of their task. A person should feel enabled to focus on
the task itself as opposed to the technology chosen to perform that
task.

Ask Yourself:
• Can I focus on my task?
• Do I feel frustrated?

5. Consistent
A person should not have to wonder whether different words,
situations, or actions mean the same thing. Additionally a person
should not discover that similar words, situations or actions mean
different things. Establish and maintain conventions.

Ask Yourself:
• Are my expectations accurate?

6. Forgiving
Despite evident errors in input, a person should be capable of
achieving their intended result with either no or minimal corrective
effort. Damage control, error correction, or error management should
be handled by the technology as opposed to the person using that
technology whenever possible and appropriate.

Ask Yourself:
• Are mistakes easy to fix?
• Does the technology blame me for errors?

7. Guiding
A person should feel capable of learning what is required to
accomplish their goals. Help documentation and information should be
easy to locate and search, focused on the task at hand and be only as
long as necessary.

Ask Yourself:
• Do I know where to go for help?

8. Accommodating
A person should be able to initiate and control the direction and
pace of an interaction until the point at which the goal has been
met. Emergency exits should allow a person to leave any unwanted
state without having to go through an extended process. Support undo
and redo.

Ask Yourself:
• Am I in control?
• Am I afraid to make mistakes?

9. Flexible
A person should be able to modify the interaction and presentation of
information to suit their individual capabilities and needs.

Ask Yourself:
• Can I customize my experience?

10. Intelligent
The technology should be able to recognize usage patterns and adapt
itself to accommodate the individual. The person should experience an
improvement of quality over time with repeat visits.

Ask Yourself:
• Does the technology know who I am?
• Did the technology remember the way I left things?
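
Since the team will be piloting these in evaluations, here is a minimal sketch of how the ten traits and their "Ask Yourself" prompts could be captured as a reusable checklist. This is purely illustrative; the `HEURISTICS` structure and `checklist` function are my own invention, not part of any established tool or of the process described above.

```python
# Hypothetical sketch: the ten traits as a reusable evaluation checklist.
# Structure and names are illustrative assumptions.

HEURISTICS = {
    "Transparent": ["Where am I?", "What are my options?"],
    "Responsive": ["What is happening right now?", "Am I getting what I need?"],
    "Considerate": ["Does this make sense to me?"],
    "Supportive": ["Can I focus on my task?", "Do I feel frustrated?"],
    "Consistent": ["Are my expectations accurate?"],
    "Forgiving": ["Are mistakes easy to fix?",
                  "Does the technology blame me for errors?"],
    "Guiding": ["Do I know where to go for help?"],
    "Accommodating": ["Am I in control?", "Am I afraid to make mistakes?"],
    "Flexible": ["Can I customize my experience?"],
    "Intelligent": ["Does the technology know who I am?",
                    "Did the technology remember the way I left things?"],
}

def checklist():
    """Print each trait with its 'Ask Yourself' questions."""
    for trait, questions in HEURISTICS.items():
        print(f"A design should be... {trait}")
        for question in questions:
            print(f"  - {question}")

checklist()
```

A non-expert evaluator could walk a single task with this list in hand and answer each question per screen.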

Thanks for your time!

Comments

6 Mar 2009 - 5:18pm
Chauncey Wilson

Hi,

This is a very interesting approach. Some general points:

What is your measure of success for your new heuristics? There is
some lively debate about success metrics, with some examples being:
number of problems found; number of major problems found relative to
usability testing; ability of developers to find problems in their
software; and so on. There are some good papers by Woolrych and
Cockton discussing the efficacy of inspection methods.

One of the criticisms of the original heuristics is that they are not
complete. There have been additional sets of heuristics. The
originals, for example, don't really deal with collaboration or with
basic human factors problems like foreground/background contrast.

The context for an inspection will vary with the user/task/environment
context. For example, transparency may be good or bad depending on
context. Guiding might be good for a product used once a year (a tax
program), or used once when you set something complex up, but guiding
is not always good.

Here are some weaknesses of heuristic evaluations that you might consider:

• Different evaluators often find different problems for the same
product. This “evaluator effect” (Jacobsen, Hertzum, & John, 1998) has
implications for deciding what changes should be made to a design.
What do you do if five evaluators each come up with quite different
sets of problems (Kotval, Coyle, Santos, Vaughn, & Iden, 2007)?
• The heuristic evaluation method is based on finding usability
“problems”, but there is debate about what constitutes a problem and
whether heuristic reviews are good at finding “real problems.”
• Heuristic reviews may not scale well for complex interfaces
(Slavkovic & Cross, 1999). In complex interfaces, a small number of
evaluators may find only a small percentage of the problems in an
interface and may miss some serious problems.
• Evaluators may report problems at different levels of granularity.
For example, one evaluator may list a global problem of “bad error
messages” while another evaluator lists separate problems for each
error message encountered. The instructions and training for a
heuristic review should discuss what level of granularity is
appropriate. The facilitator of a heuristic evaluation will invariably
have to extract important high level issues from sets of specific
problems.
• Lack of clear rules for assigning severity judgments may yield major
differences; one evaluator says “minor” problem while others say
“moderate” or “serious” problem. In a recent study comparing the
results of usability testing with those of heuristic review, there
were a few instances where some reviewers listed the same observation
as both a problem and a “positive design feature”.
• Some organizations find heuristic evaluation such a popular method
that they are reluctant to use other methods like usability testing or
participatory design.

> Begin each Heuristic with 'A design should be...'
>
> 1. Transparent
> At all times a person should understand where they are, what actions
> are available and how those actions can be performed. Information and
> objects should be made visible so that a person does not have to rely
> on memory from one screen to another
>
> Ask Yourself:
> •     Where am I?
> •     What are my options?

Transparency is a complex concept and is not always good.

>
> 2. Responsive
> Whenever appropriate, useful feedback should let a person know what
> is going on within a reasonable amount of time. If a person initiates
> an action, they should receive a clear response.
>

"Reasonable amount of time" depends mostly on the context, and there
is perceived versus real responsiveness.

> Ask Yourself:
> •     What is happening right now?
> •     Am I getting what I need?
> 5. Consistent
> A person should not have to wonder whether different words,
> situations, or actions mean the same thing. Additionally a person
> should not discover that similar words, situations or actions mean
> different things. Establish and maintain conventions.

Consistency is very complex. What about consistency within different
parts of an app, consistency with other apps the user works with,
consistency with corporate style, consistency with the OS, etc.? See
the classic article by Grudin on consistency for a good sense of when
it is consistent to be inconsistent.
Grudin, J. 1989. The case against user interface consistency. Commun.
ACM 32, 10 (Oct. 1989), 1164-1173. DOI=
http://doi.acm.org/10.1145/67933.67934

> 7. Guiding
> A person should feel capable of learning what is required to
> accomplish their goals. Help documentation and information should be
> easy to locate and search, focused on the task at hand and be only as
> long as necessary.

Guiding and flexible can be contradictory. One person's guiding can
be another person's flexibility. Guiding can be good or very bad
depending on the user training and task frequency.

Chauncey

8 Mar 2009 - 10:56am
Liam Greig

Hi Chauncey,

Thanks for your reply; you have given me some very valid points to
consider. Overall, the high-level goal of this exercise is to start
moving towards a more objective approach to critique within an
organization that has been (for the most part successfully) founded
on genius design. Introducing a common language for that critique is
a major part of that goal. So while I do not expect the technique to
be flawless, I am hoping that it pushes us in a positive direction.

I imagine that, at least in the early stages of the evaluations, I
will be acting as facilitator, so hopefully I will be successful in
communicating and establishing consistent reporting techniques,
severity guidelines, and granularity within the issues discovered. I
have not given that enough thought, so I appreciate the feedback and
I look forward to that challenge.

On a side note, we also had some good dialogue on the need to modify
the heuristics on a project-by-project basis. As you mentioned,
heuristics such as guiding and transparent may not fit within the
given context of a project. I feel that minor adjustments or the
addition/removal of project-specific heuristics make sense, but
overall I see consistency in the heuristics used as necessary to
establish a common language as well as to work towards internalizing
the heuristics applied. I would be interested in hearing your
thoughts.

Cheers!

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=39587

9 Mar 2009 - 1:44pm
Dana Chisnell

Liam,

I think this is an absolutely awesome approach to what you're
calling a heuristic review. It transforms the Nielsen heuristics from
a checklist that just looks at the elements of the user interface in a
sort of localized way to looking at the whole experience of using an
interface from the point of view of a task-oriented human who has a
goal to reach. It also reflects something that Jared Spool taught me
to ask my clients about how to think about designs: "If this interface
were the perfect human helper, what would that person be like?"

One of the issues I've always had with a heuristic evaluation
methodology is that it often gets treated by evaluators as a
checklist. Yes, there's help. Yes, the buttons have reasonable labels.
No, the error messages stink and it's too hard to recover from making
errors. Etc. But if we put ourselves in the place of real users --
which you can do if you have personas or user profiles available, or
if you're designing for yourself -- we can do a much better job of
applying heuristics as an evaluation technique.

Three things to think about:

1. Using heuristics as a checklist means the evaluator views the user
interface in a mechanical way that can miss nuances people reveal when
they're doing real tasks. A lot of what you'll find are cosmetic issues.

2. Performing an evaluation in the way you suggest gives a much
closer-to-true view of what a user might do. But it is also massively
labor intensive, and it is unlikely that any one person will be able
to evaluate an *entire* largish web site or application this way. It's
just too exhausting. But by taking a snapshot through key,
high-priority tasks, a team can see where they're missing the mark on
the items that should be factors contributing to a primo user
experience.

3. Performing a heuristic evaluation -- or any evaluation that relies
on guidelines -- is definitely no substitute for putting a design in
front of a real person who has real goals. Humans are just -- happily
-- unpredictable.

Fabulous work. I can't wait to try it out.

Dana

:: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: ::
Dana Chisnell
desk: 415.392.0776
mobile: 415.519.1148

dana AT usabilityworks DOT net

www.usabilityworks.net
http://usabilitytestinghowto.blogspot.com/
