UPA Website Usability Study Seeking Paid Participants

11 Mar 2009 - 7:43am
Jen Giroux
2009

The Usability Professionals’ Association (UPA) is currently working to
redesign its website, www.upassoc.org, and expects that the new site
will be launched by the end of 2009.

We are conducting a usability study of our new design and are
currently looking for representative users between the ages of 18 and
65 to participate in the study.

Compensation for qualifying participants will be a $25 Amazon gift
certificate for a 30-60 minute, one-on-one interview session. The
session will not take place in person; it will be conducted via web
conference and telephone call.

Participation requires the following:

• Ability to participate in a 30-60 minute session between the hours
of 8am and 7pm EST on Monday, March 16th, or Tuesday, March 17th.

• US Participants: Windows-based PC that is able to connect to a
secure web conference at the time of the session, and a telephone on
which you may be reached for the duration of the session

• International Participants: Windows or Mac based PC that is able to
connect to a secure web conference at the time of the session, and a
telephone with which to call a toll-free number for the duration of
the session

During the session, participants will be shown how to share their
desktop in the web conference, asked to demonstrate how they would
interact with wireframes of the design through a series of tasks
directed by the moderator, and asked to comment on the experience.

Participants who complete the interview session will be sent a $25
Amazon gift certificate via email within 24 hours of the end of the
session. To find out if you qualify and to volunteer for the session,
please fill out the following survey:

http://www.surveymonkey.com/s.aspx?sm=VLe9OuKPeS5ZZws2ekuSWg_3d_3d

Comments

11 Mar 2009 - 9:54am
Dana Chisnell
2008

May I ask why the age range limits to 65? Are there no members of UPA
who are older than that? I'm pretty sure there are.

Dana
:: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: ::
Dana Chisnell
desk: 415.392.0776
mobile: 415.519.1148

dana AT usabilityworks DOT net

www.usabilityworks.net
http://usabilitytestinghowto.blogspot.com/


11 Mar 2009 - 8:20pm
Todd Warfel
2003

Perhaps because the core audience isn't older than 65? Not to say that
there aren't any, but I'd imagine, based on the meetings and
conferences I've been to, that the number of people over 65 is
statistically quite small.

On Mar 11, 2009, at 11:54 AM, Dana Chisnell wrote:

> May I ask why the age range limits to 65?

Cheers!

Todd Zaki Warfel
Principal Design Researcher
Messagefirst | Designing Information. Beautifully.
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at messagefirst.com
AIM: twarfel at mac.com
Blog: http://toddwarfel.com
Twitter: zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

11 Mar 2009 - 10:04pm
Jared M. Spool
2003

So? Why limit the age range? How does that benefit the research?

On Mar 11, 2009, at 10:20 PM, Todd Zaki Warfel wrote:

> Perhaps because the core audience isn't older than 65? Not to say
> that there aren't any, but I'd imagine, based on the meetings and
> conferences I've been to, that the number of people over 65 is
> statistically quite small.

12 Mar 2009 - 12:51am
Andrew Boyd
2008

On Thu, Mar 12, 2009 at 1:20 PM, Todd Zaki Warfel <lists at toddwarfel.com> wrote:

> Perhaps because the core audience isn't older than 65? Not to say that
> there aren't any, but I'd imagine, based on the meetings and conferences
> I've been to, that the number of people over 65 is statistically quite
> small.

You would seriously recommend that they step into age discrimination
territory by way of statistical significance, as opposed to adding a simple
"over 65" option? :)

Just as well there are not a lot of UPA folk here in Oz - if there were,
someone would no doubt be getting some rude emails from Grey Power advocates
:)

Best regards, Andrew

--
Andrew Boyd
http://uxaustralia.com.au -- UX Australia Conference Canberra 2009
http://uxbookclub.org -- connect, read, discuss
http://govux.org -- the government user experience forum
http://resilientnationaustralia.org Resilient Nation Australia

12 Mar 2009 - 5:44am
Todd Warfel
2003

If they don't have members over 65, then using people over 65 in the
research would produce misleading data. If they do have members over
65, and there are enough of them to warrant inclusion, then include
them.

For example, if only 5 members out of 1,000 are over 65, what's the
benefit of including someone over 65 over someone who is 62?

On Mar 12, 2009, at 12:04 AM, Jared Spool wrote:

> So? Why limit the age range? How does that benefit the research?

Cheers!

Todd Zaki Warfel

12 Mar 2009 - 7:17am
Dana Chisnell
2008

Thanks for the prompt, Jared. There's no reason to limit the age
range *at all.* As long as the behaviors are the same -- that is, the
task goals of the users -- across age ranges, then it doesn't matter a
bit how old the participants are.

As members of UPA, people over 65 would very likely have the same
tasks and goals in mind as someone younger: Maintain membership
information, renew memberships, find out what's going on in the
association, get in the consulting directory, find out who is on the
board, find out where the conference is, etc.

Limiting the age range wouldn't benefit the research. In fact,
limiting may be a detriment.

Dana


On Mar 12, 2009, at 12:04 AM, Jared Spool wrote:

> So? Why limit the age range? How does that benefit the research?

12 Mar 2009 - 7:21am
James Page
2008

Out of interest, how many participants are you testing with? Could
you break the numbers down?
James
http://blog.feralabs.com


12 Mar 2009 - 7:35am
Jen Giroux
2009

Thanks everyone for your feedback. We are actually not limiting the
study to those under 65, as the posting may have suggested. We do
have an 'over 64' age group in the qualifying survey.
Thanks,
Jen


12 Mar 2009 - 7:36am
Dana Chisnell
2008

What difference does it make how many you're testing? By breaking the
sample into groups, you're just creating extra work. Are you going to
compare the data by age group? Why would you do that? The only reason
I can think of is if you're creating different sites. You're not.

Dana

On Mar 12, 2009, at 9:21 AM, James Page wrote:

> Out of interest how many participants are you testing with? Could
> you break the numbers down?

12 Mar 2009 - 7:42am
Jen Randolph
2008

I wonder why the requirement is a Windows-based PC for US
participants, but either a Windows-based PC or a Mac for international
participants? Why not Windows and Mac for both groups?


12 Mar 2009 - 7:52am
Jared M. Spool
2003

The key question is: How do you know that people older than 65 will
behave differently than people younger than 65?

Remember, there are three ways to build your participant schedule:
Screening, balancing, and analyzing.

By limiting the group to those under 65, you're *screening* out older
participants. You'd only want to do this if you felt that the data you
would collect from these individuals would somehow unfairly skew the
inferences, opinions, and recommendations the design team would come
to.

Another alternative is to *balance*: have a significant sample of both
older-than-65 and 65-or-younger participants. By balancing, you could
compare the behavioral differences. But that would only make sense if
you had evidence that the behavioral differences would be substantial,
or if you wanted to find out whether you could drop the screening or
balancing later because it doesn't make a difference.

(For example, with one corporate intranet client, I recommended
balancing the first few studies between both HQ employees and field-
office employees, so we could tell if the two groups behaved
differently. If they do, then we know we have to test both going
forward. If they don't, then we know that HQ employees -- easier for
the team to have access to -- are good surrogates for field-office
employees.)

The third alternative is to just *analyze*: here, you'd let chance
take its course. Odds are, in recruiting participants for this study,
you'd end up with a proportional representation of ages even if you
didn't pay attention at all. (No one age group is more likely to
volunteer than another.) When analyzing, you just take note of what
the ages are and see, once you've collected your data, whether you can
identify behavioral differences.
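
As a minimal sketch (in Python) of what the analyze route could look
like -- the record fields, numbers, and pivot year here are
hypothetical, not anything from the UPA study:

    # Hypothetical session log: recruit without any age constraint, but
    # note each participant's birth year on the screener.
    sessions = [
        {"birth_year": 1941, "tasks_completed": 7},
        {"birth_year": 1962, "tasks_completed": 8},
        {"birth_year": 1980, "tasks_completed": 6},
        # ...one record per participant
    ]

    def mean(values):
        return sum(values) / len(values)

    # Only after the data is in, split on a candidate pivot year and
    # look for a behavioral difference (here, 65-and-over as of 2009).
    pivot = 1944
    older = [s["tasks_completed"] for s in sessions if s["birth_year"] <= pivot]
    younger = [s["tasks_completed"] for s in sessions if s["birth_year"] > pivot]
    print(mean(older), mean(younger))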

Analyzing is the cheapest, as it puts no constraints on the
recruitment process. Balancing is more expensive because (a) you have
to disqualify otherwise qualified candidates based on the criteria and
(b) you have to study more participants, since you need reasonable
sample sizes for each segment. Screening is the most expensive because
you're throwing away participants and their data.

I always recommend to our clients that, if they have no idea whether a
factor makes a difference, they go the analyze route until they can
clearly tell me why balancing or screening will be worth it.

Long answer. Sorry.

Jared

Jared M. Spool
User Interface Engineering
510 Turnpike St., Suite 102, North Andover, MA 01845
e: jspool at uie.com p: +1 978 327 5561
http://uie.com Blog: http://uie.com/brainsparks Twitter: jmspool
UIE Web App Summit, 4/19-4/22: http://webappsummit.com

On Mar 12, 2009, at 7:44 AM, Todd Zaki Warfel wrote:

> If they don't have members over 65, then using people over 65 in the
> research would produce misleading data. If they do have members over
> 65, and there are enough of them to warrant inclusion, then include
> them.
12 Mar 2009 - 7:59am
Jared M. Spool
2003

On Mar 12, 2009, at 6:35 AM, Jen Giroux wrote:

> Thanks everyone for your feedback. We are actually not limiting the
> study to those under 65, as it may have indicated in the posting. We
> do have an 'over 64' age group in the qualifying survey.

Another piece of unsolicited advice:

I don't like pre-bucketing age groups. (18-25, 26-35, and so on.)

I have found that it's better to ask for year of birth. Once you have
your data, you can see if there are certain pivot years where
behavioral differences appear. It's better to bucket your data (for
reporting purposes) *after* you can see where the differences fall.

(For most technology these days, you'll find that age is not a factor.)
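
As a rough sketch of what bucketing after the fact might look like,
assuming the screener stored year of birth (the data and cut points
below are invented for illustration):

    import pandas as pd

    # Year of birth from the screener; ages computed at study time (2009).
    df = pd.DataFrame({
        "birth_year": [1940, 1958, 1962, 1971, 1980, 1985],
        "task_success": [0.9, 0.85, 0.9, 0.8, 0.95, 0.9],
    })
    df["age"] = 2009 - df["birth_year"]

    # Buckets chosen after inspecting where the differences fall, not
    # fixed up front the way an 18-25 / 26-35 screener question is.
    buckets = pd.cut(df["age"], bins=[17, 40, 64, 120],
                     labels=["18-40", "41-64", "65+"])
    print(df.groupby(buckets, observed=True)["task_success"].mean())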

That advice is worth exactly what you paid for it,

Jared


12 Mar 2009 - 8:07am
Todd Warfel
2003

On Mar 12, 2009, at 9:52 AM, Jared Spool wrote:

> The key question is: How do you know that people older than 65 will
> behave differently than people younger than 65?

My thoughts exactly. I don't see how you would know this until you've
actually done some research and testing on it. If you know that your
audience doesn't really have people over 65, then there's no reason to
recruit them. If you know it does, then you should recruit them, even
if it's a small number "somewhat" proportionate to the percentage that
people 65 and over make up of your group.

I say somewhat because, if you recruit 20 participants and people 65
and over represent only 2% of your population, you'd recruit fewer
than one participant. In a case like this, we'd typically recruit 2-3,
so we'd have enough to see if there's a significant difference that
would warrant additional research on that smaller group.
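
For concreteness, that arithmetic as a few lines of Python (the floor
of 2 is Todd's rule of thumb here, not a standard):

    import math

    population_share = 0.02  # share of the audience that is 65 and over
    study_size = 20

    proportional = population_share * study_size  # 0.4 of a participant
    # Round up, then enforce a small floor so the segment is observable.
    recruited = max(math.ceil(proportional), 2)
    print(proportional, recruited)  # 0.4 -> 2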

Cheers!

Todd Zaki Warfel

12 Mar 2009 - 8:21am
James Page
2008

@dana
I am a bit confused here by your question "What difference does it
make how many you're testing?"
Surely factors such as "margin of error" and "statistical power" are
important, or are they not?

The point of testing is to find out if you're wrong or right. How do
you know if you're wrong or right based on a small sample?

@jenrandolph
In remote usability testing we see more behavioural differences by
machine configuration than by age. What I mean by machine
configuration is manufacturer and screen size. Mac users are
different; why, I don't know. And we see a lot of behavioural
differences by culture (place of birth vs. residence). Environment
also seems to have quite a large impact: people in the lab and at home
spend more time trying to complete a task before giving up than people
at work, which of course impacts success/failure rates.

We are doing more research here.

James
http://blog.feralabs.com


12 Mar 2009 - 8:31am
Jared M. Spool
2003

On Mar 12, 2009, at 10:07 AM, Todd Zaki Warfel wrote:

>> The key question is: How do you know that people older than 65 will
>> behave differently than people younger than 65?
>
> My thoughts exactly. I don't see how you would know this until
> you've actually done some research and testing on it. If you know
> that your audience doesn't really have people over 65, then there's
> no reason to recruit them. If you know it does, then you should
> recruit them, even if it's a small number "somewhat" proportionate
> to the percentage that people 65 and over make up of your group.

If you're recruiting from communities that are part of your target
audience, then it would be difficult to recruit people outside your
target audience. If you're not recruiting from communities that are
part of your target audience, there's probably something terribly
wrong with your recruitment sourcing strategy.

> I say somewhat because, if you recruit 20 participants and people 65
> and over represent only 2% of your population, you'd recruit fewer
> than one participant. In a case like this, we'd typically recruit
> 2-3, so we'd have enough to see if there's a significant difference
> that would warrant additional research on that smaller group.

Again, if you're recruiting from communities that are part of your
target audience, it would be hard to dramatically over-represent a
group through pure random selection. (If you're picking out of a bag
of 1,000 marbles where 20 of them are blue, it's unlikely you'll pick
all 20 in the first round.)
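
A quick simulation of the marble analogy, with the numbers taken
straight from Jared's example:

    import random

    bag = ["blue"] * 20 + ["clear"] * 980  # 20 blue marbles out of 1,000
    draws = [random.sample(bag, 20).count("blue") for _ in range(100_000)]

    print(sum(draws) / len(draws))  # expected blues per draw: about 0.4
    print(max(draws))               # even the luckiest draw is nowhere near 20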

Leave it to chance, I say. Hopefully you're not doing only one round
of testing, so any problems that arise from recruitment skew in the
early rounds can be compensated for in later rounds.

Jared


12 Mar 2009 - 8:32am
Jared M. Spool
2003

On Mar 12, 2009, at 10:21 AM, James Page wrote:

> Surely factors such as "margin of error" and "statistical power" are
> important, or are they not?

They are not.

Jared

12 Mar 2009 - 8:33am
Dana Chisnell
2008

Ah, I meant with regard to age. If the sample is 8, say, in 99.9% of
cases, age won't matter. Just get a mix of participants who do and are
motivated to do what you're interested in observing.

If you're testing 30 or 50 or 100 participants, you might want to pay
attention to make sure you have participants from all the age ranges
you care about, but you shouldn't be selecting or screening on age as
long as they do and are motivated to do the same kinds of tasks with
your designs.

Most demographics don't matter in usability testing, most of the time.
(Of course, there are exceptions.) Why? Because the purpose of
usability testing is not to generalize preferences to a larger
audience but instead to identify problems with a design that cause
frustration and confusion. If one or two participants in your mix have
the issue, you want to fix that because you don't want *anyone* to
have it.

Dana


12 Mar 2009 - 8:35am
Todd Warfel
2003

On Mar 12, 2009, at 10:21 AM, James Page wrote:

> Mac users are different, why - I don't know.

We've found the same over the years and attributed it to the different
environment of the Mac OS compared to the Windows OS.

> And we get allot of behavioural differences by culture - (place of
> birth vs residence).

Absolutely. Culture has a significant impact on behavior.

Out of curiosity, how does your asynchronous model work w/
Webnographer? With the absence of a moderator, you lose the richness
of the conversation and the ability to probe. Instead, you're left
with a participant filling out a survey. And any good researcher knows
how limiting surveys are.

How do you work around that w/Webnographer? Great concept, I'm just
wondering about the limitations.

Cheers!

Todd Zaki Warfel

12 Mar 2009 - 9:00am
James Page
2008

@dana

> Just get a mix of participants who do and are motivated to do what
> you're interested in observing.

Agree.

> If one or two participants in your mix have the issue, you want to
> fix that because you don't want *anyone* to have it.

Totally agree that in an ideal world all issues should be fixed. With
Webnographer, though, some of our clients are so overwhelmed by a long
list of issues that what they want is prioritisation. Many times there
is a very long list of usability issues that have been waiting, in
some cases for years. Being able to say that X% of users experienced
an issue helps get the fix prioritised and shipped. Often there is an
argument between one camp and another about whether a usability issue
is real or not.

Prioritisation, it seems, is very important to Agile operations.

James
http://blog.feralabs.com


12 Mar 2009 - 9:08am
Dana Chisnell
2008

I agree absolutely that there comes a time to prioritize problems that
surface in user research.

In my experience, the teams that create the best experiences are not
necessarily concerned with the percentage of users that had the
problem (and I'd argue that in the small tests most of us run,
percentages are bogus).
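
One way to see why small-sample percentages are bogus is to put a
confidence interval around them. A sketch using the Wilson score
interval; the 2-of-8 figures below are invented for illustration:

    import math

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for an observed proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        spread = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - spread, center + spread

    # 2 of 8 participants hit a problem: nominally "25% of users", but
    # anything from about 7% to 59% is consistent with the data.
    print(wilson_interval(2, 8))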

Instead, they're asking, What do we want the user experience to be?
What are the constraints of the technology? What are the priorities of
the business? Which usability issues prevent us from reaching the
vision we have for this design?

Dana


12 Mar 2009 - 9:36am
James Page
2008

@todd

> How do you work around that w/Webnographer?

Try it and find out :)

> With the absence of a moderator, you lose the richness of the
> conversation and ability to probe.

Yes, you lose some ability to probe, but you gain by having a higher
number of participants. We advise our clients to always have
open-ended questions after each task. You would be surprised by the
richness of the feedback. If an error has occurred, you will normally
get feedback about it from several participants, so you end up with
rich feedback coming from multiple perspectives.

We also use other techniques for spotting user errors, as we have
found, both in the lab and remotely, that people don't always
verbalise an issue.

When we tested in the lab vs. Webnographer, with 8 participants in the
lab and 60 in the wild, both studies came back with the same number of
issues. And the formative answers from Webnographer were as rich as
those from the lab.

> I'm just wondering about the limitations.

The biggest challenge is people doing the test just for the reward,
but these participants can normally be spotted: they visit only one
page, and they show satisficing behaviour when answering questions,
which is easily detectable.
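
A sketch of that kind of data-quality check; the field names and
thresholds here are hypothetical, since James doesn't describe
Webnographer's actual filters:

    # Flag likely reward-seekers: they barely browse, and they
    # straight-line ("satisfice") their ratings.
    def looks_like_reward_seeker(p):
        visited_one_page = p["pages_visited"] <= 1
        straight_lined = len(p["ratings"]) > 3 and len(set(p["ratings"])) == 1
        return visited_one_page or straight_lined

    participants = [
        {"id": 1, "pages_visited": 14, "ratings": [4, 2, 5, 3, 4]},
        {"id": 2, "pages_visited": 1,  "ratings": [3, 3, 3, 3, 3]},
    ]
    print([p["id"] for p in participants if looks_like_reward_seeker(p)])  # [2]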

The advantage of a tool like Webnographer is that you can easily
prioritise issues and get the results back faster than with lab
methods.

> Any good researcher knows how limiting surveys are.

It depends on the design. You can have badly done qualitative studies
as well as poorly designed quantitative studies. At the moment the
tool anonymizes the data, so that may be a hindrance, but there is
nothing stopping you from interviewing participants after they have
taken the test.

James
http://blog.feralabs.com


12 Mar 2009 - 11:32am
Caroline Jarrett
2007

James Page:

> It depends on the design.
> You can have badly done qualitative studies,
> as well as poorly designed quantitative studies.

True, but it's so much *easier* to mess up on a survey.

Designing a good quant study is hard, and any reputable book on survey
design will tell you that you should start by running a qual study on the
same topic. They usually describe this stage as 'interviewing' or
'piloting', but a look under the covers reveals: voila, good old usability
testing.

Cheers
Caroline Jarrett

12 Mar 2009 - 11:45am
Todd Warfel
2003

On Mar 12, 2009, at 11:36 AM, James Page wrote:

> @todd
>
> How do you work around that w/Webnographer?

I didn't find a place on the website to try it out.

> Yes, you lose some ability to probe, but you gain by having a higher
> number of participants.

I've always been a believer in quality over quantity myself, but
realize the value in both approaches.

> We advise our clients to always have open-ended questions after each
> task. You would be surprised by the richness of the feedback.

In my experience, open-ended questions are less likely to get
answered. Additionally, you lose the ability to "see" things that you,
as a researcher, would be aware of but the participant would not.

Prime example. Cheskin (the research firm) was telling a story about
some work they were doing for a beauty care product. They were sitting
at the kitchen table having a discussion with a woman and asked her
what kind of "pampering" or luxury care she does for herself.

Woman responds, "None."

Cheskin looks down at her hands and asks, "What about that French
manicure you have?"

Woman, "Oh, that's not a luxury. That's a necessity."

These are insights that only a human encounter will capture.

> When we tested in the lab vs. Webnographer, with 8 participants in
> the lab and 60 in the wild, both studies came back with the same
> number of issues. And the formative answers from Webnographer were
> as rich as those from the lab.

Guess it depends on how much and what kinds of things your moderators
are recording.

I think it's another decent tool to have in your toolbox, but I'm a
bit skeptical of how valuable the data is. Better than nothing, but
I'd still prefer in person, or even watching remotely, to automated
remote.

Cheers!

Todd Zaki Warfel

12 Mar 2009 - 11:45am
Todd Warfel
2003

On Mar 12, 2009, at 1:32 PM, Caroline Jarrett wrote:

> True, but it's so much *easier* to mess up on a survey.

So true.

Cheers!

Todd Zaki Warfel

12 Mar 2009 - 1:09pm
James Page
2008

> I didn't find a place on the website to try it out.

The self-service module for Webnographer is coming soon.

> I've always been a believer in quality over quantity myself

We believe very strongly in both qualitative and quantitative. Just
like the designer on the team who uses Photoshop but still sketches
and paints.

James
http://blog.feralabs.com


12 Mar 2009 - 1:40pm
James Page
2008

@caroline

> > It depends on the design.
> > You can have badly done qualitative studies,
> > as well as poorly designed quantitative studies.
>
> True, but it's so much *easier* to mess up on a survey.

It depends on whether you create your own questions or use ones that
have been tested before. There is a lot of literature on what works.
All the standard surveys have been tested; some work better than
others.

On the other hand, interviewing well takes a lot of skill, and the
correct methods.

With both methods, a bad question is a bad question. It is very easy
to prime people. Would you not say it is more difficult to make a
mistake with a standard survey question that has been pre-tested many
times than it is for a novice interviewing somebody?

Margaret Mead (who some consider to be the mother of ethnography)
managed to spend nine months in Samoa and, as the anthropologist Derek
Freeman pointed out, got it very wrong.

In regard to asking people, there is the issue that if something is
non-verbalised, then verbalising it will change the decision making
(see Herb Simon, and more recently Ariely et al).

I am all for interviewing and observation, but it is hard. My dad, an
anthropologist, and some of the anthropologists I worked with in
Africa were very, very good at it, but most people are not.

As I have said before, we employ a mix of both qualitative and
quantitative methods in discovering and fixing usability problems;
there are inherent risks to both. We also test one against the other.

James
http://blog.feralabs.com


12 Mar 2009 - 2:09pm
Caroline Jarrett
2007

James Page said:
> It depends on the design.
> You can have badly done qualitative studies,
> as well as poorly designed quantitative studies.

I replied:
> True, but it's so much *easier* to mess up on a survey.
 
James replied:
> It depends on whether you create your own questions
> or use ones that have been tested before.

Nope, it doesn't. It depends on what those questions mean to your users at
the time that you ask them, and how relevant they are to the topic that you
want to research.

Just one example: my brother wanted to use a survey instrument for his
master's research that had supposedly been well-validated for the same topic
and the same users. Apparently. Then we went through it for his actual topic
(which was close, but not precisely the same) and for his actual users (who
were close, but not precisely the same). About 30% of it survived.

> There is a lot of literature on what works.

But very few people read it. And those that do become highly familiar
with the concept that you have to test your survey (gasp) by, yes,
guess what, as I already said: usability testing it.

> All the standard surveys have been tested,
> some work better than others.

Nope. For example, even the most commonly-used survey in the usability
world, SUS, is rarely used exactly in its original format. And it's
well-known that one word in it, "cumbersome", routinely causes difficulty
for users. If you haven't tested your exact survey with your actual users,
you're toast. And if you're doing that, you may as well do some usability
testing at the same time.

> On the other hand, interviewing well takes a lot of skill, and the
> correct methods.

Not really. I've had huge success in teaching people 'hey you' usability
testing (see the extremely short chapter in my book, www.formsthatwork.com,
if you're not sure what this means). Typically, I get people doing good
beginner-level usability testing in about half an hour, and a second half
hour is enough to get them started as reflective practitioners who will
improve. It's genuinely quite easy to do adequately, and then to improve.

> With both methods a bad question
> is a bad question.
> It is very easy to prime people.
> Would you not say it is more difficult
> to make a mistake with a pre-tested standard
> survey question, one that has been tested
> many times before, than with a novice interviewing somebody?

When people are face to face, the normal rules of conversation mean that
mistakes get rapidly repaired and clarified. This can't happen in a survey.
It is *definitely* much, much easier to screw up a survey question than a
face-to-face interview. Even easier for a novice, who is likely to have no
understanding that what was a good question last week in *that* survey is
rubbish in this one.

<snip - background in anthropology>

I'm not talking about anthropology, I'm talking about the normal everyday
work of the interaction designer.

> As I have said before we employ
> a mix of both qualitative and
> quantitative methods in discovering
> and fixing usability problems

A mix is good. I do that too. I just know that quant methods can be a lot
harder.

Cheers
Caroline

12 Mar 2009 - 2:23pm
Todd Warfel
2003

On Mar 12, 2009, at 3:09 PM, James Page wrote:

> We believe very strongly in both qualitative and quantitative.

As do we. For certain measures, we'll want quantitative and for others
qualitative, but I'll take quality over quantity just about any day.

Cheers!

Todd Zaki Warfel
Principal Design Researcher
Messagefirst | Designing Information. Beautifully.
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at messagefirst.com
AIM: twarfel at mac.com
Blog: http://toddwarfel.com
Twitter: zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

13 Mar 2009 - 4:42am
James Page
2008

@dana
>
> I'd argue that in the small tests that most of us do percentages are
> bogus.
>
Totally agree. Most of our tests use 30-plus participants, but in most
small studies percentages are meaningless.
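
To see why percentages from small tests mislead, here is a minimal sketch,
assuming a simple binary task-success measure: the 95% Wilson score
interval around an observed success rate, in Python. The function name and
the numbers are purely illustrative.

import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(
        p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# 3 of 5 participants succeed: the "60%" could really be ~23%-88%.
print(wilson_interval(3, 5))
# 18 of 30 succeed: still "60%", but the interval narrows to ~42%-75%.
print(wilson_interval(18, 30))

With five participants the interval is so wide that quoting a percentage
says almost nothing; at thirty it only starts to tighten.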

You said

> Instead, they're [firms] asking, What do we want the user experience to be?
> What are the constraints of the technology? What are the priorities of the
> business? Which usability issues prevent us from reaching the vision we have
> for this design?

Our research shows that most firms (and I am not talking about the
exceptional ones) know they have issues. But they are either overwhelmed by
the long list, or don't believe the issues matter enough compared with
adding an extra feature or sorting out another problem. In some cases there
is conflict within the team.

@Caroline,
I said:-

> On the other hand interviewing well takes a lot of skill, and the correct
> methods.

You said:-

> Not really. I've had huge success in teaching people 'hey you' usability
> testing (see the extremely short chapter in my book, www.formsthatwork.com,
> if you're not sure what this means). Typically, I get people doing good
> beginner-level usability testing in about half an hour, and a second half
> hour is enough to get them started as reflective practitioners who will
> improve.

Study after study has found massive variations in the usability issues found
by different evaluators. If you were right, usability studies would be
replicable, which current research shows they are not. On methods, see Imre
Lakatos's criticism of Milton Friedman.

There are many books on the market on how to do Usability Testing, but has
anybody tested the books?

We have done quite a bit of our own field research on people doing usability
studies. (We don't believe in just one technique, but in the right tool for
the right purpose.) The evaluators we observed ranged from somebody just
picking up a textbook, to people who have done courses, to people with
degrees in usability-related subjects, and standards are pretty much all
over the place.

We have observed some pretty strange practices, such as participants being
tested behind a glass wall (not a one-way mirror), with about 6 people
looking in. We have seen test participants being given a very detailed
script (i.e. go to the page, go to the X form item, enter Y, then go to the
next item, enter Z). Another time five participants were tested
simultaneously in one room, with the only evaluator present running behind
the participants. Another time a video of test sessions was posted to
YouTube without the participants' consent.

Those were some of the strangest examples, but in most cases we have
observed, we have seen leading questions being asked, or priming. The good
evaluators are the exception. Clients of usability firms seem to echo these
findings: "X Agency has a very good evaluator, but don't use anybody else
there", or "we had Y, who was terrible, then we found Z, who was brilliant".

I said:-
> All the standard surveys have been tested, some work better than others.

You said

> Nope.

See Tullis et al.; there is a good summary here:
http://www.upassoc.org/usability_resources/conference/2004/UPA-2004-TullisStetson.pdf

You said

> <snip - background in anthropology>
>

> I'm not talking about anthropology, I'm talking about the normal everyday
>
work of the interaction designer.

Research methods are research methods, and unless they are carried out with
some diligence will lead to the wrong conclusions.

James
http://blog.feralabs.com


13 Mar 2009 - 7:40am
Jared M. Spool
2003

On Mar 13, 2009, at 6:42 AM, James Page wrote:

> @dana
>>
>> I'd argue that in the small tests that most of us do percentages are
>> bogus.
>>
> Totally agree. Most of our tests use 30-plus participants, but in most
> small studies percentages are meaningless.

30+? Wow. Sounds like huge amounts of wasted resources if you need
those numbers on a regular basis. You might want to rethink how you do
your own work there. Looks like many opportunities to be much more
efficient.

Just sayin'

And as for:

> We have observed some pretty strange practices, such as participants
> being tested behind a glass wall (not a one-way mirror), with about 6
> people looking in. We have seen test participants being given a very
> detailed script (i.e. go to the page, go to the X form item, enter Y,
> then go to the next item, enter Z). Another time five participants were
> tested simultaneously in one room, with the only evaluator present
> running behind the participants. Another time a video of test sessions
> was posted to YouTube without the participants' consent.

Sounds like proof for something that I say so often these days that my
staff has named it:

Spool's First Law of Competency: It takes absolutely zero skills to do
a crappy job at anything you put your mind to.

It just sounds like you hang around with incompetent people. I wouldn't
use the world of incompetence to discard the method. That would be
throwing the baby out with the bath water.

Jared

14 Mar 2009 - 5:33am
Caroline Jarrett
2007

From James Page:
> Study after study has found massive variations
> in the usability issues found by different evaluators.
> If you were right, usability studies would be replicable,
> which current research shows they are not.

So what? I've never claimed that my beginner usability testers do brilliant
work, or that their work is replicable. That's not what it's for. I'm
getting them started on usability testing to help them get ideas about how
to design better for their actual users.

> There are many books on the market on how to do Usability Testing, but
> has anybody tested the books?

Yes.

> We don't believe in just one technique, but in the right tool for the
> right purpose.

I never said that I only believe in one technique. The discussion was about
which technique was more difficult.

<snip - some bad practices>

I never said that any technique was immune from being done badly. I said
that it was easy to do surveys badly, and easy to do usability testing
competently.

James said:
> All the standard surveys have been tested, some work better than others.

I replied:
> Nope.

James said:
> See Tullis et al.; there is a good summary here:
> http://www.upassoc.org/usability_resources/conference/2004/UPA-2004-TullisStetson.pdf

And as I said, not many people actually read the published papers in survey
methodology. If you read the original paper from which this presentation was
extracted, you will see that these researchers did not test the original
SUS: they tested an amended version of it. Exactly the point that I made in
my previous post.

That's all folks. I see that James and I won't agree on this point and it's
probably become tedious for other people, so I'm going to shut up now.

Best
Caroline Jarrett
