Evaluating off-the-shelf products

16 Jun 2009 - 8:21am
4 replies
600 reads
Craig Melbourne
2009

Has anybody on the list had any experience of evaluating off-the-shelf
products for usability and quality of user experience?

I'm working with a company whose philosophy is, where possible, to
buy a product rather than build one. The evaluation of such products
is based on support for business processes and conformance to the
technology infrastructure (and, of course, cost). They now see the
need to include UX in the evaluation process but are not sure how.

I've done a bit of work with them on techniques for gathering user
research and how that data informs an iterative design process.
However, 'user centered procurement' is not something I have real
experience with.

I have recommended expert reviews and usability exercises during the
evaluation phase, but this isn't necessarily viable at the early
stages, when there is a large number of products.

Is there a standard set of questions for vendors?

Is anyone aware of any good books/papers on a methodology around this
subject?

Comments

16 Jun 2009 - 12:18pm
Lorenzo Chavez
2009

I have spent much of my career working with companies that want to incorporate, or have incorporated, commercial off-the-shelf (COTS) products into their solution. What is crucial is to make sure that the development team can either modify the COTS code or apply a presentation layer on top of the product.

No amount of usability work can change something that is hard-coded into a product that cannot be modified. In some instances the company developing the COTS product will work with a customer on changes, but those changes are subject to that company's business model and timeframe, so they may not be implemented in their entirety.

The UX team can run its own heuristic evaluation of the COTS product and perform a gap analysis to determine how closely it matches the company's UX standards.

No COTS product will be perfect, but there are acceptable deviations. The deviations need to be documented and evaluated to determine the actual cost of using a particular COTS product. It all feeds into the product selection.
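One way to make the documented deviations comparable across candidates is to weight each finding by severity and by whether it can actually be fixed (in the code or via a presentation layer). A minimal sketch, where the product names, standards, severity weights, and the penalty for unfixable findings are all illustrative assumptions, not part of the original post:

```python
# Hypothetical sketch: heuristic-evaluation findings recorded as "deviations"
# from internal UX standards, then summarized per candidate product.
from collections import defaultdict

SEVERITY_WEIGHT = {"minor": 1, "major": 3, "blocker": 10}

deviations = [
    # (product, UX standard violated, severity, fixable via vendor/presentation layer?)
    ("ProductA", "inline validation on forms", "major", True),
    ("ProductA", "keyboard-only navigation", "blocker", False),
    ("ProductB", "consistent terminology", "minor", True),
    ("ProductB", "undo for destructive actions", "major", True),
]

def gap_score(rows):
    """Weighted deviation score per product; unfixable findings are
    penalized heavily because no amount of UX work can remove them."""
    scores = defaultdict(int)
    for product, _standard, severity, fixable in rows:
        weight = SEVERITY_WEIGHT[severity]
        scores[product] += weight * (1 if fixable else 5)
    return dict(scores)

print(gap_score(deviations))
```

A lower score means a smaller gap from the UX standards; the score is only one input alongside business-process fit, infrastructure fit, and cost.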

Hope this helps

17 Jun 2009 - 6:05pm
Anonymous

In the past we have developed a matrix of "needs" and compared those
against each product, like a comparative feature list, then done
informal usability testing on the demos.

Whilst that's useful for comparing features, cost, etc., it doesn't
necessarily reveal fundamental assumptions within the product design
about workflow, or other important limitations built into the design
(i.e. the product assumes that you have to do x before y, but it
turns out your company never knows x until the last minute, or
something similar).

It does help you narrow down to a smaller list though.
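The needs matrix described above can be sketched as a simple weighted scoring table. The needs, weights, and 0–3 ratings below are made up for illustration; the point is the shape of the comparison, not the numbers:

```python
# Hypothetical sketch of a "needs matrix": score each candidate product
# against weighted needs and rank them to narrow the shortlist.
needs = {"bulk import": 3, "audit trail": 2, "custom workflow": 3, "SSO": 1}

ratings = {  # 0 = unsupported .. 3 = fully supported, judged from vendor demos
    "VendorX": {"bulk import": 3, "audit trail": 1, "custom workflow": 0, "SSO": 3},
    "VendorY": {"bulk import": 2, "audit trail": 2, "custom workflow": 2, "SSO": 1},
}

def rank(needs, ratings):
    """Weighted total per vendor, best first."""
    totals = {
        vendor: sum(needs[n] * score for n, score in rs.items())
        for vendor, rs in ratings.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for vendor, total in rank(needs, ratings):
    print(vendor, total)
```

As noted above, a flat total like this narrows the field but can hide workflow assumptions baked into a product, which is why the scenario-based testing described next is still needed.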

A possible way to get at some of the other aspects is to treat the
products as you might a prototype, and test them against personas and
scenarios developed to represent the needs of your team and the
characteristics of how they work. (This suggestion comes from
chatting about this topic with my supervisor, Dr Toni Robertson,
whose past research covered "shopping" for COTS at one time, though
the focus was different.)

By developing personas and scenarios you gain a clearer idea of the
work the software has to do and a better understanding of the way
people work; you can then evaluate the different products using those
resources in decision making. Apologies if this is stating the
obvious, but I thought it worth sharing nonetheless.

Cheers
Penny

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=42857


18 Jun 2009 - 7:12am
Jen Hocko
2006

I presented a methodology for this at the UPA conference last week.
I'd be happy to share my slides and information. If anyone is
interested, just send me an email.

