We use RUP to manage our development lifecycle, which means our requirements are written in the form of "use cases". These are short, user-centric functional deliverables.
Does anybody have any experience of using RUP documentation, and specifically use cases, as a structure for usability testing? It seems like it would be a good fit and a tidy way to organise things.
Yes, I have used them several times for paper prototype testing. Because use cases describe the possible paths a person may take to achieve a goal, they are a very effective baseline for determining: 1) whether the person actually takes those steps (and can find their own way through them); 2) whether the steps to achieve the goal are unclear; 3) whether error scenarios have not been fleshed out. What we did was create a set-up for a user, establish the goal we wanted them to achieve, and have them set out to do it. The person who wrote the use cases (in both cases, me) sat in on all the usability tests with an IM connection to the tester, helping to answer questions the tester didn't know the answer to, or advising when the person was skewing so far off track that it was better to mark the test a failure and gather insights from them about how it might be improved, rather than continue to watch them struggle.
I think it is very important to be an observer and supporter of the testing process, but to have an objective third party administer the test. In our case, with paper, we had a test "guide" who moved the person through the tests and answered questions only when appropriate, or turned questions back to the person ("What do you think should happen if you click that?"). We also had a person act as the "computer", placing paper mockups of the screens in front of the person as they moved through the scenario. Using the use cases, we were able to prepare all the "pages" as they should appear at each step in the scenario (using Post-it notes for dynamic information the person provided, e.g. their name). So reading over the use cases and being very familiar with them was important for the "computer" role as well.
Another advantage of having the use cases as the basis for the testing was being able to determine which areas were the riskiest, or which ones the interaction designer was struggling with, and make them priorities to test.
I have an Excel template that I use for documenting use cases that is very helpful for testing, because each use case is on its own tab, with hyperlinks so that it is easy to move between them.
The use cases also lead very well into quality assurance testing, especially if the QA lead was not involved in the early design work on the project. They give the QA team a very quick view into the way the system is designed to work. Use cases in combination with personas, which help QA testers think of different scenarios to test, are very helpful at that stage as well.
I am happy to chat with you in person about my experience with them, and to share my Excel template with you if you are interested. You can email me at LindaFrancis at Telus dot net.
One other thought: I find that use cases work well for goal-oriented interactions, and not so well for usability testing that is more content-based. I am in favour of more open-ended observation when testing websites that are more informational than interactive.
What we did was illustrate the functional analysis, including the use cases, with wireframe designs. Beyond that, it is easy to show how your use cases work in a UI prototype, allowing your customer to "experience" the written documentation. In our experience with customers, this has been a very good way of validating the project's analysis phase.
A common problem with use cases is that they are not user-centered design instruments. Instead, they are developer-oriented specifications, often written by people outside the UX practice. In those cases, the use cases themselves aren't good for creating usability tests: too often they are too discrete and too functionality-oriented.
In your case, though, they might be written at a high enough level to provide scenarios of use. In that situation, they could prove useful.
Thanks for the great feedback guys.
We are developing expert software, so it is all goal-oriented. The BA team are pretty good at writing properly abstracted use cases: requirements rather than solutions. I hope that hanging this kind of function on the use cases will help to reinforce that. We already use the use cases as the basis for QA, but I hadn't thought of how personas could help create test scenarios.