(I hope this isn't a duplicate message, but I didn't see the first one come through.)
I'm fleshing out a structural definition of "mental model," and I've
posted what I have so far:
If any of you are particularly interested in the subject, I'd love to
hear some feedback on whether you think I'm dead-on, dead-wrong,
crazy, or have muddled things when I was trying to make them clear.
Thinking & Making: IA, UX, and IxD
austin at grafofini.com
I thought the user's "mental model" was the model the user builds while
using the interface. I mean, the user builds a model that describes how
the product works from their experience with the product. It is opposed to
the designer's model (the one you defined as the designer's model).
But perhaps my terminology is wrong....
Austin Govella wrote:
>[Please voluntarily trim replies to include only relevant quoted material.]
>I'm fleshing out a structural definition of "mental model," and I've
>posted what I have so far:
I think I disagree with your idea that a mental model is a goal,
message, and expectation.
Mental models are probably constructed by testing goals and
expectations against the reality of the thing being used, but goals
and expectations do not form a mental model.
Mental models have already been defined at a high level by Don
Norman. See The Design of Everyday Things, pp. 189-90. Your talk of
goals, messages, and expectations seems to be getting at what he calls
the "gulf of execution".
Am I missing something in your definition?
(Sorry, I don't have time to type the relevant bits of his book into
this message right now.)
g.torenvliet at gmail.com
I agree with Gerard that the mental model of an interface
must be constructed by the user via experience. However, there
is often an "a priori" mental model or expectation that users have
for behaviors of a product prior to experiencing them (e.g., one
expects a digital camera to behave more or less like a film camera),
which is perhaps closer to what Austin is trying to define (that's
how I read it, anyway). These models are built up from past
experiences of performing similar actions on similar objects with
similar goals in mind.
So, to modify my previous modification of Austin's equation,
where E() means "experience of":
Goal + (E(Action[1..n]) + E(Object[1..n]) + E(Result[1..n])) = a priori mental model
This describes an "a priori" mental model or expectation of behavior
which a user comes to an interface with, based on prior experiences
with similar actions/objects.
The mental model that the user then constructs of a new interface could
be described this way, where I() means "experience of the interface":
Goal[1..n] + (E(Action[1..n]) - I(Action[1..n])) + (E(Object[1..n]) -
I(Object[1..n])) + (E(Result[1..n]) - I(Result[1..n])) = interface mental model
In other words, the interface mental model is constructed from the difference
between the experience of prior actions and objects engaged to achieve
particular goals and those represented in the interface to achieve the same
goals. This difference also (I think) describes Norman's "gulf of execution".
Designers should try to minimize the difference between the user's experience
of "a priori" actions and objects and those analogous representations in
the interface, while also minimizing overall work (cognitive,
etc.) - e.g., you don't need to manually advance film on a digital camera.
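One hypothetical way to read the subtraction in the equation above is as a set difference between expected and actually-offered actions/objects -- a rough, invented stand-in for the gulf of execution, not Norman's own formulation:

```python
# Expected behavior, from prior (film camera) experience:
expected_actions = {"press shutter", "advance film", "load film"}
expected_objects = {"viewfinder", "shutter button", "film roll"}

# Behavior actually offered by the new (digital camera) interface:
interface_actions = {"press shutter", "review photo", "delete photo"}
interface_objects = {"viewfinder", "shutter button", "LCD screen"}

# E(...) - I(...): expectations the interface does not meet --
# one crude reading of the "gulf" the equation describes.
gulf_actions = expected_actions - interface_actions
gulf_objects = expected_objects - interface_objects

print(sorted(gulf_actions))  # ['advance film', 'load film']
print(sorted(gulf_objects))  # ['film roll']
```

Minimizing the design's "difference," in this caricature, means shrinking those leftover sets.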
Or... maybe I've just had too much coffee this afternoon! :^)
From: discuss-interactiondesigners.com-bounces at lists.interactiondesigners.com
[mailto:discuss-interactiondesigners.com-bounces at lists.interactiondesigners.com]
On Behalf Of Gerard Torenvliet
Sent: Thursday, February 24, 2005 9:27 AM
To: Austin Govella
Cc: discuss at interactiondesigners.com
Subject: Re: [ID Discuss] Defining the mental model
[Please voluntarily trim replies to include only relevant quoted material.]
Mental models are probably constructed by testing goals and expectations
against the reality of the thing being used, but goals and expectations
do not form a mental model.
Finally got to your link about User Mental Model ... and sorta following the
thread. So it sounds to me like you have come up w/ a formula for organizing
data around user research. To that end, that sounds great.
I do agree with some who have said that while goal and expectation sound
about right, there is something not sitting well with me about "message".
I do like the idea that there is a conversation going on, but that metaphor
seems to be overplayed in this regard. I think what is missing to me is
that the user's mental model will be significantly changed by what is
presented in front of them. You allude to this as "context" but it sounds
like, and maybe I'm not getting it, you are saying that context is separate
from the mental model itself. To me it defines it. AND there are two aspects
of the mental model (if we are to think about this in terms of presenting a
solution):
1. There are those aspects of context that exist w/o the solution and affect
the goal itself, and most likely the manner of action or message.
2. There are those aspects of context that exist within the solution that
affect the manner of action or message, and definitely re-define it.
Basically, the formula you are creating can be severely manipulated by
general context, but also by tactics of captology (reading BJ Fogg in prep
for the IA Summit right now) which create contexts of persuasion. Even a
simple affordance is a type of persuasive context that will affect the
message and expectations (may even affect the goal).
My last question is: how do you use this formula to apply to your designs?
David Heller <dave at ixdg.org> wrote:
> I do agree with some who have said that while goal and expectation sound
> about right, there is something not sitting well with me about "message".
A medium agnostic term like "action" would probably be more useful.
> I think what is missing to me is
> that the user's mental model will be significantly changed by what is
> presented in front of them.
Experience is never one distinct experience, but always more like a
river, a continuous flow. Any model of experience can only capture a
single slice in time, an instant.
Of that instant, we can compare what we know about the experience with
the slice we've carved out. We shouldn't be afraid of our inability to
acquire all experience in one tremendous bucket where we can sort,
study, and evaluate. The sciences have this same problem, and they do fine.
To that end, a user's mental model is *constantly* changing. What does
not change is the *structure* of the mental model.
> You allude to this as "context" but it sounds
> like, and maybe I'm not getting it, you are saying that context is separate
> from the mental model itself.
Yes. I like how Robert added the context into the equation. It
definitely drives the mental model. One's context affects the range of
goals one would have, as well as the range of possible actions and
their likely result. Dan Brown (et al.) call this context the
"personal information space."
> Basically, the formula you are creating can be severely manipulated
"Severely" makes this sound bad, but I'm not sure what you're trying to
say. Could you offer an example? Do you mean bad designers could
design bad items for bad ends?
> but also by tactics of captology ... which create contexts of persuasion. Even a
> simple affordance is a type of persuasive context that will affect the
> message and expectations (may even affect the goal).
I'd especially agree here: design persuades. Design is a context of persuasion.
> how do you use this formula to apply to your
First, it's not meant to apply to designs. It's only meant to convert
a fuzzy idea, like "mental models," to something more specific.
But I do apply it: First, I determine the client's goals, message, and
expected results. Secondly, I determine the user's goals, message, and
expected results. Then I design a solution that overlaps the two
mental models as much as possible. But this isn't unique to how I
define mental models. We all do this in one way or another.
Successful design has the user do what the client wants in the way the
user already wanted to do it.
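That overlap could be caricatured as a set intersection (toy illustration only; the sets here are invented, not a real method):

```python
# Actions the client wants the user to take:
client_wants = {"create account", "subscribe to newsletter", "buy product"}

# Actions the user already expects and wants to perform:
user_wants = {"buy product", "compare prices", "read reviews"}

# Design around the overlap: what the client wants, done in the
# way the user already wanted to do it.
overlap = client_wants & user_wants
print(sorted(overlap))  # ['buy product']
```

The design work, of course, is in growing that intersection, not just computing it.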
Thinking & Making: IA, UX, and IxD
austin at grafofini.com
> ...a user's mental model is *constantly* changing. What does not
> change is the *structure* of the mental model.
This is an interesting comment. I think that mental models must
be inherently structural in nature (describing objects and
relationships). So if they are changing all the time (which I agree
with), but the structure is fixed, then what is changing?
I've just got round to reading this; it's been an interesting thread.
> Robert said:
> Austin said:
>> ...a user's mental model is *constantly* changing. What does not
>> change is the *structure* of the mental model.
> This is an interesting comment. I think that mental models must be
> inherently structural in nature (describing objects and
> relationships). So if they are changing all the time (which I agree
> with), but the structure is fixed, then what is changing?
Mental models have been quantified as both structural (i.e. schema
objects and relations) and functional (i.e. runnable productions - much
like a program). Whether a model is one or the other is not that
important -- it's been argued that structural and functional models can
mimic each other. It seems likely that the model is used in a
structural or functional way as required.
Johnson-Laird suggested that changes to mental models occur through
the following procedure:
1) If the new information fits an existing model, then apply that model.
2) If the new information is like an existing model, then extend that model.
3) If the new information does not fit any existing model, then create a
new model.
Each successive step requires more time and effort -- when I travel
outside the UK I have great difficulty looking in the right direction
when crossing the road! -- it's easier to use the existing (wrong) model.
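The three steps could be sketched as an update rule, with the road-crossing example plugged in (a toy Python sketch; the "fits" and "is like" tests here are crude placeholders for much richer cognitive processes):

```python
def update_models(models: list[set[str]], new_info: set[str]) -> list[set[str]]:
    """Johnson-Laird-style update: apply, extend, or create a model.

    A 'model' is crudely a set of facts. 'Fits' = already covered by a
    model; 'is like' = overlaps one. Each branch costs more than the last.
    """
    for model in models:
        if new_info <= model:        # 1) fits an existing model: just apply it
            return models
    for model in models:
        if model & new_info:         # 2) is like an existing model: extend it
            model |= new_info
            return models
    models.append(set(new_info))     # 3) fits nothing: create a new model
    return models

# A UK driver's road model meets right-hand traffic abroad:
models = [{"traffic keeps left", "look right first"}]
update_models(models, {"traffic keeps right"})  # no overlap -> new model
print(len(models))  # 2
```

Until that new model is built (step 3, the most effort), the old one keeps firing -- hence looking the wrong way at the kerb.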
You can see how this change procedure would fit with both Austin's and
Gerard's descriptions.
Norman didn't go into detail in POET, but one important thing that he
has stated is that mental models are often incomplete and incorrect. We
basically make do with bad but workable models.
For further reading there's quite a bit of work on mental models.
Strangely, over time the definition seems to get vaguer.
It's a bit old, but Gentner & Stevens' "Mental Models" has some good
chapters. You can read a good part of Norman's foreword with the 'look
inside' feature on the Amazon site.
As regards new mental model research, most of it seems to be funded by
the military (US Navy in particular) but I haven't had the time to look
into it. If anyone knows of a recent roundup of mental model theory I'd
love to hear about it.