Behaviour undone -- The fatal inversion in IxDs definition (was RE: [ID Discuss] PID: Personal Interface Definitions)

13 Sep 2004 - 11:50am
Robert Reimann

Nick is saying (I think) that system behaviors and models should reflect
human behaviors and mental models, rather than vice versa, a sentiment I
agree with entirely (as I imagine many people on the list do).

As one of the proponents of casting the field of IxD as
centering on the design of system behaviors, I want to be clear about
this: to adequately design system behaviors, the human behaviors and goals
that the system is attempting to facilitate must be deeply understood
and addressed in the design.

I must also disagree with Nick's conclusion that "behaviors are for
people, ...not software". As an analogy, the use of the word "behavior"
is quite powerful, and the analogy allows us to move in the direction
Nick desires. Humans expect and require humane behaviors, whether
received from humans or from non-human sources. Interaction designers
need to remember that human behavior comes first, and must dictate
at some level the behaviors of the tools, artifacts, and environments
that humans desire to interact with. The trick is this: to create
truly desired/desirable interactions, a significant recasting of the
problem from the top down is often required. This remains the greatest
challenge for interaction designers, and represents the greatest reward
when the design is successful.

Robert.

-----Original Message-----
From: discuss-interactiondesigners.com-bounces at lists.interactiondesigners.com
[mailto:discuss-interactiondesigners.com-bounces at lists.interactiondesigners.com]
On Behalf Of Andrei Herasimchuk
Sent: Monday, September 13, 2004 2:06 AM
To: 'Interaction Designers'
Subject: Re: Behaviour undone -- The fatal inversion in IxDs definition (was
RE: [ID Discuss] PID: Personal Interface Definitions)

[Please voluntarily trim replies to include only relevant quoted material.]

On Sep 12, 2004, at 9:39 PM, Nick Ragouzis wrote:

> And once we have that unique bit, that *others* recognize
> as valuable and interesting, then the road forward (and
> an appropriate defining manifesto) will be appropriately clearer.

Not to sound snide or sarcastic... but... Mind restating that email in
language that mere mortals can understand? Obviously, I lack the proper
smarts. I have no freaking clue what you just said.

Andrei

Comments

13 Sep 2004 - 1:19pm
Robert Reimann

Dave Heller writes:

> I think where Robert would also agree is that this compliment [complement]
> can only come from understanding the context of use around motivations,
> goals and task flows.

I think I would too. :^) Yes, "complement" is a better word than "reflect"
for what I was trying to get at.

> Later on, Robert suggests that these should be "humane". Also true ... But
> what does "humane" mean. The word itself implies that there are a set of
> rights, as the word "humane" implies the opposite is true if it is not,
> which is "inhumane" or oppressive. It would be interesting to put together
> a "bill of rights" of IxD.

Another word for "humane" is "considerate". Chapter 14 of _About Face 2.0_
outlines some high-level principles for making software UIs considerate.

Robert.

-----Original Message-----
From: discuss-interactiondesigners.com-bounces at lists.interactiondesigners.com
[mailto:discuss-interactiondesigners.com-bounces at lists.interactiondesigners.com]
On Behalf Of David Heller
Sent: Monday, September 13, 2004 1:02 PM
To: 'Interaction Designers'
Subject: RE: Behaviour undone -- The fatal inversion in IxDs definition (was
RE: [ID Discuss] PID: Personal Interface Definitions)


> Nick is saying (I think) that system behaviors and models
> should reflect human behaviors and mental models, rather than
> vice versa, a sentiment I agree with entirely (as I imagine
> many people on the list do).

"reflect" is an interesting word choice. It implies a "mirror" which implies
that if a human behavior is to do X, then the system/product behavior should
be X.

I would maybe edit that (maybe this is a bit of word smithing) and switch
the word "reflect" for the word "compliment" ... Both the system and the
humans have behaviors and an ideal system is one where the system behaviors
compliment human ones.

I think where Robert would also agree is that this compliment can only come
from understanding the context of use around motivations, goals and task
flows.

The other great reason that I like "compliment" is b/c it gives room for
innovation in a way where "reflect" doesn't. As current human behavior may
change in the presence of good innovative complimentary system behaviors. So
our user research needs to be deeper than understanding current mental
models and behaviors, and more to the point of understanding the motivations
and goals of the behaviors at a higher level so that we can adjust current
behaviors through innovations where possible. Of course there is a balance
here b/c behaviors won't change easily and we also need to reflect the
roadblocks to change so we can compliment where innovation is not allowed
(at least not in such large jumps).

Later on, Robert suggests that these should be "humane". Also true ... But
what does "humane" mean. The word itself implies that there are a set of
rights, as the word "humane" implies the opposite is true if it is not,
which is "inhumane" or oppressive. It would be interesting to put together a
"bill of rights" of IxD.

-- dave

_______________________________________________
Interaction Design Discussion List
discuss at ixdg.org
--
to change your options (unsubscribe or set digest): http://discuss.ixdg.org/
--
Questions: lists at ixdg.org
--
Announcement Online List (discussion list members get announcements already)
http://subscribe-announce.ixdg.org/
--
http://ixdg.org/

15 Sep 2004 - 12:50pm
Robert Reimann

> It's also wrong, IMO, to think of the
> anthropomorphizing phenomenon as a -direct-
> design index and license to/in the concrete systems
> domain. This phenomenon (it is only that) moves
> us in the other direction, to its root on the
> human side of the interface.

If I understand you, I think you're saying that conceptualizing
system-generated "behavior" will inevitably lead you back to the
human behavior. That is actually (part of) the point of the
exercise. My experience indicates that the greatest interaction
problems with digital systems arise from not properly considering
the behavior (in context) of the humans using it. This analogy
of behavior forces the designer to consider the human behavior
as part of a man-machine dialogue.

> once you've focused on behavior as an attribute
> uniquely on the human side of the interface,
> only then can you integrate the result and draw
> conclusions about the license for the subject
> systems.

I'm not sure why you believe this is the case.
You may be right; I just don't see any argument for it
in your text.

> (IMO the use of "complementary" here, in place
> of "subject" is probably not helpful. It is
> easy to fall into, and one can observe the
> resulting human-systems pairing as such, but at
> design time this probably contributes as much
> to incorrect concretizing as does the behavior
> metaphor itself. These systems are not aligned
> in the way it suggests ... because in properly
> designed systems there are vast non-complementary
> aspects when considering any one type of
> human-systems pairing. One can go further in
> locating this distinction, even to specific
> transactions. Sure, there's an exchange, but
> it's amazing how non-complementary is the
> mapping, and how potent is the chasm.)

Granted that the pairing can't in most cases
be perfect. However, I maintain that using this as
a starting point, and with the understanding that
some domains/contexts/constraints will not permit
as close a pairing as others, I still believe that
the behavior analogy is a powerful tool in user-centered
design for the reasons I've already mentioned.

> In fact, I rather think it a signal of a potential
> problem when a solution to something presented as
> an interaction design issue can 'only' be solved
> by dramatic scope expansion.

I never said "only". As we know, when dealing with
qualitative issues, solution space isn't binary: there
are poor solutions, good solutions, and better solutions.
My assertion is that *when* there is an opportunity to
reexamine the larger context of a problem, there is a
better chance of arriving at a satisfactory solution.

Robert.

-----Original Message-----
From: Nick Ragouzis [mailto:nickr at radicalmode.com]
Sent: Wednesday, September 15, 2004 12:13 PM
To: 'Interaction Designers'
Cc: Reimann, Robert
Subject: RE: Behaviour undone -- The fatal inversion in IxDs definition (was
RE: [ID Discuss] PID: Personal Interface Definitions)

[ Apologies to the list. I'm mostly unavailable
until mid-week next week. Normally I wouldn't have
put this out there with such prospects. But facing
that perfect conjunction I couldn't resist.]

My first thoughts in response, though, are this:

The magnitude of the error in invoking the
"behavior" metaphor, on both sides of this
system is so large, and so fundamental, that
it is merely round-off difference to say that
chocolate cake has behavior. (And I choose
that metaphor carefully.)

Using behavior as analogy (as suggested), going
beyond metaphor to the implication that an
identity on some level grants latitude in
assuming meaningful similarity at another ...
well this is for interaction design, and IxD,
an even more blinding error.

It's a compelling one ... granted ... but not
knowing the difference has cast interaction design
as a kind of industrial-design-with-personality-
cum-organizational-design.

It's also wrong, IMO, to think of the
anthropomorphizing phenomenon as a -direct-
design index and license to/in the concrete systems
domain. This phenomenon (it is only that) moves
us in the other direction, to its root on the
human side of the interface.

To echo Jef's comment about modelessness, once
you've focused on behavior as an attribute
uniquely on the human side of the interface,
only then can you integrate the result and draw
conclusions about the license for the subject
systems.

(IMO the use of "complementary" here, in place
of "subject" is probably not helpful. It is
easy to fall into, and one can observe the
resulting human-systems pairing as such, but at
design time this probably contributes as much
to incorrect concretizing as does the behavior
metaphor itself. These systems are not aligned
in the way it suggests ... because in properly
designed systems there are vast non-complementary
aspects when considering any one type of
human-systems pairing. One can go further in
locating this distinction, even to specific
transactions. Sure, there's an exchange, but
it's amazing how non-complementary is the
mapping, and how potent is the chasm.)

The surprising thing, in my experience, is that
at that point, to address the interaction design
issues, you need *much less* breadth in the
solution domain (more properly, much more depth
in dramatically fewer domains). And that this
produces much more stable and leveragable solutions.
In fact, I rather think it a signal of a potential
problem when a solution to something presented as
an interaction design issue can 'only' be solved
by dramatic scope expansion.

Well here I'm starting to get off into the weeds
when I should be addressing Gerard, Andrei, and
Robert's comments. I'll do that later, with your
patience.

Best,
--Nick
