PID: Personal Interface Definitions

9 Sep 2004 - 9:36am
23 replies
1569 reads
Dave Malouf
2005

http://www.lukew.com/ff/entry.asp?96

This blog entry by Luke Wroblewski talks about a take-it-with-you style sheet
for all the interfaces that you work with. Since he explains it better than I
can, I won't go into detail, but it seems like a topic with a lot of
interesting ramifications worthy of discussion here.

The political ramifications between corporations are the most interesting to
me. How to get beyond them, that is.

-- dave

David Heller
dave (at) ixdg (dot) org
http://www.ixdg.org/

AIM: bolinhanyc \\ Y!: dave_ux \\ MSN: hippiefunk at hotmail.com

Comments

9 Sep 2004 - 9:41am
Gerard Torenvliet
2004

Dave,

Isn't the other interesting issue here around innovation (and perhaps marketing)? If products are reduced to a set of standard functionality for which there are standard, OS dependent interface definitions, why would I as a consumer choose one company's product over another?

In fact, I think that this sort of thinking would just promote mediocrity. Instead of designing to make real innovations, we'd get companies designing towards de facto standards even more than they do today. Yikes.

Think different?

-Gerard

:: Gerard Torenvliet / gerard.torenvliet at cmcelectronics.ca
:: Human Factors Engineering Design Specialist
:: CMC Electronics Inc.
::
:: Ph - 613 592 7400 x 2613
:: Fx - 613 592 7432
::
:: 415 Legget Drive, P.O. Box 13330
:: Ottawa, Ontario, CANADA, K2K 2B2
:: http://www.cmcelectronics.ca

9 Sep 2004 - 2:25pm
Listera
2004

> http://www.lukew.com/ff/entry.asp?96

"Personal Interface Definitions are a ways off..."

No kiddin' and thank God.

Ziya
Nullius in Verba

10 Sep 2004 - 2:59pm
Dave Malouf
2005

Interesting that very few wanted to bite at this topic. The few early posts
just dismissed it as a bad idea without any interest in engaging with why the
idea was proposed at all.

I think there are lots of good criticisms of the idea, so I won't dwell
there. I'm wondering if we could explore what problem it is trying to solve
that isn't being solved successfully by relying on software vendors alone.

The problem as I see it is that personal preference is a huge part of
usability. Our individuality is much greater than our ability to generalize
amongst ourselves, especially around something as psychological as cognitive
behavior. (To me this speaks, partially, to the monotony issue in the other
thread.)

If not through a PID, how else can we deal with the above-stated problem?
Some might suggest that through science we can find the right generalization
and create a monotonous system, but I disagree with that notion. The mind is
just too unique in this regard.

Ok, I will address one issue directly. It was suggested that software
vendors wouldn't be able to compete if there were PIDs. But that is like
saying that vendors of Linux can't compete b/c there are so many add-ons
through the open source movement. The vendors have their own add-ons that
they promote, but other people might just specialize in creating, selling
and supporting their own PIDs. The word "personal" I think is a tad
confusing, b/c who in their right mind is going to personally create their
very own interface for applications? No one wants to do that. Just as
today there are free and for-pay "skins" for many different types of
applications, people will create PIDs of their own, both professionally and
out of hobby, and share them. You also have to remember that there are still
patents that protect specific interface behaviors (for good or for bad), so
the big boys with lots of legal and R&D money can protect their intellectual
property as they do from each other today.

-- dave

10 Sep 2004 - 7:04pm
CD Evans
2004

This is quite similar to what I recommended for a recent client in The Netherlands, V2.nl.

I've been meaning to write up an article on it for Boxes 'n Arrows but haven't had the time, though a basic design
philosophy for it is in my thesis on my site. I'd be happy to publish something in this area, as I've spent a good deal
of time thinking and consulting on it. Our general conclusion at V2 was that with such customization and modularization
of the technical components (CSS, RSS, XML, etc.) it is inevitable that individuals will enable their own skinning sooner or later.

I like the term PID. I think it works for me and satisfies all three of the needed requirements: Creative, Technical and
Individual.

Anyone else have experience in individualized interfaces?

CD Evans

(ps. this topic rocks)

On Fri, 10 Sep 2004 15:59 , 'David Heller' <dave at ixdg.org> sent:

>[...] people both professionally and out of hobby will create PIDs
>of their own and share them.

10 Sep 2004 - 4:17pm
Listera
2004

David Heller:

> Ok, I will address one issue directly. It was suggested that software
> vendors wouldn't be able to compete if there were PIDs.

David, are you old enough to remember OpenDoc?:-)

I personally talked to the then-CEOs of Quark and Adobe about the notion of
ISVs selling "components" which users can then pick and choose to put
together like Lego blocks to create highly customized apps/workflows.
Admittedly, this is not the same as PID, but the general proposition is the
same: sell pieces, not the whole experience. Both CEOs laughed at the notion
and vowed that they'd never do that. In fact, in the following decade,
Adobe, Quark and Macromedia spent millions to create their own
company-specific UI *platforms* across two platforms. Users weren't running
Mac or Win (it didn't matter), they were familiarly cocooned in the Adobe or
MM platform; hence, the CS and MX suites.

As I mentioned previously, one of the biggest strikes against any Photoshop
competitor was in fact that its UI wasn't "familiar". Virtually any product
reviewer would mention that right upfront. How easy do you think it'd be for
Adobe or MM to give that up?

But we can conduct a small experiment. Ask around to see how many people have
bothered to re-skin their web browser and how many have created a
custom/default CSS template for viewing sites.

Ziya
Nullius in Verba

10 Sep 2004 - 4:21pm
Dave Malouf
2005

<< But we can conduct a small experiment. Ask around you to see how many has
bothered to re-skin their web browser and how many created a custom/default
CSS template for viewing sites? >>

Web browser ... no ... but I have sold 2 products that required skinning
capabilities at their core. AND weren't the first versions of MP3 players
actively skinned, and don't many apps like Trillian and even Mozilla have
skinning as a core component, used quite extensively?

I'm not big on skinning personally, but I could see having alternative
application behaviors being something very interesting to me. Skins don't
change behavior, just the emotional intent.

-- dave

10 Sep 2004 - 4:28pm
Listera
2004

David Heller:

> ...used quite extensively?

I'd love to see some stats on re-skinning in the non-geek population before
I'd go along with "extensively."

> I'm not big on skinning personally,

Neither am I.

> but I could see having alternative application behaviors being something very
> interesting to me.

What do you mean by " alternative application behaviors"?

Ziya
Nullius in Verba

10 Sep 2004 - 4:51pm
Clay Newton
2004

> ISVs selling "components" which users can then pick and choose to put
> together like Lego blocks to create highly customized apps/workflows.
> Admittedly, this is not the same as PID, but the general proposition is the
> same: sell pieces, not the whole experience.

The Eclipse Rich Client Platform is providing exactly this type of
framework. Major vendors (IBM, Motorola, Nokia) as well as groups
such as Apache are participating in developing new applications within
Eclipse, as well as evolving the platform itself.

http://www.eclipse.org/

10 Sep 2004 - 9:44pm
Listera
2004

Clay Newton:

> The Eclipse Rich Client Platform is providing exactly this type of
> framework.

OpenDoc was intended for *endusers* to utilize/assemble conceptually
replaceable components, Eclipse is a platform for *developers* to create
integrated tools, is it not?

Ziya
Nullius in Verba

10 Sep 2004 - 11:11pm
Clay Newton
2004

> OpenDoc was intended for *endusers* to utilize/assemble conceptually
> replaceable components, Eclipse is a platform for *developers* to create
> integrated tools, is it not?

Historically this has been true of Eclipse. This year, Eclipse 3.0 was
released, and with this release, the Eclipse Rich Client Platform was
introduced. This pluggable architecture allows for the deployment
of stand-alone applications built on Eclipse.

Why would you try to do such a thing? Eclipse is built on SWT, so it
is platform independent; plugins, the most atomic deployable
unit in Eclipse, are written in Java; plugins can be linked using an
XML manifest to create larger tools or applications; and Eclipse supports
the sharing of a large amount of contextual information between
plugins, so plugins can be relatively freely stitched together into
new configurations.

The Eclipse notion of "Views" and "Perspectives" allows users to
assemble "Workbenches" that fit the needs of their particular process.
This is really important for users who deal with complex business
processes that involve multiple sources of data, types of data, and
integration with remote or hosted systems. In cases such as this,
interface personalization can be a boon to productivity. An example
might be a securities analyst using a messaging client, a client to
perform trades, a business intelligence system, and a news feed in one
UI composed of integrated components arranged in the fashion best
suited to her needs.

The Eclipse platform is a perfect candidate for the development of PIDs.
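(For anyone who hasn't seen the manifest-based wiring described above, a minimal Eclipse view contribution looks roughly like the following plugin.xml fragment. The extension point org.eclipse.ui.views is Eclipse's real one, but the IDs, names, and class below are invented for illustration; a real plugin would also need a matching Java class extending ViewPart.)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical plugin.xml fragment: contributes one custom view
     that a user could then drag into any Perspective alongside
     other views (messaging, trades, news feed, etc.).
     All identifiers here are illustrative, not from a real product. -->
<plugin>
   <extension point="org.eclipse.ui.views">
      <view
            id="com.example.trading.newsFeedView"
            name="News Feed"
            class="com.example.trading.NewsFeedView"/>
   </extension>
</plugin>
```

A Perspective then arranges such views into the kind of personalized workbench described above, without the plugin authors having to know about one another.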

-Clay

10 Sep 2004 - 11:21pm
Listera
2004

Clay Newton:

> This year, Eclipse 3.0 was released...

Interesting development.

> In cases such as this, interface personalization can be a boon to
> productivity.

How/where is the UI definition stored?

Ziya
Nullius in Verba

12 Sep 2004 - 1:28am
Jef Raskin
2004

OpenDoc unfortunately kept the applications separate; the menus changed
as you clicked here and there in the combined document, but you usually
were not looking at the menus. Also, you had to invoke the applications
as such to create sub-documents, so the user was not freed from the need
to understand that there are incompatible applications.

On Sep 10, 2004, at 7:44 PM, Listera wrote:

> [Please voluntarily trim replies to include only relevant quoted
> material.]
>
> Clay Newton:
>
>> The Eclipse Rich Client Platform is providing exactly this type of
>> framework.
>
> OpenDoc was intended for *endusers* to utilize/assemble conceptually
> replaceable components, Eclipse is a platform for *developers* to
> create
> integrated tools, is it not?
>
> Ziya
> Nullius in Verba
>
>
>
> _______________________________________________
> Interaction Design Discussion List
> discuss at ixdg.org
> --
> to change your options (unsubscribe or set digest):
> http://discuss.ixdg.org/
> --
> Questions: lists at ixdg.org
> --
> Announcement Online List (discussion list members get announcements
> already)
> http://subscribe-announce.ixdg.org/
> --
> http://ixdg.org/
>

12 Sep 2004 - 1:40am
Jef Raskin
2004

I strongly disagree. It does not make sense for me to repeat here what
I have written of at some length elsewhere, but most interface design
does not take advantage of what we know to be universal in terms of
cognitive behavior. Personal preference is small compared to what is
uniform when you are creating interfaces.

"The Humane Interface" as a whole addresses this question (but
especially Chapter Two).

What is your evidence to support your contention?

On Sep 10, 2004, at 12:59 PM, David Heller wrote:

> personal preference is a huge part of
> usability. Our individuality is much greater than our ability to
> generalize
> amongst ourselves especially around something as psychological as
> cognitive
> behavior. (To me this speaks to the monotanous issue in the other
> thread,
> partially.)

12 Sep 2004 - 7:38am
Dave Malouf
2005

Jef Raskin wrote:

<< I strongly disagree. It does not make sense for me to repeat here what
I have written of at some length elsewhere, but most interface design
does not take advantage of what we know to be universal in terms of
cognitive behavior. Personal preference is small compared to what is
uniform when you are creating interfaces.

"The Humane Interface" as a whole addresses this question (but
especially Chapter Two).

What is your evidence to support your contention? >>

You first ... Show me real-world applications of an interface that takes
full advantage of cognitive behavior and is successful (financially, and
using usability criteria). The answer is probably that you can't, b/c of
your own quote. So neither of us has evidence, right?

I think, Jef, that you and I differ in what types of information we use and
how we choose to assimilate it into practice. I take practice first and
research second. If I get good results out in the field, then I feel that is
the most important criterion. What are "good" results, though? Well, success
is based on sales and incorporation into context. I have sold many things
that I feel are failures, but I do have a couple of successes where I feel
the design not only sold (met the market's perceived need) but also fit into
the market's context of use. Maybe I just don't believe in a "humane"
interface b/c that context is just too small. Humanity does not exist
outside of cultural and societal (read: economic & linguistic) contexts.
(Please note I was a culture & personality anthropologist before being a
designer.)

Psychology cannot exist without sociology or anthropology. So cognition is
culturally bound. I've even written a paper that shows that dreams vary
widely in purpose, content, and interpretation around the world, and some
people change their dreaming based on the culture that they are living in.

You say that, "most interface design does not take advantage of what we know
to be universal in terms of cognitive behavior". And you say this to mean,
it is universally "known" that people prefer a single option for a behavior
as opposed to multiple.

But how can you accept that? I almost don't care what any lab study has to
say on the matter. In so much of our universe there are examples where
choice wins out over singularity. I'll reference "The Matrix": the only
way it could survive (read: work) is if, at least on some level, a choice
was given to everyone inside. That's my translation of the mumbo jumbo of
"The Architect".

That isn't just made-for-Hollywood mumbo jumbo. That is the very basis of
the success of capitalism and all the economic and non-economic pieces that
go along with it.

Are there examples where ONE option has stood the course of time? Probably.
But there are so, so many more real-world examples, inside and outside of the
computer world, showing that the cultural NEED for options outweighs any
efficiency that a lack of options might give a user.

But I also think you ignored a part of my posting. I clearly outlined how
personal choice among multiple options actually increases efficiency,
because it is impossible for any designer to predict effectively how ONE
option is going to be used within a system that is used across multiple
contexts of motivation/task/goal.

-- dave

12 Sep 2004 - 3:18pm
CD Evans
2004

Personal spaces are the only peace of mind we have in this world... and working with a computer doesn't seem to provide
that simple necessity.

For example, I've used a tablet on my machine for eight years, as I'm prone to pain from mice. This is one of the numerous
reasons why I find working on company-standard equipment intolerable. This was my thought in the other thread on
tablets.

In order to create a personalized, personal-space system, suing Xerox, Apple and Microsoft for 'inhumane interfaces'
might be the only way to actually get healthy systems on the market. I'm not sure, but every time I start thinking about
alternatives it seems like a ridiculous proposition, which is a big eye-opener as to how much personal space we actually
have.

Humans don't typically need much; a few bookshelves, a carpet and a cat will suffice for some. Some need a bit more, but
I'm interested in how to recreate this simple feeling of comfort within the computer. It doesn't have to be cozy, but the
market for square-lightbulb watching might start to dwindle if we don't step on it.

Speaking of which, I'd be delighted to make Microsoft, Xerox and Apple fund my research and development toward a
circular interface, which I've come up with. This would be a novel option going forward for me; that is, instead of trying to
compete with them or, worse yet, having to sue them for invasion of personal space.

CD Evans

>[Please voluntarily trim replies to include only relevant quoted material.]
>
>Jef Raskin wrote:
>
>I have written of at some length elsewhere, but most interface design
>does not take advantage of what we know to be universal in terms of
>cognitive behavior. Personal preference is small compared to what is
>uniform when you are creating interfaces.
>
>"The Humane Interface" as a whole addresses this question (but
>especially Chapter Two).

On Sun, 12 Sep 2004 08:38 , 'David Heller' <dave at interactiondesigners.com> sent:

>But I also think you ignored a part of my posting. I clearly outlined how
>personal choice in having multiple options actually increases efficiency
>because it is impossible for any designer to predict effectively how ONE
>option is going to be used within a system that is used through multiple
>contexts of motivation/task/goal.
>
>-- dave

12 Sep 2004 - 2:55pm
Jef Raskin
2004

On Sep 12, 2004, at 5:38 AM, David Heller wrote:

> [Please voluntarily trim replies to include only relevant quoted
> material.]
>
> Jef Raskin wrote:
>
> << I strongly disagree. It does not make sense for me to repeat here
> what
> I have written of at some length elsewhere, but most interface design
> does not take advantage of what we know to be universal in terms of
> cognitive behavior. Personal preference is small compared to what is
> uniform when you are creating interfaces.
>
> "The Humane Interface" as a whole addresses this question (but
> especially Chapter Two).
>
> What is your evidence to support your contention? >>
>
> You first ... Show me real-world applications of an interface that
> takes
> full advantage of cognitive behavior that is successful (financially,
> and
> using usability criteria). The answer is probably you can't b/c of your
> quote. So neither of us have evidence, right?

Wrong. When I was designing mouse interaction for the Mac, I used the
best information then available. If we had used the PARC mouse methods,
I think that success would have been far less likely as those methods
were much harder to learn and use. A product that's still around a
quarter century after it was conceived, and whose methods have become
nearly universal, and which has brought in billions of dollars to Apple
is what I'd call successful.

Wrong again. My own company got millions of dollars (literally) from
licensing the interface technologies that I (with, as always, help from
the crew I am working with) invented at Information Appliance.

That's what most people (except Bill Gates) might call financial
success.

But I don't rate financial success as a major arbiter of goodness. If I
did, my calling might be in selling illicit drugs. Besides, you know
that I am in the process of productizing my current work, which is
built on an even better understanding of cognetics than was available in
the past (science ever moves onward), so it can't yet show financial
success, which, by the criterion you set up, allows you to dismiss it.

>
> I think Jef, that you and I differ in what types of information we use
> and
> how we choose to assimilate it into practice. I take practice first
> and then
> research second. If I get good results out in the field then I feel
> that is
> the most important criteria. What are "good" results though. Well,
> success
> is based on sales, and incorporation into context. I have sold many
> things
> that I feel are failures, but I do have a couple of successes where I
> feel
> the design not only sold (met the market perceived need) but also fit
> into
> the market's context of use. Maybe I just don't believe in a "humane"
> interface b/c that context is just too small. Humanity does not exist
> outside of cultural and societal (read economic & linguistic) contexts.
> (please note I was a culture & personality anthropologist; before
> being a
> designer).

Do you assume that we ignore "cultural and societal (read economic &
linguistic) contexts"? This is obviously false. For example, I insisted
that the Mac system software, from the first, be designed to
accommodate all written languages, right-to-left, left-to-right,
top-to-bottom, alphabetic or ideographic, and even mathematical and
musical notation. I always take such contexts into account. This was not
possible on pre-Mac personal computers (and precious few pre-Mac
non-personal computers!).

I deliberately hire people from different cultures and linguistic
backgrounds. My present tiny crew has people who speak Hindi, Japanese,
Korean, Armenian, Swahili, and many European languages and are familiar
with the associated cultures. I did this on the Mac project, hiring a
cultural anthropologist early on (and that was in the early 1980s, way
ahead of its becoming a widely recommended practice).

I hope you will forgive me a spot of annoyance at being lectured on a
topic that I helped pioneer.

>
> Psychology cannot exist without sociology or anthropology. So
> cognition is
> culturally bound. I've even written a paper that showa that dreams vary
> widely in purpose, content, and interpretation around the world, and
> some
> people change their dreaming based on the culture that they are living
> in.
>
> You say that, "most interface design does not take advantage of what
> we know
> to be universal in terms of cognitive behavior". And you say this to
> mean,
> it is universally "known" that people prefer a single option for a
> behavior
> as opposed to multiple.

No. I did not say that with that meaning.

>
> But how can you accept that?

I didn't. But, to name one example, habit formation is universal. And
people do simultaneous tasks through habituating all but one of them.
Thus it is culturally independent (and even somewhat species
independent!) to take advantage of this trait and to design so that
habits that cause errors will not form. That's the kind of cultural-,
economic-, and linguistic- independent psychological fact I am talking
about. Do not confuse that with those traits that are culturally
dependent.

What I am saying, without fear of contradiction because it is readily
demonstrated, is that most interfaces (and all GUIs) do not properly
take into account such universal traits. If you do not accommodate
these universal traits, then you are guaranteed a worse interface than
if you had. We have no choice about habituation or the kinds of designs
that work correctly with it. It is built into our nervous systems.

Choice comes in at other levels, but that stuff is better known, I read
and use that literature and research, and I have little to add there.
So I am trying to educate interface designers on areas which are not as
well known, and which are generally ignored, to the detriment of users.

> I almost don't care what any lab study has to
> say on the matter. In so much of our universe there are examples where
> choice wins out over singularity. I'll reference "The Matrix" ... The
> only
> way it could survive (read as work) is if at least on some level a
> choice
> was given to everyone inside. That's my translation of the mumbo jumbo
> of
> "The Architect".
>
> That isn't just made-for-hollywood mumbo jumbo. That is the very basis
> of
> the success of capitalism and all the economic and non-economic pieces
> that
> go along with that.
>
> Are there examples where ONE option has stayed the cource of time?

The meaning of red and green in traffic lights.

> Probably.

Certainly, as demonstrated. There are others.

> But there are so, so many more real-world examples inside and outside
> of the
> computer world that show that the cultural NEED for options
> predisposes any
> effect that the efficiency that lack of options might give a user.
>
> But I also think you ignored a part of my posting. I clearly outlined
> how
> personal choice in having multiple options actually increases
> efficiency
> because it is impossible for any designer to predict effectively how
> ONE
> option is going to be used within a system that is used through
> multiple
> contexts of motivation/task/goal.

But you did not show (nobody could show) that there wasn't some other
interface that might have permitted one method to work best all the
time. Unless a brilliant designer came and just did it.

>
> -- dave

12 Sep 2004 - 3:12pm
Dave Malouf
2005

Jef,

I meant no disrespect ... Please accept my apologies.
I think there is more agreement than disagreement, but we are both the types
of people who go for differences in our discussions instead of similarities.

Where I think we agree:
1. That cognitive sciences are invaluable to the improvement of interfaces
2. That there are successes in interfaces throughout history
3. That habit does exist (though I would contend that habit exists at both
personal and cultural levels)

Where I think we disagree:
1. That having a monotonous system is best.
I do not believe that in ALL circumstances this is best. There are many
where it is, but that does not translate to all. I more often than not
believe that these types of things fall on a continuum based on context of
use. It sounds like you are arguing for an absolute which is 1-way for all
interactions. I'm saying that there needs to be accommodation for various
interactions across various contexts of use.

2. I think we also disagree about culture & personality theory. Culture and
personality is not only about linguistics or color interpretation; it is
about how culture affects all aspects of psychology and vice versa.

-- dave

12 Sep 2004 - 8:32pm
Jef Raskin
2004

On Sep 12, 2004, at 1:12 PM, David Heller wrote:

> [Please voluntarily trim replies to include only relevant quoted
> material.]
>
> Jef,
>
> I meant no disrespect ... Please accept my apologies.
> I think there is more agreement than disagreement, but we are both the
> types
> of people who go for differences in our discussions instead of
> similarities.
>
> Where I think we agree:
> 1. That cognitive sciences are invaluable to the improvement of
> interfaces
> 2. That there are successes in interfaces throughout history
> 3. That habit does exist (though I would contend that habit exists at
> both
> personal and cultural levels)

Can you explain this last? Anything that is done repeatedly the same
way becomes a habit; the origin of the repeated action is irrelevant.
Some examples might help clarify the distinction you are making.
>
> Where I think we disagree:
> 1. That having a monotonous system is best.

No, I say that you should strive to make a system monotonous if you can.
It is not always possible or desirable. There are no absolutes.
However, most designers I have seen at work are not at all aggressive
and accept an amonotonous solution without sufficient effort.

Monotony can arise in two ways: if there is one gesture for a given
action, the system is monotonous for that action. If there are multiple
ways of performing an action, but each is used in a different context
by a user, then the system has been monotonized by the user. The
important thing is that, given a particular stimulus (which can be a
complex of various elements), if the user has a particular response that
he or she always uses, then the system is monotonous, and habituation is
a consequence. This is the case where what is seen as the same action by
the system designer is not seen as such by the user. You do not have
monotony when, given the same circumstance, the user sometimes chooses
one method, and sometimes another.

I say this specifically in my THI book (first full sentence on pg. 68),
but I have found that few people pay attention to the details of
either modelessness or monotony, and so argue for points I have long
since made. Then I make the mistake of thinking that the person
discussing the issue with me has read and understood the definition
and, as here, find that we are arguing at cross-purposes; in this case
we were defining the term differently. Considering that, as far as I
know, there is only one published definition (I coined the term), there
should be little confusion about the extension of the term. (I use the
term "extension" in its linguistic sense.)

> I do not believe that in ALL circumstances this is best.
> There are many
> where it is, but that does not translate to all. I more often than not
> believe that these types of things fall on a continuum based on
> context of
> use. It sounds like you are arguing for an absolute which is 1-way for
> all
> interactions. I'm saying that there needs to be accommodation for
> various
> interactions across various contexts of use.
>
> 2. I think we also disagree about culture & personality theory.
> Culture and
> personality is not about linguistics or color interpretation only, it
> is
> about how culture effects all aspects of psychology and visa versa.

Culture affects some aspects of psychology, and not others. And
certainly how our minds work affects what we do, which makes the vice
versa obviously true.

>
> -- dave

12 Sep 2004 - 9:51pm
Dave Malouf
2005

> > 3. That habit does exist (though I would contend that habit
> exists at
> > both
> > personal and cultural levels)
>
> Can you explain this last? Anything that is done repeatedly the same
> way becomes a habit; the origin of the repeated action is irrelevant.
> Some examples might help clarify the distinction you are making.

A personal habit is one that I invent myself; it is not communicated to any
outside source and thus never becomes a new meme within the culture. A
cultural habit is one that over time has been spread or otherwise propagated
across a culture. Sometimes, especially with the kinds of communication we
have today, habits become trans-cultural, but many remain merely
sub-cultural.

An example of a sub-cultural habit is typing "IMHO" in e-mails and IM
messages (or, more generally, using informal abbreviations in both formal and
informal communications). What I've noticed here is that the early-adopter
sub-culture accepted this habit, but when the rest of the population joined
the medium, these sorts of "efficiencies" were not propagated to them, and so
the habit was not passed on. This is also cultural because in some languages
abbreviations/acronyms are not as widely used as a new form of jargon, even
by early adopters.

An example of a personal habit is that I use my thumb to hit the "enter" key
on the number pad when I am using my mouse and keyboard in tandem. It is
easier (for me) than moving my mouse down to a submit button. ;)

> > Where I think we disagree:
> > 1. That having a monotonous system is best.
>
> No, I say that you should strive to make a system monotonous if you
> can. It is not always possible or desirable. There are no absolutes.
> However, most designers I have seen at work are not at all aggressive
> and accept an amonotonous solution without sufficient effort.

Good, then we are in agreement. ;)

> Monotony can arise in two ways: if there is one gesture for a given
> action, the system is monotonous for that action. If there
> are multiple
> ways of performing an action, but each is used in a different context
> by a user, then the system has been monotonized by the user. The
> important thing is that given a particular stimulus (which can be a
> complex of various elements) if the user has a particular
> response that
> he or she always uses in response, then the system is monotonous, and
> habituation is a consequence. This is the case where what is seen as
> the same action by the system designer is not seen as such by
> the user.
> You do not have monotony when, given the same circumstance, the user
> sometimes chooses one method, and sometimes another.

I must admit I found the above a bit confusing. For the most part I think I
am in agreement with you. How specific do you mean by "same circumstances"?
Circumstances are never identical at any given moment, so I'm not sure that
is ever achievable. Let's go back to my example with the "back" command in a
web browser. I do this four different ways: the toolbar button, the
<alt>+<arrow> shortcut, the button on my mouse, and the context menu. I do
not think the circumstances are the same in each instance of use, but at what
level of predictable variability is it OK to have each of these methods
available? My question is a practical one: when should a designer say,
"Oh! This makes sense to be redundant" vs. "This is redundant and a waste"?

-- dave

12 Sep 2004 - 10:11pm
Listera
2004

David Heller:

> When should a designer say, "Oh! This makes sense to be redundant" vs. "This
> is redundant and a waste"?

Good question.

There are multiple points of triggering application actions, via:

Menu
Contextual menu
Keyboard
Input device (mouse, digital tablet, joystick, eye/gesture tracker, etc.)
Speech
Etc

Of course, if the user is capable of scripting, he'd be able to create
permutations of these as well; see, for instance:

PreFab UI Actions
<http://www.prefab.com/uiactions/>

which can automate cascading, scripted triggers.

So, one way is to grant the user specific access to specific actions through
specific access methods. This is a lot of guessing/testing. The other is to
enable the minimum, but implement as many access modalities as possible so
that the user can put together his own combinations. Fortunately, from
speech to handwriting recognition, the OSes are providing more and more of
these facilities, so that for each specific application the cost of multiple
input modalities is coming down.
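The first approach (granting specific actions through specific access
methods) can be sketched as a small dispatcher. This is an illustrative toy
only, not any real toolkit's API: the names Dispatcher, go_back, and the
modality/gesture strings are all made up, echoing Dave's "back" example with
its four redundant bindings for one action.

```python
class Dispatcher:
    """Routes input events from any modality to a single named action."""

    def __init__(self):
        self.actions = {}    # action name -> handler
        self.bindings = {}   # (modality, gesture) -> action name

    def register(self, name, handler):
        self.actions[name] = handler

    def bind(self, modality, gesture, action):
        self.bindings[(modality, gesture)] = action

    def dispatch(self, modality, gesture):
        action = self.bindings.get((modality, gesture))
        if action is None:
            return None            # unbound input: no action fires
        return self.actions[action]()

# A minimal "browser history" to give the action something to do.
history = ["home", "search", "results"]

def go_back():
    if len(history) > 1:
        history.pop()
    return history[-1]

d = Dispatcher()
d.register("back", go_back)
# Four redundant access methods, all mapped to the same action.
d.bind("toolbar", "back-button", "back")
d.bind("keyboard", "alt+left", "back")
d.bind("mouse", "button-4", "back")
d.bind("context-menu", "back", "back")

print(d.dispatch("keyboard", "alt+left"))   # -> search
```

The point of the sketch is that redundancy lives entirely in the binding
table; the action itself stays singular, which is what makes per-user
customization of access methods cheap.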

Ziya
Nullius in Verba

13 Sep 2004 - 7:59pm
Gerard Torenvliet
2004

Dave wrote:

> Interesting that very few wanted to bite at this topic. It seemed
> that the early and few posts just dismissed it as a bad idea without
> any interest in engaging why the idea is proposed at all.

Dave - I don't have trouble with the idea of personal interface
definitions if that means end-user customization of applications.
Customization is the only way that users can, in an intelligent way, put
chinks in the cracks that the designer left between their designs and
users' idiosyncratic ways of doing things. Not inappropriately, Jens
Rasmussen (the Danish pioneer of cognitive engineering) calls this
"finishing the design."

What I was reacting to was the idea of software companies coming up with
standard sets of functionality that could then be customized with
different user interfaces. For many software companies, the interface is
the differentiator - even if they don't market it that way. The
interface is 80% of the code of a typical app (or so I hear). Photoshop
and Fireworks both create JPEGs; their market differentiator is the
interface between user and JPEG.

What's more, in the best apps, that 80% of interface is tightly
integrated with the 20% of purely back-end stuff, so that the end
product is hoped to be more than the sum of its parts. The 20% of
back-end doesn't generally leave enough design degrees of freedom open
to design just any old interface around it.

Giving users ways to customize is a double-edged sword. It is very
difficult to make something easily and usefully customizable without
adding a lot of complexity to an interface. Contrary to what you wrote
later in your post, Linux is a case in point: It is loved by its
followers because it is endlessly customizable, but that customizability
comes at a very steep price in terms of complexity.

Still, I'm all for giving users the opportunity to customize their
software, if done properly.

Regards,
-Gerard

P.S. The idea of Personal Interface Definitions doesn't really go
against the concept of a monotonous interface. The holy grail would be
to design some form of technology that is monotonous at the level of the
individual user, but that has customizable variety across users.

P.P.S. I wonder how much of the customization currently available in
commercial software has come about because, instead of going out and
doing a user study to settle on the single best alternative, design and
implementation teams instead said it was best to give users two ways.
Some customization we see today is good, some is a cop out.

16 Sep 2004 - 8:31am
Martyn Jones BSc
2004

First off I'd like to say thanks to participants such as Jef Raskin, David
Heller, Andrei Herasimchuk, Nick Ragouzis, Listera and others for spending
so much time constructing their arguments. Having multiple points of view,
so well argued - is providing a very rich learning environment for me.

Jef Raskin wrote:
> No, I say that you should strive to make a system monotonous if you
> can. It is not always possible or desirable. There are no absolutes.
> However, most designers I have seen at work are not at all aggressive
> and accept an amonotonous solution without sufficient effort.

> Monotony can arise in two ways: if there is one gesture for a given
> action, the system is monotonous for that action. If there are multiple
> ways of performing an action, but each is used in a different context
> by a user, then the system has been monotonized by the user. The
> important thing is that given a particular stimulus (which can be a
> complex of various elements) if the user has a particular response that
> he or she always uses in response, then the system is monotonous, and
> habituation is a consequence. This is the case where what is seen as
> the same action by the system designer is not seen as such by the user.
> You do not have monotony when, given the same circumstance, the user
> sometimes chooses one method, and sometimes another.

Roughly then...
Action: user's goal
Gesture: how the user achieves their goal

I acknowledge that I have monotonized Mozilla Firefox interactions, in that
I close browser windows by following different gestures in different
contexts (but always the same gesture in the same context). I am aware that
if I have been typing an email - I am more likely to use the TAB key to
navigate the page, and close the browser window using Ctrl-W. If I have been
using the mouse for an extended period - and have been interacting with the
browser's drop-down menus - I am more likely to close the browser window by
clicking 'File' - 'Exit'. I have also installed a simple
gesture-recognition extension, which allows me to close the browser window
by drawing an 'L' shape. I am more likely to use this method if I have been
using the mouse to interact with the central, right and bottom areas of the
screen.
(Gesture Extension:
http://perso.wanadoo.fr/marc.boullet/ext/extensions-en.html)

If a certain application is to be flexible enough to complement / adapt to a
particular user's work flow, then it seems that you have to provide multiple
ways of doing things (gestures), e.g. closing window action (I assume the
various gestures I can use to close a browser window are the result of user
testing, and identifying the fact that the same user may wish to close a
window by performing different gestures in different contexts).

The application becomes less flexible / adaptable, if there are too few
gestures to execute a certain action (or - the most frequently occurring /
popular / efficient contexts for a given action are not addressed).
However, the application is in danger of becoming amonotonous if there are
too many variations (or - if a particular context for a given action,
supports multiple gestures).
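Jef's criterion above can be sketched as a small check. This is illustrative
Python only; the context and gesture labels are invented stand-ins for
Martyn's Firefox habits. The idea: a set of observed (context, gesture) pairs
for one action has been monotonized by the user if each context always
triggers the same gesture, even when different contexts use different
gestures.

```python
def is_monotonized(observations):
    """True if, for one action, each context always maps to one gesture.

    observations: iterable of (context, gesture) pairs.
    """
    seen = {}
    for context, gesture in observations:
        if context in seen and seen[context] != gesture:
            return False        # same context, different gestures: amonotonous
        seen[context] = gesture
    return True

# Hypothetical log of close-window gestures, each tied to its own context.
log = [
    ("typing-email", "Ctrl-W"),
    ("using-menus", "File>Exit"),
    ("mouse-lower-screen", "mouse-gesture-L"),
    ("typing-email", "Ctrl-W"),
]
print(is_monotonized(log))      # -> True

# One inconsistent observation in a context breaks monotony.
print(is_monotonized(log + [("typing-email", "File>Exit")]))  # -> False
```

On this reading, adding a gesture extension need not make the system
amonotonous so long as the new gesture claims its own context rather than
competing within an existing one.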

Jef, is this in-line with what you are suggesting (have I understood you
correctly-ish)?

David Heller wrote:
> When should a designer say, "Oh! This makes sense to be redundant" vs.
> "This is redundant and a waste"?

If a monotonous solution is the ideal, then it sits somewhere between an
inflexible / non-adaptable solution and an amonotonous solution. How do you
aim for the middle of this scale? How do you know you are there?

I am very fond of the gesture-recognition extension I have added to my
browser, and use it for closing / minimising / maximising / opening new
windows etc - far more frequently than I use the traditional alternatives.
In this instance, would you say that the gesture-recognition extension has
created an amonotonous solution - or has it created new contexts, retaining
the monotonous solution?

Regards,
Martyn

----------------------
Martyn Jones BSc
Interaction Designer
Kode Digital Ltd.
----------------------

16 Sep 2004 - 6:37pm
Listera
2004

Martyn Jones BSc:

> If a certain application is to be flexible enough to complement / adapt to a
> particular user's work flow, then it seems that you have to provide multiple
> ways of doing things (gestures)...

Yes, but who's doing the "providing"?

Fortunately, OS vendors (and third parties) are providing basic frameworks
for developers to plug into: scripting, speech, input device control,
handwriting recognition, etc. Theoretically, as a developer, if you follow
common APIs you get these facilities for 'free'. In OSX, for example,
Services allow Cocoa (and some Carbon) apps to take advantage of common
functionalities. Microsoft didn't spend a dime for it, but Entourage can
speak its text thanks to a system-wide service. If you have a digital
tablet, Inkwell allows handwritten text input in any app, without the
developers having to provide it. This factoring is a good thing.

Ziya
Nullius in Verba
