Making technical terms more descriptive

17 Nov 2010 - 10:39am
6 replies
tobr
2010

How do you find the best name or title for a function in an application? For example, if creating software for sound editing - many traditional audio terms are very technical, such as "normalization", "amplitude envelope", "dynamic compression" etc. For a novice user, these terms don't say much about what will happen to the sound. Might it be desirable to come up with new, more descriptive terms for these things, despite the risk of confusing advanced users who already know the technical terms? How would one find a term that as many users as possible can understand? User testing seems difficult to apply to this, because there are so many potential wordings that you would want to test. Interviews perhaps?

Comments

17 Nov 2010 - 3:46pm
Philip Levy
2010

tobr,

I think the examples you mention in this case emphasize the problem because they all affect perceived loudness, but do it in very different ways.

I have a background in music production, so I'm biased, but what if you went with the technical names for the actual menu items or labels, but included a contextual tooltip (something like Picnik does, that users could turn on or off) that provided a short, plain-English description of the function?
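To make that idea concrete, here is a rough TypeScript sketch of a glossary-backed, toggleable tooltip; the terms and the plain-English wording are placeholders of my own, not taken from any existing product:

// Hypothetical glossary: keep the technical label, attach a plain-English hint
// that the user can switch on or off.
const plainEnglishHints: Record<string, string> = {
  "Normalization": "Raise the whole recording so its loudest point hits the maximum level without distorting.",
  "Dynamic compression": "Even out volume differences so quiet and loud parts are closer in level.",
  "Amplitude envelope": "Shape how a sound's volume rises and fades over time.",
};

// Build the label shown in the UI; the hint only appears when hints are enabled.
function labelWithHint(term: string, showHints: boolean): string {
  const hint = plainEnglishHints[term];
  return showHints && hint ? `${term} (${hint})` : term;
}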

17 Nov 2010 - 4:05pm
burlapdesign
2010

What is the intended audience? Is it for the absolute beginner to pick up the product and go to town? If it is aimed at beginners, the application could be arranged around common/baseline tasks, which can usually be simplified in their wording, e.g. "remove background noise" as opposed to "dynamic threshold filtering." At the same time, sound editing is technical and requires learning, so perhaps the focus should be on how to facilitate that (wizards, a great help file, built-in tutorials (a la Inkscape), etc.).
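Along those lines, here is a minimal sketch of how a beginner-facing task could wrap a technical operation behind sensible defaults; applyNoiseGate, its signature, and the default values are invented for illustration and would depend on the actual audio engine:

// Hypothetical low-level operation exposed by the audio engine.
interface AudioClip { samples: Float32Array; sampleRate: number; }
declare function applyNoiseGate(clip: AudioClip, thresholdDb: number, releaseMs: number): AudioClip;

// Beginner-facing, task-oriented command: one verb, no parameters.
// An "advanced" panel could still expose the underlying threshold and release controls.
function removeBackgroundNoise(clip: AudioClip): AudioClip {
  return applyNoiseGate(clip, -45, 120); // illustrative defaults, not tuned values
}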

17 Nov 2010 - 8:05pm
Sean Bentley
2008

Having seen some sound programs, I assume this is a dashboard with a lot of controls and not much blank space for explanatory text ("Adjust the amount of phasing you want the selected track to have.") or task-based labels ("Adjust High Frequency" or whatever).

If you expect novice users then yes, tooltips that succinctly explain the effect would be good. This educates the novice while not frustrating the expert. "Reduce Background Noise" is very friendly - but what about that person who might want to INCREASE background noise? They too would need that control, so I guess that's what makes "dynamic threshold filtering" less misleading, if otherwise meaningless...

Personally, my issue with sound software is that it's often too busy - too many controls, too small to read, much less accurately adjust.

Sean Bentley, UX Writer, Microsoft Lync Design

17 Nov 2010 - 8:05pm
Daniel Gross
2009

Perhaps the following might be useful.

Earlier this year I wrote a paper on clarifying ambiguous terminology during software architecture design, using an organizational semiotic approach [1].

In the paper I applied and adapted an existing approach to defining terminology in information system design that was proposed by Ronald Stamper [2], Kecheng Liu [3] and others.

In a nutshell, Stamper and Liu propose defining terminology visually using an "ontological dependence map". For each term, preceding terms are identified that must (ontologically) exist (i.e. be defined) for the current term to be definable. Also, a repertoire of actions describing what one can do with a term is included in the map to fully define it -- an approach based on Gibson's theory of affordances [4].

In practice, perhaps instead of trying to simplify terms, more context can be provided. For each term used in an application, a visual definitional "dependence map" can be pulled up, which shows the broader definitional context of a term, and what the term affords to do in the application, and perhaps also outside of the application.
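As a rough illustration of what one entry in such a map might look like in code (the structure and field names below are my own sketch, not Stamper's or Liu's actual notation):

// One node in a definitional "dependence map": which terms must already be
// defined for this one to make sense, and what it affords the user to do.
interface TermNode {
  term: string;
  dependsOn: string[];   // terms that must be defined first
  affordances: string[]; // actions the term makes possible in the application
}

const dynamicCompression: TermNode = {
  term: "dynamic compression",
  dependsOn: ["amplitude", "threshold", "gain reduction"],
  affordances: ["even out volume differences", "raise perceived loudness"],
};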

Daniel

p.s. Hmm, sounds to me like an interesting mini research project to try out ...

[1] Gross, D., & Yu, E. (2010). Resolving artifact description ambiguities during software design using semiotic agent modeling. The 12th International Conference on Informatics and Semiotics in Organisations.

[2] Stamper, R. (2006). Exploring the semantics of communication acts. In Proceedings of the Tenth International Conference on the Language-Action Perspective. Kiruna: Linköping University.

[3] Liu, K. (2000). Semiotics in Information Systems Engineering. Cambridge: Cambridge University Press.

[4] Gibson, J. J. (1977). The Theory of Affordances. In R. E. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing. Hillsdale, N.J.: Lawrence Erlbaum Associates.


18 Nov 2010 - 8:29am
tobr
2010

Thank you all for your great answers.

Philip Levy: I also have a background in music production! I just took a look at Picnik, and their informal tooltips seem like a smart way to explain things. On the other hand, wordy tooltips might make the user think that they need to find and read them all before they do anything, or risk doing something "wrong". If something seems to require at least one and a half sentences of explanation before you're "allowed" to touch it, that might only reinforce the scariness and hinder exploration! But who knows, it would have to be tested.

burlapdesign: In this case there is no actual product; I am asking more to learn how such a problem would best be solved and to have an interesting discussion. Sound editing can be very technical, but does it have to be? I think it's an interesting challenge to make something technical easier to understand and use. A deep technical understanding of sound isn't required to do simple edits, so maybe the editor shouldn't act like it is.

Sean Bentley: I definitely agree - with few exceptions, the state of user interfaces in current music and audio software is horrible in so many ways, which is why I'm so interested in these questions. Most music software developers seem to think that the only thing that counts in a user interface is how much skeuomorphism can be crammed in, preferably references to old, outdated analog equipment. Musicality is such a powerful, positive force, a potential that lies in every human being... and with computers, for the first time in history, we may have a chance at creating a sophisticated musical "instrument" that anyone could use. But I don't see anyone trying seriously - almost all products aim at nerds, engineers, power users.

Daniel Gross: I'm not sure I understand what you're saying, but it sounds interesting - can I find the paper online?

------

I think this problem can be encountered in image editors, video editors, 3D modeling software and other complex, fundamentally technical software as well. But it might be even more pronounced in audio tools, simply because sound is not a visual medium, so icons or other visualisations don't work well and may only make things more confusing.

Let's say that you wanted to create a synthesizer, with non-technical users as the target audience. A non-technical person's first contact with a common synthesizer today probably tends to be somewhat scary, since terms like "pulse width", "cutoff frequency", "envelope decay", "LFO rate", etc, make it seem like you need to be an engineer to figure it out. And any icons used to illustrate the functions are usually simplified waveforms, frequency curves, signal paths etc, which also rely on a technical understanding. But as soon as you learn what the controls do to the sound (the subjective sound experience, not the waveform), a simple synthesizer isn't too difficult to use.

Synthesizers are also interesting because even a fairly simple synth can produce an almost infinite number of different sounds. So even an experienced user would require some amount of experimentation before they'd find the exact sound they were after. Since sound exists solely in the time domain, this search always takes a little time. So creating an interface that makes it immediately clear what the possibilities are is probably impossible without limiting the options excessively. But maybe the initial learning curve could be eased if the user could get even a small idea of what a certain control will do before he or she tries to manipulate it, to encourage them to take the leap into the unknown and explore what's possible.

But this is not simple. What does changing the pulse width of an oscillator sound like, in a word or two? I would say it makes the sound feel thinner or wider, but those words could equally well describe detuning or a chorus effect, so they may not give the user the right expectations. You could just name the control "timbre", but doesn't the cutoff frequency change the timbre as well? In professional audio and music production circles it's common to hear evocative descriptions like warm and cold, soft and harsh, clarity, muddiness, air, aliveness, smear, bloom, grunge, distance, sweetness, etc. But most of these terms are not well defined, and they're more useful as a kind of nerdy poetry than as actionable instructions, because they mean very different things to different people. So apparently it's difficult even for professionals to describe sound with words in a way that can be universally understood.

Another interesting example is the words "wet" and "dry". Many years ago, when I first heard someone describe a recording as "dry", I had no idea what they meant. But this is actually an example of non-technical terms that are very well defined, and yet they're difficult to understand! (A dry sound is an unprocessed sound, while a wet sound is passed through some kind of effect [especially echo or chorus]. So a dry mix is one which mostly consists of raw, unprocessed recordings and which doesn't feel spacious. Once you know it, it might make sense, but it's hard to guess at.)

So maybe what I'm saying is that there's no great solution, but perhaps a "least bad one".

In iMovie '11, Apple has a completely different take on this whole problem. To hear what various effects sound like, the user simply hovers the mouse cursor over different icons in a grid. It seems like a great way to invite users to try things out, but I'm not sure how well it would work in situations where several parameters affect the sound interdependently. Example just after 2:30 in this video: http://www.youtube.com/watch?v=PHnZqXGEwiQ
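For what it's worth, the hover-to-preview idea itself is not hard to prototype. Here is a small sketch using the Web Audio API, where the chosen effect, the cutoff value, the element id, and the excerpt buffer are all placeholders of my own:

// Hovering an effect icon plays a short excerpt with that effect applied;
// leaving the icon stops the preview.
declare const excerptBuffer: AudioBuffer; // a short pre-loaded audio excerpt (placeholder)
const ctx = new AudioContext();

function previewLowpass(buffer: AudioBuffer): () => void {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  const filter = ctx.createBiquadFilter();
  filter.type = "lowpass";      // the effect being previewed
  filter.frequency.value = 800; // illustrative cutoff, not a recommended value
  source.connect(filter);
  filter.connect(ctx.destination);
  source.start();
  return () => source.stop();
}

const icon = document.getElementById("lowpass-icon"); // placeholder element id
let stopPreview: (() => void) | null = null;
icon?.addEventListener("mouseenter", () => { stopPreview = previewLowpass(excerptBuffer); });
icon?.addEventListener("mouseleave", () => { stopPreview?.(); stopPreview = null; });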

19 Nov 2010 - 10:05am
burlapdesign
2010

tobr,
It's tricky, isn't it? You chose sound editing as the example; I simply continued that in the discussion. In the end, whatever the product, the target audience is going to be the driving factor, and professionals and novices have differing comfort levels and expectations from interfaces.

