This is a bit of a Hail Mary, but has anyone come across user testing or studies on the use of bezel gestures (e.g. swiping in from the left bezel onto the screen, or Microsoft's Edge UI)? I'm guessing MS must have done some testing, but my Googling has turned up nothing useful.
I'm trying to determine just how discoverable these types of gestures are, and to make a case for adding some kind of on-screen UI to surface functionality relegated to a bezel menu. A rough sketch of the pattern follows, for anyone unfamiliar with it.
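For context, here's a minimal sketch of what structurally separates a bezel swipe from an ordinary on-screen drag. The 20 px edge zone, the `TouchPoint` shape, and the function name are all illustrative assumptions, not values from any shipped platform; the point is that the gesture has to originate in a narrow, invisible strip at the screen edge, which is exactly why discoverability worries me.

```typescript
// Sketch: classifying a gesture as a left-bezel swipe.
// All thresholds and names here are assumptions for illustration.

interface TouchPoint {
  x: number; // pixels from the left edge of the screen
  y: number; // pixels from the top edge of the screen
}

// Assumed width of the invisible strip along the bezel.
const EDGE_THRESHOLD_PX = 20;

/**
 * Returns true when a gesture begins close enough to the left edge
 * to be treated as a bezel swipe rather than an on-screen drag.
 */
function isLeftBezelSwipe(start: TouchPoint, end: TouchPoint): boolean {
  const startsAtEdge = start.x <= EDGE_THRESHOLD_PX; // began on the bezel strip
  const movesInward = end.x > start.x;               // travelled onto the screen
  return startsAtEdge && movesInward;
}

// Example: a swipe starting 5 px from the edge, moving inward.
console.log(isLeftBezelSwipe({ x: 5, y: 300 }, { x: 120, y: 300 })); // true
```

Note there's nothing visible on screen corresponding to that 20 px strip, which is the crux of my question about discoverability.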
Slashdot.org has passed along news that the University of Electro-Communications has created an artificial, Internet-enabled tongue. You manipulate a control with your tongue, and a paired control somewhere else mimics your tongue's movements inside someone else's mouth.
There's also discussion of Ubuntu's new user interface, which seems to have suffered from the rush to put out the latest version of the Linux-based operating system. One user claims "Ubuntu is doing a great job throwing away years of UI experience." Ouch.
For all you hardware interaction designers, there was an amazing piece on Core77 about Kinetic Design. Some of us might think this is just "interaction design", but to me the discussion of the aesthetics of motion really touches my heartstrings in a wondrous way. There was a Pratt ID student I met a couple of years ago who did his thesis on this, and it really resonated with me back then, too.
Don't know if any of you saw the announcements in London two days ago? I know it's outside the US so it doesn't really count, but here's the heads-up: a mobile, open development platform with touchscreen, haptic feedback, presence, GPS location, WiFi/VoIP, and an AJAX-capable web browser. Only Google Gears is missing. Nokia showed a touch UI phone with a development environment. It is not multi-touch, darn! The touchscreen in the demo was a resistive-type, single-point touchscreen.