The Human Body as Touchscreen Replacement
Summary: When you touch your own body, you feel exactly what you touch — better feedback than any external device. And you never forget to bring your body.
Many new user interfaces go beyond a flat computer screen as the locus of the user experience. For example, Microsoft Kinect makes the entire room into the interaction space, with the user's body movements becoming the system commands. Augmented reality systems (such as some Google Glass applications) project their output onto real-world objects — such as using temperature readouts to color-code airplane engine parts so that they’re easier for repair technicians to see.
At the recent CHI 2013 research conference in Paris, I was particularly impressed with two ideas for using the human body itself as an integrated component of the user interface. The user's own body is unique relative to all other devices in one key aspect: you can feel when your body is being touched.
This sense of being touched (or not) provides important feedback that continuously informs users about what is happening. Because feedback is one of the oldest and most important usability heuristics, enhancing it is usually good.

The Hand as Input Device
Sean Gustafson, Bernhard Rabe, and Patrick Baudisch from the Hasso Plattner Institute in Germany designed a so-called imaginary interface situated within the palm of the user's hand. This UI is "imaginary" in the sense that there's nothing actually there beyond the naked hand. The photo below shows how an imaginary "mobile phone" could be fitted onto the user's left hand. As each point is touched, a specific mobile function would be activated and announced by a computerized voice.
Touching specific spots on your own hand enters the commands.
For this system to actually work, there must be some way for users to hear the computer, such as using an earbud. More important, there must be a way for the computer to know what part of the hand the user is touching. In Gustafson's research prototype, this was done with an optical motion tracker that seemed too clunky for real-world applications. But that’s okay; we can easily imagine advances in gesture recognition that would be more portable and less intrusive.
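As a thought experiment, here is a minimal sketch of how such a palm-based menu might be wired together once some tracker can report which part of the hand was touched. The region names, the simulated tracker output, and the speak() stub are all assumptions made for illustration; none of this comes from Gustafson's prototype.

```python
# Hypothetical sketch: map tracked palm regions to phone functions and
# confirm each selection with spoken feedback (e.g., through an earbud).
# The region names, tracker input, and speak() stub are all illustrative.

PALM_LAYOUT = {
    "index_base": "Call",
    "index_tip": "Messages",
    "middle_base": "Email",
    "palm_center": "Music player",
}

def speak(text: str) -> None:
    """Stand-in for a text-to-speech engine routed to an earbud."""
    print(f"[voice] {text}")

def handle_touch(region: str) -> None:
    """Called whenever the (hypothetical) motion tracker reports a palm touch."""
    function = PALM_LAYOUT.get(region)
    if function is None:
        speak("No function assigned here")
    else:
        speak(f"Opening {function}")

# Simulated touch reports, standing in for real tracker output:
for reported_region in ["index_tip", "palm_center", "thumb_tip"]:
    handle_touch(reported_region)
```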
Although you won’t be able to buy a hand-phone anytime soon, it's definitely interesting to consider the potential of interfaces where users touch themselves instead of a screen.
Gustafson and colleagues performed several interesting experiments to determine how well people can use their self-touch UI. Under normal use, people were about equally fast selecting functions from a regular touchscreen phone and from the palm-based system. However, blindfolded users were almost twice as fast when touching themselves as when touching the glass surface of the phone.
Obviously, we don't want to blindfold users, though information about nonsighted use is interesting for accessibility reasons and for conditions in which people can't look at the phone.
The most interesting aspect of the finding about blindfolded use is that there is something special about touching a hand — rather than a phone — that makes users depend less on their sight. To tease out why, the researchers tested several additional conditions:
- A phone that provided tactile feedback when touched rather than a stiff pane of glass. Using the tactile phone, users were 17% faster, though the difference wasn't statistically significant given the study's sample size.
- Having users wear a finger cover to remove the finger’s sense of touch. This made no appreciable difference.
- Having users touch a fake hand, rather than their own, to remove the palm’s sense of touch. This condition slowed users down by 30%.
In other words, what lets people operate this interface without looking is the palm's sensation of being touched, not the finger's sense of touching.
The Ear as Input Device
Usually, we use our ears to listen. In the terminology of human–computer interaction, this means that the ears are used to consume output from the computer. But the ear’s surface can also be used for input to communicate commands from the user to the computer.
Roman Lissermann, Jochen Huber, Aristotelis Hadjakos, and Max Mühlhäuser from the Technical University of Darmstadt (also in Germany) presented a research prototype called "EarPut" to do just that. Among other benefits, your ear is always in the same place; touching your ear is also slightly less obtrusive than touching your hand.
Possible interactions include:
- Touching part of the ear surface, with either single- or multi-touch.
- Tugging an earlobe. This interaction is particularly suited for on–off commands, such as muting a music player.
- Sliding a finger up or down along the ear arc. This might work well for adjusting volume up or down.
- Covering the ear — certainly a natural gesture for "mute."
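To make that gesture vocabulary concrete, here is a toy dispatcher that maps a few ear gestures to media-player commands. The gesture names, the region numbering, and the player state are invented for this sketch and are not taken from the EarPut paper.

```python
# Illustrative sketch (not the EarPut implementation): dispatch a few
# ear-based gestures to media-player commands. Gesture names and the
# player state are invented for this example.

from dataclasses import dataclass

@dataclass
class EarGesture:
    kind: str            # "touch", "lobe_tug", "slide", or "cover"
    region: int = 0      # which region along the ear arc was touched
    direction: int = 0   # +1 = slide up, -1 = slide down

def dispatch(gesture: EarGesture, state: dict) -> None:
    """Map a recognized ear gesture to a simple media-player action."""
    if gesture.kind == "lobe_tug":
        state["playing"] = not state["playing"]        # on/off toggle
    elif gesture.kind == "slide":
        state["volume"] = max(0, min(10, state["volume"] + gesture.direction))
    elif gesture.kind == "cover":
        state["muted"] = True                          # natural "mute" gesture
    elif gesture.kind == "touch":
        state["track"] = gesture.region                # pick a track by region

state = {"playing": False, "volume": 5, "muted": False, "track": 0}
for g in [EarGesture("lobe_tug"), EarGesture("slide", direction=1),
          EarGesture("touch", region=2), EarGesture("cover")]:
    dispatch(g, state)
print(state)
```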
Although 63% accuracy sounds good — after all, it’s better than half — it’s unacceptable for most user interface commands. Just think about using the ear to activate the 3 most common email commands: reply to sender, reply to all, and forward. Would you want to send your message to the wrong recipients 1/3 of the time?
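The arithmetic behind that objection is worth spelling out. The only number taken from the source is the 63% accuracy figure; the three-command streak below is just an illustrative extension assuming independent errors.

```python
# Spelling out the error-rate arithmetic behind the 63% figure.
accuracy = 0.63
error_per_command = 1 - accuracy       # 0.37, roughly 1 in 3
all_three_correct = accuracy ** 3      # reply, reply-all, and forward all correct

print(f"Per-command error rate: {error_per_command:.0%}")
print(f"Chance of three commands in a row all being right: {all_three_correct:.0%}")
```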
As this research shows, ear-driven input is best for situations with an extremely limited number of commands. It might also be useful for applications in which accidentally executing a neighboring command is not a big deal; even when dividing the ear arc into 6 regions, users still achieved fairly high accuracy.

EarPut prototype from the Technical University of Darmstadt.
As the above image shows, in this early research, the prototype hardware is somewhat reminiscent of the Borg from Star Trek, and most people wouldn't want to wear it on their ears unless they were paid study participants. But it’s easy to imagine smaller, lighter, and more elegant hardware in the future.
Ubiquitous User Interfaces
In addition to offering nearly fail-proof feedback, using body parts as input devices also has another distinct advantage: the device is literally always with you — because your body is you. Yes, people often carry their mobile phones, but they'll never be without their hands or their ears. Thus they'll never be without system functions that have been assigned to their hands or ears.
Of course, this statement is true only if users are within range of a sensor that lets the computer know when they’re touching a designated body part. So, maybe you do have to carry around a small device attached to your ear — or maybe in the future, body-based interaction could be mediated through nanobots that you swallow once and for all. Another option is to saturate the environment with surveillance cameras, though (currently, at least) many people would oppose this for privacy reasons.
Although these technical obstacles remain to be solved, it's reasonable to expect that user interfaces might be at least partly body-based in 20 or 30 years.
Source: nngroup.com