Finger moves: A Microsoft research project, called Digits, makes gestural commands mobile.
Credit: Microsoft Research
In a few short years, the technologies found in today's mobile
devices—touch screens, gyroscopes, and voice-control software, to name a
few—have radically transformed how we access computers. To glimpse what
new ideas might have a similar impact in the
next few years, you needed only to walk into the Marriott
Hotel in Cambridge, Massachusetts, this week. There, researchers from around the world demonstrated new ideas for computer interaction at the
ACM Symposium on User Interface Software and Technology.
Many were focused on taking mobile devices in directions that today
feel strange and new but could before long be as normal as swiping the
screen of an iPhone or Android device.
"We see new hardware, like devices activated by tongue movement or
muscle-flexing, or prototypes that build on technology we already have
in our hands, like Kinect, Wii, or the sensors built into existing
phones," said
Rob Miller, a professor at MIT's
Computer Science and Artificial Intelligence Lab (CSAIL) and the chair of the conference.
One of the most eye-catching, and potentially promising, ideas that
was on show makes it possible to perform complex tasks with a flick of
the wrist or a snap of the fingers.
The interface, called Digits, created by
David Kim,
a U.K. researcher at both Microsoft Research and Newcastle University,
is worn around the wrist and consists of a motion sensor and an infrared
light source and camera. Like a portable
version of Kinect, Microsoft's motion-sensing device for the Xbox, Digits
can follow arm and finger movements with enough accuracy to replicate
them on screen or allow control of a complex computer game. "We envision
a smaller device that could be worn like a watch that allows users to
communicate with their surroundings and personal computing devices with
simple hand gestures," said Kim (watch
a video of Digits in action).
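Neither the demonstration nor Kim's team spelled out the software pipeline here, but the last step of such a system, turning per-finger tracking estimates into discrete commands, can be sketched. In the Python sketch below, the finger-curl representation, thresholds, and gesture names are all illustrative assumptions rather than details of Digits itself.

```python
# Illustrative sketch only: maps per-finger bend estimates (as a wrist-worn
# tracker like Digits might produce) to discrete gesture commands.
# The curl representation, thresholds, and gesture names are assumptions,
# not details of the actual Digits pipeline.

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def classify_pose(bend):
    """bend: dict finger -> curl in [0, 1] (0 = straight, 1 = fully curled)."""
    curled = {f for f in FINGERS if bend[f] > 0.7}
    straight = {f for f in FINGERS if bend[f] < 0.3}
    if curled == set(FINGERS):
        return "fist"        # e.g., grab or select
    if straight == set(FINGERS):
        return "open_hand"   # e.g., release
    if "index" in straight and curled >= {"middle", "ring", "pinky"}:
        return "point"       # e.g., cursor control
    return "unknown"

if __name__ == "__main__":
    sample = {"thumb": 0.8, "index": 0.1, "middle": 0.9,
              "ring": 0.85, "pinky": 0.9}
    print(classify_pose(sample))  # -> "point"
```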
Projects like Kim's could be a glimpse into the future of mobile
computing. After all, prior to the iPhone's launch, multi-touch
interfaces were found only at this kind of event. Researchers believe
that the limitations of existing control methods are still holding mobile
computers back, and that better input techniques could make these devices
even more powerful.
"We have an increasing desire and need to access and work with our
computing devices anywhere and everywhere we are," Kim said. "Productive
input and interaction on mobile devices is, however, still challenging
due to the trade-offs we have to make regarding a device's form factor
and input capacity."
The advance of mobile technology has
also given researchers easy ways to experiment. Several groups at the
conference showed off modifications of existing mobile interfaces
designed to give them new capabilities.
Hong Tan, a professor at
Purdue University
currently working at Microsoft Research Asia, demonstrated a way to add
the feel of buttons and other physical controls to a touch screen:
vibrating piezoelectric actuators installed on the side of a normal
screen generate friction at the point of contact with a finger. The
design, dubbed SlickFeel, can make an ordinary sheet of glass feel as if
it has physical buttons or even a physical slider with varying levels
of resistance. Such haptic feedback could help users find the right
control on compact devices like smartphones, or enable the use of a
touch screen without looking at it, for example while driving.
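How the drive signal might vary with finger position is easy to illustrate, though the logic below is a guess at the general shape of such a controller rather than SlickFeel's actual design; the button geometry, amplitude levels, and the actuator stub are all assumed.

```python
# Hypothetical sketch of friction-display control in the spirit of SlickFeel:
# drive the piezo amplitude as a function of finger position so a flat screen
# "feels" like it has a button. Geometry, levels, and the actuator API are
# illustrative assumptions, not details from the actual system.

BUTTON_LEFT, BUTTON_RIGHT = 200, 320   # virtual button span in pixels (assumed)
EDGE_WIDTH = 10                        # region that should feel like a ridge

def set_actuator_amplitude(level):
    """Stub for the piezo driver; a real system would command hardware here."""
    print(f"piezo amplitude -> {level:.2f}")

def friction_for_position(x):
    """Return a drive level in [0, 1] for a finger at x-position."""
    near_left = abs(x - BUTTON_LEFT) <= EDGE_WIDTH
    near_right = abs(x - BUTTON_RIGHT) <= EDGE_WIDTH
    if near_left or near_right:
        return 1.0                      # sharp friction change = perceptible edge
    if BUTTON_LEFT < x < BUTTON_RIGHT:
        return 0.4                      # slightly "sticky" button surface
    return 0.0                          # smooth glass elsewhere

if __name__ == "__main__":
    for x in (100, 195, 260, 325, 400):  # simulated finger positions
        set_actuator_amplitude(friction_for_position(x))
```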
Who's that? A touch screen that recognizes different people's fingers, developed by Chris Harrison and colleagues at Disney Research.
Credit: Chris Harrison
In another effort to make more of the touch screen,
Chris Harrison of Disney Research presented a way for devices to recognize the swipes and presses of particular people. His interface, a
capacitive touch
screen with a resistance sensor attached, identifies the unique
"impedance profile" of a person's body through his or her fingers. Users
need to hold a finger to the device for a few seconds the first time they
use it, after which subsequent presses are attributed to them. That
could allow apps to do things like track modifications to a document
made by different people as a tablet is handed around a table (see
a video of the screen).
"It's similar to the technology that is already in smartphones," said
Harrison. "There are lots of implications for gaming—no more split
screens—and for collaborative applications."
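The enroll-then-attribute flow Harrison described lends itself to a simple sketch. Note that the profile format (a short vector of measurements) and the nearest-match rule below are assumptions for illustration, not the published system's features or classifier.

```python
# Illustrative sketch of enroll-then-attribute user identification from an
# "impedance profile," in the spirit of the Disney Research screen. The
# measurement vectors and the matching rule are assumptions; the real
# system's features and classifier may differ.
import math

profiles = {}  # user name -> enrolled profile (list of floats)

def enroll(user, profile):
    """Store the profile captured while the user holds a finger on the screen."""
    profiles[user] = profile

def identify(touch_profile, max_distance=5.0):
    """Attribute a touch to the enrolled user with the nearest profile."""
    best_user, best_dist = None, float("inf")
    for user, ref in profiles.items():
        dist = math.dist(ref, touch_profile)  # Euclidean distance
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist <= max_distance else None

if __name__ == "__main__":
    enroll("alice", [10.2, 8.1, 6.4, 5.0])   # made-up measurement vectors
    enroll("bob",   [14.0, 11.3, 9.8, 7.5])
    print(identify([10.0, 8.3, 6.2, 5.1]))   # -> "alice"
```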
The motion and touch sensors in current phones were another target for experimentation.
Mayank Goel,
a PhD student at the University of Washington, and his colleagues modified
the software on an Android phone to automatically determine in which
hand a person is holding it. The software figures this out by monitoring
the angle at which the device is tilted, as revealed by its motion
sensor, and the precise shape of pressure on its touch screen. Goel says
this can allow a keyboard to automatically adjust to whether a person
is using the
left or right hand, an adjustment that cut typos by 30 percent in his experiments.
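A toy version of that inference is sketched below; the feature definitions, sign conventions, and keyboard-shift response are all assumptions for illustration, not Goel's published method.

```python
# Illustrative sketch of left/right-hand inference from device tilt plus
# touch-contact shape, in the spirit of Goel's prototype. Features, sign
# conventions, and the keyboard adjustment are assumptions.

def infer_hand(roll_degrees, touch_axis_degrees):
    """
    roll_degrees: device roll from the motion sensor (positive = leaning
    right, as when cradled in the right palm; convention is assumed).
    touch_axis_degrees: orientation of the elliptical touch contact
    (a thumb's contact patch leans toward its own hand).
    """
    score = 0
    score += 1 if roll_degrees > 5 else -1
    score += 1 if touch_axis_degrees > 0 else -1
    return "right" if score > 0 else "left"

def keyboard_offset(hand, shift_px=12):
    """Nudge key targets toward the thumb's reachable arc (assumed response)."""
    return -shift_px if hand == "right" else shift_px

if __name__ == "__main__":
    hand = infer_hand(roll_degrees=8.0, touch_axis_degrees=15.0)
    print(hand, keyboard_offset(hand))  # -> right -12
```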
Touchy feely: A malleable interface made by Sean Follmer and colleagues at MIT's Media Lab.
Credit: Sean Follmer
Other prototypes on display were less obviously connected with the
gadgets in your pocket today. One was a malleable interface that can be
shaped somewhat the way clay can, developed by a team at MIT's Media
Lab.
Sean Follmer, a PhD student in the lab of Professor
Hiroshi Ishii,
demonstrated several versions, including a translucent bendable touch
screen laid flat on a table. This was made from a plastic material
containing glass beads and oil, with a projector and a 3-D sensor
positioned below. Pinches and twists made to the pliable screen changed
the colors displayed on it, which were also shown on a 3-D model of the
material on a computer screen nearby.
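One small piece of such a system, mapping how far the surface has been pressed into a displayed color, can be sketched as follows; the baseline depth, grid, and color ramp are purely illustrative assumptions.

```python
# Illustrative sketch: read a depth map of a deformable surface (as a 3-D
# sensor under the table might report) and map the depth of each poke to a
# color. Baseline depth and the color ramp are assumptions for illustration.

BASELINE_MM = 100.0  # assumed resting distance from the sensor to the surface

def deformation_to_color(depth_mm):
    """Deeper pinches map to warmer colors (purely illustrative ramp)."""
    push = max(0.0, BASELINE_MM - depth_mm)  # how far the surface was pressed
    red = min(255, int(push * 25))           # 0 mm -> blue, 10+ mm -> full red
    return (red, 0, 255 - red)

if __name__ == "__main__":
    depth_map = [[100.0, 98.5], [95.0, 100.0]]  # simulated 2x2 depth grid (mm)
    colors = [[deformation_to_color(d) for d in row] for row in depth_map]
    print(colors)
```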
It's hard to imagine such an interface in your pocket. However,
Desney Tan, who manages Microsoft's
Computational User Experiences group in Redmond, Washington, and the company's
Human-Computer Interaction group
in Beijing, China, believes that being able to choose from multiple
modes of interaction will be an important part of the future of
computing. "We will stop thinking about mobile
devices, and instead focus on mobile
computing," said Tan, who was winner of Technology Review's
35 Innovators under 35 Award
in 2011. "As I see it, no one input or output modality will dominate
quite in the same way as visual display and mouse and keyboard have so
far."