BERKELEY, Calif. — Would you like to drive a car with no steering wheel? How about playing a video game with no controller? Such activities sound like nonsense at first, but a new device designed by a team at the University of California, Berkeley, may make touch-free interactions with electronic devices a reality.
The device can recognize various hand gestures solely via electrical signals detected in the forearm. This innovative piece of technology is the result of combining wearable biosensors with artificial intelligence. In the future, researchers say this invention may be useful for several functions, including controlling prosthetics.
“Prosthetics are one important application of this technology, but besides that, it also offers a very intuitive way of communicating with computers,” says Ali Moin, who helped design the device as a doctoral student in UC Berkeley’s Department of Electrical Engineering and Computer Sciences, in a university release. “Reading hand gestures is one way of improving human-computer interaction. And, while there are other ways of doing that, by, for instance, using cameras and computer vision, this is a good solution that also maintains an individual’s privacy.”
AI learning from your movements
Working with electrical engineering Professor Ana Arias, the research group developed a flexible armband capable of reading electrical signals at 64 different points on the forearm. Those signals are then fed to an electronic chip, where an AI algorithm matches the forearm's signal patterns to specific hand gestures and movements.
In trials, the study authors successfully "taught" the algorithm to recognize 21 distinct hand gestures, including a thumbs-up, a flat hand, a fist, counting numbers, and moving individual fingers.
“When you want your hand muscles to contract, your brain sends electrical signals through neurons in your neck and shoulders to muscle fibers in your arms and hands,” Moin explains. “Essentially, what the electrodes in the cuff are sensing is this electrical field. It’s not that precise, in the sense that we can’t pinpoint which exact fibers were triggered, but with the high density of electrodes, it can still learn to recognize certain patterns.”
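The article doesn't detail the algorithm itself, but the pipeline it describes — 64-channel forearm signals reduced to features and matched against learned per-gesture templates — can be sketched with a toy nearest-centroid classifier. Everything below (the feature choice, the class structure, all names) is an illustrative assumption, not the study's actual method:

```python
import math
import random

N_CHANNELS = 64  # electrodes on the armband

def mean_abs(window):
    # window: a list of samples, each a list of 64 channel readings.
    # Returns one feature per channel: the mean absolute value,
    # a classic, very simple EMG feature.
    n = len(window)
    return [sum(abs(s[c]) for s in window) / n for c in range(N_CHANNELS)]

class NearestCentroidGestures:
    """Toy classifier: one centroid (average feature vector) per gesture."""

    def __init__(self):
        self.centroids = {}  # gesture label -> 64-dim feature vector

    def train(self, windows, labels):
        # Average the feature vectors of all calibration windows per gesture.
        sums, counts = {}, {}
        for w, g in zip(windows, labels):
            f = mean_abs(w)
            if g not in sums:
                sums[g] = [0.0] * N_CHANNELS
                counts[g] = 0
            sums[g] = [a + b for a, b in zip(sums[g], f)]
            counts[g] += 1
        self.centroids = {g: [v / counts[g] for v in sums[g]] for g in sums}

    def predict(self, window):
        # Classify a new window as the gesture with the nearest centroid.
        f = mean_abs(window)
        return min(self.centroids, key=lambda g: math.dist(f, self.centroids[g]))
```

A real system would use richer features and a more capable model, but the overall structure — per-gesture templates learned from a short calibration session, then matched at inference time — mirrors the per-user calibration the article describes.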
Like other forms of AI software, the algorithm must learn how different electrical signals in the arm correspond to individual hand gestures, so each user must first wear the device while making each hand signal. The technology is also adaptive: if a user's arm gets sweaty one day, altering the signals, the AI recognizes the difference and incorporates that new information into its model.
“In gesture recognition, your signals are going to change over time, and that can affect the performance of your model,” the UC Berkeley student says. “We were able to greatly improve the classification accuracy by updating the model on the device.”
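One simple way to "update the model on the device," in the sense Moin describes, is to blend newly observed, user-confirmed samples into the stored per-gesture template so the model tracks slow signal drift. The exponential-moving-average update below is an illustrative assumption, not the study's actual adaptation scheme; the function name and `rate` parameter are hypothetical:

```python
def adapt_centroid(centroid, new_feature, rate=0.1):
    """Blend a fresh, confirmed sample into a stored gesture template.

    An exponential moving average nudges the template toward recent
    signals, so slow drift (sweat, a shifted armband) is absorbed
    without retraining from scratch. `rate` controls how quickly
    old data is forgotten: 0 keeps the template fixed, 1 replaces it.
    """
    return [(1 - rate) * c + rate * f for c, f in zip(centroid, new_feature)]
```

Because each update is a constant-time blend over 64 values, this kind of adaptation is cheap enough to run on a small embedded chip, which is consistent with the on-device learning the researchers emphasize.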
Learning and remembering human behavior
Another big plus is that all computing takes place locally on the chip: personal data stays private, and the whole process is faster.
“When Amazon or Apple creates their algorithms, they run a bunch of software in the cloud that creates the model, and then the model gets downloaded onto your device,” comments senior study author Jan Rabaey, the Donald O. Pedersen Distinguished Professor of Electrical Engineering at UC Berkeley. “The problem is that then you’re stuck with that particular model. In our approach, we implemented a process where the learning is done on the device itself. And it is extremely quick: You only have to do it one time, and it starts doing the job. But if you do it more times, it can get better. So, it is continuously learning, which is how humans do it.”
Hand gesture device coming to a store near you?
For now, the study authors say the device isn't quite ready for store shelves, but it is only a few "tweaks" away from commercial rollout.
“Most of these technologies already exist elsewhere, but what’s unique about this device is that it integrates the biosensing, signal processing and interpretation, and artificial intelligence into one system that is relatively small and flexible and has a low power budget,” Rabaey concludes.
The study is published in Nature Electronics.