
Headsets may be one of the main features of virtual and augmented reality, but your hands play quite a big role too.
That’s why Purdue University researchers have developed a new system, DeepHand, which uses deep learning to represent the human hand in VR, understanding its complex joint angles and contortions in real time.
The researchers behind the project say improved hand recognition will be needed for computer systems that let people interact with virtual environments as virtual and augmented reality develop.
DeepHand uses a depth-sensing camera to capture the user’s hand, and specialised algorithms then interpret its motions.
“It’s called a spatial user interface because you are interfacing with the computer in space instead of on a touch screen or keyboard,” Dr Karthik Ramani of the School of Mechanical Engineering says.
“Say the user wants to pick up items from a virtual desktop, drive a virtual car or produce virtual pottery. The hands are obviously key.”
The researchers trained DeepHand with a database of 2.5 million hand poses and configurations. Key hand angles were identified and examined, and each configuration is represented in the system by a set of numbers.
At runtime, the system selects from the database the configurations that best fit what the camera sees.
“The idea is similar to the Netflix algorithm, which is able to select recommended movies for specific customers based on a record of previous movies purchased by that customer,” Ramani said.
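In spirit, this is a nearest-neighbour style lookup: each stored configuration is a vector of numbers describing key joint angles, and the system returns the stored vectors closest to the descriptor computed from the current depth frame. The sketch below is purely illustrative and based on that description only; the array sizes, the `best_matches` helper and the random data are assumptions, not Purdue’s actual implementation.

```python
import numpy as np

# Illustrative nearest-neighbour lookup over a database of hand-pose
# descriptors. Each row is one stored hand configuration, encoded as a
# vector of joint angles ("a set of numbers"). This is a hypothetical
# sketch, not the DeepHand code.

N_POSES = 10_000    # kept small for the demo; the real database holds ~2.5 million poses
N_ANGLES = 20       # assumed number of key joint angles per pose

rng = np.random.default_rng(0)
pose_database = rng.uniform(0.0, np.pi, size=(N_POSES, N_ANGLES)).astype(np.float32)

def best_matches(frame_descriptor: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k stored poses closest to the frame descriptor."""
    dists = np.linalg.norm(pose_database - frame_descriptor, axis=1)
    return np.argsort(dists)[:k]

# In the real system the descriptor would be produced by the trained network
# from one depth-camera frame; here it is just random numbers for illustration.
frame_descriptor = rng.uniform(0.0, np.pi, size=N_ANGLES).astype(np.float32)
print(best_matches(frame_descriptor))
```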
Although training the system requires a powerful computer, once trained it can run on a standard machine.
A research paper about DeepHand will be presented at the computer vision conference CVPR 2016 in Las Vegas next week.