New research could put an end to queasy VR experiences

New research from the UK and Germany, originally aimed at delivering quicker and visually sharper VR experiences, may have uncovered something even more desirable: a path towards eliminating nausea-inducing VR.

Conducted by researchers at Brunel University London, together with Bonn-Rhein-Sieg University of Applied Sciences and Saarland University in Germany, the project appears to have found the “sweet spot” of image quality.

VR can make users sick, mainly due to latency between eye movement and changes on the display. The problem is exacerbated by very high-resolution graphics, as the extra drain on computing resources worsens the lag even further.

The team’s approach builds on a technique called ‘foveated rendering’, which exploits the limitations of the human eye to reduce latency while maintaining perceived image quality. It does this by mimicking the way we see the world: for people with normally functioning eyes, the centre of the field of vision is the sharpest, and clarity progressively falls away towards the periphery.


“We use a method where, in the VR image, detail reduces from the user’s point of regard to the visual periphery, and then our algorithm incorporates a process called reprojection,” said Thorsten Roth, a member of the London research team.

“This keeps a small proportion of the original pixels in the less detailed areas and uses a low-resolution version of the original image to ‘fill in’ the remaining areas.”
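The researchers’ exact pipeline isn’t spelled out beyond this quote, but a minimal sketch of the fill-in idea described above might look like the following Python/NumPy snippet. Everything here is an illustrative assumption rather than the team’s implementation: the function name foveated_composite, the pixels-to-degrees conversion, the linear falloff between an inner and outer radius, and the fraction of original pixels retained in the periphery. The temporal reprojection step Roth mentions is not modelled; the default radii simply echo the 10° and 20° values discussed later in the article.

```python
import numpy as np

def foveated_composite(full_res, low_res_upscaled, gaze_xy, inner_deg=10.0,
                       outer_deg=20.0, deg_per_px=0.05, keep_fraction=0.1,
                       rng=None):
    """Blend a full-resolution render with an upscaled low-resolution render
    based on angular distance (eccentricity) from the gaze point.

    Inside `inner_deg` of the gaze, the full-resolution pixels are kept.
    Beyond `outer_deg`, only a sparse fraction of original pixels survives and
    the rest is filled from the low-resolution image. Between the two radii the
    keep-probability falls off linearly. All parameters are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = full_res.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # Angular eccentricity of each pixel from the gaze point (small-angle approximation)
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) * deg_per_px

    # Probability of keeping the original full-resolution pixel at each location
    p_keep = np.clip((outer_deg - ecc) / (outer_deg - inner_deg), 0.0, 1.0)
    p_keep = keep_fraction + (1.0 - keep_fraction) * p_keep

    keep = rng.random((h, w)) < p_keep
    return np.where(keep[..., None], full_res, low_res_upscaled)

# Example with placeholder data: blend two renders around a gaze point at (640, 360)
full = np.random.rand(720, 1280, 3)
low = np.random.rand(720, 1280, 3)   # stands in for an upscaled low-resolution render
frame = foveated_composite(full, low, gaze_xy=(640, 360))
```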

In the team’s example image, the centre of the frame is the sharpest, while detail reduces the further out you get.

You can watch a video demonstration of the rendering technique here.

Correcting the perception misconception

Foveated rendering appears to be a success. The researchers tracked the individual eye movements of participants wearing an Oculus headset as they watched 96 eight-second VR videos. Participants were then asked about the quality of the videos they saw, with a particular focus on blurriness and flickering.

The results show that the best response was for foveated rendering with an inner radius of 10° and an outer radius of 20°.

Interestingly, the team found that adding more detail to the periphery of users’ field of vision brought no noticeable improvement, and sometimes actually contributed to a perception of lower image quality. In Roth’s words, it is impossible for users to “make a reliable differentiation between our optimised rendering approach and full ray tracing, as long as the foveal region is at least medium-sized.”

The research also shows that users tend to experience ‘visual tunnelling’ when following a moving target: the mental load of keeping up with the object makes peripheral visuals even less perceptible.

“Our method can be used to generate visually pleasant VR results at high update rates,” said Roth.

“This paves the way to delivering a real-seeming VR experience while reducing the likelihood you’ll feel queasy.”
