How ray tracing is bringing disruption to the graphics market – and impacting VR

For an artist or developer to achieve any sense of realism, they need a renderer (the function that creates the visuals) that can simulate the light interactions happening in the scene: reflections, absorption, refraction and so on. These effects require full knowledge of the scene when processing each individual pixel, which is not something the most common rendering technique, known as real-time rasterised rendering, really supports.

For example, the data available when rendering a pixel in a rasteriser are:

  • Surface data for a single point (e.g. where the point is, what colour it is)
  • Some generic global values (e.g. position of the 3 nearest light sources, current time, camera direction)
  • Some textures

With only this data available, it is easy to see that estimating how light may have bounced across the scene is difficult. There are ways to approximate some of it, but they are often inefficient, leaving users disappointed by the low render quality (and the reduced frame rate caused by the approximations), and convoluted enough to leave developers with headaches.

Rendering without any context (left) produces very artificial-looking images. With context (right), images look more natural, with soft shadows, reflections and light bounces.
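To make this concrete, here is a minimal sketch in plain C++ (not a real shading language and not any particular API) of the limited data a rasteriser has when shading one pixel; every name in it is purely illustrative:

    // Illustrative mock-up of the per-pixel inputs a rasterised renderer has.
    #include <algorithm>
    #include <array>
    #include <cmath>

    struct Vec3 { float x = 0, y = 0, z = 0; };

    static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return v * (1.0f / l); }

    struct SurfacePoint {                      // data for this one point only
        Vec3 position;                         // where the point is
        Vec3 normal;                           // which way the surface faces
        Vec3 albedo;                           // base colour, e.g. sampled from a texture
    };

    struct GlobalConstants {                   // a handful of scene-wide values
        std::array<Vec3, 3> nearestLights;     // positions of the 3 nearest lights
        Vec3  cameraDirection;
        float time = 0;
    };

    // The shader can only combine these local values. It has no way to ask
    // "is there geometry between this point and that light?", so shadows and
    // bounced light must be faked or precomputed.
    Vec3 shadePixel(const SurfacePoint& p, const GlobalConstants& g) {
        Vec3 colour{};
        for (const Vec3& lightPos : g.nearestLights) {
            Vec3  toLight = normalize(lightPos - p.position);
            float diffuse = std::max(0.0f, dot(p.normal, toLight));
            colour = {colour.x + p.albedo.x * diffuse,
                      colour.y + p.albedo.y * diffuse,
                      colour.z + p.albedo.z * diffuse};
        }
        return colour;                         // lit even if the light is actually blocked
    }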

This is where ray tracing comes into play: when rendering a pixel, rays can be sent out into the scene to probe the surroundings: if a ray between the current point and a light is interrupted by some other piece of geometry, you get a shadow. If you use a ray to find the colour of a nearby object, you get reflections. Suddenly, graphical effects that were unachievable in traditional renderers become logical in their design, and trivial in their execution.
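Continuing the illustrative sketch above, and assuming the renderer exposes some ray-cast routine (here a hypothetical traceNearestHit(), shown only as a declaration), a shadow and a reflection each come down to a single query per pixel:

    struct Ray { Vec3 origin; Vec3 direction; };

    struct Hit {
        bool  hitAnything = false;
        float distance    = 0;
        Vec3  colour;                          // surface colour at the hit point
    };

    // Stand-in for the renderer's actual ray-cast routine (hypothetical).
    Hit traceNearestHit(const Ray& ray);

    Vec3 shadePixelWithRays(const SurfacePoint& p, const Vec3& lightPos,
                            const Vec3& viewDir) {
        // Shadow: fire a ray towards the light; if anything blocks it, the
        // point is in shadow. (A production renderer would offset the origin
        // slightly and stop the ray at the light's distance.)
        Vec3 toLight = normalize(lightPos - p.position);
        Hit  blocker = traceNearestHit(Ray{p.position, toLight});
        float lit = blocker.hitAnything ? 0.0f : std::max(0.0f, dot(p.normal, toLight));

        // Reflection: fire a ray in the mirrored view direction and use the
        // colour of whatever it hits.
        Vec3 reflectedDir = viewDir - p.normal * (2.0f * dot(viewDir, p.normal));
        Hit  reflected    = traceNearestHit(Ray{p.position, normalize(reflectedDir)});
        Vec3 reflection   = reflected.hitAnything ? reflected.colour : Vec3{};

        return {p.albedo.x * lit + reflection.x * 0.2f,
                p.albedo.y * lit + reflection.y * 0.2f,
                p.albedo.z * lit + reflection.z * 0.2f};
    }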

Trivial? Well, trivial in everything except the amount of processing capability required. Since the benefits are so evident, why isn't ray tracing used more often? It's quite simple: firing rays into a scene and finding their intersections with the geometry is complex and computationally expensive. This is why ray tracing has historically been reserved for offline renderers used in film production.
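To give a feel for that cost, below is one standard ray-triangle test (the Möller-Trumbore algorithm, not anything specific to PowerVR hardware), still using the types from the sketches above. A naive tracer would run something like this for every candidate triangle, for every ray, for every pixel, every frame, which is why acceleration structures and dedicated hardware matter:

    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // Returns true and the distance along the ray if the ray hits the triangle.
    bool intersectTriangle(const Ray& ray, Vec3 v0, Vec3 v1, Vec3 v2, float& tOut) {
        const float kEpsilon = 1e-6f;
        Vec3  edge1 = v1 - v0;
        Vec3  edge2 = v2 - v0;
        Vec3  pvec  = cross(ray.direction, edge2);
        float det   = dot(edge1, pvec);
        if (std::fabs(det) < kEpsilon) return false;   // ray parallel to the triangle
        float invDet = 1.0f / det;

        Vec3  tvec = ray.origin - v0;
        float u    = dot(tvec, pvec) * invDet;          // first barycentric coordinate
        if (u < 0.0f || u > 1.0f) return false;

        Vec3  qvec = cross(tvec, edge1);
        float v    = dot(ray.direction, qvec) * invDet; // second barycentric coordinate
        if (v < 0.0f || u + v > 1.0f) return false;

        tOut = dot(edge2, qvec) * invDet;               // distance along the ray
        return tOut > kEpsilon;
    }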

Making ray tracing a reality

The introduction of the PowerVR Wizard ray-tracing GPUs is a game changer. The PowerVR GR6500 is the first of our GPUs in the Wizard family and can process more than 100 million rays per second at 2 watts, making it an order of magnitude more efficient than any other existing solution.

Others have claimed to have solved real-time ray tracing with their hardware in the past, and failed. How is PowerVR Wizard different? First of all, the ray-tracing functionality is tightly integrated into the real-time rendering APIs OpenGL ES and Vulkan, meaning that developers do not have to switch to some obscure proprietary renderer and be stuck with it. Furthermore, ray tracing can be mixed with rasterised graphics, which makes it easy to integrate new ray-traced effects into existing games and applications (a rough sketch of this follows below). But most importantly, it is completely programmable. Previous hardware ray-tracing solutions usually offered a limited set of predefined effects that could be turned on or off, such as basic reflection and refraction, or hard shadows. With this solution, ray tracing is a tool that allows a brand-new paradigm to be seamlessly integrated into existing engines, letting developers use it in any way they want, even for use cases that no one has yet thought of.

Having an entirely programmable ray tracing system allows developers to adopt any rendering style they want
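As a rough illustration of that hybrid approach, and again in plain C++ rather than the actual OpenGL ES or Vulkan extension API, a renderer might keep its rasterised base pass and spawn rays only where a given effect needs them; all names below are hypothetical:

    struct GBufferPixel {               // output of the ordinary rasterisation pass
        SurfacePoint surface;
        bool needsReflection = false;   // e.g. flagged for shiny materials
    };

    Vec3 shadeHybridPixel(const GBufferPixel& px, const GlobalConstants& globals,
                          const Vec3& lightPos, const Vec3& viewDir) {
        // Base colour comes from the familiar rasterised pipeline...
        Vec3 colour = shadePixel(px.surface, globals);

        // ...and rays are added selectively, because the effect is programmable
        // rather than a fixed on/off feature.
        Ray shadowRay{px.surface.position, normalize(lightPos - px.surface.position)};
        if (traceNearestHit(shadowRay).hitAnything)
            colour = colour * 0.3f;                   // darken shadowed pixels

        if (px.needsReflection) {
            Vec3 r   = viewDir - px.surface.normal * (2.0f * dot(viewDir, px.surface.normal));
            Hit  hit = traceNearestHit(Ray{px.surface.position, normalize(r)});
            if (hit.hitAnything)
                colour = {0.7f * colour.x + 0.3f * hit.colour.x,
                          0.7f * colour.y + 0.3f * hit.colour.y,
                          0.7f * colour.z + 0.3f * hit.colour.z};
        }
        return colour;
    }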

A new paradigm leading to new possibilities

The recent integration of our ray-tracing technology into the Unity game engine as a light pre-processing tool is a good example of the expected paradigm shift: new and better solutions to old problems, even ones that were assumed to have been solved to a satisfactory level.

Lightmapping, the process of pre-computing complex light interactions when authoring a 3D environment (be it for a game or an architectural visualisation), has traditionally been a very long and tedious process. Every change to the scene geometry or lighting must be followed by a waiting period ranging from minutes to hours, making it very difficult for artists to get the right result. But the iterative nature of ray tracing can be used to solve this problem: the user gets an instant but noisy output that is then refined over time. This means that if something looks wrong, it can be corrected straight away, saving a considerable amount of time.
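A minimal sketch of that progressive refinement, assuming a hypothetical sampleIndirectLight() that returns one noisy ray-traced estimate per lightmap texel, might look like this:

    #include <vector>

    Vec3 sampleIndirectLight(const SurfacePoint& texel);   // one noisy sample (hypothetical)

    void refineLightmap(std::vector<Vec3>& lightmap,
                        const std::vector<SurfacePoint>& texels,
                        int& samplesSoFar, int samplesPerPass) {
        for (size_t i = 0; i < texels.size(); ++i) {
            Vec3 sum{};
            for (int s = 0; s < samplesPerPass; ++s) {
                Vec3 c = sampleIndirectLight(texels[i]);
                sum = {sum.x + c.x, sum.y + c.y, sum.z + c.z};
            }
            // Running average: the old estimate is weighted by how many samples
            // it already contains, then the new samples are blended in on top.
            float oldWeight = float(samplesSoFar);
            float newTotal  = oldWeight + float(samplesPerPass);
            lightmap[i] = {(lightmap[i].x * oldWeight + sum.x) / newTotal,
                           (lightmap[i].y * oldWeight + sum.y) / newTotal,
                           (lightmap[i].z * oldWeight + sum.z) / newTotal};
        }
        samplesSoFar += samplesPerPass;
    }

Whenever the artist moves a light or a wall, the sample count is simply reset and the preview starts converging again from a fresh, immediately visible estimate.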

Imagination is working with multiple leading games middleware suppliers to help solve the traditional "chicken and egg" problem: by getting ray-tracing features incorporated into these engines, adding ray-traced effects to applications becomes simple for developers.

Going beyond the obvious improvements to lighting quality, ray tracing can also be used in very different ways. For example, in virtual reality (VR), ray tracing makes it possible to counter the lens distortion at the very first stage of the rendering process, instead of moving and stretching pixels at the end of the render as rasterisers do. Even better, the number of rays sent per pixel can vary with the pixel's position in the frame, which makes it trivial to implement foveated rendering, which tracks the eye and draws the highest-detail image only where the user is looking, adding precision where it matters.
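A sketch of both ideas, still using the Vec3 helpers from the earlier examples, with made-up ray budgets and a single-coefficient radial lens model standing in for a real headset's distortion profile, could look like this:

    struct GazeInfo { float gazeX, gazeY; };   // gaze point in pixels, from eye tracking

    // Foveation: spend more rays near the gaze point, fewer in the periphery.
    int raysForPixel(int px, int py, const GazeInfo& gaze) {
        float dx = px - gaze.gazeX, dy = py - gaze.gazeY;
        float dist = std::sqrt(dx * dx + dy * dy);
        if (dist < 200.0f) return 4;           // full quality where the user looks
        if (dist < 500.0f) return 2;           // middle ring
        return 1;                              // periphery: one ray is enough
    }

    // Aim the primary ray through a simple radial lens model so no warp pass is
    // needed at the end of the frame (illustrative, not a real headset profile).
    Vec3 primaryRayDirection(float ndcX, float ndcY, float k1) {
        float r2    = ndcX * ndcX + ndcY * ndcY;
        float scale = 1.0f + k1 * r2;          // counter-distort at generation time
        return normalize(Vec3{ndcX * scale, ndcY * scale, -1.0f});
    }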

As VR experiences begin to mature, developers look for ways to further improve the sensation of presence. Physically-correct and context-aware sound design has become an essential target area for those improvements. For example, echo simulation helps users gain extra spatial awareness, improving the believability of the VR experience. It so happens that simulating sound interaction within an environment is actually pretty similar to rendering light, and that the same ray tracing technology can be used to improve immersion at a much lower processing cost.
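As a purely illustrative example of that analogy (not a full acoustics model), the same hypothetical traceNearestHit() routine from the earlier sketches could estimate a first-order echo from the length of a reflected path:

    struct Echo { float delaySeconds; float attenuation; };

    Echo estimateEcho(const Vec3& listenerPos, const Vec3& towardsWall) {
        const float speedOfSound = 343.0f;               // metres per second in air
        Hit wall = traceNearestHit(Ray{listenerPos, normalize(towardsWall)});
        if (!wall.hitAnything) return {0.0f, 0.0f};      // nothing to bounce off
        float pathLength = 2.0f * wall.distance;         // out to the wall and back
        return {pathLength / speedOfSound,               // when the echo arrives
                1.0f / (1.0f + pathLength)};             // crude distance falloff
    }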

