Virtual reality is going to get a lot more real.

With a vision of VR as the future of computer interfaces, NVIDIA has set its sights on refining the rendering process: increasing throughput, reducing latency and creating a more mind-blowing visual experience for users.

That effort was the subject of a presentation Tuesday at the GPU Technology Conference, where Morgan McGuire, a professor of computer science at Williams College who will soon join NVIDIA as a distinguished research scientist, told attendees that there are significant challenges to overcome.

For instance, McGuire said that to match the capabilities of human vision, future graphics systems need to process 100,000 megapixels per second, up from the 450 megapixels per second they're capable of today. Doing so will help drive the vastly higher display resolutions required and push rendering latency down from current thresholds of about 20 milliseconds toward a goal of under one millisecond, approaching the limits of human perception.

"We're about five or six orders of magnitude [between] modern VR systems and what we want," McGuire said during a well-attended talk. "That's not an incremental increase."

What makes latency an even more pressing problem is that as VR systems seek to increase resolution by raising pixel throughput, they must avoid adding extra pipeline stages, which compound latency.

"You can't process the first pixel of the next stage until you've completed the final pixel in the previous stage," McGuire said.

To bring latency down far enough, McGuire said the NVIDIA Research team is experimenting, and will continue to experiment, in many areas:
- It starts with the renderer, which drives most of the latency. McGuire said that today the VR industry has largely moved to eliminating post-rasterization stages common in desktop games, such as deferred shading and post-processing effects. This reduces latency, but it also reduces image quality. NVIDIA Research is investigating renderers to achieve high image quality with fewer stages.
- Foveated rendering uses eye-tracking hardware, enabling VR systems to deliver the sharpest resolution to whatever parts of an image the user is looking at, and allowing the rendering process to produce lower-resolution imagery for the rest of the display.
- Rendering and displaying a light field can also bring down latency. A light field extends a two-dimensional image into four dimensions by capturing many versions and angles of a scene (think of a bug's-eye view), allowing the display to react quickly as the viewpoint changes. The tradeoff is enormous throughput, since the system is effectively processing many images simultaneously.
- NVIDIA researchers also have been testing novel designs for HMDs, such as a design McGuire showed that replaces the bulky lens in traditional designs with a thin sheet of holographic glass, enabling the display to change focus as the user’s eyes move to different parts of an image.
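The throughput and latency figures McGuire cites can be sanity-checked with simple arithmetic. This back-of-envelope sketch uses only the numbers from the talk; the variable names are my own:

```python
# Figures from McGuire's GTC talk.
current_throughput_mps = 450        # megapixels per second, today's systems
target_throughput_mps = 100_000     # megapixels per second, to match human vision
current_latency_ms = 20             # typical rendering latency today
target_latency_ms = 1               # perceptual goal: under one millisecond

# How far today's systems are from the targets.
throughput_gap = target_throughput_mps / current_throughput_mps  # roughly 222x
latency_gap = current_latency_ms / target_latency_ms             # 20x

print(f"Throughput gap: {throughput_gap:.0f}x, latency gap: {latency_gap:.0f}x")
```

Even these two ratios alone multiply out to a few-thousand-fold gap, and that is before accounting for the other dimensions of human vision a display must match, which is why the shortfall is not an incremental one.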
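The core idea behind foveated rendering, spending resolution where the eye is actually looking, can be sketched in a few lines. This is an illustrative Python sketch, not NVIDIA's implementation; the `full_res_radius` and `falloff` parameters are made-up values standing in for what eye-tracking hardware and perceptual models would supply:

```python
import math

def shading_rate(pixel, gaze, full_res_radius=200.0, falloff=400.0):
    """Return a fractional shading rate (1.0 = full resolution) for a pixel,
    based on its screen-space distance from the tracked gaze point.

    Within `full_res_radius` pixels of the gaze point we shade at full
    resolution; beyond that, the rate falls off linearly until it bottoms
    out at a quarter rate in the far periphery.
    """
    dist = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if dist <= full_res_radius:
        return 1.0
    return max(0.25, 1.0 - (dist - full_res_radius) / falloff)

# A pixel under the gaze is shaded fully; one far away at a quarter rate.
print(shading_rate((0, 0), gaze=(0, 0)))      # 1.0
print(shading_rate((1000, 0), gaze=(0, 0)))   # 0.25
```

In practice a renderer would apply a rate like this per tile rather than per pixel (for example via variable-rate shading), so the periphery costs a fraction of the work while the foveal region stays sharp.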
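To see why a light field trades throughput for latency, consider a toy representation: a 4D array indexed by camera position (u, v) on a plane and by pixel coordinate (s, t) within each view. The array sizes and the `nearest_view` helper below are hypothetical, chosen only to make the indexing concrete:

```python
import numpy as np

# A light field stores many pre-rendered views of a scene, indexed by
# camera position (u, v); each view is an (s, t) image. Hypothetical
# sizes: a 5x5 grid of 64x64 grayscale views.
U, V, S, T = 5, 5, 64, 64
light_field = np.zeros((U, V, S, T), dtype=np.float32)

def nearest_view(light_field, eye_u, eye_v):
    """Given a (fractional) eye position on the camera plane, return the
    nearest pre-rendered view. Responding to head motion becomes a cheap
    array lookup rather than a full re-render, which is the latency win;
    the cost is rendering and storing U x V images instead of one."""
    u = int(np.round(np.clip(eye_u, 0, light_field.shape[0] - 1)))
    v = int(np.round(np.clip(eye_v, 0, light_field.shape[1] - 1)))
    return light_field[u, v]

view = nearest_view(light_field, 2.4, 1.7)
print(view.shape)  # (64, 64)
```

Real light-field displays interpolate between views rather than snapping to the nearest one, but the storage and bandwidth arithmetic is the same: 25 views here means roughly 25 times the pixel throughput of a single image.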