Nvidia decided that this year, in light of the coronavirus, it was safest to upload its keynote to YouTube so that the world could still be updated on what's going on within the walls of Nvidia.
Jensen Huang chaired the live keynote and began by thanking key workers for what they are doing in the fight against COVID-19, and by explaining how Nvidia has been trying to help scientists, doctors, and others in their struggle against the virus. He then expanded on this point, saying that Nvidia's assistance in the fight against the coronavirus actually sits at the heart of the company, and that its technology isn't just used in computers, but in supercomputers, computer graphics, autonomous machines, and even AI.
Interestingly, the keynote featured a video demonstrating just how Nvidia is assisting the worlds of science and industry with its technology, from computer graphics to workplace automation – but what grabbed my attention the most was the AI that apparently composed the music used in the video itself. Though arranged by a human composer, the soundtrack could easily be mistaken for one written originally for a video of this nature.
Jensen then talked about accelerated computing and the importance of developers within the field. He also highlighted that, in the future, he expects data-center-scale computing to be the new normal: as the number of accelerated computing tasks grows exponentially, data centers will become the fundamental computing unit that everything relies upon – which is, in fact, the reason Nvidia bought Mellanox, experts in this field.
Jensen then moved on to talk about SDKs, and how over the past year Nvidia has shipped 50 new SDKs, in three tiers, all resting on its CUDA architecture. Off the back of this, Jensen said that Nvidia would unveil four new applications during this year's GTC, along with a new chip and four new systems – and that each would expand the ever-growing user base of Nvidia software and hardware.
Jensen started by saying that computer graphics are the backbone of Nvidia – and then he began to talk about ray tracing. Ray tracing, as we all know, is the groundbreaking technology behind real-time photorealistic lighting in games, allowing light to behave naturally and dynamically, and making for a much more realistic and better-looking game environment.
Nvidia's RTX series was one of the first technologies to allow ray tracing, and now the company has taken it a step further. According to Jensen, AI has allowed Nvidia to take the next step with ray tracing: the GPU renders a relatively low-resolution, ray-traced image (with no anti-aliasing), and a deep-learning model then synthesizes a high-resolution image from that initial render – which is what the user actually sees.
All of this training has been done on supercomputers, which has allowed the AI to learn how to upscale images properly (sometimes producing results at up to 16K), with the trained software then delivered to GeForce PCs for use in the future.
This process is labeled DLSS – Deep Learning Super Sampling – and it's capable of taking a 720p image and upscaling it to 1080p with anti-aliasing, even generating detail where there was none before, thanks to the AI's learned understanding of what an image should look like. The example Nvidia gave compared a DLSS image with a native 1080p image – and thanks to the AI's training, and the new content and detail it added, the DLSS image did in fact look better.
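DLSS's actual neural network is proprietary, so as a rough point of comparison, here is the kind of fixed-interpolation upscaling it replaces – a minimal nearest-neighbour sketch in Python (NumPy), where each pixel is simply duplicated rather than having new detail synthesized:

```python
import numpy as np

def nearest_neighbor_upscale(image, factor):
    """Naive upscaling: repeat each pixel `factor` times along both axes.

    DLSS replaces this kind of fixed interpolation with a trained neural
    network that synthesizes plausible new detail instead of merely
    duplicating existing pixels.
    """
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# Tiny stand-in for a rendered frame (height x width x RGB). A real 720p
# frame upscaled to 1080p uses a fractional factor of 1.5; a whole-number
# factor keeps this sketch simple.
frame = np.random.rand(4, 4, 3)
upscaled = nearest_neighbor_upscale(frame, 2)
print(upscaled.shape)  # (8, 8, 3)
```

The contrast is the point: interpolation can only stretch the information already present, whereas DLSS's learned model adds detail the low-resolution render never contained.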
Then, Jensen revealed the Nvidia Omniverse – a blanket technology that leverages all of Nvidia's previous tech to make 3D modeling and creation much easier for creators themselves. The technology will allow designers to work on the same project at the same time, with changes and edits requested and made in real-time thanks to the shared processing and GPU power of the Nvidia tech behind it.
A demo was shown depicting what the Omniverse is capable of – and it looked great. Powered by a single Quadro RTX 8000, it showed a playable, almost photorealistic marble run, featuring the ray tracing and AI elements Jensen had talked about coming together in a project that developers built in only a few months. Thanks to the collaborative and more powerful nature of this technology, designs like this should become much quicker and easier to make – designers and developers must be happy with this news.
So, the Nvidia Omniverse will be made available to those looking to work on projects like this, all powered by Quadro RTX servers. These will be available through most major outlets, and will also grant access to remote servers, likewise powered by Quadro cards.
Outside of that, there wasn't a lot announced regarding Nvidia's involvement with the PC and gaming community. Whilst we are sure there is a 3080 Ti in the future, it wasn't shown off today. Still, the steps Nvidia are taking with cloud-based computing and hosted servers mean we could be seeing a huge leap forward in collaborative projects, or in letting more people work on a more powerful system.
See something in the keynotes you think could be vitally important for the future of personal computing? Let us know in the comments below.