Nvidia is using its GTC conference, which kicked off today, to show how it is building out full-stack technologies that put AI and immersive experiences within reach of any enterprise with the ambition to take advantage of them.
In an advance briefing for the press, Nvidia executives sketched a vision for the emerging Omniverse. Nvidia is not only mapping out the future of virtual worlds but wants to populate them with real-time avatars that use natural language AI to serve real-world customers. Meanwhile, virtual cars driving on virtual roads can help perfect designs for real-world autonomous vehicles. Virtual robots can breed better physical robots. Pizza delivery can get faster. And Nvidia plans to deliver GPU chips, software, and cloud services to support these various use cases.
All this was scheduled to be announced as part of CEO Jensen Huang’s keynote speech at 9 a.m. CET.
“You will see the transformation of Nvidia into a full-stack computing company,” said Deepu Talla, VP and general manager of embedded and edge computing. For example, a restaurant could build a virtual waiter you interact with on a kiosk (perhaps on or next to your table), represented as a real-time avatar capable of carrying on a conversation, understanding a frown on your face, and making recommendations from the menu. Some of Nvidia’s products supporting this vision would be Metropolis computer vision, Riva conversational AI, the Merlin recommender system — and the Omniverse wrapping around all of it.
Nvidia has created a “unified compute framework,” which treats AI models as microservices that can be run together or in a distributed, hybrid architecture, Talla said.
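The microservices framing can be illustrated with a toy sketch. Here, hypothetical stand-ins for vision, speech, and recommendation models share a simple call interface, so the stages could run in one process or be distributed across machines (the function names are illustrative, not Nvidia's APIs):

```python
# Toy sketch of treating AI models as composable microservices
# (hypothetical names, not Nvidia's Unified Compute Framework).

def vision_service(frame):
    """Stand-in for Metropolis-style computer vision."""
    return {"expression": "frown" if frame.get("brow_furrowed") else "neutral"}

def speech_service(audio):
    """Stand-in for Riva-style conversational AI."""
    return {"intent": "order", "text": audio}

def recommender_service(context):
    """Stand-in for Merlin-style recommendations."""
    menu = ["margherita", "salad", "soup"]
    # e.g. suggest something comforting if the guest looks unhappy
    return menu[0] if context["expression"] == "frown" else menu[1]

def kiosk_pipeline(frame, audio):
    """Compose the services into one virtual-waiter interaction."""
    context = {**vision_service(frame), **speech_service(audio)}
    return recommender_service(context)

suggestion = kiosk_pipeline({"brow_furrowed": True}, "what do you recommend?")
```

Because each stage only exchanges plain data, any one of them could be swapped for a remote service call without changing the pipeline.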
The Omniverse, Nvidia’s concept for interoperable “metaverse” virtual worlds, is built on the foundation of Universal Scene Description, a specification originally developed by Pixar. “We think of USD [Universal Scene Description] as the HTML of 3D,” said Richard Kerris, VP of the Omniverse Platform. While it may not be governed by an organization like the W3C, a consortium of companies is working to advance USD, he said. For example, Universal Scene Description recently added a rigid body physics model Nvidia worked on with Apple and Pixar.
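The "HTML of 3D" analogy holds because USD scenes, like web pages, can be written in a plain-text format that any compliant tool can read. A minimal hand-written .usda file describing a single red sphere might look like this (an illustrative sketch, not taken from Nvidia's materials):

```usda
#usda 1.0
(
    doc = "Minimal illustrative scene"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 0.5
        double3 xformOp:translate = (0, 1, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
        color3f[] primvars:displayColor = [(1, 0, 0)]
    }
}
```

Any USD-aware application, from Pixar's tools to Omniverse, can open a file like this, which is what makes the format a plausible interchange layer for virtual worlds.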
Meanwhile, Apple’s involvement means it’s possible to scan an object with your iPhone and import it into the Omniverse, Kerris said. One of today’s announcements is the availability of an enterprise version of Omniverse, which starts at $9,000 per year.
Talla promised an enterprise version of Riva will arrive in the first quarter of 2022, while a free version will remain available to small companies and individual developers. The conversational AI has improved to the point where it can produce synthetic speech in any voice with just 30 minutes of training data, he said. That’s in addition to providing “world-class speech recognition” in seven languages, he said. Early adopters include a major insurance company and RingCentral, the cloud telephony and unified-communications-as-a-service company.
Robots and autonomous vehicles
Nvidia isn’t focused entirely on the virtual world; it also offers the Jetson robotics platform, built on a combination of its GPUs and Arm CPUs. The Jetson AGX Orin version scheduled for release in the first quarter of 2022 promises six times the processing power of the previous Xavier edition in the same form factor. Delivering 200 trillion operations per second, Jetson AGX Orin is like a GPU-enabled server that fits in the palm of your hand, according to Nvidia.
But even work on physical robots connects back to the Omniverse. Nvidia recently announced a toolkit for integrating the open source Robot Operating System (ROS) with Isaac Sim, its simulation environment for robotics applications. Data replication with Isaac makes it possible to test virtual instantiations of robots in worlds populated with synthetic data, Talla said. “Training robots in the physical domain is really hard. It’s far cheaper, safer, and faster to do it in simulation,” he added. And because the data is synthetic, you can skip the “labeling” step in training a machine learning model: the system already knows what each object in the virtual world is supposed to be.
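Why synthetic data comes pre-labeled is easy to see in a toy sketch: the generator that places each object in the scene also knows its class and position, so every training sample is born with ground truth attached (hypothetical names, not the Isaac Sim API):

```python
import random

# Toy illustration of synthetic data generation for training: no human
# annotation step is needed because the simulator itself assigns each
# object's class and location before "rendering" it.
# (Illustrative sketch, not Nvidia's Isaac Sim API.)

CLASSES = ["box", "pallet", "forklift"]

def render_synthetic_sample(rng):
    """Place a random object in a virtual scene and return
    (observation, ground_truth) as a ready-made training pair."""
    cls = rng.choice(CLASSES)
    x, y = rng.uniform(0, 10), rng.uniform(0, 10)
    observation = {"pixels": f"<rendered {cls} at ({x:.1f}, {y:.1f})>"}
    ground_truth = {"class": cls, "bbox_center": (x, y)}
    return observation, ground_truth

rng = random.Random(0)
dataset = [render_synthetic_sample(rng) for _ in range(1000)]
```

A real pipeline would render actual images and randomize lighting, textures, and poses, but the principle is the same: the labels fall out of the generation process for free.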
For similar reasons, Nvidia’s work with autonomous carmakers includes testing their designs in the Omniverse first, before putting them on the road. “We leverage Omniverse for simulation about training and testing of the vehicles to ensure their safety,” said Danny Shapiro, VP of automotive for Nvidia. Using synthetic data to test autonomous driving software against simulated road conditions saves time and money, simplifies problems like labeling objects in the environment, and is ultimately reconciled with how the vehicle behaves in real-world conditions, he said.
Supply chain, pizza delivery, and enterprise
Meanwhile, Nvidia is working to make its technologies more accessible to enterprises that may not be building robots but do have practical business difficulties AI can help them solve. “Enterprises looking to apply AI to automate supply chain planning, cybersecurity, and conversational AI now have new frameworks to help them get started,” declared Justin Boitano, VP and general manager of enterprise and edge computing.
For supply chain optimization, Nvidia offers ReOpt, its framework of accelerated logistics and operations-research algorithms. ReOpt is being applied particularly to last-mile delivery, for example as part of a partnership with Domino’s Pizza to optimize how many pizzas a driver should deliver on a single trip to a given list of addresses, Boitano said. “Time and cost-effective delivery of pizzas to satisfied customers is a great example that shows where the power of accelerated computing is because every minute that you spend calculating what to do is a minute that you lose to actually deliver those pizzas to customers.”
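The underlying problem is a small vehicle-routing calculation: choose the stop order that minimizes total driving distance for one trip. A toy brute-force sketch makes the idea concrete (this is not Nvidia's ReOpt, which uses GPU-accelerated heuristics at far larger scale):

```python
import itertools
import math

# Toy last-mile routing sketch: exhaustively try every delivery order
# for a handful of stops and keep the shortest round trip from the depot.
# Brute force is only viable for small n; real solvers use heuristics.

def dist(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_route(depot, stops):
    """Return (total_distance, stop_order) minimizing
    depot -> stops -> depot."""
    best = (float("inf"), None)
    for order in itertools.permutations(stops):
        tour = [depot, *order, depot]
        total = sum(dist(p, q) for p, q in zip(tour, tour[1:]))
        best = min(best, (total, order))
    return best

depot = (0, 0)
stops = [(2, 3), (5, 1), (1, 6), (4, 4)]
length, order = best_route(depot, stops)
```

The search space grows factorially with the number of stops, which is exactly why accelerated computing matters here: every minute spent computing the route is a minute not spent delivering.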
To promote cybersecurity, Nvidia is introducing DOCA 1.2, a more cloud-enabled update to its SDK for programming Nvidia DPUs to isolate and control datacenter traffic, and Morpheus, an AI-powered zero-trust application framework. Morpheus works by modeling every combination of interaction between applications and users to understand what normal behavior looks like, allowing it to flag or block abnormal behavior on the network, Boitano said. DOCA 1.2 is scheduled for release on November 30, while an early access version of Morpheus is available now.
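The baselining idea Boitano describes can be reduced to a toy sketch: record which user-application interactions occur during normal operation, then flag anything outside that set (illustrative only, not the Morpheus API, which builds statistical models rather than exact sets):

```python
# Toy sketch of zero-trust behavioral baselining: learn which
# (user, application) interactions occur during normal operation,
# then flag any live interaction outside that baseline as anomalous.
# (Illustrative only; not Nvidia's Morpheus framework.)

def learn_baseline(events):
    """events: iterable of (user, app) pairs observed during training."""
    return set(events)

def flag_anomalies(baseline, events):
    """Return live events that fall outside the learned baseline."""
    return [e for e in events if e not in baseline]

normal = [("alice", "payroll"), ("bob", "crm"), ("alice", "crm")]
baseline = learn_baseline(normal)

live = [("bob", "crm"), ("mallory", "payroll")]
alerts = flag_anomalies(baseline, live)   # [("mallory", "payroll")]
```

A production system would model probabilities and sequences rather than exact membership, but the zero-trust posture is the same: unknown behavior is treated as suspect by default.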
Cybersecurity vendors partnering with Nvidia include Palo Alto Networks and Fortinet.
Yet another framework, NeMo Megatron, targets enterprises that want help building large language models with potentially trillions of parameters. The software is designed to run on Nvidia DGX SuperPOD 20-node server arrays.
To make all this more accessible, Nvidia has partnered with Equinix to make pre-configured instances of the technology available from datacenters worldwide — two in Asia, three in the U.S., and four in Europe.
Health care and drug discovery
The final area covered in pre-briefings was health care, where Nvidia claimed advances in the fight against cancer, working with Memorial Sloan Kettering, and in fine-tuning radiological therapies for children in partnership with St. Jude Children’s Research Hospital. Nvidia’s technologies are also finding their way into surgical robotics.
Health care is experiencing the highest compound annual growth in data volumes of any industry, at 36%, with hospitals generating 50 petabytes of data per year, creating opportunities to make better use of that data, said Kimberly Powell, VP and general manager of health care.
With its Clara Holoscan medical device AI, available November 15, Nvidia is “providing an all-in-one computational infrastructure for scalable software-defined processing of streaming data from medical devices,” Powell said.
Meanwhile, Nvidia seeks to break the simulation bottleneck in drug discovery. Understanding of the molecular biology behind how drug candidates bind with proteins is advancing rapidly, but some simulations scientists would like to perform have been too computationally intensive to be practical. By applying a different technique, physics modeling based on density functional theory rather than more expensive quantum methods, Nvidia found it could achieve 1,000 times greater performance on simulations of how molecular bonds are formed and broken, Powell said. As a result, a simulation that previously would have taken three months can be accomplished in three hours on a single GPU.