Pushing the Boundaries of Innovation with NVIDIA and Virtual Humans
2023 will be a year of continued exploration into the metaverse. Many corporations and individuals are interested in discovering how the metaverse can provide innovative solutions for a variety of workloads. As NVIDIA continues to push its vision of the Omniverse, one solution that comes to light is the virtual human.
What is a Virtual Human?
A virtual human is a computer-generated being that resembles a real person and can be tailored to meet specific needs. Virtual humans are frequently used in virtual environments to give users a more natural and immersive experience, creating simulations that mimic real-world situations. With the rise of the metaverse in 2021, a new generation of virtual humans with higher fidelity, hyper-realism, and stronger interactive capabilities has begun to emerge. These interactive virtual beings reside within the digital realms of the metaverse and can be applied to various industrial scenarios at scale.
Virtual human production can be divided into four major parts: character modeling, character driving, character rendering, and perception/interaction design. AI greatly accelerates every part of this process. Not only does AI shorten the creation cycle of virtual humans, it also improves production efficiency and promotes the rapid development of the virtual human industry.
Development and Production Challenges
The virtual human solution, still in an early stage of adoption, places larger and more diverse demands on computing power than traditional solutions. High-quality virtual humans need to seem realistic in both appearance and behavior. The development process, from designing facial expressions to programming AI behavioral algorithms, is extremely complex and involves the collaborative efforts of many designers. This requires the underlying platform to have strong virtualization and cloud collaborative computing capabilities.
AI-powered facial animation with NVIDIA Omniverse Audio2Face
Physical simulations, such as structural mechanics, elastic mechanics, and multi-body dynamics, as well as immersive 3D rendering techniques, such as ray tracing, rasterization, and DLSS, rely on huge graphics and image computing power for support. Virtual humans driven by AI often need to combine speech recognition, NLP, DLRM, and other AI algorithms to achieve high-quality interactive capabilities. These models require powerful AI computing power to support their training and inference.
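The interaction stack described above can be pictured as a simple hear-understand-respond loop. The following is a minimal conceptual sketch with stub functions; the function names and return values are illustrative placeholders, not APIs from NVIDIA, Inspur, or any real SDK.

```python
# Hypothetical sketch of one turn of a virtual human's interaction loop:
# speech recognition -> dialogue/NLP -> animation driving. Each stage is a
# stub standing in for a real AI model.

def recognize_speech(audio: bytes) -> str:
    # Stand-in for an ASR model that transcribes the user's audio.
    return "hello, can you help me?"

def generate_reply(text: str) -> str:
    # Stand-in for an NLP/dialogue model that produces a response.
    return "Of course, what do you need?"

def drive_animation(reply: str) -> dict:
    # Stand-in for a speech-to-animation stage (lip sync, expressions),
    # the role a tool like Audio2Face plays in an Omniverse pipeline.
    return {"speech": reply, "animated": True}

def interact(audio: bytes) -> dict:
    """One interaction turn: hear -> understand -> respond."""
    text = recognize_speech(audio)
    reply = generate_reply(text)
    return drive_animation(reply)

result = interact(b"\x00\x01")
print(result["speech"])
```

In a production system each stub would be replaced by a trained model, which is why the paragraph above stresses the AI computing power needed for both training and inference.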
In addition, large metaverses with high volumes of virtual people need to have the capacity to scale up efficiently and effectively. It can be very challenging computationally to simulate interconnected large worlds with so many virtual humans interacting together in real time, especially if the hardware used is not designed to support such demanding tasks.
How Inspur Supports Virtual Human Innovation
Inspur and NVIDIA have jointly created a powerful industry-wide metaverse software and hardware ecosystem, providing diverse computing power for virtual human creation. Inspur's MetaEngine OVX System can provide physical accuracy, make full use of real-time path tracing and DLSS, simulate materials with NVIDIA MDL, simulate physics with NVIDIA PhysX, and seamlessly integrate with NVIDIA AI.
The Omniverse-based MetaEngine OVX solution supports 8x NVIDIA A40 or L40 GPUs in a single machine. Thirty-two MetaEngine OVX servers can be combined into a scalable cluster unit, and multiple units can be scaled with ease for exceptional computing performance and ultra-high network bandwidth.
The MetaEngine can also support various professional software tools so that different users have the flexibility to use their preferred tools. The system supports NVIDIA Omniverse Enterprise, which enables users to build relevant metaverse applications for designing and optimizing 3D assets and environments, such as virtual humans. It also supports the NVIDIA® ConnectX®-7 network adapter, connects seamlessly to third-party professional software tools, provides more than 20 kinds of plugins, and eliminates connection and interoperation difficulties between different software.
These updated systems greatly improve virtual human development and production. In the past, optical and inertial motion capture technologies required live actors and used many cameras and sensors, which made production difficult, costly, and time-consuming. Now, with the help of Inspur MetaEngine and NVIDIA Omniverse, the process is simplified: two-dimensional motion feature vectors of human skeleton points are mapped to the virtual human's body, easily controlling the virtual human's expressions and actions.
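To make the keypoint-mapping idea concrete, here is a minimal sketch of how 2D skeleton points from a pose-estimation model might be converted into joint rotations that could be retargeted onto a virtual human's rig. The keypoint names, coordinates, and angle conventions are illustrative assumptions, not taken from any specific Inspur or NVIDIA tool.

```python
import math

# Hypothetical 2D keypoints (x, y) for one arm, as a pose-estimation model
# might output; values are illustrative only.
keypoints = {
    "shoulder": (0.50, 0.40),
    "elbow":    (0.62, 0.52),
    "wrist":    (0.60, 0.68),
}

def segment_angle(a, b):
    """Angle (degrees) of the segment a->b relative to the +x axis."""
    return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

def arm_joint_angles(kp):
    """Derive simple joint rotations from 2D keypoints.

    Returns the shoulder rotation (upper-arm direction) and the elbow
    flexion (angle between upper arm and forearm).
    """
    upper = segment_angle(kp["shoulder"], kp["elbow"])
    fore = segment_angle(kp["elbow"], kp["wrist"])
    return {"shoulder_rot": upper, "elbow_flex": fore - upper}

angles = arm_joint_angles(keypoints)
# In a full pipeline, these angles would be applied as rotations on the
# corresponding bones of the virtual human's skeleton each frame.
print(angles)
```

A real system would do this per frame for the full skeleton (and for facial landmarks driving expressions), but the core step is the same: geometric features extracted from 2D points are retargeted onto the character's joints.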
What the Future Holds for Virtual Humans
As industry leaders continue to create and innovate, real humans will become more and more connected to virtual humans. No longer will we be detached from characters shown on a computer screen. Eventually, computers will be capable of having two-way interactions with us on a meaningful and personal level.
According to research published in the journal Frontiers in Virtual Reality, a virtual human can already be as good as, if not better than, a real-life human at helping people practice and develop leadership skills. As technology advances, virtual humans will become key players in the advancement of human society and innovation.