The metaverse has caught fire, seemingly overnight.
In just a few months, Minecraft and Roblox have surged in popularity; GREE, Nvidia, Microsoft, and others have released products and solutions; and South Korea and Japan have announced national-level plans for the metaverse track. Ready Player One portrays a scenario that now seems as if it could become reality tomorrow.
Bloomberg Intelligence predicts the metaverse will become an $800 billion market by 2024, while PricewaterhouseCoopers expects it to reach $1.5 trillion by 2030. The market potential is enormous.
What exactly is the metaverse? Simply put, it can be understood as an always-online virtual world running in parallel with the real world. Apart from needs such as eating and sleeping, which must still be met in the real world, everything else, including work, socializing, and entertainment, can take place in the virtual one.
The mystery, the unknown, the fantasies once possible only in dreams may be realized in the near future, and that is an exciting thought.
However, we have to face reality: a convincingly lifelike virtual world has not yet been built, people's avatars in that world have not been established, terminal devices cannot support the required computation, and the interactive experience is not good enough... With a whole series of problems still waiting to be solved, the door to the metaverse has not yet opened.
The two core technologies of the metaverse: virtual humans and real-time interaction
As noted above, the metaverse, often billed as the ultimate form of the Internet, can break the boundaries of time and space in human social activity. In a constructed virtual space, people from all over the world engage in real social activities: socializing, business, entertainment... Two core technologies are indispensable to this: "virtual humans" and "real-time interaction".
Virtual humans give individuals a recognizable image and identity; real-time interaction makes immersive, genuinely social activity possible, ultimately blurring the boundary between the virtual and the real.
In the metaverse, virtual humans are virtual characters with a digital appearance. Unlike robots, which have physical bodies, virtual humans exist only on display devices. In general, we divide virtual humans into two categories: human-driven virtual humans and AI-driven virtual humans powered by artificial intelligence.
The metaverse is another boundless arena for real human social activity, so the technical realization of human-driven virtual humans is the first step into the "new world". Of course, if self-aware NPC characters like the one in Free Guy could also be realized in the metaverse, that would be even more tantalizing.
A metaverse virtual human, then, should have the following three characteristics:
First, an external image: the owner's own likeness, a cartoon, or some other vivid appearance, with specific looks, gender, and personality traits;
Second, expressive ability: human-like behavior, with the capacity to express itself through speech, facial expressions, and body movement;
Third, perception and interaction: human-like thought, with the ability to recognize its surroundings and communicate with people.
Appearance, expression, perception: almost every real human has all three, yet reproducing them takes a great deal of technology and equipment.
First of all, everyone loves beauty. In real life I may fall some way short of "stunning good looks", but perhaps in the new world I will get the chance to "live life over again" and stride toward its peak. Creating a beautiful avatar, however, is no easy task: AI and computer graphics carry high technical thresholds, and 3D modeling, heavy computational loads, and the rendering demands placed on device performance are challenges for the whole industry.
Secondly, if that stunning avatar is merely a static image, with no facial or body expression, the metaverse instantly becomes a world of the "walking dead". Bringing it to life involves voice interaction (TTS, ASR, NLP, etc.), animation synthesis (driving and rendering), and other AI technologies; the technical bar is easy to imagine.
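To make the animation-synthesis step above concrete, here is a toy lip-sync sketch: timed phonemes (the kind of timing data a TTS engine can emit) are mapped to mouth-shape "visemes" that a renderer could key an avatar's mouth to. The phoneme and viseme tables are simplified illustrative assumptions, not any real engine's output and not NetEase Yunxin's pipeline.

```python
# Toy lip-sync: map timed phonemes to mouth-shape "visemes" that
# drive an avatar's mouth animation. The phoneme/viseme tables are
# simplified illustrative assumptions, not a real engine's output.

PHONEME_TO_VISEME = {
    "AA": "open",     # as in "father"
    "IY": "smile",    # as in "see"
    "UW": "round",    # as in "boot"
    "M":  "closed",   # lips pressed together
    "B":  "closed",
    "F":  "teeth",    # lower lip to upper teeth
    "SIL": "rest",    # silence
}

def visemes_for(phonemes):
    """Convert a timed phoneme list [(phoneme, start_ms), ...] into
    a timed viseme track the renderer can interpolate between."""
    track = []
    for ph, t in phonemes:
        vis = PHONEME_TO_VISEME.get(ph, "rest")
        # Collapse consecutive identical visemes to avoid redundant keys.
        if not track or track[-1][0] != vis:
            track.append((vis, t))
    return track

track = visemes_for([("SIL", 0), ("M", 120), ("AA", 180), ("M", 300), ("SIL", 420)])
print(track)
# [('rest', 0), ('closed', 120), ('open', 180), ('closed', 300), ('rest', 420)]
```

A real system would additionally blend between visemes over time rather than switching instantly, which is exactly the "driving and rendering" work the article refers to.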
Finally, the metaverse is a mapping of the real human world, so "interaction", the basic element of human social activity, is the most critical link in its construction. Reproducing the feel of real, in-person interaction without any sense of incongruity requires low-latency, high-quality real-time communication as a guarantee, yet today's complex, ever-changing public networks and diverse terminal devices pose a major challenge to transmission.
To "fly freely" in the "new world", the avatar must look good, its expressions must read clearly, and communication must flow smoothly... Many problems remain to be solved. At present, however, most Internet users are still on mobile phones, across a bewildering variety of models. A solution that needs no external hardware, solves the performance problems created by the heavy computation that virtual-human interaction demands, and still guarantees real-time interaction across complex network environments is therefore the most realistic choice, and the best entrance to the metaverse.
Here comes NetEase Yunxin!
The industry's first avatar real-time interaction fusion SDK: taking the first step into the metaverse
To address these problems, NetEase Yunxin and the NetEase Fuxi Lab have jointly launched the industry's first "avatar + RTC" fusion SDK, and built the NetEase Yunxin real-time interactive avatar solution on top of it.
The solution not only reproduces the virtual human's image vividly, but also combines it with the real-time transmission capability of NetEase Yunxin WE-CAN (Communications Acceleration Network) to achieve real-time virtual-human interaction, helping enterprise customers build real-time interactive avatar scenarios with zero barrier to entry and take their first step into the metaverse.
Specifically, NetEase Yunxin's real-time interactive avatar solution has six advantages:
1. Image: faithful reproduction, extremely lifelike.
The solution detects a user's facial expressions through the camera or an uploaded video and drives a 3D virtual character to make the same expressions; facial expressions, head pose, eye movement, even a protruding tongue can all be reproduced and tracked.
To reduce demands on device performance, some traditional approaches sacrifice user experience. Animation matching is one example: when the user speaks or moves, frames are looked up in a preset library of expressions and actions, and the matched frames are played back in sequence to make the virtual human "express" itself. But human behavior is diverse and unpredictable; no library can anticipate every action, hence the familiar "facial paralysis" and "zombie" avatars. NetEase Yunxin's solution instead adopts "real-time capture on the device, real-time driving in the cloud", which is far more natural and lively than animation matching.
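The difference between animation matching and real-time driving can be illustrated in miniature: instead of snapping to a canned frame, real-time driving computes a continuous blendshape weight from tracked face landmarks on every frame. The landmark layout and calibration ratios below are invented for illustration; this is not NetEase Yunxin's actual algorithm.

```python
# Real-time driving, in miniature: derive a continuous "jaw open"
# blendshape weight from tracked 2D lip landmarks each frame, instead
# of matching the frame against a preset animation library.
# Landmark layout and calibration constants are illustrative assumptions.

def jaw_open_weight(upper_lip_y, lower_lip_y, face_height,
                    closed_ratio=0.02, open_ratio=0.12):
    """Map the normalized lip gap to a 0..1 blendshape weight.

    closed_ratio / open_ratio: lip gap (as a fraction of face height)
    at which the mouth counts as fully closed / fully open.
    """
    gap = (lower_lip_y - upper_lip_y) / face_height
    w = (gap - closed_ratio) / (open_ratio - closed_ratio)
    return max(0.0, min(1.0, w))  # clamp into the valid weight range

# A closed mouth yields 0, a wide-open mouth saturates at 1, and
# everything in between animates smoothly -- no "zombie" snapping.
print(jaw_open_weight(100, 102, 200))  # small gap -> 0.0 (closed)
print(jaw_open_weight(100, 130, 200))  # large gap -> 1.0 (open)
```

A production system tracks dozens of such weights (jaw, lips, brows, eyes, head pose) and streams them to the renderer every frame, which is what makes the driven avatar feel alive.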
2. Hardware: an ordinary phone is enough; no wearables required.
The solution supports expression transfer with an ordinary monocular camera, with no additional motion-capture equipment needed. Once the NetEase Yunxin SDK is installed on an ordinary mobile device or PC, a virtual human can be generated and driven, and can interact in real time with virtual humans driven by real people elsewhere.
3. Performance: device-cloud collaboration; even a budget phone can join in.
After audio and video are captured on the terminal device (mobile or PC), NetEase Yunxin's SDK outputs motion-model data, and the captured audio, video, and motion data are transmitted to the cloud for avatar reconstruction and synthesis.
By analyzing, modeling, and rendering the captured motion data in the cloud, the solution greatly relieves the algorithmic load on both endpoints, lowers the entry threshold, and lets more users experience the fun of virtual interaction and an early taste of the metaverse.
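The device-cloud split described above works because the device ships compact motion data upstream rather than heavy rendered frames. A hypothetical packet format for one frame of blendshape weights gives a feel for the savings; the wire format is invented for illustration and is not Yunxin's protocol.

```python
import struct

# Hypothetical wire format for one frame of avatar motion data:
# a millisecond timestamp plus N blendshape weights quantized to one
# byte each. Invented for illustration -- not Yunxin's protocol.

def pack_frame(timestamp_ms, weights):
    """Serialize (timestamp, [0..1 weights]) into a compact packet."""
    quantized = bytes(round(w * 255) for w in weights)
    # '<IH' = little-endian uint32 timestamp + uint16 weight count.
    return struct.pack("<IH", timestamp_ms, len(quantized)) + quantized

def unpack_frame(packet):
    ts, n = struct.unpack_from("<IH", packet)
    weights = [b / 255 for b in packet[6:6 + n]]
    return ts, weights

pkt = pack_frame(16, [0.0, 0.5, 1.0])
print(len(pkt))  # 9 bytes -- versus kilobytes for an encoded video frame
ts, ws = unpack_frame(pkt)
print(ts, [round(w, 2) for w in ws])  # 16 [0.0, 0.5, 1.0]
```

Even at 30 frames per second, dozens of weights per frame amount to a few kilobytes per second, which is why a budget phone can drive a cloud-rendered avatar.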
4. Interaction: low latency, no stutter, "face-to-face" communication in the metaverse.
As an expert in converged communication in the cloud, NetEase Yunxin has long held an industry-leading position in RTC. For the metaverse's essential "real-time interaction" scenarios, the NetEase Yunxin WE-CAN global intelligent routing network safeguards "zero-distance" communication.
Faced with complex, diverse network environments and terminals of uneven quality, WE-CAN stably provides real-time interaction with millisecond-level latency worldwide: intelligent routing selects the best path, reaches hundreds of countries and regions within 100 milliseconds, and delivers lag-free voice and video for 99.9% of calls. Built on this highly reliable, low-latency network, the real-time interactive avatar solution lets virtual humans interact as naturally as a face-to-face conversation in the real world.
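WE-CAN's internals are not public, but the "select the best route" idea can be sketched generically as a shortest-path search over measured relay-link latencies (classic Dijkstra). The node names and millisecond figures below are invented for illustration.

```python
import heapq

# Generic sketch of latency-aware relay routing: choose the relay path
# with the lowest total measured latency (Dijkstra's algorithm).
# Topology and millisecond figures are invented for illustration.

LINKS = {  # measured one-way latencies in ms between nodes
    "client-SH": {"relay-SH": 8},
    "relay-SH":  {"relay-SG": 35, "relay-TY": 28},
    "relay-SG":  {"relay-FR": 90, "client-FR": 95},
    "relay-TY":  {"relay-FR": 110},
    "relay-FR":  {"client-FR": 7},
    "client-FR": {},
}

def best_route(src, dst):
    """Return (total_ms, path) minimizing summed link latency."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ms in LINKS.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + ms, nxt, path + [nxt]))
    return None

print(best_route("client-SH", "client-FR"))
# (138, ['client-SH', 'relay-SH', 'relay-SG', 'client-FR'])
```

A production routing network additionally re-probes link latencies continuously and fails over mid-call, but the core path-selection step is this same search.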
5. Convenience: one SDK delivers both core technologies of the metaverse.
Facing the metaverse's twin problems of avatars and real-time interaction, NetEase Yunxin's fusion solution deeply combines and encapsulates avatar and RTC capabilities at the technical level. Customers no longer need to integrate with multiple vendors; a single SDK is enough to build an imaginative, high-quality real-time interactive avatar scenario.
The "avatar + RTC" fusion SDK takes on lightweight work such as audio/video capture and pre-processing, data analysis, encoding and transmission, and decoding and rendering, while offloading complex, compute-heavy work to the cloud. The result is an efficient device-cloud collaborative workflow that provides integrated real-time avatar interaction. To cross the deep technical barriers, this one SDK is all it takes.
6. Scenarios: applicable to finance, e-commerce, and other industries; cutting costs and boosting efficiency, faster.
It is worth mentioning that the real-time interactive avatar solution is not limited to metaverse, entertainment, or social applications; it can be used widely across industries. While helping enterprises cut costs and raise efficiency, it also improves the customer experience, in turn lifting user retention and generating revenue.
Finance: virtual digital-human customer service, providing warm service 24/7.
E-commerce: virtual-human livestream selling plus customer service, lifting both revenue and customer experience.
Summary
Although entering the metaverse still means solving one problem after another, it is undeniable that realizing metaverse scenarios, and maturing the metaverse industry, is only a matter of time. Ready Player One may remain imagination for a few more years, but the convergence of the virtual and the real is already a major trend in the Internet's development.
All along, NetEase Yunxin has strived for excellence in its technology, standing at the industry's forefront to read the winds, hoping to help its customers miss no opportunity. Now, we welcome you to join NetEase Yunxin in taking the first step into the metaverse.