Bench Talk for Design Engineers | The Official Blog of Mouser Electronics


Mixed Reality Brings Complex Data to Life

Stephen Evanczuk

(Source: chakisatelier - stock.adobe.com)

Built on cutting-edge hardware and software technologies, augmented reality (AR) and virtual reality (VR) systems transform complex digital information into something we can comprehend visually, aurally, or tactilely more effectively than through conventional representations on a page or screen. In a rapidly growing number of application segments, developers place virtual objects in the user’s real world in AR applications or place the user in a virtual world in VR applications—or in any combination along the real-to-virtual continuum collectively called mixed reality (MR)[1].

MR Leverages Our High-Bandwidth Sensor Pathways

MR applications deliver a highly immersive approach to entertainment, learning, or professional practice by taking advantage of the human sensory systems’ high information bandwidth. According to research estimates[2], human vision achieves a bit rate of over 550 gigabits per second, and hearing reaches about 1.4 megabits per second (Mbps). With nearly 70,000 pressure points in the palm, a single palm alone can deliver tactile responses at an equivalent bit rate of over 190 Mbps with a latency of only 21 milliseconds.

Using these pathways, complex information from MR applications reaches the brain’s cognitive centers more efficiently, delivered in a familiar context that users can more easily relate to their own environment or needs. For example, researchers[3] found that students using AR “books” showed a higher level of reading comprehension and retention than students using conventional learning materials.

Key Technologies Enable MR Immersive Experience

To achieve a seamless impression of virtual objects or complete environments, MR systems depend on high-throughput processing of data streams, typically from diverse sensors. Fortunately, MR developers have a growing base of suitable subsystem solutions because the core technologies underlying their applications are common to some of the industry’s most active application areas, including machine-learning (ML) computer vision, robotics positioning methods, and automotive advanced driver assistance systems (ADAS), among others.

Advances in ML computer vision and rapid ML model development meet MR’s critical requirement: detecting and identifying objects in the user’s real-world surroundings. Similarly, precisely determining the distance of those objects from the user draws on distance-measurement technologies, including cameras, ultrasound, laser, Lidar, and radar, that have been refined for ADAS collision detection, building automation, and more. Finally, technologies like simultaneous localization and mapping (SLAM), used in robotics and mapping systems, let MR systems create a detailed “map” that places those real-world objects in the correct position, perspective, and orientation relative to each other and the user.
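As a rough illustration of how these pieces combine, the Python sketch below lifts a detected object’s pixel location and measured depth into a world-frame anchor point using a camera pose of the kind a SLAM or visual-inertial tracker would supply. The intrinsics, pose values, and function names here are illustrative assumptions, not any particular MR framework’s API.

```python
# Minimal sketch (not a production SLAM stack): anchoring a virtual object in
# the world frame from one detection, assuming a pinhole camera model and a
# pose estimate supplied by some SLAM/VIO tracker. All values are illustrative.
import numpy as np

# Hypothetical camera intrinsics (focal lengths fx, fy; principal point cx, cy).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject(pixel_uv, depth_m, K):
    """Lift a pixel plus a measured depth (e.g., from Lidar or stereo)
    into a 3D point in the camera frame."""
    u, v = pixel_uv
    x = (u - K[0, 2]) * depth_m / K[0, 0]
    y = (v - K[1, 2]) * depth_m / K[1, 1]
    return np.array([x, y, depth_m])

def camera_to_world(p_cam, R_wc, t_wc):
    """Transform a camera-frame point into the world frame using the
    camera pose (rotation R_wc, translation t_wc) reported by the tracker."""
    return R_wc @ p_cam + t_wc

# Example: a detector reports an object at pixel (400, 260), 2.5m away;
# the tracker says the camera sits at the world origin, unrotated.
R_wc, t_wc = np.eye(3), np.zeros(3)
anchor_world = camera_to_world(backproject((400, 260), 2.5, K), R_wc, t_wc)
print("virtual-object anchor (world frame):", anchor_world)
# Each frame, the renderer re-projects anchor_world through the *current*
# pose estimate, so the virtual object stays fixed in the room as the user moves.
```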

All of that technology is just the foundation for what happens next in MR systems. Although VR systems focus on maintaining the virtual world, they need this map to keep users physically safe as they move through their real environment while immersed. For AR applications, the MR system uses its map to render virtual objects at a position, orientation, and relative size that seem natural to the user. Advanced forms of this capability even allow a virtual object to occlude real-world objects passing behind it relative to the observer and, conversely, allow the virtual object to be occluded by real-world objects passing between it and the observer.
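At its core, that occlusion step is a per-pixel depth test. The sketch below assumes the system already has a depth map of the real scene and a rendered virtual layer with its own depth values; the function name, array shapes, and toy numbers are illustrative only.

```python
# Hedged sketch of depth-based occlusion: composite a rendered virtual layer
# over a camera frame only where the virtual surface is closer to the viewer
# than the real scene, using a real-world depth map from the MR system's sensors.
import numpy as np

def composite_with_occlusion(camera_rgb, real_depth, virtual_rgb, virtual_depth):
    """camera_rgb / virtual_rgb: (H, W, 3) images; real_depth / virtual_depth:
    (H, W) distances in meters, with np.inf where the virtual layer is empty."""
    virtual_wins = virtual_depth < real_depth   # per-pixel depth test
    out = camera_rgb.copy()
    out[virtual_wins] = virtual_rgb[virtual_wins]
    return out

# Toy 2x2 scene: the real wall is 3m away everywhere; a virtual cube covers
# the left column at 2m (visible) and the right column at 4m (occluded).
cam    = np.zeros((2, 2, 3), dtype=np.uint8)
virt   = np.full((2, 2, 3), 255, dtype=np.uint8)
real_d = np.full((2, 2), 3.0)
virt_d = np.array([[2.0, 4.0],
                   [2.0, 4.0]])
print(composite_with_occlusion(cam, real_d, virt, virt_d)[..., 0])
# -> left column shows the virtual cube (255); right column keeps the camera
#    pixels (0) because the real wall passes in front of the virtual object.
```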

Different MR Classes, Different Hardware Requirements

MR applications operate across a wide range of capabilities bounded by the sensor complement and processing capacity of the user’s hardware. At a minimum, these applications rely on high-resolution cameras and Lidar-based precision distance measurement like those available in advanced smartphones and mobile devices. Indeed, these devices have proven quite effective for useful AR-based mobile apps, such as those that display virtual labels for buildings or translations of signs when the user simply points the device’s camera at them.

With the inclusion of a head-mounted display (HMD) in consumer VR and professional MR applications, hardware design requirements have become much more involved in terms of packaging and performance. Although the fundamental enabling technologies remain the same across different classes of these more complex systems, processing requirements increase dramatically in moving from consumer VR entertainment headsets to MR headsets built for professional applications in medicine, construction, and industrial operations, among others.

For the hardware foundation of these professional systems, designers typically need to incorporate a specialized high-performance processing pipeline. Although conceptually similar to a vision processing unit (VPU), MR pipelines can differ significantly from VPU pipelines due to the greater volume, velocity, and variety of data from the diverse sensor modalities typically required in MR applications. In its HoloLens MR system, for example, Microsoft addresses the throughput issue with a custom holographic processing unit built specifically to handle this complex processing workload.
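The scale of the problem shows up in simple back-of-the-envelope arithmetic. The sensor resolutions, frame rates, and refresh rate below are illustrative assumptions rather than the specification of any particular headset, but they suggest why a dedicated processing unit is attractive.

```python
# Back-of-the-envelope sketch of why MR pipelines are throughput-bound:
# aggregate raw sensor bandwidth versus the per-frame compute budget at a
# typical HMD refresh rate. All sensor figures are illustrative assumptions.
REFRESH_HZ = 90                      # assumed display refresh rate
SENSORS_MBPS = {                     # assumed raw data rates, megabits per second
    "rgb_cameras_x2": 2 * 1920 * 1080 * 12 * 30 / 1e6,  # 12-bit pixels, 30 fps each
    "depth_sensor":   640 * 480 * 16 * 30 / 1e6,         # 16-bit pixels, 30 fps
    "imu":            0.1,           # tiny bandwidth, but latency-critical
}

total_mbps = sum(SENSORS_MBPS.values())
frame_budget_ms = 1000.0 / REFRESH_HZ
print(f"aggregate sensor input: ~{total_mbps:.0f} Mbps")
print(f"per-frame budget at {REFRESH_HZ} Hz: {frame_budget_ms:.1f} ms "
      f"to ingest, track, map, and render")
```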

Human Limits to MR Acceptance

Besides significant performance requirements, applications for HMD-based MR systems face more fundamental challenges. Developers must deliver a hardware solution capable of the needed sensing and processing while remaining within tight size, weight, and power (SWaP) constraints.

Indeed, SWaP characteristics may limit long-term MR adoption more than hardware or software capabilities because human anatomy is not built to carry a heavy HMD for extended periods. Researchers working in aviation human factors and, more recently, in MR HMD systems have documented user fatigue[4] and increased stress on the musculoskeletal system of the head and neck[5] caused by HMD weight and balance issues.

Beyond musculoskeletal limitations, the nature of the human visual system can make working with HMD-based MR applications difficult for some users because of the vergence-accommodation conflict (VAC) experienced when viewing three-dimensional (3D) images in HMDs. In the real world, our eyes converge on an object, and then the lens of each eye accommodates to bring it into sharp focus. In contrast, HMDs rely on stereoscopic imagery to create the 3D effect, so convergence on an object in the foreground or background of the image and accommodation to that object do not occur at the same distance. The result can be increased fatigue, headaches, and even nausea, the so-called virtual reality sickness[6]. In addition, because of concerns about long-term effects on developing visuomotor systems, health experts warn against extended use of HMDs by children.
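The geometry behind VAC is easy to quantify. The sketch below assumes a 63mm interpupillary distance and a hypothetical fixed 1.5m display focal plane (neither value comes from any specific headset) to show how the eyes’ convergence angle tracks the rendered depth of a stereoscopic object while focus cannot follow.

```python
# Illustrative numbers only: the vergence-accommodation conflict in an HMD,
# assuming a 63mm interpupillary distance (IPD) and a fixed 1.5m virtual
# focal plane. Both values are hypothetical.
import math

IPD_M = 0.063          # assumed interpupillary distance
FOCAL_PLANE_M = 1.5    # assumed fixed optical focus distance of the display

def vergence_deg(distance_m, ipd_m=IPD_M):
    """Angle at which the two eyes' lines of sight converge for an object
    at the given distance (simple isosceles-triangle geometry)."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

for d in (0.5, 1.5, 4.0):
    print(f"stereoscopic object at {d} m: eyes converge at {vergence_deg(d):.2f} deg,"
          f" but lenses stay focused at {FOCAL_PLANE_M} m")
# Vergence follows the rendered depth while accommodation is pinned to the
# display's focal plane; the mismatch at 0.5m and 4.0m is the conflict.
```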

Conclusion

Based on multiple enabling technologies, advanced MR systems could dramatically elevate information delivery, presenting users with complex data more intuitively than is possible with conventional methods. For MR system developers, the unique demands of AR/VR/MR applications present significant challenges, not only in achieving the required processing performance from high-throughput multimodal sensor data but also in delivering HMD-based systems in packages designed to minimize physical stress on users.

 

Sources

1. van Krevelen, D., & Poelman, R. (2010). A Survey of Augmented Reality Technologies, Applications and Limitations. International Journal of Virtual Reality, 9(2), 1–20. https://doi.org/10.20870/IJVR.2010.9.2.2767
2. Kim, T., & Park, S. (2020). Equivalent Data Information of Sensory and Motor Signals in the Human Body. IEEE Access, 8, 69661–69670. https://doi.org/10.1109/ACCESS.2020.2986511
3. Bursali, H., & Yilmaz, R. M. (2019). Effect of augmented reality applications on secondary school students' reading comprehension and learning permanency. Computers in Human Behavior, 95, 126–135. https://doi.org/10.1016/j.chb.2019.01.035
4. Ito, K., Tada, M., Ujike, H., & Hyodo, K. (2021). Effects of the Weight and Balance of Head-Mounted Displays on Physical Load. Applied Sciences, 11(15), 6802. https://doi.org/10.3390/app11156802
5. Knight, J. F., & Baber, C. (2007). Effect of head-mounted displays on posture. Human Factors, 49(5), 797–807. https://doi.org/10.1518/001872007X230172
6. Chang, E., Kim, H. T., & Yoo, B. (2020). Virtual Reality Sickness: A Review of Causes and Measurements. International Journal of Human–Computer Interaction, 36, 1658–1682. https://doi.org/10.1080/10447318.2020.1778351




Stephen Evanczuk has more than 20 years of experience writing for and about the electronics industry on a wide range of topics, including hardware, software, systems, and applications such as the IoT. He received his Ph.D. in neuroscience for work on neuronal networks and worked in the aerospace industry on massively distributed secure systems and algorithm acceleration methods. Currently, when he's not writing articles on technology and engineering, he's working on applications of deep learning to recognition and recommendation systems.

