Infrastructure

The Maryland Blended Reality Center supports a dynamic visualization and immersion infrastructure that includes:

VR AND AR HEADSETS

Oculus Virtual Reality Headsets: The lab’s Oculus headsets are driven by a state-of-the-art GPU workstation, and their output is also shown on a Samsung DM75D LED panel.
MetaAR: The lab also employs a stereo MetaAR headset for experimenting with current augmented reality technologies; it offers 960 x 540 resolution per eye and a 23-degree field of view.
Vuzix: We’re also using an audio-enabled Vuzix AR headset, with 852 x 480 resolution per eye and a 35-degree field of view. Like the MetaAR, the Vuzix supports gesture recognition and basic tracking.
In-House Designed Augmented Reality Headset: Students in the lab have built an augmented reality system from an Oculus headset and machine vision cameras. The prototype, OculAR, combines two Point Grey Flea3 3.2 MP Color USB 3.0 cameras, a custom 3D-printed bracket, and an Oculus headset.
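
The OculAR pipeline is not documented here in detail, but its video see-through approach can be roughly illustrated by grabbing frames from two cameras and composing them into a side-by-side stereo image of the kind a headset compositor consumes. The sketch below is a minimal approximation; the camera device indices and preview loop are assumptions, not the lab’s actual software.

```python
# Minimal video see-through sketch: capture from two cameras and compose a
# side-by-side stereo frame. Device indices (0, 1) are illustrative assumptions.
import cv2

left_cam = cv2.VideoCapture(0)   # left-eye camera (assumed device index)
right_cam = cv2.VideoCapture(1)  # right-eye camera (assumed device index)

while True:
    ok_l, left = left_cam.read()
    ok_r, right = right_cam.read()
    if not (ok_l and ok_r):
        break
    # Side-by-side layout: left-eye image on the left half, right-eye on the right.
    stereo = cv2.hconcat([left, right])
    cv2.imshow("Side-by-side stereo preview", stereo)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

left_cam.release()
right_cam.release()
cv2.destroyAllWindows()
```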

LIGHTFIELD ARRAYS

Multi-Camera Arrays for Stereoscopic and Virtual Reality Video Capture: Students in the lab have fabricated several portable multi-camera arrays for capturing stereoscopic and virtual reality videos. Two of our prototype arrays are built from 12 GoPro Hero3+ cameras. One array has been used in a collaboration with surgeons from the University of Maryland Medical Center to capture video at the R Adams Cowley Shock Trauma Center. We are also experimenting with Raspberry Pi camera arrays.
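
The arrays themselves produce independent video streams; one common downstream step is stitching temporally aligned frames into a panorama. The sketch below uses OpenCV’s high-level stitcher as an illustration only; the file names are placeholders, and a real pipeline would first synchronize the cameras and extract matching frames.

```python
# Minimal panorama-stitching sketch for frames captured by a multi-camera array.
# File names are placeholders (assumed naming); frames must be temporally aligned.
import cv2

frame_files = [f"camera_{i:02d}.jpg" for i in range(12)]  # one frame per camera
frames = [cv2.imread(path) for path in frame_files]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status {status}")
```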

INTERACTION SENSORS

VR and AR require new paradigms for effective interaction with projection-based displays and spatially-augmented reality objects. We are exploring a number of modalities for sensing user actions. These include:
Infrared lighting: We are exploring how infrared lighting and cameras can be used to facilitate touch interactions with very large projection-based displays.
Depth Cameras: Several researchers in human-computer interaction and graphics have reported that kinesthetic memory and physical navigation are very important when interacting with large-area displays. When a user walks around, turns their head, and changes visual field and focus, mechanoreceptors in the skin, as well as receptors in muscles and joints, are activated. We are exploring the use of Kinect and Leap Motion sensors for interacting with large-area immersive displays (a simplified sketch of this position-to-view mapping follows this list).
Eye-tracking: We use eye-tracking to understand attention-driven user interfaces for VR and AR, and to quantify overt attention. We are also interested in assessing the impact of saliency-driven rendering to guide visual attention in VR displays.
EEG: Our research assesses the impact of visual rendering on brain activity. We are developing, validating, and using EEG interfaces, such as the 14-channel Emotiv headset, to understand the brain’s response to immersive visual stimuli.
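
As a concrete, simplified illustration of the physical-navigation idea, the sketch below maps a tracked head position, as a depth camera such as the Kinect might report it, to a pan offset and zoom factor for content on a large display. The function name and all constants are illustrative assumptions, not the lab’s actual interaction code.

```python
# Illustrative mapping from a tracked head position to display navigation.
# Coordinates are in meters, origin at the display center, x to the viewer's
# right, z pointing away from the screen.
from dataclasses import dataclass

@dataclass
class ViewParams:
    pan_x: float   # horizontal content offset in pixels
    zoom: float    # magnification factor

DISPLAY_WIDTH_PX = 11520      # width of the merged desktop in pixels (assumption)
WALKABLE_HALF_WIDTH_M = 3.5   # lateral range a user can walk (assumption)
NEAR_M, FAR_M = 1.0, 4.0      # distances mapped to max/min zoom (assumption)

def head_to_view(head_x_m: float, head_z_m: float) -> ViewParams:
    """Walking sideways pans the content; stepping closer zooms in."""
    # Clamp and normalize lateral position to [-1, 1], then scale to pixels.
    nx = max(-1.0, min(1.0, head_x_m / WALKABLE_HALF_WIDTH_M))
    pan_x = nx * DISPLAY_WIDTH_PX / 2
    # Linearly map distance to a zoom factor between 1.0 (far) and 2.0 (near).
    t = (FAR_M - max(NEAR_M, min(FAR_M, head_z_m))) / (FAR_M - NEAR_M)
    zoom = 1.0 + t
    return ViewParams(pan_x=pan_x, zoom=zoom)

if __name__ == "__main__":
    print(head_to_view(1.2, 2.0))  # user standing slightly right, mid-distance
```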

IMMERSIVE DISPLAY

Large-Area, High-Resolution Stereoscopic Display: Our Augmentarium features a 24-foot-wide by 8-foot-high rear-projection screen that curves through 180 degrees. An array of projectors, three high and five wide along the curve, rear-projects onto the screen. The 15 projectors are driven by a Mercury GPU408 4U server with dual E5-2620 v2 2.10 GHz six-core 80W processors, 64GB of DDR3 memory, a 512GB Samsung 850 Pro SSD, four K6000 GPUs, and a Quadro Sync card, running the Windows operating system. We have merged the 15 projectors into a single stereoscopic desktop using NVIDIA’s Mosaic software and 3D Vision Pro kits. The main advantage of this approach is that the display is controlled as a single desktop on a single powerful workstation, on which our researchers can employ the latest graphics and visualization technologies without significant customization.
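
The lab’s rendering software is not described here, but an application targeting this kind of Mosaic-merged, 3D Vision Pro desktop would typically render quad-buffered OpenGL stereo. The sketch below is a minimal, hedged illustration: the eye separation and teapot geometry are placeholders, and it requires a stereo-capable GPU and desktop to run.

```python
# Minimal quad-buffered stereo sketch with PyOpenGL/GLUT. GLUT_STEREO requests
# separate left/right back buffers, which a Quadro-class GPU with an active
# stereo desktop can provide; on other hardware window creation will fail.
from OpenGL.GL import *
from OpenGL.GLU import gluPerspective
from OpenGL.GLUT import *

EYE_SEPARATION = 0.06  # meters, placeholder interocular distance

def reshape(w, h):
    glViewport(0, 0, w, h)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(60.0, w / float(max(h, 1)), 0.1, 100.0)
    glMatrixMode(GL_MODELVIEW)

def draw_scene(eye_offset):
    glLoadIdentity()
    glTranslatef(-eye_offset, 0.0, -2.0)   # shift the scene per eye
    glutWireTeapot(0.5)                    # stand-in geometry

def display():
    for buffer, offset in ((GL_BACK_LEFT, -EYE_SEPARATION / 2),
                           (GL_BACK_RIGHT, +EYE_SEPARATION / 2)):
        glDrawBuffer(buffer)               # select the per-eye back buffer
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        draw_scene(offset)
    glutSwapBuffers()

if __name__ == "__main__":
    glutInit()
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO)
    glutCreateWindow(b"stereo sketch")
    glutFullScreen()                       # span the single merged desktop
    glEnable(GL_DEPTH_TEST)
    glutReshapeFunc(reshape)
    glutDisplayFunc(display)
    glutMainLoop()
```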