Visualization in Virtual Reality

Visualizing Life and Medical Science research images in VR

arivis InViewR sets new standards for visualizing Life and Medical Science research images. It breaks through the limitations of viewing 3D/4D images on a 2D desktop screen, such as constantly having to turn the image with a mouse, by letting you enter the data and position your viewpoint within the data itself. With natural movements of the head and body, a user can move freely to inspect and interact with an image from any angle and position without limitation. Users instantly comprehend and internalize information about important relationships between structures within an image. These new insights, many impossible to perceive on a desktop system, can be used to evaluate existing hypotheses, create new ones, and generate appropriate data analysis strategies. » download brochure

Notable InViewR visualization capabilities:

Immerse yourself in your data & interact with it in your peripersonal space
» more

Direct volume render the original image data with no conversion
» more

Reveal structures of interest with a choice of image clipping tools
» more

Overlay surface renderings on volume renderings to proofread segmentation or compare a model to the real object
» more

Highlight structures of interest and suppress noise with flexible control over color mapping and opacity
» more

Save points of interest (POIs) to share your insights with peers and collaborators
» more

Export snapshots, movies and 360-degree data for publication and presentation
» more

Immersive VR without compromises

Exploring data with InViewR is as simple as looking at your surroundings in the real world: it restores the context of tissues and interconnected structures in the virtual world. By positioning data in your peripersonal space, the space within arm's length in all directions, InViewR provides enhanced depth perception, better recognition of relative sizes, and better transfer of information about objects within the visual field to memory than alternative viewing methods.

  • Experience smooth and responsive viewing thanks to high frame rates and low-latency rendering
  • Intuitively navigate to any position using hand gestures or handheld controllers
  • Pan & rotate the image, walk around in the image, or turn your head to look around
  • Change your size to observe structures with their correct positional, relational & aspect ratios

Direct Volume Rendering of images

Surface renderings, typically used in VR, require users to segment data by making difficult threshold choices, take time to compute, and often fail to produce results for complex microscopy images. Specialists argue that surface renderings, which display only the exterior shell of objects of interest, are insufficient for seeing the inner structures of cells or the intensity changes that occur along the timeline of typical microscopy images. InViewR uses direct volume rendering to allocate every single data point of the original 3D image to a voxel within the rendered object. The user can thus be assured that the final rendering represents ALL of the real data, at the full resolution collected with their valuable hardware.
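To make the distinction concrete, here is a minimal sketch of the core idea behind direct volume rendering, front-to-back compositing of every voxel, written in Python with NumPy. It illustrates the general technique only, not InViewR's actual renderer; the volume, transfer functions, and viewing axis are made up for the example.

    # Minimal sketch of direct volume rendering by front-to-back compositing.
    # Illustrative only, not InViewR's renderer; volume and transfer
    # functions are hypothetical.
    import numpy as np

    def composite_along_z(volume, color_map, opacity_map):
        """Render a (Z, Y, X) intensity volume to a 2D RGB image.

        Every voxel of the original data contributes: its intensity is
        mapped to a color and an opacity, then accumulated front to back.
        """
        acc_color = np.zeros(volume.shape[1:] + (3,))   # running RGB
        acc_alpha = np.zeros(volume.shape[1:])          # running opacity
        for z_slice in volume:                          # front to back
            color = color_map(z_slice)                  # (Y, X, 3)
            alpha = opacity_map(z_slice)                # (Y, X)
            weight = (1.0 - acc_alpha) * alpha
            acc_color += weight[..., None] * color
            acc_alpha += weight
        return acc_color

    # Example: a random 64^3 volume with a grayscale transfer function.
    vol = np.random.rand(64, 64, 64)
    image = composite_along_z(
        vol,
        color_map=lambda s: np.repeat(s[..., None], 3, axis=-1),
        opacity_map=lambda s: 0.05 * s,                 # faint, cumulative
    )
    print(image.shape)  # (64, 64, 3)

Because every slice of the original volume enters the accumulation, no segmentation or threshold decision is needed before anything can be seen.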


Color Mapping and Opacity

InViewR provides users with complete control over how their data is rendered in VR. A user can choose between a maximum intensity projection (MIP) and direct volume rendering with user-definable parameters. Completely custom color and opacity mappings can be applied to highlight structures of interest while suppressing noise, and can be saved for later use. Color channels can be independently switched on / off and mapped to show / hide morphology.
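The sketch below illustrates what such a user-definable mapping could look like: a piecewise-linear opacity curve that zeroes out low-intensity noise, combined with a per-channel color and visibility flag. The class name, fields, and control points are assumptions for illustration, not InViewR's actual parameters.

    # Hypothetical sketch of a saveable transfer function: a piecewise-linear
    # opacity curve that suppresses low-intensity noise, plus a per-channel
    # color and visibility setting. Names and values are illustrative only.
    import numpy as np

    class TransferFunction:
        def __init__(self, points, color, visible=True):
            self.points = points      # [(intensity, opacity), ...] control points
            self.color = np.asarray(color, dtype=float)  # RGB in [0, 1]
            self.visible = visible

        def opacity(self, intensity):
            xs, ys = zip(*self.points)
            return np.interp(intensity, xs, ys)

        def colorize(self, intensity):
            if not self.visible:                 # channel switched off
                return np.zeros(intensity.shape + (4,))
            alpha = self.opacity(intensity)
            rgb = alpha[..., None] * self.color  # premultiplied color
            return np.concatenate([rgb, alpha[..., None]], axis=-1)

    # Suppress everything below 0.2, ramp up to full opacity at 0.6.
    green_channel = TransferFunction(
        points=[(0.0, 0.0), (0.2, 0.0), (0.6, 1.0), (1.0, 1.0)],
        color=(0.1, 0.9, 0.2),
    )
    print(green_channel.colorize(np.array([0.1, 0.5, 0.9])))

Saving such a mapping amounts to storing the control points and channel settings, which is why a tuned rendering can be reused across sessions.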

Surface Rendering and Volume Rendering combined

Surface renderings may be imported into InViewR and overlaid on volume-rendered data. The surface renderings can be translated, rotated, and scaled so that they match the volume data in size and position. This allows users to take computer-aided design (CAD) models or simulation data and compare them to the real image. Surface renderings are often used to visually identify regions in an image that have been segmented, either manually or algorithmically, on a desktop computer. InViewR allows free exchange of these segmented surfaces between VR and desktop programs like Vision4D. Segment proofreading, identifying locations where over- or under-segmentation has occurred, is effortless in VR since it is easy to see the relationship between the original and segmented data.
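Matching a surface to volume data amounts to applying an affine transform to the mesh vertices. The following sketch shows one way to compose a scale, rotation, and translation and apply it to an (N, 3) vertex array; the mesh and parameter values are hypothetical.

    # Minimal sketch of aligning an imported surface mesh with volume data
    # by composing scale, rotation, and translation into one affine
    # transform. The mesh and parameter values are made up.
    import numpy as np

    def affine(scale, rotation_deg_z, translation):
        """Build a 4x4 matrix: scale, then rotate about Z, then translate."""
        s = np.diag([scale, scale, scale, 1.0])
        a = np.radians(rotation_deg_z)
        r = np.array([[np.cos(a), -np.sin(a), 0, 0],
                      [np.sin(a),  np.cos(a), 0, 0],
                      [0,          0,         1, 0],
                      [0,          0,         0, 1]])
        t = np.eye(4)
        t[:3, 3] = translation
        return t @ r @ s

    def transform_vertices(vertices, matrix):
        """Apply a 4x4 affine to an (N, 3) vertex array."""
        homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
        return (homogeneous @ matrix.T)[:, :3]

    mesh = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    aligned = transform_vertices(mesh, affine(2.0, 90.0, [10.0, 5.0, 0.0]))
    print(aligned)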



Image Clipping Tools

Dense datasets are often a challenge to render in 3D and in VR because some parts of the image obscure other, equally important parts. InViewR provides a choice of a clipping plane, a clipping sphere, or a triple orthogonal slicer that can be interactively placed and positioned within an image. These tools allow for the selective visualization of only a portion of the image, revealing structures of interest. The clipping tools can be selectively applied to the volume data, segments, measurements, markers, and / or overlays in any combination to reveal otherwise hidden structures.
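Conceptually, each clipping tool is a geometric test that decides which voxels remain visible. A minimal sketch of the two simplest cases, a clipping plane (half-space) and a clipping sphere, might look like this; the shapes and parameters are illustrative only.

    # Hypothetical sketch of two clipping tools as voxel masks: a half-space
    # (clipping plane) and a sphere. Applying the mask hides one portion of
    # the volume so that interior structures become visible.
    import numpy as np

    def plane_mask(shape, point, normal):
        """Keep voxels on the positive side of a plane through `point`."""
        z, y, x = np.indices(shape)
        coords = np.stack([z, y, x], axis=-1).astype(float)
        return (coords - point) @ np.asarray(normal, dtype=float) >= 0

    def sphere_mask(shape, center, radius):
        """Keep voxels outside a clipping sphere around `center`."""
        z, y, x = np.indices(shape)
        coords = np.stack([z, y, x], axis=-1).astype(float)
        return np.linalg.norm(coords - center, axis=-1) >= radius

    vol = np.random.rand(32, 32, 32)
    clipped = np.where(plane_mask(vol.shape, point=(16, 0, 0), normal=(1, 0, 0)),
                       vol, 0.0)  # zeroed voxels render as fully transparent
    print(clipped[:16].max(), clipped[16:].max())  # front half clipped away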


Points of Interest (POI)

While in VR you will likely observe structures of interest from a particular vantage point within the image. With the press of a button on the controller, InViewR records and stores your viewing position, complete with a thumbnail, so that you can revisit that precise location and viewing angle at a later time. The recorded POIs are accessible in VR space so that a peer or collaborator can fly to the correct position and observe what you have seen.
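As an illustration, a stored POI could be as simple as a labeled camera pose plus a thumbnail reference. The fields below are assumptions, since InViewR's actual POI format is not described here.

    # Illustrative sketch of what a stored point of interest could contain;
    # InViewR's actual POI format is not public, so all fields are assumed.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class PointOfInterest:
        label: str
        position: tuple       # viewer position in dataset coordinates
        orientation: tuple    # viewing direction as a quaternion (w, x, y, z)
        scale: float          # user's size relative to the data
        thumbnail: str = ""   # path to the stored preview image

    poi = PointOfInterest(
        label="branch point, neuron 12",
        position=(120.5, 64.0, 33.2),
        orientation=(1.0, 0.0, 0.0, 0.0),
        scale=0.25,
    )
    # Serialized POIs can be shared so a collaborator can fly to the same view.
    print(json.dumps(asdict(poi)))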

Export of Data

InViewR provides several possibilities for exporting data for publication and presentation. A high-resolution snapshot of any viewpoint in the image can be generated, where the user selects the quality and resolution of the output 2D image. A user can also record their movement through the image. The recording can be saved, reloaded, and utilized by others, who are then taken on a pre-determined flight path through the dataset, just like riding on an airplane, with the ability to freely look around while flying down the path. The same movement recording can be utilized to create a standard movie, where the point of view is fixed along the flight path, or a 360-degree movie, where users can look around while on the flight path. A 360-degree movie can be played on a standard desktop computer using a mouse to look around in the image, or on a cell phone or tablet by simply turning the device, since the internal gyroscope is used. Stereo 360-degree movies can also be created for viewing on devices like Google Cardboard or Gear VR. » download brochure
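Conceptually, replaying a recorded movement means interpolating stored keyframes into one camera pose per movie frame. The sketch below shows a linear version of that idea; the keyframe timing and positions are invented for illustration.

    # Minimal sketch of replaying a recorded movement as a flight path:
    # stored keyframes are interpolated to yield one camera position per
    # movie frame. Keyframe values are made up for illustration.
    import numpy as np

    def sample_flight_path(keyframes, n_frames):
        """Linearly interpolate (time, position) keyframes into n_frames poses."""
        times = np.array([t for t, _ in keyframes])
        positions = np.array([p for _, p in keyframes])
        frame_times = np.linspace(times[0], times[-1], n_frames)
        return np.stack([np.interp(frame_times, times, positions[:, axis])
                         for axis in range(positions.shape[1])], axis=-1)

    # A 3-second recording with three keyframes, rendered at 30 fps.
    path = sample_flight_path(
        keyframes=[(0.0, (0, 0, 0)), (1.5, (50, 20, 10)), (3.0, (80, 80, 40))],
        n_frames=90,
    )
    print(path.shape)  # (90, 3): one camera position per frame

For a fixed-viewpoint movie the camera orientation would follow the path as well, whereas a 360-degree export renders a full panorama at each of these positions and leaves the viewing direction to the playback device.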



