arivis VisionVR is a fully integrated software solution that displays real image data in Virtual Reality (VR) using patent-pending direct volume rendering techniques, with no need for complicated manual data conversion or cumbersome creation of surface models. With arivis VisionVR you can use your hands directly to move, rotate, scale, and shape your digital image data. Freed from being tethered to your keyboard and mouse, and with depth perception equivalent to the real world, your hands can simply reach into your data to precisely and intuitively mark, measure, classify, edit, and segment. arivis VisionVR enables efficient, accurate, and interactive proofreading, editing, tracking, and de novo segmentation of multi-dimensional images from a multitude of supported imaging instruments and systems.
Initially developed over more than two and a half years, arivis VisionVR is constantly being refined to provide all users with the most natural and comfortable experience. When viewing images in VR it is important to sustain frame rates of at least 75 frames per second per eye and to react to head movements in under 20 ms, in order to fully immerse the user in the data and avoid the "motion sickness" that often occurs with other commercial solutions. arivis VisionVR utilizes patent-pending volume rendering techniques to achieve this performance even with large datasets. Additionally, arivis VisionVR positions you in a virtual theater that keeps you feeling grounded and provides spatial context to walk around in. If you get lost in the virtual environment, a simple push of a button resets your view. Advanced users can even control their environment, changing its visibility, transparency, and background color.
Full support of the OpenXR™ standard makes arivis VisionVR the most future-proof VR solution for scientific imaging.
arivis VisionVR offers you a choice of control options. Even without advanced VR controllers you can use Leap Motion™ hand tracking to control movement with your hands and finger gestures. If you want to take advantage of advanced functions, a variety of hand controllers with extremely precise tracking and option buttons is supported. Regardless of the control choice, learning to move and interact with data is intuitive, with a short learning curve. arivis VisionVR also lets you enable or disable tools and gestures to match the skill level or use case of the end user.
VR menus, including context-sensitive help, are designed to allow full control of all operations and tools from within the VR space, providing a seamless user experience. Users can interact with menus or VR objects (segments, measurements, and markers) by pointing a laser at them or by reaching out to touch them in VR space. Menus share a consistent layout and placement for easy use.
All functions for interacting with image data are located in one unobtrusive menu that sticks to one hand and is always accessible. Icons on the controller in VR clearly indicate which tool is in use and which mode it is currently in. In tool-specific menus, users can adjust tool properties and look up additional information about their data seamlessly while working in VR space. The menu also lets users globally toggle the visibility of clipping planes, measurements, markers, overlays, and segments, and skip along the time dimension of their data.
The main menu provides all the necessary controls to work in VR space once a dataset has been loaded. The arivis VisionVR main menu provides full control over the virtual room, rendering settings, tools, and the properties of all measurements, markers, overlays, and segments. It is also possible to change menu themes here or switch between light and dark modes.
The Tracking Module enables you to import, visualize, and edit 3D tracks. Imported tracks can originate from any automatic analysis operation you create in arivis Vision4D and are visualized just as in our desktop software. With arivis VisionVR, you can now interact with those tracks using your own hands. In Virtual Reality, you can cut, merge, or prolong tracks if you are not satisfied with the result from the automatic tracking algorithm. This is especially convenient for images where two objects lie very close together, making it hard for tracking algorithms to keep them apart.
Besides editing existing tracks, de novo tracking is also possible. Simply point your hand at the 3D object you want to follow and press a button. The image will automatically jump to the next frame, where you can point at the object again. Of course, these tracks can also be handed over to arivis Vision4D, enabling you to analyze your results statistically and create meaningful data.
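As a rough, hypothetical illustration of what such per-track statistics can look like, the Python sketch below takes a track recorded as one point per frame and derives step displacements, total path length, and mean speed. The coordinates, frame interval, and units are invented for this example and do not represent the arivis Vision4D API or file format.

```python
# Hypothetical example: a de novo track recorded as one (t, x, y, z) point per frame.
# All coordinate values below are made up for illustration.
import math

track = [
    (0, 10.0, 12.0, 5.0),
    (1, 11.5, 12.4, 5.2),
    (2, 13.0, 13.1, 5.5),
    (3, 14.2, 14.0, 5.9),
]

frame_interval_s = 2.0  # assumed time between frames, in seconds

# Displacement between consecutive frames (Euclidean distance of the xyz parts)
steps = [math.dist(a[1:], b[1:]) for a, b in zip(track, track[1:])]

total_path_length = sum(steps)
mean_speed = total_path_length / (frame_interval_s * len(steps))

print(f"steps (um): {[round(s, 2) for s in steps]}")
print(f"total path length: {total_path_length:.2f} um")
print(f"mean speed: {mean_speed:.3f} um/s")
```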
If you rely on automatic segmentation results but automatic tracking is not possible, we also have a solution for you. By separating the two tasks, segmentation and tracking, you can still get to a result. Simply segment your image in arivis Vision4D (or an open-source program such as ImageJ or Python) and connect these segments to tracks in arivis VisionVR afterwards. This way, you still have access to object features along their path as well as to accurate tracking results.
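As a minimal sketch of the segmentation half of this decoupled workflow, the Python example below (using scikit-image, one of several possible open-source options) segments each time point of a hypothetical image stack and collects per-object centroids that could subsequently be connected into tracks in arivis VisionVR. The file name, Otsu thresholding, and (t, z, y, x) array layout are assumptions for illustration only.

```python
# Minimal sketch: frame-by-frame segmentation in Python with scikit-image,
# producing per-frame object centroids that could later be linked into tracks.
# File name, threshold choice, and array layout (t, z, y, x) are assumptions.
import numpy as np
from skimage import io, filters, measure

stack = io.imread("timelapse.tif")  # hypothetical 4D stack, shape (t, z, y, x)

centroids_per_frame = []
for t, volume in enumerate(stack):
    threshold = filters.threshold_otsu(volume)   # global Otsu threshold for this frame
    labels = measure.label(volume > threshold)   # connected-component labelling
    props = measure.regionprops(labels)
    centroids_per_frame.append(
        [(t, *p.centroid) for p in props]        # one (t, z, y, x) entry per object
    )

print(f"frame 0: {len(centroids_per_frame[0])} objects segmented")
```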
Visualization: tools to immerse yourself in, interact with, color, selectively visualize, and share your data.
Analysis: tools to precisely and intuitively mark, measure, classify, edit, and segment.
Collaboration: a tool to work together on the same dataset in a virtual environment.