arivis VisionVR - Analysis in Virtual Reality
Is there value in VR beyond the fun?
A recent paper shows that proof-editing of segmentation results is a promising new application for Virtual Reality. Algorithmically generated results often contain many errors that need to be fixed, and VR offers efficiency advantages over the traditional tools of mouse and keyboard. The movie below shows that complex models of mitochondria (derived from FIB-SEM images) can now be imported, edited, and exported from the arivis Imaging Science environment.
The resulting high-fidelity models of mitochondria can form the basis for understanding disease and could be used to test the effectiveness of therapies. For more information about this exciting science, please email Kedar Narayan at kedar.narayan@nih.gov.
Image courtesy of Conrad, R., Ruth, T., Löffler, F., Hadlak, S., Konrad, S., Götze, C., . . . Narayan, K. (2020). Efficient Skeleton Editing in a VR Environment Facilitates Accurate Modeling of Highly Branched Mitochondria. Microscopy and Microanalysis. doi: 10.1017/S1431927620017158
Notable arivis VisionVR Analysis Capabilities:
- Import, edit, and proofread automatically segmented data while comparing against the raw image data
- Segment data de novo automatically, semi-automatically, or manually
- Export segments & statistics (position, intensity, size, classification) for further analysis
- Custom-label & count structures of interest
- Perform distance & angle measurements
- Interact seamlessly with other platform products
Images - especially fluorescence images - can be difficult to segment accurately for a host of reasons:
- poor signal to noise ratio
- staining that does not fill objects completely or uniformly
- the varied intensity of signal between the “same” objects
- objects that appear to touch one another, because of dense packing and/or lack of spatial resolution, and need to be separated
arivis VisionVR goes beyond simple proofreading to identify areas where automatic segmentation on a desktop program has gone wrong. While preserving the correct portions of the original segmentation, arivis VisionVR provides sculpting and painting tools to interactively grow or shrink segmented objects, transforming them to fit the original image data exactly. Joining, splitting, and deleting objects where automatic segmentation has gone wrong is equally intuitive. arivis VisionVR significantly increases the efficiency of the proof-editing process.
The productivity and accuracy gains provided by arivis VisionVR are not limited to pre-segmented data. De-novo segmentation can be performed semi-automatically by pointing at a local ROI and pulling the trigger to run a choice of automatic algorithms, rapidly segmenting objects literally at your fingertips. When all else fails, segmentation can be performed by sculpting a generic object to fit the original data or by painting manually from scratch.
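To illustrate the idea behind point-and-trigger segmentation (not the arivis implementation itself), the following sketch thresholds a NumPy volume inside a spherical ROI around a picked point and keeps only the connected component under that point. The function name, ROI shape, and threshold algorithm are assumptions chosen for the example.

```python
import numpy as np
from scipy import ndimage

def segment_roi(volume, center, radius, threshold):
    """Illustrative ROI-seeded segmentation (not arivis VisionVR code):
    threshold within a spherical ROI around a picked point, then keep
    only the connected component containing that point."""
    z, y, x = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    cz, cy, cx = center
    roi = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    mask = roi & (volume >= threshold)          # automatic algorithm: simple threshold
    labels, _ = ndimage.label(mask)             # connected components in the mask
    return labels == labels[center]             # the object under the "trigger" point

# toy volume: a bright 5x5x5 blob centered at (8, 8, 8)
vol = np.zeros((16, 16, 16))
vol[6:11, 6:11, 6:11] = 1.0
seg = segment_roi(vol, (8, 8, 8), radius=6, threshold=0.5)
```

In a real pipeline the thresholding step would be replaced by whichever automatic algorithm the user selects, but the seed-and-restrict pattern is the same.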
Performing 3D distance measurements on a desktop is a cumbersome process that involves a lot of guesswork. In VR, this task is a breeze. Measurement Points can be interactively dropped at any location in VR space, and the user can choose from point-to-point, multi-point, and angle measurements. The position of a Measurement Point can be modified interactively, making the whole process very intuitive. Users can control the size and visibility of Measurement Points and whether they are subject to clipping. Measurements are recorded for viewing in VR or export to other programs, and are calibrated based on the original voxel (pixel) sizes.
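The calibration step above amounts to scaling voxel coordinates by the physical voxel size before computing distances and angles. A minimal sketch, assuming a hypothetical anisotropic voxel size (the 0.5 × 0.1 × 0.1 µm values are example numbers, not from the product):

```python
import numpy as np

# Hypothetical voxel size in micrometres along z, y, x (example values only)
VOXEL_SIZE = np.array([0.5, 0.1, 0.1])

def distance(p, q):
    """Euclidean distance between two voxel coordinates, in calibrated units."""
    return float(np.linalg.norm((np.asarray(q) - np.asarray(p)) * VOXEL_SIZE))

def angle(a, b, c):
    """Angle (degrees) at vertex b formed by points a-b-c, in calibrated space."""
    u = (np.asarray(a) - np.asarray(b)) * VOXEL_SIZE
    v = (np.asarray(c) - np.asarray(b)) * VOXEL_SIZE
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

d = distance((0, 0, 0), (0, 0, 10))            # 10 voxels along x -> 1.0 µm
a = angle((0, 0, 10), (0, 0, 0), (0, 10, 0))   # perpendicular axes -> 90 degrees
```

Multi-point (polyline) measurements are just the sum of consecutive point-to-point distances under the same scaling.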
When image segmentation is not necessary or simply not possible, manual workflows are needed to analyze images. For counting and classifying objects, arivis VisionVR provides an easy manual workflow that speeds up simple tasks enormously: simply point at an object and press a button. The structure will be labeled with a clearly visible marker, preventing double counts. Markers can also be named and classified, making the very common task of counting and categorizing objects in 3D very simple.
arivis VisionVR can be used to manually segment structures that are otherwise impossible to segment automatically. Structures that cross each other would normally confuse automatic algorithms, but the human brain is adept at figuring out which piece belongs to which structure. Because it is possible to visually follow the path of the structure of interest in VR, it can be painted and segmented independently of the rest of the data. The segment can then be transferred to the desktop application arivis Vision4D for operations such as masking the original data to create a color channel for each segmented region, giving the ability to interactively color and toggle portions of the original data for presentation and analysis.
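The masking step described above - one color channel per segmented region - can be sketched with NumPy, assuming a raw intensity volume and an integer label volume where 0 is background. The function name is illustrative; this is not the Vision4D API.

```python
import numpy as np

def split_labels_into_channels(volume, labels):
    """Illustrative masking step (not Vision4D code): produce one
    'channel' per label id, keeping raw intensities inside each
    segmented region and zeroing everything else."""
    ids = [i for i in np.unique(labels) if i != 0]   # 0 = background
    return {i: np.where(labels == i, volume, 0) for i in ids}

# toy 2D example: two labeled regions in a 2x2 image
vol = np.array([[10, 20],
                [30, 40]])
lab = np.array([[1, 1],
                [0, 2]])
channels = split_labels_into_channels(vol, lab)
```

Each resulting channel can then be assigned its own color lookup table and toggled independently, which is the presentation workflow the paragraph describes.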
Credit: Fly Light Team Project Janelia Research Campus