A recent paper shows that proof-editing of segmentation results is a promising new application for Virtual Reality. Algorithmically generated results often contain many errors that need to be fixed, and VR offers efficiency advantages over the traditional mouse and keyboard. The movie below shows that complex models of mitochondria derived from FIB-SEM images can now be imported into, edited in, and exported from the arivis Imaging Science environment.
The resulting high-fidelity models of mitochondria can serve as a basis for understanding disease and could be used to test the effectiveness of therapies. For more information about this exciting method, please feel free to contact Kedar Narayan, the corresponding author of the study, at email@example.com.
Image courtesy of Conrad, R., Ruth, T., Löffler, F., Hadlak, S., Konrad, S., Götze, C., … Narayan, K. (2020). Efficient Skeleton Editing in a VR Environment Facilitates Accurate Modeling of Highly Branched Mitochondria. Microscopy and Microanalysis. doi: 10.1017/S1431927620017158
arivis VisionVR goes beyond simple proofreading to identify areas where automatic segmentation on a desktop program has gone wrong. Preserving the correct portions of the original segmentation, arivis VisionVR provides sculpting and painting tools to interactively grow or shrink segmented objects, transforming them to fit the original image data exactly. Joining, splitting, and deleting objects is equally intuitive. arivis VisionVR significantly increases the efficiency of the proof-editing process.
The productivity and accuracy gains provided by arivis VisionVR are not limited to pre-segmented data. De novo segmentation can be performed semi-automatically by pointing to a local ROI and pulling the trigger to run a choice of automatic algorithms, rapidly segmenting objects at your fingertips. When all else fails, segmentation can be performed by sculpting a generic object to fit the original data or by painting manually from scratch.
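The arivis algorithms themselves are proprietary, but the basic idea of segmenting only within a pointed-at ROI can be sketched with a few lines of NumPy. This is a hypothetical illustration, not arivis code: a simple intensity threshold stands in for the selectable automatic algorithms, and the spherical ROI stands in for the region at the controller's tip.

```python
import numpy as np

def segment_local_roi(volume, center, radius, threshold=None):
    """Segment bright voxels inside a spherical ROI around `center`.

    Hypothetical sketch, not arivis code: a crude mean-intensity
    threshold stands in for the selectable automatic algorithms.
    """
    zz, yy, xx = np.indices(volume.shape)
    cz, cy, cx = center
    roi = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    if threshold is None:
        threshold = volume[roi].mean()  # automatic threshold from ROI intensities
    return roi & (volume > threshold)

# Tiny synthetic volume: a bright cube near the "pointer" position.
vol = np.zeros((16, 16, 16))
vol[6:10, 6:10, 6:10] = 1.0
mask = segment_local_roi(vol, center=(8, 8, 8), radius=5)
print(mask.sum())  # → 64 segmented voxels
```

Restricting the computation to the ROI is what makes the interaction feel instantaneous: only a small neighborhood of the volume is processed per trigger pull.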
In the second movie, arivis VisionVR was used to segment brain regions of Drosophila in VR. The segmented data was seamlessly exchanged with the desktop program arivis Vision4D where it was used to create masks of the original volume data in each segmented region and to create a new separate color channel for each region. The movie shows that these regions can be interactively turned on and off in any order at any time and overlaid upon the rest of the original data. It is now possible to selectively observe the relationships between the structures. Data kindly provided by the Fly Light Team Project, Janelia Research Campus.
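The masking step described above — using each segmented region to cut a separate color channel out of the original volume — can be sketched generically. This is a hypothetical illustration, not the arivis Vision4D pipeline: `labels` is assumed to be an integer volume where 0 is background and each positive value identifies one segmented region.

```python
import numpy as np

def split_into_channels(volume, labels):
    """Mask `volume` with each labeled region, yielding one channel per region.

    Hypothetical sketch of the masking step, not arivis code.
    `labels` is an integer volume: 0 = background, 1..N = regions.
    """
    channels = {}
    for region_id in np.unique(labels):
        if region_id == 0:
            continue  # skip background
        # Keep original intensities inside the region, zero elsewhere.
        channels[int(region_id)] = np.where(labels == region_id, volume, 0)
    return channels

# Toy 2x2x2 volume with two labeled regions.
vol = np.arange(8, dtype=float).reshape(2, 2, 2)
lab = np.array([[[1, 1], [2, 0]],
                [[2, 2], [0, 0]]])
chans = split_into_channels(vol, lab)
print(sorted(chans))   # → [1, 2]
print(chans[1].sum())  # → 1.0 (intensity preserved only inside region 1)
```

Because each channel keeps the original intensities, the regions can be recolored and toggled independently without touching the source data.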
Performing 3D distance measurements on a desktop is a cumbersome process that involves a lot of guesswork. In VR, this task is a breeze. Measurement Points can be interactively dropped at any location in VR space, and the user can choose from point-to-point, multi-point, and angle measurements. The position of Measurement Points can be modified interactively, making the whole process very intuitive. Users can control the size and visibility of Measurement Points and whether they are subject to clipping. Measurements are calibrated to the original voxel (pixel) sizes and can be viewed in VR or exported to other programs.
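The calibration step is worth making concrete: coordinates dropped in voxel space must be scaled by the (often anisotropic) voxel size before computing distances or angles. A minimal sketch, assuming hypothetical helper names and a made-up voxel spacing, not arivis code:

```python
import numpy as np

# Hypothetical sketch of calibrated 3D measurements; not arivis code.
# Points are voxel coordinates (z, y, x); `spacing` is the voxel size in µm.

def point_distance(p, q, spacing):
    """Euclidean distance between two voxel coordinates, in physical units."""
    return float(np.linalg.norm((np.asarray(q) - np.asarray(p)) * spacing))

def angle_deg(a, vertex, b, spacing):
    """Angle at `vertex` formed by points a and b, in degrees."""
    u = (np.asarray(a) - np.asarray(vertex)) * spacing
    v = (np.asarray(b) - np.asarray(vertex)) * spacing
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

spacing = np.array([0.5, 0.1, 0.1])  # anisotropic voxel size (z, y, x), µm
d = point_distance((0, 0, 0), (0, 0, 10), spacing)
theta = angle_deg((0, 0, 10), (0, 0, 0), (0, 10, 0), spacing)
print(d)      # → 1.0 (10 voxels along x at 0.1 µm each)
print(theta)  # → 90.0
```

Skipping the per-axis scaling is a classic source of error with anisotropic FIB-SEM stacks, where the z step is usually much larger than the in-plane pixel size.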
When image segmentation is not necessary or simply not possible, manual workflows are needed to analyze images. For counting and classifying objects, arivis VisionVR provides an easy manual workflow that speeds up simple tasks enormously: simply point at an object and press a button. The structure is labeled with a clearly visible marker, preventing double counts. Markers can also be named and classified, making the common task of counting and categorizing objects in 3D very simple.
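The data model behind such a workflow is simple: each click produces a marker, i.e. a position paired with a class label, and the tallies fall out of the marker list. A hypothetical sketch with made-up labels, not arivis code:

```python
from collections import Counter

# Hypothetical sketch of the count-and-classify workflow; not arivis code.
# Each click drops a marker: a (position, class label) pair.
markers = [
    ((12.0, 4.5, 3.2), "mitochondrion"),
    ((8.1, 9.9, 2.0), "mitochondrion"),
    ((3.3, 7.7, 5.5), "nucleus"),
]
counts = Counter(label for _, label in markers)
print(counts["mitochondrion"], counts["nucleus"])  # → 2 1
```

Because every counted object carries a persistent marker, revisiting the volume cannot silently double-count a structure that was already tallied.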
arivis VisionVR can be used to manually segment structures that are otherwise impossible to segment automatically. Structures that cross each other would normally confuse automatic algorithms, but our brains are adept at figuring out which piece belongs to which structure. Because the path of a structure of interest can be followed visually in VR, it can be painted and segmented independently of the rest of the data. The segment can then be transferred to the desktop application arivis Vision4D for operations such as masking the original data to create a color channel for each segmented region. Portions of the original data can then be interactively colored and toggled on and off for presentation and analysis.