arivis InViewR - Analysis in Virtual Reality
Analysis for Life Science research images in VR
Notable arivis InViewR Analysis Capabilities:
- Import, edit and proofread manually or automatically segmented data against the original image data
- De-novo segment data automatically, semi-automatically, and/or manually
- Export segments & statistics (position, intensity, size, classification) for further analysis
- Custom label & count structures of interest
- Perform distance & angle measurements
- Interact seamlessly with other platform products
Images - especially fluorescence images - can be difficult to segment accurately for a host of reasons:
- poor signal to noise ratio
- staining that does not fill objects completely or uniformly
- varied intensity of signal between the “same” objects
- objects that appear to touch one another and really need to be separated because of packing and/or lack of spatial resolution
arivis InViewR goes beyond simple proofreading to identify areas where automatic or manual segmentation in a desktop program has gone wrong. Preserving the correct portions of the original segmentation, arivis InViewR provides sculpting and painting tools to interactively grow or shrink segmented objects until they fit the original image data exactly. Joining, splitting and deleting objects where automatic segmentation has gone wrong is equally intuitive. arivis InViewR significantly increases the efficiency of the proof-editing process.
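arivis does not publish the implementation of its sculpting brushes, but the core grow and shrink operations can be illustrated as one-voxel binary dilation and erosion of a segment mask. The sketch below is a minimal NumPy illustration of that idea; the function names are ours, not the arivis API:

```python
import numpy as np

def grow(mask):
    """One step of 6-connected dilation: extend a binary segment
    by one voxel along each axis (a crude 'grow' brush stroke)."""
    out = mask.copy()
    out[1:] |= mask[:-1]; out[:-1] |= mask[1:]                # z neighbors
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]    # y neighbors
    out[:, :, 1:] |= mask[:, :, :-1]; out[:, :, :-1] |= mask[:, :, 1:]  # x
    return out

def shrink(mask):
    """One step of erosion, expressed as growing the background."""
    return ~grow(~mask)

# A single seed voxel grows into a 7-voxel cross, and shrinking
# that cross recovers the original voxel.
seed = np.zeros((5, 5, 5), dtype=bool)
seed[2, 2, 2] = True
grown = grow(seed)
print(grown.sum())          # 7
print(shrink(grown).sum())  # 1
```

An interactive brush would apply such steps only inside a sphere around the controller position, but the per-voxel logic is the same.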
Productivity and accuracy gains provided by arivis InViewR are not just limited to pre-segmented data. De-novo segmentation can be performed semi-automatically by pointing to a local ROI and pulling the trigger to run a choice of automatic algorithms, thus rapidly segmenting objects literally at your fingertips. When all else fails, segmentation can be performed by sculpting a generic object to fit the original data or by manually painting from scratch.
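arivis does not document which algorithms run when the trigger is pulled; a common seed-based approach of this kind is intensity region growing from the pointed-at voxel. A minimal sketch, assuming a simple intensity-tolerance criterion (names and parameters are illustrative):

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tolerance):
    """Grow a segment from a seed voxel, accepting 6-connected
    neighbors whose intensity is within `tolerance` of the seed."""
    seed_val = float(volume[seed])
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= c < s for c, s in zip(n, volume.shape))
                    and not mask[n]
                    and abs(float(volume[n]) - seed_val) <= tolerance):
                mask[n] = True
                queue.append(n)
    return mask

# Toy volume: a bright 3x3x3 cube inside a dark background.
vol = np.zeros((8, 8, 8), dtype=np.uint8)
vol[2:5, 2:5, 2:5] = 200
segment = region_grow(vol, seed=(3, 3, 3), tolerance=10)
print(segment.sum())  # 27 voxels - the whole bright cube
```

In the VR workflow the seed comes from the controller's pointing ray, and the choice of algorithm and its parameters are selected from a menu.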
Performing 3D distance measurements on a desktop is a cumbersome process that involves a lot of guesswork, while in VR the task is a breeze. Measurement points can be interactively dropped at any location in VR space, both on and within volume-rendered and surface-rendered data. A user can choose from point-to-point, multi-point, and angle measurements. While placing points at the correct location in 3D space is intuitive, if something does go wrong the points can be interactively repositioned or deleted. Users can control the size and visibility of measurement points and whether they are subject to clipping. Measurements are recorded for viewing in VR or export to other programs and are calibrated based on the original voxel (pixel) sizes.
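The calibration mentioned above amounts to scaling each axis of a voxel coordinate by the corresponding voxel size before computing geometry. A small sketch of calibrated point-to-point distance and angle measurements (function names and the example voxel size are ours):

```python
import math

def calibrated_distance(p1, p2, voxel_size):
    """Euclidean distance between two points given in voxel indices,
    scaled per axis by a (possibly anisotropic) voxel size."""
    return math.sqrt(sum(((a - b) * s) ** 2
                         for a, b, s in zip(p1, p2, voxel_size)))

def angle_deg(a, b, c, voxel_size):
    """Angle at vertex b (in degrees) formed by points a-b-c,
    computed in calibrated physical coordinates."""
    u = [(x - y) * s for x, y, s in zip(a, b, voxel_size)]
    v = [(x - y) * s for x, y, s in zip(c, b, voxel_size)]
    dot = sum(i * j for i, j in zip(u, v))
    norm_u = math.sqrt(sum(i * i for i in u))
    norm_v = math.sqrt(sum(i * i for i in v))
    return math.degrees(math.acos(dot / (norm_u * norm_v)))

# Example voxel size in micrometres (x, y, z); a coarser z step
# is typical for confocal stacks.
vs = (0.1, 0.1, 0.5)
print(calibrated_distance((0, 0, 0), (3, 4, 0), vs))   # 0.5 (µm)
print(angle_deg((1, 0, 0), (0, 0, 0), (0, 1, 0), vs))  # 90.0
```

Anisotropic voxel sizes are the usual reason on-screen distances cannot simply be read off in pixels.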
Just as easily as dropping measurement points in VR, users can place classification and counting markers on structures in VR space. Users can create a custom list of markers designed and labeled specifically for the image they are working on. While in VR, they can quickly switch between marker types and point and click to drop labels on structures. A count of the markers of each type is recorded and can be exported. Users can control the size and visibility of markers and whether they are subject to clipping.
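The exported per-type counts are conceptually just a tally over the placed markers. A tiny sketch of the underlying bookkeeping, with hypothetical marker types and positions:

```python
from collections import Counter

# Hypothetical marker log: each placed marker records its user-defined
# type and a position in VR space (voxel coordinates).
markers = [
    ("mitotic", (12, 40, 7)),
    ("apoptotic", (30, 22, 5)),
    ("mitotic", (14, 41, 9)),
]
counts = Counter(kind for kind, _ in markers)
print(counts["mitotic"], counts["apoptotic"])  # 2 1
```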
Because segmented data is overlaid on the original volume data in arivis InViewR, statistical properties of segments can be calculated in real time from the original data even as segments are modified, added, and deleted. Statistics on position, size, shape, and per-color-channel intensity values are calculated. Those statistics can be viewed in VR space or in a table in the desktop portion of the application, and can be exported to Excel or other statistical programs for analysis. The segments themselves can be seamlessly passed to a desktop program like arivis Vision4D for further analysis, or can be exported as object files to be used in other programs.
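The listed statistics follow directly from a label volume overlaid on the intensity channels. A minimal sketch of how calibrated centroid, volume, and per-channel mean intensity could be computed per segment (the function and field names are ours, not the arivis export format):

```python
import numpy as np

def segment_stats(labels, channels, voxel_size):
    """Per-segment statistics from a label volume and a list of
    intensity channels: voxel count, calibrated volume, calibrated
    centroid, and mean intensity per channel. Label 0 = background."""
    stats = {}
    voxel_vol = float(np.prod(voxel_size))
    for seg_id in np.unique(labels):
        if seg_id == 0:
            continue
        sel = labels == seg_id
        idx = np.argwhere(sel)
        stats[int(seg_id)] = {
            "voxel_count": len(idx),
            "volume": len(idx) * voxel_vol,
            "centroid": tuple(idx.mean(axis=0) * voxel_size),
            "mean_intensity": [float(ch[sel].mean()) for ch in channels],
        }
    return stats

# One 2x2x2 segment in a 4x4x4 volume, single uniform channel.
labels = np.zeros((4, 4, 4), dtype=np.int32)
labels[1:3, 1:3, 1:3] = 1
ch0 = np.full((4, 4, 4), 100.0)
s = segment_stats(labels, [ch0], voxel_size=np.array([0.1, 0.1, 0.5]))
print(s[1]["voxel_count"])  # 8
```

Because each statistic is a pure function of the current labels and the original intensities, editing a segment only requires recomputing the entries for the labels it touched.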
Proofreading tracking data and manually adjusting the results is very cumbersome on a 2D screen, especially when many objects are involved or tracks are complicated. Using Virtual Reality, you can immerse yourself in the data and gain a detailed view of your sample that a 2D application cannot match. Entering the dataset and seeing objects and structures surround you makes it easier to concentrate on a particular object of interest. With time control at your fingertips, you can keep your eyes on an object as it moves through the specimen and follow it in space. We developed tools that let you add new tracks, or edit and delete existing ones, right where you see them - in the image. This makes proofreading tracks faster, easier and more precise than on a desktop.
arivis InViewR can be used to manually segment structures otherwise impossible to segment algorithmically. Structures that cross each other would normally confuse automatic algorithms; our brain, however, is adept at figuring out which piece belongs to which structure. Because it is possible to visually follow the path of the structure of interest in VR, we are able to paint the structure and segment it independently of the rest of the data. Because this segment can be transferred to our desktop application arivis Vision4D, we can undertake operations like masking out the original data to create a color channel for each segmented region. We then have the ability to interactively color, and turn on and off, portions of the original data for presentation and analysis.
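The masking step described above can be sketched in a few lines: the new channel keeps the original intensities wherever the label volume matches the segment, and is zero elsewhere. A minimal NumPy illustration (the function name is ours, not a Vision4D API):

```python
import numpy as np

def mask_to_channel(volume, labels, seg_id):
    """Build a new channel containing the original intensities only
    where the label volume equals seg_id; zero everywhere else."""
    channel = np.zeros_like(volume)
    sel = labels == seg_id
    channel[sel] = volume[sel]
    return channel

# Toy data: segment 1 covers the first z-slice of a 3x3x3 volume.
vol = np.arange(27, dtype=np.float32).reshape(3, 3, 3)
labels = np.zeros((3, 3, 3), dtype=np.int32)
labels[0] = 1
ch = mask_to_channel(vol, labels, 1)
print(ch.sum())  # 36.0 - the sum of the first z-slice only
```

Repeating this per segment yields one channel per region, which can then be colored and toggled independently.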