Identifying objects in images acquired by electron microscopy (EM) can be challenging. Because contrast and the intensity distribution in EM images are generally low, simple segmentation algorithms based on intensity thresholds or contrast detection often fail on such datasets. This issue is especially pronounced in tissue samples, where cells or organelles of comparable density lie close together, making the discrimination of neighboring objects difficult because intensity differences and contrast are low. There are, however, ways to cope with these images. One is to manually segment the objects of interest by encircling them by hand, plane by plane, and fusing these segments at the end. While this method brings success, it is very tedious and time-consuming.
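To illustrate why simple thresholding struggles here, consider a minimal numpy sketch (purely illustrative, not arivis code): two directly adjacent "organelles" with nearly identical density on a noisy background. A single global intensity threshold labels both as foreground with no gap between them, so they fuse into one object.

```python
import numpy as np

# Illustrative sketch: two touching objects with comparable density.
rng = np.random.default_rng(0)

img = np.full((64, 64), 0.40)            # background
img[10:31, 10:54] = 0.55                 # object A
img[31:52, 10:54] = 0.57                 # object B, directly adjacent,
                                         # nearly identical density
img += rng.normal(0.0, 0.02, img.shape)  # EM-like noise

# A simple global intensity threshold...
mask = img > 0.48

# ...marks both objects as foreground with no separating gap, so a
# connected-component analysis would report one fused blob instead of
# two mitochondria.
```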
Automatic algorithms and machine learning (trainable segmenters) can speed up the segmentation of EM images enormously. For these approaches to succeed, it is necessary to first crudely isolate the objects of interest in a manual step.
Here, we demonstrate an approach to segmenting mitochondria in a 3D transmission EM image by manually presegmenting the image in Virtual Reality and subsequently identifying the mitochondria in the presegmented image. This method yields results considerably faster than manual segmentation and can achieve high accuracy.
Figure 1: Seed objects marking individual mitochondria. Marking the mitochondria with seeding objects lays the basis for the subsequent manual presegmentation.
Figure 2: Adjust the Color Transfer Function (CTF) to display structures semitransparently. This makes it easier to get an overview of the sample.
Figure 3: Sculpted mitochondria. Sculpting describes the process of manual presegmentation: starting from the seeding objects, you expand them with a brush tool until they fully encompass the mitochondria.
Figure 4: Switch to Vision4D. The arivis file format enables switching between VR and our powerful desktop image analysis program arivis Vision4D with the click of a button. All image data and metadata are transferred during this step.
Open a new Analysis Pipeline and use the Segments you created in VR as a mask for a subsequent analysis operator.
Figure 5: Select objects created in VR as a mask for the original image. The “Annotation Mask” Operator enables you to select the objects you created in VR as a mask for subsequent analysis steps. Simply drag & drop the Operator into the Analysis Pipeline and select the Tags “Sculpted, Sphere drop, Edited in VR”.
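The masking step above is performed entirely in the Vision4D interface; as a conceptual sketch only (array names are illustrative, not the arivis API), restricting subsequent analysis to the presegmented VR objects amounts to applying a binary mask to the image volume:

```python
import numpy as np

# Conceptual sketch of the "Annotation Mask" idea: only voxels inside
# the VR-sculpted objects are passed on to later analysis operators.
rng = np.random.default_rng(1)

volume = rng.random((4, 32, 32))             # stand-in for the 3D EM image
vr_mask = np.zeros_like(volume, dtype=bool)  # stand-in for the VR segments
vr_mask[:, 8:24, 8:24] = True                # one "sculpted" region

masked = np.where(vr_mask, volume, 0.0)      # analysis sees only masked voxels
```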
Figure 6: Define foreground and background features. To train the Trainable Segmenter, you need to define foreground and background objects as a decision basis for the machine learning algorithm. To do so, select Create Manual Annotation Tool -> Create Freehand Annotation Tool and draw foreground and background feature objects. Add them to the respective list in the Trainable Segmenter Operator by selecting that list as well as the objects in your Annotation window and clicking “+”.
In this case, we use a machine learning approach to segment the mitochondria. For this, foreground and background features have to be defined by manually drawing them on a 2D plane and adding them to the respective lists.
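The general idea behind a trainable segmenter can be sketched in a few lines (a toy example only; the actual arivis algorithm differs): per-voxel feature values are collected under the user-drawn foreground and background strokes, a classifier is fitted to them, and then every voxel is classified.

```python
import numpy as np

# Toy sketch of trainable segmentation on a 1D "image".
rng = np.random.default_rng(2)

# Bright first half ("mitochondria"), dark second half ("background").
img = np.where(np.arange(100) < 50, 0.7, 0.3) + rng.normal(0, 0.05, 100)

fg_idx = np.array([5, 10, 20])   # pixels under the foreground stroke
bg_idx = np.array([60, 70, 90])  # pixels under the background stroke

# The only feature here is raw intensity; a real segmenter stacks many
# features. "Training" fits one class mean per stroke list.
fg_mean = img[fg_idx].mean()
bg_mean = img[bg_idx].mean()

# Classify every pixel by the nearest class mean.
pred_fg = np.abs(img - fg_mean) < np.abs(img - bg_mean)
```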
Figure 7: Select Classifier Features. Classifier Features determine which image properties the machine learning algorithm considers when deciding whether a voxel is foreground or background. Usually, edge and structure are very effective classifiers.
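As a rough illustration of what "edge" and "structure" features capture (simplified stand-ins; the features computed internally by the segmenter are not documented here), one could use the gradient magnitude and the local variance:

```python
import numpy as np

img = np.zeros((16, 16))
img[:, 8:] = 1.0                 # a vertical intensity edge

# "Edge"-type feature: gradient magnitude, large near the step.
gy, gx = np.gradient(img)
edge = np.hypot(gx, gy)

# "Structure"-type feature: variance in a 3x3 neighborhood, also
# large only where the local texture changes.
pad = np.pad(img, 1, mode="edge")
win = np.stack([pad[i:i + 16, j:j + 16] for i in range(3) for j in range(3)])
structure = win.var(axis=0)
```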
Figure 8: Before running an analysis, the Operator needs to be trained. Once this is done, you can preview the segmentation result by clicking the eye button and adjust it by changing the values for Threshold and Smoothing. It is also possible to add more foreground and background features later; however, the Operator must then be retrained to incorporate the new information.
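Conceptually, the Threshold and Smoothing adjustments act on the classifier's output; a hedged sketch of that idea (illustrative only, using a simple box filter rather than whatever filter Vision4D applies internally):

```python
import numpy as np

# Sketch: smooth a per-pixel foreground-probability map, then
# binarize it at an adjustable threshold.
rng = np.random.default_rng(3)

# Noisy probability map: high in the first half, low in the second.
prob = np.clip(np.where(np.arange(100) < 50, 0.8, 0.2)
               + rng.normal(0, 0.15, 100), 0, 1)

k = 5                                  # "Smoothing" strength
smooth = np.convolve(prob, np.ones(k) / k, mode="same")

seg = smooth > 0.5                     # "Threshold" value
```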
Figure 9: Comparison of the same Trainable Segmenter result with and without presegmentation in VR. The result on the left represents the workflow described here, with presegmentation in arivis VisionVR: the mitochondria are cleanly separated and highlighted by randomly colored segments. The right-hand image shows the result of the same machine learning algorithm without the presegmentation mask: the mitochondria are visibly oversegmented, and large parts of the cytosol and other organelles are segmented as well.