OPEN SCALABLE INTEGRATED

Mitochondria Analysis with Deep Learning


Deep Learning-based segmentation of mitochondria with varying morphological phenotypes in EM serial sections

Institute: Barrow Neurological Institute, Phoenix Children’s Hospital.

Authors: Wendy Bautista MD PhD, Physician Scientist; Mones Abu Asab PhD, Electron Microscopy Core Director, National Eye Institute, NIH

 


Researchers at Barrow Neurological Institute, Phoenix Children’s Hospital are using transmission electron microscopy (TEM) imaging and arivis Vision4D to understand how mitochondria in brain tissue are affected by hypoxic conditions.

Presently, automated segmentation of transmission electron microscopy images remains a challenge. Vision4D is specifically designed to let users easily apply Deep Learning models to their images and run the subsequent analysis within an established workflow tailored to their specific needs. This article describes best practices for creating ground-truth annotations, running inference (predictions) in Vision4D, and using the extensive Vision4D toolset for downstream analysis.

In this case study, the researchers from the Barrow Neurological Institute, Phoenix Children’s Hospital used a pre-trained Deep Learning model to segment all mitochondria in hippocampal tissue sections. Because of exposure to hypoxic conditions, the mitochondria in these tissue samples vary in morphology: some appear normal, while others are ‘swollen’. This posed an additional challenge, since the aim was to create a single Deep Learning model that recognizes all mitochondria phenotypes in one step.

Video courtesy: Original imaging data was kindly provided by Dr. Wendy Bautista, MD PhD, Barrow Neurological Institute, Phoenix Children’s Hospital.

Preparing the image set for training the Deep Learning model

Preparing the image

Image courtesy: Original imaging data was kindly provided by Dr. Wendy Bautista, MD PhD, Barrow Neurological Institute, Phoenix Children’s Hospital. Hippocampus tissue section, transmission electron microscopy. The objects in yellow are the manually segmented mitochondria from both control and swollen phenotypes.

 

When preparing for Deep Learning (DL) training, the critical step is creating the ground-truth annotations. This is typically achieved by manually annotating the regions of interest, thereby creating the objects. For this research project, 30 TEM serial sections containing 309 mitochondria were annotated manually with the Vision4D 3.6 drawing tool. Both mitochondria phenotypes, normal and swollen, were pooled into one class for the DL training.
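The pooling of the two phenotypes into a single ground-truth class can be sketched as follows. This is a minimal illustration, not Vision4D code: the masks and the function name `pool_annotations` are hypothetical, standing in for per-phenotype annotation layers.

```python
import numpy as np

def pool_annotations(normal_mask: np.ndarray, swollen_mask: np.ndarray) -> np.ndarray:
    """Combine two binary phenotype masks into one foreground class,
    as the normal and swollen mitochondria were pooled for DL training."""
    return ((normal_mask > 0) | (swollen_mask > 0)).astype(np.uint8)

# Toy 8x8 annotation layers: one normal object, one swollen object.
normal = np.zeros((8, 8), np.uint8)
normal[1:3, 1:3] = 1
swollen = np.zeros((8, 8), np.uint8)
swollen[5:7, 5:7] = 1

gt = pool_annotations(normal, swollen)  # single-class ground truth
```

Keeping a single class simplifies the semantic segmentation task; the phenotype distinction is recovered later from object features rather than from the network output.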

These manual annotations were used to train a Deep Learning model for semantic segmentation, also known as pixel classification. Specifically, we used a U-Net model with an architecture very similar to the original publication (O. Ronneberger et al., 2015). Prior to training, the images and annotations were downscaled to half the raw image size using bicubic interpolation, to reduce DL training time and match the feature size. Next, the grayscale images and the binary ground-truth masks were augmented by applying rotations, reflections, and elastic transformations. The U-Net model was trained with a custom-made script* for 50 epochs; the 42nd epoch achieved the highest accuracy score and was selected for running inference (predictions) to segment the mitochondria. In the final step, the model was converted to the ONNX format to run inference in Vision4D.
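The downscaling and geometric augmentation steps can be sketched as below. This is an illustrative stand-in, not the authors' training script: simple 2x subsampling replaces bicubic interpolation, only rotations and reflections are shown (elastic transformations are omitted), and the function name `augment` is hypothetical.

```python
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray):
    """Yield the rotated (0/90/180/270 degrees) and mirrored copies of
    an image/mask pair -- 8 geometric variants in total."""
    for k in range(4):
        img_r, msk_r = np.rot90(image, k), np.rot90(mask, k)
        yield img_r, msk_r
        yield np.fliplr(img_r), np.fliplr(msk_r)

# Toy grayscale image and its binary ground-truth mask.
image = np.arange(16, dtype=np.float32).reshape(4, 4)
mask = (image > 7).astype(np.uint8)

# Simple 2x subsampling stands in for the bicubic 2-fold downscaling.
image_small, mask_small = image[::2, ::2], mask[::2, ::2]

pairs = list(augment(image_small, mask_small))  # 8 augmented pairs
```

Applying identical geometric transforms to image and mask keeps the pixel-level correspondence that semantic segmentation training depends on.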

*For further details, please contact support@arivis.com

Applying the Deep Learning-based segmentation in arivis Vision4D

The trained DL model was then applied to the whole dataset in Vision4D 3.6 for automated segmentation. In Vision4D, the DL model can be applied within the pipeline to data at the same resolution it was trained on, and the resulting objects scaled back to the original size. The mitochondria segmentation pipeline was accordingly run at 50% of the original image scale. Object filtering, classification by phenotype, and export of the numerical data to Excel format are done automatically within the same pipeline. This makes it possible to apply the entire workflow in Batch mode to a set of images.
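Scaling the predictions back from the 50% inference resolution to the original image size can be illustrated as a simple upsampling step. This is a conceptual sketch, not how Vision4D implements it internally; the function name `upscale_mask` and the nearest-neighbour strategy are assumptions for illustration.

```python
import numpy as np

def upscale_mask(mask: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbour upscaling of a binary prediction mask back to
    the original resolution (inference ran at 50% scale, so factor=2)."""
    return np.repeat(np.repeat(mask, factor, axis=0), factor, axis=1)

pred = np.array([[0, 1],
                 [1, 0]], dtype=np.uint8)  # toy 2x2 prediction
full = upscale_mask(pred)                  # back to 4x4
```

Nearest-neighbour upsampling preserves the binary labels exactly, so object boundaries land on the same pixels the downscaled prediction implied.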

Applying Deep Learning-based segmentation

Image courtesy: Original imaging data was kindly provided by Dr. Wendy Bautista, MD PhD, Barrow Neurological Institute, Phoenix Children’s Hospital. Hippocampus tissue section, transmission electron microscopy. Manually segmented mitochondria (yellow objects) and the DL inference results (cyan objects) are overlayed to illustrate the accuracy of the predictions.


Classifying the mitochondria phenotypes

Image courtesy: Original imaging data was kindly provided by Dr. Wendy Bautista, MD PhD, Barrow Neurological Institute, Phoenix Children’s Hospital. Hippocampus tissue section, transmission electron microscopy. The spectrum of mitochondria phenotypes is reflected in the color of the corresponding objects, from purple (normal) to red (extremely swollen). The phenotype is quantified as the object’s mean intensity divided by its volume and stored as a custom feature value.

 

Vision4D has an extensive list of quantitative features that characterize each object. In addition, custom features can be created or imported from external sources. In order to assess the quantitative distribution of the mitochondria phenotypes, we created a custom object feature that computes the ratio of each object’s mean intensity to its volume. This ratiometric feature reflects and emphasizes the differences in mitochondria phenotype with high accuracy. It was used to classify the objects into the ‘Control’ and ‘Swollen’ groups. For visualization purposes, each object was color-coded according to the value of the mitochondria phenotype custom feature.
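The custom ratiometric feature and the two-group classification can be sketched as follows. The function names, the threshold value, and the direction of the cut-off (swollen objects scoring lower, since they are larger and less dense) are illustrative assumptions, not values from the study.

```python
def phenotype_score(mean_intensity: float, volume: float) -> float:
    """Custom ratiometric feature: mean object intensity divided by volume."""
    return mean_intensity / volume

def classify(score: float, threshold: float) -> str:
    """Hypothetical cut-off: swollen mitochondria are assumed to be larger
    and paler, pushing their intensity/volume ratio below the threshold."""
    return "Swollen" if score < threshold else "Control"

# Illustrative values only.
compact = phenotype_score(mean_intensity=120.0, volume=60.0)   # 2.0
enlarged = phenotype_score(mean_intensity=90.0, volume=300.0)  # 0.3
```

In practice, the threshold would be chosen per experiment, for example from the bimodal distribution of the feature across all segmented objects.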

How to succeed with Deep Learning training and analysis:

Challenges and advantages:

  • Automated, unbiased segmentation of complex electron microscopy images is not a straightforward analysis, due to the low contrast inherent to EM images. Most commercial software only offers segmentation based on traditional image analysis algorithms.
  • Vision4D 3.6 lets you apply your neural network directly in the pipeline and have it do the analysis work on your imaging datasets. Typical tasks are reduced from weeks to hours compared to manual execution.
  • By combining DL inference with the existing tools into customized pipelines, many datasets can be processed in Batch mode without any programming knowledge.
 
How to plan your workflow:
  • This type of analysis is ideal for recognizing and segmenting objects with complex morphological patterns.
  • When preparing the data for manual ground-truth annotation, make sure the training set includes images from all experimental conditions and captures the variance between experiments.
  • Precise manual annotation is crucial for training a model with high prediction accuracy.

 

Mitochondria results

Image courtesy: Original imaging data was kindly provided by Dr. Wendy Bautista, MD PhD, Barrow Neurological Institute, Phoenix Children’s Hospital.

 

Image analysis workflow for Mitochondria segmentation and classification

1
training

Paint the objects of interest on a set of images to create the ground truth, then train the Deep Learning model. A robust model will recognize the objects of interest under a variety of conditions.

2
SEGMENTATION

Add the Deep Learning Segmentation operator in the Vision4D pipeline and select your model to create the mitochondria segments.

3
classification

Compute the custom ratiometric feature to score the mitochondria phenotypes and group the objects accordingly.

4
EXPORT

Add export operators to automatically generate an Excel file with the results, and run the Vision4D pipeline over the whole dataset.
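The final export step can be sketched outside Vision4D as well. The per-object values below are purely illustrative, and CSV stands in here for the Excel export that Vision4D performs natively.

```python
import csv

# Hypothetical per-object measurements, as they might come out of the
# segmentation and classification steps (values are illustrative).
rows = [
    {"object_id": 1, "volume": 120.0, "mean_intensity": 140.2},
    {"object_id": 2, "volume": 310.5, "mean_intensity": 88.7},
]
for r in rows:
    # Derived ratiometric feature: mean intensity / volume.
    r["phenotype_score"] = r["mean_intensity"] / r["volume"]

with open("mitochondria_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```

Running such an export inside the pipeline means every batch-processed image produces a results table automatically, with no manual bookkeeping.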
