Volumetric Segmentation of MRI Scans Using AI

Surya Remanan · Published in Heartbeat · Jul 24, 2023

Image generated using Stable Diffusion

Introduction

Artificial intelligence has reached a point where medical practitioners increasingly rely on state-of-the-art (SOTA) machine learning models to help diagnose disease. Volumetric segmentation of MRI scans is a natural next step in that direction.

This article is inspired by the paper “Brainchop: In-browser MRI volumetric segmentation and rendering” by Mohamed Masoud et al., published on March 28, 2023. It briefly covers the architecture of the deep learning model used, and then walks through a step-by-step implementation on a custom dataset of abdominal MRI scans, following the same principles as the paper.

GIF from https://github.com/neuroneural/brainchop

Architecture

The figure below gives a high-level overview of the Brainchop architecture. A pre-trained PyTorch model is first converted to a TensorFlow model, which is in turn converted to TensorFlow.js, so that inference can run in JavaScript and the results can be rendered directly in a web browser. The preprocessing steps are then applied to the 3D data. A MeshNet deep CNN performs the segmentation, and the output is finally displayed using Three.js, a JavaScript library for rendering 3D graphics in the browser.

Source

Steps followed

The Brainchop pipeline accepts images in NIfTI format, while the dataset I have is a folder containing a patient’s abdominal scan split across 291 images in .dcm (DICOM) format. Converting from .dcm to NIfTI takes only two lines of code.

First, install the dependency using:

!pip install dicom2nifti

In a Python shell type:

import dicom2nifti

# Convert every DICOM series in the input directory to NIfTI files
dicom2nifti.convert_directory("path/to/dcm/images", "path/to/output/directory")

And boom! In just a matter of seconds, your file will be generated in the results folder.

Preprocessing

The NIfTI image then needs to be conformed to the shape the model expects: it is resampled to 1 mm isotropic voxels, resized to 256×256×256, and its intensities are rescaled. The whole process is shown below.

Segmentation accuracy is improved by removing noisy background voxels. The pipeline also crops the volume around the tissue of interest to reduce computational cost.
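One simple way to approximate this cleanup — a sketch of my own, not the exact Brainchop implementation — is to zero out low-intensity background voxels and crop the volume to the bounding box of the remaining tissue:

```python
import numpy as np

def remove_background(volume: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Zero out near-background voxels and crop to the tissue bounding box."""
    # Suppress voxels below the (assumed) noise threshold
    cleaned = np.where(volume >= threshold, volume, 0.0)

    # Find the bounding box of the remaining non-zero voxels
    nonzero = np.argwhere(cleaned > 0)
    if nonzero.size == 0:
        return cleaned  # nothing above threshold; return unchanged
    lo = nonzero.min(axis=0)
    hi = nonzero.max(axis=0) + 1

    # Crop: everything outside the box is background
    return cleaned[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```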


The Overall Design of the Meshnet Model

Despite its name, MeshNet does not take meshes as input: as shown in the image above, it processes the volumetric MRI data directly. Every layer of the network preserves the full spatial resolution of the input, so positional information about each element of the NIfTI volume is carried through the whole network, which greatly improves accuracy. In this particular case, the convolutional neural network has a total of eight layers.

The MeshNet model is inspired by multi-scale context aggregation by dilated convolutions [3]. A dilated convolution introduces “holes” — gaps between the elements of the kernel — which enlarges the receptive field without adding parameters or downsampling the input. Each voxel that is significant for our use case is therefore classified using both fine local detail and wider anatomical context. This keeps the model lightweight and computationally inexpensive while still delivering high segmentation accuracy.
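A minimal MeshNet-style sketch in PyTorch (my own simplification of the eight-layer design — the channel width and dilation schedule here are assumptions, not the exact values from the paper):

```python
import torch
import torch.nn as nn

def meshnet(in_channels: int = 1, num_classes: int = 3, width: int = 21) -> nn.Sequential:
    """Eight-layer fully convolutional 3D network with dilated convolutions.

    With padding equal to the dilation, every layer keeps the input's
    spatial resolution while the receptive field grows with the dilation.
    """
    dilations = [1, 1, 1, 2, 4, 8, 1]  # dilation schedule (assumed)
    layers, ch = [], in_channels
    for d in dilations:
        layers += [
            nn.Conv3d(ch, width, kernel_size=3, padding=d, dilation=d),
            nn.BatchNorm3d(width),
            nn.ReLU(inplace=True),
        ]
        ch = width
    # Final 1x1x1 convolution maps features to per-voxel class scores
    layers.append(nn.Conv3d(ch, num_classes, kernel_size=1))
    return nn.Sequential(*layers)

model = meshnet()
out = model(torch.randn(1, 1, 32, 32, 32))  # a small random volume
print(out.shape)                            # torch.Size([1, 3, 32, 32, 32])
```

Because the spatial dimensions never shrink, the output assigns a class score to every voxel of the input volume.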

Code and Demo

The GitHub repository for implementing the above steps is available here. Clone the repository to your local system and open index.html. Click on “file” and select your custom nifti image. The following is the demo output in my case.

Conclusion

As we can see, no complex software is needed to render the visualizations, and the quality is excellent at zero cost. The application can be used by anyone, from junior to experienced doctors, and medical students without access to enterprise-level software can use it just as easily. The project is mainly targeted at underdeveloped and developing countries, where high-speed internet and advanced supercomputing infrastructure are often unavailable.

This particular algorithm is not restricted to human anatomy: it can also be applied to MRI scans of animals, and thus benefit veterinary medicine as well.

GIF from https://github.com/neuroneural/brainchop

References

[1] M. Masoud, F. Hu, and S. Plis, ‘Brainchop: In-browser MRI volumetric segmentation and rendering’, Journal of Open Source Software, vol. 8, no. 83, p. 5098, 2023. doi:10.21105/joss.05098

[2] A. Fedorov, J. Johnson, E. Damaraju, A. Ozerin, V. Calhoun, and S. Plis, ‘End-to-end learning of brain tissue segmentation from imperfect labeling’, IEEE International Joint Conference on Neural Networks (IJCNN), 2017. doi:10.1109/IJCNN.2017.7966333

[3] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” arXiv preprint arXiv:1511.07122, 2015.

[4] Data source: https://slicer.readthedocs.io/en/latest/user_guide/modules/sampledata.html#sample-data

Editor’s Note: Heartbeat is a contributor-driven online publication and community dedicated to providing premier educational resources for data science, machine learning, and deep learning practitioners. We’re committed to supporting and inspiring developers and engineers from all walks of life.

Editorially independent, Heartbeat is sponsored and published by Comet, an MLOps platform that enables data scientists & ML teams to track, compare, explain, & optimize their experiments. We pay our contributors, and we don’t sell ads.

If you’d like to contribute, head on over to our call for contributors. You can also sign up to receive our weekly newsletter (Deep Learning Weekly), check out the Comet blog, join us on Slack, and follow Comet on Twitter and LinkedIn for resources, events, and much more that will help you build better ML models, faster.
