Project MONAI is excited to announce that its flagship framework, MONAI Core, has reached v1.0

MONAI Medical Open Network for AI
Sep 26, 2022 · 5 min read

An exciting journey started three years ago when NVIDIA and King’s College London came together during MICCAI 2019 and formed Project MONAI as an initiative to develop a standardized, user-friendly, and open-source platform for Deep Learning in Medical Imaging. Soon after that, they established the MONAI Advisory Board and Working Groups with representatives from Stanford University, National Cancer Institute, DKFZ, TUM, Chinese Academy of Sciences, University of Warwick, Northwestern University, Kitware, and Mayo Clinic.

Throughout this journey, MONAI has deepened its offering in radiology, expanded to pathology, and most recently, included support for streaming modalities starting with endoscopy. Now three years later, MONAI has over 600,000 downloads. It is used in over 450 GitHub projects, has been cited in over 150 published papers, and academic and industry leaders are using MONAI in their research and clinical workflows.

We’re excited to announce that MONAI is continuing to expand open-source healthcare AI innovation with v1.0. With a focus on providing a robust API that is designed for backward compatibility, this release ensures that you can integrate MONAI into your projects today and benefit from the stability of an industry-leading framework into the future.

Let’s look at the features included in the MONAI Core v1.0 and MONAI Label v0.5 releases, as well as a new initiative called the MONAI Model Zoo.

MONAI Core v1.0

With the release of v1.0, MONAI Core focuses heavily on a robust, backward-compatible API design and adds features including MetaTensor, a Federated Learning API, the MONAI Bundle specification, and the Auto3D Segmentation framework.

MetaTensor

The MetaTensor enhances metadata-aware image processing by combining a PyTorch tensor with its imaging metadata in a single object. This combined information is essential for delivering clinically useful models, supporting image registration, and joining multiple models into a cohesive workflow.
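The core idea can be sketched in plain Python: a container that keeps the pixel data, its metadata, and a record of applied transforms together, so downstream steps (and transform inversion) can always recover the imaging context. This is an illustrative stand-in only, not the real API; in MONAI the class is `monai.data.MetaTensor`, which subclasses `torch.Tensor`.

```python
# Conceptual sketch of the MetaTensor idea: data plus imaging metadata
# (e.g. the affine mapping voxels to patient space) travel together through
# a processing pipeline. Illustrative only -- not the real MONAI class.

class MetaArray:
    def __init__(self, data, meta=None):
        self.data = data                      # nested lists standing in for voxels
        self.meta = dict(meta or {})          # e.g. affine, original filename
        self.applied_operations = []          # transform history, enables inversion

    def apply(self, name, fn):
        """Apply a transform and record it so the pipeline stays metadata-aware."""
        out = MetaArray(fn(self.data), self.meta)
        out.applied_operations = self.applied_operations + [name]
        return out

img = MetaArray([1, 2, 3], meta={"affine": "identity", "filename": "ct_001.nii"})
scaled = img.apply("scale_x2", lambda d: [v * 2 for v in d])
print(scaled.data)                # [2, 4, 6]
print(scaled.meta["filename"])    # ct_001.nii -- metadata travels with the data
print(scaled.applied_operations)  # ['scale_x2']
```

Because the transform history rides along with the data, a whole pipeline of crops and resamplings can later be undone to map predictions back into the original image space.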

MONAI MetaTensor Docs

MONAI Bundle

The MONAI Bundle is a self-contained model package with pre-trained weights and all associated metadata abstracted through JSON and YAML-based configurations. Designed for ease of use and flexibility, these configs can be overridden or customized directly, or used in a hybrid programming model that bridges configuration and Python code.
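Bundle configs declare components with a `_target_` dotted path plus keyword arguments, and the parser imports and instantiates them, which is what makes the config-to-code abstraction work. A toy resolver for a flat config might look like the following; the real implementation is `monai.bundle.ConfigParser`, and here a stdlib class stands in for a MONAI network.

```python
# Minimal sketch of the config-to-code idea behind MONAI Bundles: a component
# is declared in JSON/YAML with a "_target_" dotted path, and the parser
# imports and instantiates it with the remaining keys as keyword arguments.
# Toy resolver only -- the real one is monai.bundle.ConfigParser.

import importlib

def instantiate(config: dict):
    cfg = dict(config)
    target = cfg.pop("_target_")                  # e.g. "monai.networks.nets.UNet"
    module_path, _, cls_name = target.rpartition(".")
    cls = getattr(importlib.import_module(module_path), cls_name)
    return cls(**cfg)                             # remaining keys become kwargs

# A real bundle would point _target_ at a network; a stdlib class stands in here.
config = {"_target_": "datetime.date", "year": 2022, "month": 9, "day": 26}

obj = instantiate(config)
print(obj.isoformat())  # 2022-09-26
```

The same pattern lets a user swap a network or transform just by editing the config, without touching any training code.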

MONAI Bundle Docs

Federated Learning

The MONAI Federated Learning module provides a base API that defines a MONAI Client App that can run on any federated learning platform. With the new federated learning APIs, you can utilize MONAI bundles and seamlessly extend them to the federated learning paradigm.

The first platform to support these new Federated Learning APIs is NVIDIA FLARE, the federated learning platform developed by NVIDIA. We welcome the integration of other federated learning toolkits to the MONAI Federated Learning APIs to help build a common foundation for collaborative learning in medical imaging.
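The pattern these APIs enable can be sketched with a toy example: each site implements a small client interface (train locally, report weights), and a server aggregates by federated averaging. The names below are illustrative stand-ins, not MONAI's actual interface (which is `monai.fl.client.ClientAlgo`, implemented for bundles by `MonaiAlgo`); the "model" is a single float so the round-trip is easy to follow.

```python
# Illustrative federated-averaging sketch: clients never share data, only
# model weights, which the server averages into a new global model each round.

class ToyClient:
    """Stand-in for a MONAI client algorithm running at one hospital."""
    def __init__(self, local_data):
        self.local_data = local_data
        self.weight = 0.0

    def train(self, global_weight):
        # "Training" nudges the weight toward the local data mean.
        local_mean = sum(self.local_data) / len(self.local_data)
        self.weight = (global_weight + local_mean) / 2

    def get_weights(self):
        return self.weight

def federated_round(clients, global_weight):
    for c in clients:
        c.train(global_weight)                       # local training at each site
    # Server-side federated averaging of the returned weights.
    return sum(c.get_weights() for c in clients) / len(clients)

clients = [ToyClient([1.0, 3.0]), ToyClient([5.0, 7.0])]
w = 0.0
for _ in range(3):                                   # three communication rounds
    w = federated_round(clients, w)
print(round(w, 3))  # 3.5 -- converging toward the pooled data mean of 4.0
```

Because only weights cross site boundaries, the raw imaging data stays inside each institution, which is the point of the paradigm.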

MONAI and Federated Learning high-level workflow using the new MONAIAlgo FL APIs

MONAI Federated Learning Docs

NVIDIA FLARE + MONAI Example

Auto3D Segmentation

Auto3D is a low-code framework that allows data scientists and researchers of any skill level to train models that can quickly segment regions of interest in data from 3D imaging modalities like CT and MRI.

Developers can start with as little as 1–5 lines of code, resulting in a highly accurate segmentation model. By focusing on accuracy and including state-of-the-art models like Swin UNETR, DiNTS, and SegResNet, data scientists and researchers can utilize the latest and greatest algorithms to help maximize their productivity.
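Behind the few-line entry point, the workflow runs in stages: analyze the dataset, train a set of candidate algorithms, then ensemble the best performers. The function names and scores below are illustrative stand-ins, not the real MONAI API (the actual entry point lives in `monai.apps.auto3dseg`).

```python
# Sketch of the Auto3D pipeline stages: data analysis -> candidate training
# -> ensembling. Everything here is a toy stand-in for illustration.

def analyze(dataset):
    """Data analysis stage: summarize dataset statistics to guide training."""
    return {"num_cases": len(dataset), "mean": sum(dataset) / len(dataset)}

def train_candidates(stats, algos=("SegResNet", "SwinUNETR", "DiNTS")):
    """Train each candidate algorithm; these scores are fabricated placeholders."""
    return {name: round(0.8 + 0.01 * i, 2) for i, name in enumerate(algos)}

def ensemble(scores, top_k=2):
    """Keep the best-scoring models for the final ensemble."""
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

dataset = [0.5, 1.5, 2.5]                 # stand-in for a list of CT volumes
stats = analyze(dataset)
scores = train_candidates(stats)
print(ensemble(scores))                   # the two best candidates by score
```

The user-facing surface stays small because these stages are driven automatically from the dataset analysis rather than hand-tuned per task.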

Auto3D Tutorial

Auto3D Segmentation Training and Inference workflow

MONAI Model Zoo

We’re excited to announce the MONAI Model Zoo, a hub for sharing pre-trained models that allow data scientists and clinical researchers to jump-start their AI development.

In the first release, there are 15 pre-trained models from MONAI partners, including King’s College London, Charité University, University of Warwick, Vanderbilt University, and Mayo Clinic.

These models utilize the MONAI Bundle specification, making it easy to get started in just a few commands. With the MONAI Bundle and Model Zoo, we hope to establish a common standard for reproducible research and collaboration, and we welcome everyone to contribute to this effort by submitting their pre-trained models for downstream tasks.

MONAI Model Zoo Landing Page

MONAI Label v0.5

MONAI Label now supports MONAI Core v1.0 and continues to evolve. For radiology, this release focuses on improving overall performance and adds a new vertebra segmentation model. For endoscopy, it deepens the CVAT integration for annotation and releases three new models.

Radiology

In this release, MONAI Label focuses heavily on improving the overall performance of radiology applications.

By caching the output of pre-transforms when inference is run repeatedly for interactive models, the overall interactive loop is now noticeably faster. This release also adds support for DICOMweb API responses and fixes to the DICOM proxy for WADO and QIDO.
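The caching idea is simple to sketch: in interactive (click-based) inference the same image is run through the model many times, but its deterministic pre-transforms (load, resample, normalize) only need to run once, so caching their output keyed by image turns every click after the first into a cache hit. The names below are illustrative, not MONAI Label's actual implementation.

```python
# Sketch of pre-transform caching for interactive inference: expensive
# deterministic preprocessing runs once per image, however many clicks follow.

from functools import lru_cache

CALLS = {"pre_transform": 0}

@lru_cache(maxsize=8)
def pre_transform(image_id):
    """Expensive deterministic preprocessing -- executed once per image."""
    CALLS["pre_transform"] += 1
    return f"preprocessed:{image_id}"

def interactive_infer(image_id, click):
    tensor = pre_transform(image_id)      # cache hit after the first click
    return f"mask({tensor}, click={click})"

for click in [(10, 20), (11, 22), (30, 40)]:   # three user clicks, one image
    interactive_infer("ct_001", click)
print(CALLS["pre_transform"])             # 1 -- preprocessing ran only once
```

Only the cheap, click-dependent part of the pipeline re-runs on each interaction, which is what tightens the annotate-infer-correct loop.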

Additionally, MONAI Label has released a new Multi-Stage Vertebra Segmentation model with three stages that can be used together or independently. The Vertebra model demonstrates the power of a multi-stage approach for segmenting several structures on a CT Image.

MONAI Label’s new Vertebra model segments several structures in CT images, shown running in 3D Slicer.

Endoscopy

MONAI Label now supports 2D segmentation for endoscopy. Building on the previous CVAT integration, MONAI Label has integrated active learning into CVAT's automated annotation workflow.

Three new models are being released: a Tool Tracking segmentation model, an InBody vs. OutBody classification model for de-identification, and DeepEdit for interactive tool annotation.

MONAI Label Endoscopy Sample Applications

Conclusion

This is a momentous milestone for Project MONAI and we are looking forward to further serving the medical imaging community. We want to hear your feedback! Connect with us on Slack and GitHub. Please share your successes and report any issues you might have with MONAI.

Interested in joining the MONAI Community? Get started on our MONAI YouTube Channel, where we have tutorials, archived bootcamps, and walkthrough guides.

Stay tuned for the latest news on our hosted events! Whether you’re new to MONAI or already integrating MONAI into your workflow, the MONAI Website and Twitter account are the best places to stay up to date!
