MONAI Releases v0.5

MONAI Medical Open Network for AI
Apr 16, 2021

In the latest v0.5 release of MONAI, the team expands its transforms and workflows and adds end-to-end applications based on new research areas. Let’s take a closer look at this release.

Transforms

The newest transforms in MONAI include SaveImage, EnsureChannelFirst, and a tech-preview feature that allows you to invert your transforms.

First, from training to deployment, images are often transformed before being passed through the network. These transformations enable a wide range of functionality, from data augmentation to modifying an image’s spacing and orientation. After passing an image through the network, it is often desirable to return the output to the same space as the input. This inversion allows us to compare against ground-truth labels, save outputs to disk for analysis in an external viewer, or even perform test-time augmentation to estimate network uncertainty. To support this, MONAI now provides an inverse operation for all spatial transforms. Users can invert the spatial transforms individually or use the new TransformInverter handler within a workflow.

Learn more about inverting transforms and test-time augmentations in a new tutorial.
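The bookkeeping behind invertible transforms can be sketched in a few lines: each transform records what it applied, and inverse() undoes the most recent entry. This toy numpy class is illustrative only, not MONAI's actual implementation:

```python
import numpy as np

class FlipTransform:
    """Toy invertible transform: the forward pass flips an axis and
    records what it did, so inverse() can undo it later -- the idea
    behind MONAI's invertible spatial transforms."""

    def __init__(self, axis):
        self.axis = axis

    def __call__(self, data):
        applied = data.get("applied", [])
        return {
            "image": np.flip(data["image"], self.axis),
            "applied": applied + [("flip", self.axis)],  # track for inversion
        }

    def inverse(self, data):
        op, axis = data["applied"][-1]  # undo the most recent operation
        assert op == "flip"
        return {
            "image": np.flip(data["image"], axis),
            "applied": data["applied"][:-1],
        }

img = np.arange(6).reshape(2, 3)
fwd = FlipTransform(axis=1)({"image": img})
restored = FlipTransform(axis=1).inverse(fwd)  # back to the input space
```

Running the inverse returns the image to its original space, so outputs can be compared directly with the untransformed ground truth.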

Second, users can inject the SaveImage transform anywhere into the transform chain to save intermediate results. This lets developers write the images to disk and debug the transform chain.
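The pattern is easy to sketch with plain numpy: a pass-through step that snapshots the intermediate image to disk. MONAI's SaveImage also handles medical image formats and metadata-driven filenames; this toy DebugSave class only captures the idea:

```python
import os
import tempfile
import numpy as np

class DebugSave:
    """Toy stand-in for SaveImage: writes the current image to disk and
    passes the data through unchanged, so it can sit anywhere in a chain."""

    def __init__(self, out_dir, name):
        self.path = os.path.join(out_dir, name + ".npy")

    def __call__(self, img):
        np.save(self.path, img)  # snapshot for offline inspection
        return img               # pass-through: the chain continues unmodified

out_dir = tempfile.mkdtemp()
chain = [lambda x: x * 2, DebugSave(out_dir, "after_scale"), lambda x: x + 1]
img = np.ones((2, 2))
for step in chain:
    img = step(img)
snapshot = np.load(os.path.join(out_dir, "after_scale.npy"))
```

The snapshot reflects the chain's state at the injection point, while the final output is unaffected by the save step.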

Third, medical images come in different shape formats and channel positions. The EnsureChannelFirst transform automatically detects the data layout from the image’s meta information and converts it to a channel-first format.
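Conceptually, the conversion only needs to know where the channel axis originally was. This sketch assumes a metadata key named original_channel_dim, mirroring the convention MONAI's readers use, but it is a simplification of the real transform:

```python
import numpy as np

def ensure_channel_first(img, meta):
    """Move the channel axis to position 0 based on image metadata.
    `original_channel_dim` is either an axis index or "no_channel"."""
    ch = meta.get("original_channel_dim", "no_channel")
    if ch == "no_channel":
        return img[np.newaxis]       # add a singleton channel axis
    return np.moveaxis(img, ch, 0)   # move the existing channel axis first

rgb = np.zeros((64, 64, 3))                                    # channel-last
chw = ensure_channel_first(rgb, {"original_channel_dim": -1})  # (3, 64, 64)
vol = np.zeros((32, 32, 32))                                   # no channel
cvol = ensure_channel_first(vol, {"original_channel_dim": "no_channel"})
```

Downstream transforms and networks can then rely on a consistent channel-first layout regardless of the source format.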

Applications

Medical image deep learning is a fast-moving research area, with new papers published all the time. MONAI has implemented many of the individual components needed to turn these papers into end-to-end solutions or prototypes. To give researchers an easy way to experiment with state-of-the-art methods, MONAI is now starting to create reference applications from the latest research.

DeepGrow

Now included in our new apps component is a reimplementation of the DeepGrow components. DeepGrow is a deep-learning-based, semi-automated segmentation approach that aims to be a “smart” interactive tool for region-of-interest delineation in medical images.

In addition to releasing DeepGrow, MONAI has now been integrated into NVIDIA Clara Train, where DeepGrow was originally developed. This integration is exciting news since it means that MONAI is now in production and reaching even more researchers.

Lesion Detection

Expanding into digital pathology, MONAI now includes a lesion detection application. The NVIDIA cuCIM library enables large datasets to be tiled on demand and processed through a CUDA pipeline, and SmartCaching efficiently re-uses data in memory at each epoch, which is especially helpful given the large size of whole-slide images. The application also includes FROC analysis and probabilistic post-processing for lesion detection.
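On-demand tiling itself is simple to sketch. cuCIM additionally decodes regions of compressed whole-slide files directly, so this numpy version only shows the access pattern, not the real I/O:

```python
import numpy as np

def iter_tiles(slide, tile=256):
    """Lazily yield non-overlapping tiles from a large 2D image so that
    only one tile at a time needs to be materialized downstream."""
    h, w = slide.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield slide[y:y + tile, x:x + tile]

slide = np.random.rand(1024, 1024)   # stand-in for a whole-slide image
tiles = list(iter_tiles(slide))      # 4 x 4 = 16 tiles of 256 x 256
```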

Learning-based Image Registration

Starting from v0.5, MONAI now provides a tech preview of features for building learning-based 2D/3D registration workflows. These include image similarity measures as loss functions, bending energy as model regularization, network architectures, and warping modules. These components can be used to build common unsupervised and weakly-supervised algorithms. Expect to see more features regarding Learning-based Image Registration in upcoming releases.
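As an illustration of one of these components, bending energy penalizes non-smooth displacement fields. This finite-difference version is a sketch of the concept, not MONAI's implementation:

```python
import numpy as np

def bending_energy(disp):
    """Bending-energy regularizer for a 2D displacement field of shape
    (2, H, W): the mean squared second spatial difference, which is zero
    for affine deformations and grows as the field bends."""
    d2y = disp[:, 2:, :] - 2 * disp[:, 1:-1, :] + disp[:, :-2, :]
    d2x = disp[:, :, 2:] - 2 * disp[:, :, 1:-1] + disp[:, :, :-2]
    return (d2y ** 2).mean() + (d2x ** 2).mean()

yy, xx = np.meshgrid(np.arange(8.0), np.arange(8.0), indexing="ij")
affine_field = np.stack([0.1 * yy, 0.2 * xx])    # linear, hence no bending
wavy_field = np.stack([np.sin(yy), np.cos(xx)])  # curved, positive energy
```

Adding this term to an image-similarity loss steers the registration network toward smooth, physically plausible deformations.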

Losses, Optimizers, Workflows, and Other Enhancements

MONAI continues to expand on its core components, and this release includes new loss functions, optimizers, workflow enhancements, and an update to the ImageReader API.

PyTorch Module Serialization

We’ve updated our existing intermediate blocks, generic networks, and most of the loss functions to support PyTorch serialization based on torch.jit.script. This means you can now export your models to TorchScript, which decouples a model from its Python runtime environment. One significant benefit is that the exported model can be deployed in optimized runtimes, improving inference latency and throughput.
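The export flow looks like this. TinyNet below is a hypothetical toy model standing in for a MONAI network; the same torch.jit.script call applies to the supported networks:

```python
import os
import tempfile
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A toy stand-in for a MONAI network, just to show the export flow."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = TinyNet().eval()
scripted = torch.jit.script(model)                  # compile to TorchScript
path = os.path.join(tempfile.mkdtemp(), "tiny_net.pt")
scripted.save(path)                                 # self-contained artifact
loaded = torch.jit.load(path)                       # no Python class required
x = torch.rand(1, 1, 8, 8)
same = torch.allclose(model(x), loaded(x))          # identical outputs
```

Because the saved file carries both code and weights, it can be loaded in C++ or other TorchScript runtimes without the original Python source.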

Workflows

Previously, MONAI workflows exposed event handlers for only a limited set of events. We’ve now added support for rich event handling, and handlers can be independently attached to the trainer or evaluator through custom events.
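The idea can be sketched with a minimal observer pattern. MONAI's workflows build on Ignite's Engine and event system; ToyEngine and the event names below are purely illustrative:

```python
class ToyEngine:
    """Minimal event-driven loop: handlers register for named events,
    including user-defined ones, and run when the engine fires them."""

    def __init__(self):
        self.handlers = {}

    def add_event_handler(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def fire(self, event, *args):
        for fn in self.handlers.get(event, []):
            fn(*args)

log = []
trainer = ToyEngine()
trainer.add_event_handler("EPOCH_COMPLETED", lambda e: log.append(("epoch", e)))
# a custom event, independent of the built-in training events
trainer.add_event_handler("METRIC_IMPROVED", lambda v: log.append(("best", v)))
for epoch in range(2):
    trainer.fire("EPOCH_COMPLETED", epoch)
trainer.fire("METRIC_IMPROVED", 0.91)
```

Because handlers are keyed by event name, new behaviors can be attached to a trainer or evaluator without modifying the loop itself.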

ImageReader

MONAI’s ImageReader API now automatically chooses which image reader to use based on the supported suffixes. User-specified readers are tried first; otherwise, the default order is NibabelReader, PILReader, NumpyReader, and ITKReader. To find the specific file formats supported by each reader, see our documentation.
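The selection logic amounts to suffix-based dispatch. The suffix lists below are illustrative rather than MONAI's exact registry; consult the documentation for the real format support:

```python
# Illustrative suffix lists -- see the MONAI docs for the exact formats.
DEFAULT_READERS = [
    ("NibabelReader", (".nii", ".nii.gz")),
    ("PILReader", (".png", ".jpg", ".bmp")),
    ("NumpyReader", (".npy", ".npz")),
    ("ITKReader", (".mha", ".mhd", ".nrrd")),
]

def pick_reader(filename, user_readers=()):
    """Choose a reader by filename suffix: user-specified readers are
    tried first, then the defaults in order."""
    fname = filename.lower()
    for name, suffixes in list(user_readers) + DEFAULT_READERS:
        if fname.endswith(suffixes):
            return name
    raise ValueError(f"no registered reader supports {filename!r}")

reader = pick_reader("scan.nii.gz")  # default order -> NibabelReader
custom = pick_reader("scan.nii.gz", user_readers=[("MyReader", (".nii.gz",))])
```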

Loss Functions

Three new loss functions, FocalLoss, DiceCELoss, and DiceFocalLoss, have been added.
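FocalLoss follows the standard focal-loss formulation, which down-weights well-classified examples so training focuses on hard cases; DiceCELoss and DiceFocalLoss combine a Dice term with cross-entropy and focal terms, respectively. A per-example numpy sketch of the binary focal case:

```python
import numpy as np

def focal_loss(prob, target, gamma=2.0):
    """Binary focal loss per example: -(1 - p_t)**gamma * log(p_t).
    The (1 - p_t)**gamma factor shrinks toward zero for confident
    correct predictions, leaving the gradient dominated by hard cases."""
    p_t = np.where(target == 1, prob, 1.0 - prob)
    return -((1.0 - p_t) ** gamma) * np.log(p_t + 1e-8)

easy = focal_loss(np.array([0.9]), np.array([1]))[0]  # confident and correct
hard = focal_loss(np.array([0.2]), np.array([1]))[0]  # badly wrong
```

With gamma = 2, the confident prediction contributes orders of magnitude less loss than the miss, which is the behavior that makes focal loss useful for heavily imbalanced segmentation tasks.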

Optimizers

MONAI now provides several optimizers to help accelerate training or fine-tuning, including the Novograd optimizer. Users can also define different learning rates for different layers of the model.
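Per-layer learning rates boil down to building optimizer parameter groups. A plain-Python sketch, where layerwise_lr_groups is a hypothetical helper rather than MONAI's API:

```python
def layerwise_lr_groups(named_params, base_lr=1e-3, overrides=None):
    """Build optimizer parameter groups: parameters whose names start
    with an override prefix get that learning rate; everything else
    gets base_lr. The result can be passed to a PyTorch optimizer."""
    overrides = overrides or {}
    groups = []
    for name, param in named_params:
        lr = base_lr
        for prefix, rate in overrides.items():
            if name.startswith(prefix):
                lr = rate
        groups.append({"params": [param], "lr": lr})
    return groups

# placeholder objects stand in for model.named_parameters()
named = [("backbone.conv1.weight", object()), ("head.fc.weight", object())]
groups = layerwise_lr_groups(named, base_lr=1e-3, overrides={"backbone": 1e-4})
```

A common fine-tuning recipe: a small learning rate for pre-trained backbone layers and a larger one for the freshly initialized head.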

Another important feature introduced in this release is the LearningRateFinder, which provides valuable information on how well the network trains over a range of learning rates.
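The core of a learning-rate range test is an exponential sweep: train a few iterations at each rate and watch where the loss begins to diverge. The function below sketches the schedule; its name is illustrative, not the LearningRateFinder API:

```python
import numpy as np

def range_test_lrs(start_lr, end_lr, num_iters):
    """Exponentially spaced learning rates for a range test, so each
    step multiplies the rate by a constant factor."""
    steps = np.arange(num_iters) / (num_iters - 1)
    return start_lr * (end_lr / start_lr) ** steps

lrs = range_test_lrs(1e-6, 1e-1, 6)  # 1e-6, 1e-5, ..., 1e-1
```

Plotting loss against these rates typically shows a flat region, a descent, and a divergence; a good working rate sits near the steepest descent.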

Transfer Learning

Transfer Learning is a ubiquitous and efficient training approach, especially in the medical domain, where obtaining large datasets for training can be difficult. Transfer learning from a pre-trained checkpoint can significantly improve the model metrics and shorten training time.

To help with some of the common issues in transfer learning, like layer names or shapes that don’t precisely match the original weights, we’ve introduced the CheckpointLoader. The CheckpointLoader loads a checkpoint into the workflow before training and allows for some mismatch between the new and old models, which helps when the current task has a different input image shape or number of output classes.
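The tolerant-loading behavior amounts to filtering the checkpoint's state dict against the new model before loading. This numpy-dict sketch shows the idea, not the CheckpointLoader implementation:

```python
import numpy as np

def filter_checkpoint(ckpt, model_state):
    """Keep only checkpoint entries whose name and shape match the new
    model; mismatched layers (e.g. a different output head) are skipped
    and left at their fresh initialization."""
    return {
        name: weight
        for name, weight in ckpt.items()
        if name in model_state and model_state[name].shape == weight.shape
    }

ckpt = {"conv.weight": np.zeros((4, 3)), "fc.weight": np.zeros((10, 8))}
model = {"conv.weight": np.ones((4, 3)), "fc.weight": np.ones((2, 8))}  # new head
loadable = filter_checkpoint(ckpt, model)  # only the matching layer survives
```

The backbone weights transfer while the re-shaped head trains from scratch, which is usually exactly what a new task with different output classes needs.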

Summary

MONAI continues to remain up-to-date with the latest research developments and is expanding into new areas!

There are also three great sessions to check out on MONAI and Medical Imaging at NVIDIA’s 2021 GTC Spring Conference.

Daniel Rubin, Professor at Stanford University and MONAI Advisory Board member, will be presenting a talk on “Deploying Robust Medical Imaging AI Applications: Federated Learning and Other Approaches.”

We’re also hosting a MONAI Bootcamp on April 22nd, 2021, and a Dinner with Strangers event on April 20th, 2021. Make sure to sign up!


The MONAI framework is a PyTorch-based, open-source foundation for deep learning in healthcare imaging. It is domain-optimized, freely available, and community-backed.