In collaboration with the MICCAI Educational Initiative, MONAI hosted its first Bootcamp from September 30 to October 2, 2020. This three-day virtual event included presentations, hands-on labs, direct contact with the MONAI core group, and an open challenge on the last day.

It was great to see so many people apply for the Bootcamp! We had over 550 applicants from more than 40 countries, making it a truly global event! Initially, we planned for only 60 participants, each with cluster access to V100 GPUs. However, due to the community’s interest, we opened up the Bootcamp to an additional 140 people, who had full access to the materials, the live presentations, and communication channels to the MONAI experts.

The relevant presentations and hands-on notebooks are available online. Here we briefly summarize the Bootcamp activities:

We kicked off the first day by introducing MONAI Transforms, DataSets, and DataLoader. This lab gives you a quick look at some basic functionality in MONAI and sets you up for the next lab, which is an end-to-end training and evaluation notebook.
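The core idea is easy to see in miniature. The sketch below is plain Python, not the MONAI API: it mimics how a Compose-style pipeline chains transforms and how a Dataset applies that pipeline lazily as items are fetched (MONAI’s actual classes live in `monai.transforms` and `monai.data`).

```python
# Conceptual sketch of MONAI's transform/dataset pattern in plain Python.
# These are illustrative stand-ins, not MONAI's Compose and Dataset classes.

class Compose:
    """Chain a list of callables, applying them in order to each item."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, item):
        for transform in self.transforms:
            item = transform(item)
        return item

class Dataset:
    """Apply the transform pipeline lazily when an item is requested."""
    def __init__(self, data, transform=None):
        self.data = data
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        item = self.data[index]
        return self.transform(item) if self.transform else item

# Two toy "transforms": scale, then shift (real ones load and augment images).
pipeline = Compose([lambda x: x * 2, lambda x: x + 1])
ds = Dataset([1, 2, 3], transform=pipeline)
print([ds[i] for i in range(len(ds))])  # → [3, 5, 7]
```

A DataLoader then simply batches items pulled from such a dataset, which is why the pieces slot together so cleanly.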

The end-to-end notebook demonstrates how you can integrate MONAI into a typical vanilla PyTorch training loop. It then explores using PyTorch-ignite and features of MONAI that work well with ignite.
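For readers new to PyTorch, the “vanilla training loop” that MONAI slots into has a simple skeleton: forward pass, loss, gradients, parameter update, repeated per epoch. A minimal NumPy stand-in (no PyTorch required; a toy linear model fit with SGD) shows just that structure:

```python
import numpy as np

# Skeleton of a generic training loop, illustrated with NumPy instead of
# PyTorch. MONAI supplies the data pipelines, losses, and metrics that
# drop into exactly this forward/loss/backward/update structure.
xs = np.linspace(-1.0, 1.0, 32)
ys = 2.0 * xs + 0.5            # "ground truth" the model should recover

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    pred = w * xs + b                         # forward pass
    grad_w = 2 * np.mean((pred - ys) * xs)    # gradient of MSE w.r.t. w
    grad_b = 2 * np.mean(pred - ys)           # gradient of MSE w.r.t. b
    w -= lr * grad_w                          # optimizer step (plain SGD)
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # → 2.0 0.5
```

PyTorch Ignite wraps this same loop in an engine with event handlers, which is where MONAI’s handlers and metrics attach.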

The last lab is where the architectural deep dive begins. Three separate notebooks provide an in-depth look at Datasets, Networks, Sliding Window Inference, and Post-Processing transforms, where you’ll learn some best practices for using MONAI.

The first section of this lab covers the different caching mechanisms, along with the pros and cons of each. Next, you’ll explore the individual network layers and how you can use MONAI’s flexible network architecture to build your own networks quickly. Last, you’ll learn about sliding window inference and then put it into action, along with post-processing transforms, to visualize your results.
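To make the sliding window idea concrete, here is a deliberately simplified NumPy sketch: it tiles an image into non-overlapping patches, runs a predictor on each, and stitches the outputs back together. MONAI’s real `monai.inferers.sliding_window_inference` additionally handles batching, overlapping windows, and blending of the overlapped predictions.

```python
import numpy as np

def sliding_window_infer(image, roi, predictor):
    """Tile a 2D image into roi-by-roi windows, run the predictor on each
    patch, and stitch the patch outputs back into a full-size result.
    Simplified illustration: no overlap, no blending, single image."""
    out = np.zeros_like(image, dtype=np.float64)
    h, w = image.shape
    for y in range(0, h, roi):
        for x in range(0, w, roi):
            patch = image[y:y + roi, x:x + roi]
            out[y:y + roi, x:x + roi] = predictor(patch)
    return out

img = np.arange(16.0).reshape(4, 4)
res = sliding_window_infer(img, 2, lambda p: p + 1)  # toy "model": add 1
print(res)
```

The point of the technique is that the predictor only ever sees patches of a fixed `roi` size, so memory stays bounded no matter how large the full volume is.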

We ended the day with a presentation and demo of Automatic Mixed Precision (AMP) and Distributed Data Parallel (DDP) training. MONAI integrated these features in v0.3 to accelerate common training workflows and provides rich examples demonstrating their APIs. You can find the notebooks and Python files for these examples in the MONAI tutorials repo.
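A quick illustration of why mixed precision needs loss scaling at all: float16 cannot represent values below roughly 6e-8, so small gradients silently underflow to zero unless the loss is scaled up first (65536 is, for example, the default initial scale of PyTorch’s `GradScaler`):

```python
import numpy as np

# Why AMP pairs float16 compute with loss scaling: gradients smaller than
# float16's tiniest subnormal (~6e-8) silently underflow to zero.
tiny_grad = 1e-8
underflowed = np.float16(tiny_grad)
print(underflowed)                    # → 0.0 (the gradient is lost)

scale = 65536.0                        # default initial GradScaler factor
survived = np.float16(tiny_grad * scale)
print(survived)                        # nonzero; unscaled again after backward
```

AMP automates exactly this dance: scale the loss before backward, then unscale the gradients before the optimizer step.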

Day two builds on what you learned the day before and goes into more complex examples using MONAI.

You’ll begin by looking at implementing Generative Adversarial Networks (GANs) using the MedNIST dataset. You’ll define the generator and discriminator using MONAI’s network classes and train them both. In the end, you’ll plot some of the randomly generated images to see how well you’re doing.
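The alternating update at the heart of GAN training can be sketched in a few lines. The toy below is plain NumPy with a one-parameter generator and a logistic discriminator, not MONAI’s network classes; it only illustrates the discriminator-step/generator-step rhythm the notebook implements:

```python
import numpy as np

# Toy GAN: generator G(z) = z + theta tries to match data centered at 4;
# discriminator D(x) = sigmoid(a*x + b) tries to tell real from fake.
# Manual gradients, alternating updates -- the same rhythm as real GANs.
rng = np.random.default_rng(0)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

theta = 0.0          # generator parameter (shift)
a, b = 1.0, 0.0      # discriminator parameters
lr = 0.05

for step in range(500):
    real = rng.normal(4.0, 0.5, 16)           # "real" samples, mean 4
    fake = rng.normal(0.0, 0.5, 16) + theta   # generated samples

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0
    dr, df = sigmoid(a * real + b), sigmoid(a * fake + b)
    a += lr * np.mean((1 - dr) * real - df * fake)
    b += lr * np.mean((1 - dr) - df)

    # Generator step: push D(fake) -> 1 (non-saturating generator loss)
    df = sigmoid(a * fake + b)
    theta += lr * np.mean((1 - df) * a)

print(theta)  # should have drifted from 0 toward the real data mean
```

In the lab, the generator and discriminator are convolutional networks and the samples are MedNIST images, but the two-step loop is the same.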

We also covered a segmentation exercise using the Sunnybrook Cardiac Dataset of cardiac MR images and a simple neural network. Using MONAI transforms and layers throughout the network, you’ll see a basic implementation that achieves relatively low accuracy. It’s then up to you to improve on it by expanding the existing notebook with additional transforms and network layers!
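Segmentation quality in exercises like this is usually reported as a Dice score (MONAI ships `DiceLoss` and `DiceMetric` for exactly this). A minimal NumPy version makes the metric concrete:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice = 2|P ∩ G| / (|P| + |G|) over binary masks; 1.0 is a perfect
    overlap, 0.0 is no overlap. Simplified NumPy stand-in for MONAI's
    DiceMetric, single binary mask only."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

gt = np.zeros((8, 8), dtype=int); gt[2:6, 2:6] = 1      # 16-pixel square
pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1  # shifted prediction
print(round(dice(pred, gt), 3))  # overlap 3x3=9 → 2*9/(16+16) ≈ 0.562
```

A “relatively low” baseline in the notebook corresponds to a Dice score well below 1.0, and adding better transforms and layers is what pushes it up.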

At the end of day two, we brought together the MONAI working groups to present to the participants. Each presentation covered the working group’s overall goal, what work they’re planning soon, and how to engage with them if you’re interested in contributing to Project MONAI.

The last day of the event was mainly for the 60 participants who were lucky enough to have full access to our GPU cluster. This limitation was put in place so that all participants would have access to the same type of hardware, ensuring a level playing field.

The challenge was for participants to train a classifier to distinguish chest X-ray images of patients with no lung conditions, those with pneumonia, and those with COVID-19. The best result, as determined by a combination of speed and accuracy, would win an NVIDIA TITAN RTX.

The winner of the challenge was Fernando Perez-Garcia from University College London in the United Kingdom. Second place went to Sharath M Shankaranarayana of Zasti, India, and third place went to Brady Hunt from the Thayer School of Engineering at Dartmouth College in the United States. We want to congratulate everyone who participated and thank them for making it a great three days!

Bootcamps are one of the best ways for us to engage with the community. Bringing together developers interested in learning is a great way to gather new ideas from those directly involved, whether academic researchers or developers in industry. With this event being such a success, we hope to host more of them in the future! If you have any suggestions for what you would like to see in a future event, please reach out to us.

For those interested in joining the MONAI community, check out the main MONAI GitHub repo or take a look at the MONAI website.

On GitHub, you’ll find a Contributing section that walks you through the contribution process and gives helpful tips along the way. You’ll also find a Community section that points to the different places where the MONAI team is active. Take a look through the MONAI Wiki and Issues to see where we’re actively looking for feedback via our Feedback Welcomed tag, or check out our Good First Issue tag if you’re looking to start small.

You can always reach out to us on Twitter @ProjectMONAI, join our Slack channel, or email us at MONAI.Contact@gmail.com.

The MONAI framework is a PyTorch-based, open-source foundation for deep learning in healthcare imaging. It is domain-optimized, freely available, and community-backed.