🧾 TL;DR 🧾
- Read this paper to find out more about the HaN-Seg dataset.
- This is a semantic segmentation challenge. The task is to segment 30 organs-at-risk from CT and MR images of the same patient (both image modalities are available for all subjects).
- Training data consisting of 42 manually segmented cases is publicly available on Zenodo.
- Our challenge has two phases (check the timeline and instructions on how to prepare your algorithm submission):
- Preliminary Test Phase (4 test cases): Participants develop their methods and test them on a subset of the test set.
- Final Test Phase (14 test cases): Participants submit their final methods, which are evaluated on the full test set (4 cases from the Preliminary Test Phase + 10 new cases = 14 cases).
- NOTE: the provided CT and MR images are not registered to each other. We deliberately leave this fundamental step to the participants, as image registration can itself be an important methodological contribution that we did not want to bias in any way (a minimal registration sketch follows below).
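Since each case's CT and MR images come unaligned, a typical first step is a rigid CT-MR registration. Below is a minimal sketch using SimpleITK with a mutual-information metric; the function name, optimizer settings, and the choice of a rigid-only alignment are our assumptions for illustration, not part of the challenge protocol.

```python
import SimpleITK as sitk

def register_mr_to_ct(ct_path: str, mr_path: str) -> sitk.Image:
    """Rigidly align an MR image to its corresponding CT image (illustrative sketch)."""
    ct = sitk.ReadImage(ct_path, sitk.sitkFloat32)
    mr = sitk.ReadImage(mr_path, sitk.sitkFloat32)

    # Start from a geometry-centered rigid (Euler) transform.
    initial = sitk.CenteredTransformInitializer(
        ct, mr, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    # Mattes mutual information copes with the CT/MR intensity mismatch.
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)

    transform = reg.Execute(ct, mr)
    # Resample the MR image onto the CT grid so both modalities share one voxel space.
    return sitk.Resample(mr, ct, transform, sitk.sitkLinear, 0.0, mr.GetPixelID())
```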
🎯 Problem statement 🎯
Multi-modal analysis could further improve the segmentation of soft tissues
Cancer in the head and neck (HaN) region is one of the most common cancers, and radiotherapy is an important treatment modality for it: the goal is to deliver a high radiation dose to the targeted cancerous cells while sparing the nearby healthy organs-at-risk (OARs). A precise three-dimensional spatial description, i.e. segmentation, of the target volumes as well as of the OARs is required for optimal radiation dose distribution calculation, which is primarily performed using computed tomography (CT) images. However, the HaN region contains many OARs that are poorly visible in CT but better visible in magnetic resonance (MR) images. Although attempts have been made at segmenting OARs from MR images, the impact of combined analysis of CT and MR images on OAR segmentation in the HaN region has so far not been evaluated. The Head and Neck Organ-at-Risk Multi-Modal Segmentation Challenge therefore aims to promote the development of new, and the application of existing, fully automated techniques for OAR segmentation in the HaN region from CT images that exploit the information of multiple imaging modalities, so as to improve the accuracy of the segmentation results.
Figure 1. Example of reference organ-at-risk (OAR) segmentations, displayed as color-coded three-dimensional binary masks.
👩‍🎓 The HaN-Seg Challenge 👨‍🎓
The task of the HaN-Seg (Head and Neck Segmentation) grand challenge is to automatically segment 30 OARs in the HaN region from the CT images of Set 2 (the test set), which consists of 14 CT and MR images of the same patients, given Set 1 (the training set, available on Zenodo), which consists of 42 CT and MR images of the same patients together with reference 3D OAR binary segmentation masks for the CT images.
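With reference masks available, candidate segmentations can be scored per organ. The sketch below computes the Dice similarity coefficient, a common overlap metric for binary masks; the helper names and dictionary layout are hypothetical, and the challenge's official evaluation protocol is defined by the organizers.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def score_case(pred_masks: dict, ref_masks: dict) -> dict:
    """Per-organ Dice, assuming one binary mask per OAR (hypothetical layout)."""
    return {oar: dice_coefficient(pred_masks[oar], ref_masks[oar])
            for oar in ref_masks}
```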
Set 2 is kept private and is not released to participants, to prevent algorithm tuning on the test data; instead, algorithms have to be submitted as Docker containers that the organizers run on Set 2. The challenge is organized in accordance with current guidelines for biomedical image analysis competitions, in particular the recommendations of the Biomedical Image Analysis Challenges (BIAS) initiative for transparent challenge reporting (Maier-Hein et al., 2020).
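For a concrete picture of what a container submission might look like, here is a minimal sketch of a Python entrypoint that could run inside such a Docker image. The /input and /output paths, the filenames, and the run_inference placeholder are assumptions made for illustration; the official submission instructions define the actual interface.

```python
# entrypoint.py - hypothetical inference script packaged inside the Docker image.
from pathlib import Path
import SimpleITK as sitk

INPUT_DIR = Path("/input")    # assumed mount point for the test case images
OUTPUT_DIR = Path("/output")  # assumed mount point for the predicted masks

def run_inference(ct: sitk.Image, mr: sitk.Image) -> sitk.Image:
    """Placeholder for the participant's model; returns a label map on the CT grid."""
    raise NotImplementedError

def main() -> None:
    # Filenames below are illustrative; the organizers define the real layout.
    ct = sitk.ReadImage(str(INPUT_DIR / "ct.nrrd"))
    mr = sitk.ReadImage(str(INPUT_DIR / "mr.nrrd"))
    prediction = run_inference(ct, mr)
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    sitk.WriteImage(prediction, str(OUTPUT_DIR / "segmentation.nrrd"))

if __name__ == "__main__":
    main()
```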
🎇 Motivation of the HaN-Seg Challenge 🎇
- To promote the development of new, and the application of existing, state-of-the-art fully automated techniques for OAR segmentation in the HaN region from CT images that exploit the information of multiple imaging modalities, in this case CT and MR images.
- To serve as a benchmark dataset for the objective comparison of new methods for OAR segmentation in the HaN region from CT images, MR images, or both.
- To encourage the development of novel general-purpose multi-modal methods for semantic segmentation.