# Lumbar rootlets

## Getting started

> [!IMPORTANT]
> This README provides instructions on how to use the model for lumbar rootlets. Please note that this model is still under development and is not yet available in the Spinal Cord Toolbox (SCT).

> [!NOTE]
> For the stable model for dorsal cervical rootlets only, use SCT v6.2 or higher (please refer to this README).

### Dependencies

### Step 1: Cloning the Repository

Open a terminal and clone the repository using the following command:

```bash
git clone https://github.com/ivadomed/model-spinal-rootlets
```

### Step 2: Setting up the Environment

The following commands show how to set up the environment. Note that the documentation assumes that the user has conda installed on their system. Instructions on installing conda can be found here.

1. Create a conda environment:

    ```bash
    conda create -n venv_nnunet python=3.9
    ```

2. Activate the environment:

    ```bash
    conda activate venv_nnunet
    ```

3. Install the required packages:

    ```bash
    cd model-spinal-rootlets
    pip install -r packaging/requirements.txt
    ```
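
Once the packages are installed, a quick way to confirm the environment is functional is to print the inference script's usage (this assumes you are still in the `model-spinal-rootlets` folder with `venv_nnunet` activated, and that the script exposes the standard `-h` help flag):

```bash
# Run from the model-spinal-rootlets folder with venv_nnunet activated
python packaging/run_inference_single_subject.py -h
```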

### Step 3: Getting the Predictions

> [!NOTE]
> To temporarily suppress warnings raised by nnU-Net, you can run the following three commands in the same terminal session before running the inference command below:
>
> ```bash
> export nnUNet_raw="${HOME}/nnUNet_raw"
> export nnUNet_preprocessed="${HOME}/nnUNet_preprocessed"
> export nnUNet_results="${HOME}/nnUNet_results"
> ```
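
These folders are not used when segmenting a single image (the model is loaded from `-path-model`), so they can stay empty; if you want the paths to exist anyway, you can create them up front (an optional convenience, not required by the script):

```bash
# Optional: create the (possibly empty) nnU-Net folders the variables point to
mkdir -p "${HOME}/nnUNet_raw" "${HOME}/nnUNet_preprocessed" "${HOME}/nnUNet_results"
```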

To segment a single image using the trained model, run the following command from the terminal. This assumes that the lumbar model has been downloaded and unzipped (`unzip Dataset202_LumbarRootlets_r20240527.zip` or `unzip Dataset302_LumbarRootlets_r20240723.zip`):

```bash
python packaging/run_inference_single_subject.py -i <INPUT> -o <OUTPUT> -path-model <PATH_TO_MODEL_FOLDER> -fold <FOLD>
```

For example:

```bash
python packaging/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/Dataset202_LumbarRootlets_r20240527 -fold 0
```

If the model folder also contains trainer subfolders (e.g., `nnUNetTrainer__nnUNetPlans__3d_fullres`, `nnUNetTrainerDA5__nnUNetPlans__3d_fullres`, ...), specify the trainer folder as well:

```bash
python packaging/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/Dataset322_LumbarRootlets/nnUNetTrainerDA5__nnUNetPlans__3d_fullres -fold 0
```

> [!TIP]
> - `nnUNetTrainer__nnUNetPlans__3d_fullres` - the default nnU-Net trainer
> - `nnUNetTrainerDA5__nnUNetPlans__3d_fullres` - nnU-Net trainer with aggressive data augmentation
> - `nnUNetTrainer_1000epochs_NoMirroring__nnUNetPlans__3d_fullres` - nnU-Net trainer with no mirroring during data augmentation
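
To see which trainer subfolders a downloaded model release actually ships with, you can simply list the model directory (the path below just mirrors the example above):

```bash
ls ~/Downloads/Dataset322_LumbarRootlets/
```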

> [!NOTE]
> For models trained using custom trainers, the trainer must also be added to the nnU-Net code for inference. For example, for the `nnUNetTrainer_1000epochs_NoMirroring__nnUNetPlans__3d_fullres` trainer, the following lines need to be added to the `nnUNetTrainer_Xepochs_NoMirroring.py` file. (You can use `find . -name "nnUNetTrainer_*epochs_NoMirroring.py"` to get the path to the file.)

```python
class nnUNetTrainer_1000epochs_NoMirroring(nnUNetTrainer):
    def __init__(self, plans: dict, configuration: str, fold: int, dataset_json: dict, unpack_dataset: bool = True,
                 device: torch.device = torch.device('cuda')):
        super().__init__(plans, configuration, fold, dataset_json, unpack_dataset, device)
        self.num_epochs = 1000

    def configure_rotation_dummyDA_mirroring_and_inital_patch_size(self):
        rotation_for_DA, do_dummy_2d_data_aug, initial_patch_size, mirror_axes = \
            super().configure_rotation_dummyDA_mirroring_and_inital_patch_size()
        # Disable mirroring both during training augmentation and at inference time
        mirror_axes = None
        self.inference_allowed_mirroring_axes = None
        return rotation_for_DA, do_dummy_2d_data_aug, initial_patch_size, mirror_axes
```

> [!NOTE]
> Some models, for example `Dataset312_LumbarRootlets` and `Dataset322_LumbarRootlets`, were trained on images cropped around the spinal cord. The input image for inference therefore also needs to be cropped around the spinal cord. You can use the following commands to crop the image:
>
> ```bash
> file=sub-001_T2w
> # Segment the spinal cord using the contrast-agnostic model
> sct_deepseg -i ${file}.nii.gz -o ${file}_seg.nii.gz -task seg_sc_contrast_agnostic -qc ../qc -qc-subject ${file}
> # Crop the image around the spinal cord
> sct_crop_image -i ${file}.nii.gz -m ${file}_seg.nii.gz -dilate 64x64x64 -o ${file}_crop.nii.gz
> # Now you can use the cropped image for inference
> ```
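
Continuing the example above, the cropped image can then be passed to the inference script (the output filename and model path here simply mirror the earlier examples):

```bash
python packaging/run_inference_single_subject.py -i ${file}_crop.nii.gz -o ${file}_crop_label-rootlets_dseg.nii.gz -path-model ~/Downloads/Dataset322_LumbarRootlets/nnUNetTrainerDA5__nnUNetPlans__3d_fullres -fold 0
```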

> [!NOTE]
> The script also supports running the segmentation on a GPU. To do so, simply add the flag `--use-gpu` at the end of the above commands. By default, inference is run on the CPU; running it on a GPU is significantly faster.
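
For example, to run the earlier single-subject command on a GPU:

```bash
python packaging/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/Dataset202_LumbarRootlets_r20240527 -fold 0 --use-gpu
```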