Presentations
2021
Weishaupt, Luca L.; Thibodeau-Antonacci, Alana; Garant, Aurelie; Singh, Kelita; Miller, Corey; Vuong, Té; Enger, Shirin A.
Deep learning based tumor segmentation of endoscopy images for rectal cancer patients Presentation
ESTRO Annual meeting, 27.08.2021.
Tags: artificial intelligence, Deep Learning, endoscopy, tumor detection
@misc{Weishaupt2021b,
title = {Deep learning based tumor segmentation of endoscopy images for rectal cancer patients},
author = {Luca L. Weishaupt and Alana Thibodeau-Antonacci and Aurelie Garant and Kelita Singh and Corey Miller and Té Vuong and Shirin A. Enger},
url = {https://www.estro.org/Congresses/ESTRO-2021/610/posterdiscussion34-deep-learningforauto-contouring/3710/deeplearning-basedtumorsegmentationofendoscopyimag},
year = {2021},
date = {2021-08-27},
urldate = {2021-08-27},
abstract = {Purpose or Objective
The objective of this study was to develop an automated rectal tumor segmentation algorithm from endoscopy images. The algorithm will be used in a future multimodal treatment outcome prediction model. Currently, treatment outcome prediction models rely on manual segmentations of regions of interest, which are prone to inter-observer variability. To quantify this human error and demonstrate the feasibility of automated endoscopy image segmentation, we compare three deep learning architectures.
Material and Methods
A gastrointestinal physician (G1) segmented 550 endoscopy images of rectal tumors into tumor and non-tumor regions. To quantify the inter-observer variability, a second gastrointestinal physician (G2) contoured 319 of the images independently.
The 550 images and annotations from G1 were divided into training (408 images), validation (82 images), and test (60 images) sets. Three deep learning architectures were trained: a fully convolutional neural network (FCN32), a U-Net, and a SegNet. These architectures have been used for robust medical image segmentation in previous studies.
All models were trained on a CPU supercomputing cluster. Data augmentation in the form of random image transformations, including scaling, rotation, shearing, Gaussian blurring, and noise addition, was used to improve the models' robustness.
The neural networks' output went through a final layer of noise removal and hole filling before evaluation. Finally, the segmentations from G2 and the neural networks' predictions were compared against the ground truth labels from G1.
Results
The FCN32, U-Net, and SegNet had average segmentation times of 0.77, 0.48, and 0.43 seconds per image, respectively. The average segmentation times per image for G1 and G2 were 10 and 8 seconds, respectively.
All the ground truth labels contained tumors, but G2 and the deep learning models did not always find tumors in the images. The scores were based on the agreement of tumor contours with G1’s ground truth and were therefore computed only for images in which a tumor was found. The automated segmentation algorithms consistently achieved scores equal to or better than G2’s manual segmentations. G2’s low F1/DICE and precision scores indicate poor agreement between the two physicians’ manual contours.
Conclusion
There is a need for robust and accurate rectal tumor segmentation algorithms, since manual segmentation of these tumors is susceptible to significant inter-observer variability. The deep learning-based segmentation algorithms proposed in this study were more efficient and achieved higher agreement with our manual ground truth segmentations than a second expert annotator did. Future studies will investigate how to train deep learning models on multiple ground truth annotations to prevent learning observer biases.},
howpublished = {ESTRO Annual meeting},
keywords = {artificial intelligence, Deep Learning, endoscopy, tumor detection},
pubstate = {published},
tppubtype = {presentation}
}
Purpose or Objective
The objective of this study was to develop an automated rectal tumor segmentation algorithm from endoscopy images. The algorithm will be used in a future multimodal treatment outcome prediction model. Currently, treatment outcome prediction models rely on manual segmentations of regions of interest, which are prone to inter-observer variability. To quantify this human error and demonstrate the feasibility of automated endoscopy image segmentation, we compare three deep learning architectures.
Material and Methods
A gastrointestinal physician (G1) segmented 550 endoscopy images of rectal tumors into tumor and non-tumor regions. To quantify the inter-observer variability, a second gastrointestinal physician (G2) contoured 319 of the images independently.
The 550 images and annotations from G1 were divided into training (408 images), validation (82 images), and test (60 images) sets. Three deep learning architectures were trained: a fully convolutional neural network (FCN32), a U-Net, and a SegNet. These architectures have been used for robust medical image segmentation in previous studies.
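The abstract gives no implementation details, so as an illustration only: a minimal binary tumor-segmentation setup for one of the named architectures could be built with the third-party segmentation_models_pytorch package. The encoder, loss, learning rate, and image size below are all assumptions, not values from the study.

```python
import torch
import segmentation_models_pytorch as smp  # third-party U-Net implementation

# Hypothetical single-class (tumor vs. non-tumor) configuration.
model = smp.Unet(
    encoder_name="resnet34",       # assumed backbone, not stated in the abstract
    encoder_weights="imagenet",
    in_channels=3,                 # RGB endoscopy frames
    classes=1,                     # one tumor channel
)
loss_fn = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(2, 3, 256, 256)                     # dummy batch
masks = torch.randint(0, 2, (2, 1, 256, 256)).float()    # dummy labels
logits = model(images)                                   # (2, 1, 256, 256)
loss = loss_fn(logits, masks)
loss.backward()
optimizer.step()
```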
All models were trained on a CPU supercomputing cluster. Data augmentation in the form of random image transformations, including scaling, rotation, shearing, Gaussian blurring, and noise addition, was used to improve the models’ robustness.
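The augmentation pipeline itself is not published; a minimal sketch of the random rotation, Gaussian blurring, and noise addition steps using scipy.ndimage might look as follows (parameter ranges are assumptions; scaling and shearing are omitted for brevity):

```python
import numpy as np
from scipy import ndimage

def augment(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    """Randomly transform an (H, W, 3) endoscopy image and its (H, W) mask."""
    angle = rng.uniform(-25.0, 25.0)          # rotation range is an assumption
    image = ndimage.rotate(image, angle, axes=(0, 1), reshape=False, order=1)
    mask = ndimage.rotate(mask, angle, reshape=False, order=0)  # keep labels crisp
    if rng.random() < 0.5:                    # Gaussian blur, image only
        image = ndimage.gaussian_filter(image, sigma=(1.0, 1.0, 0.0))
    if rng.random() < 0.5:                    # additive Gaussian noise, image only
        image = image + rng.normal(0.0, 5.0, size=image.shape)
    # Scaling/shearing (ndimage.zoom / ndimage.affine_transform) omitted here.
    return image, mask

rng = np.random.default_rng(0)
img, msk = augment(np.zeros((256, 256, 3)), np.zeros((256, 256)), rng)
```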
The neural networks’ output went through a final layer of noise removal and hole filling before evaluation. Finally, the segmentations from G2 and the neural networks’ predictions were compared against the ground truth labels from G1.
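One plausible reading of this post-processing step, using scipy.ndimage for connected-component noise removal and hole filling (the probability threshold and minimum island size are assumed, not reported):

```python
import numpy as np
from scipy import ndimage

def clean_prediction(prob_map: np.ndarray, threshold: float = 0.5,
                     min_size: int = 64) -> np.ndarray:
    """Binarize a probability map, drop small islands, and fill holes."""
    mask = prob_map > threshold
    # "Noise removal": delete connected components smaller than min_size voxels.
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    for i, size in enumerate(sizes, start=1):
        if size < min_size:
            mask[labeled == i] = False
    # "Hole filling": close interior gaps in the remaining tumor regions.
    return ndimage.binary_fill_holes(mask)
```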
Results
The FCN32, U-Net, and SegNet had average segmentation times of 0.77, 0.48, and 0.43 seconds per image, respectively. The average segmentation times per image for G1 and G2 were 10 and 8 seconds, respectively.
All the ground truth labels contained tumors, but G2 and the deep learning models did not always find tumors in the images. The scores were based on the agreement of tumor contours with G1’s ground truth and were therefore computed only for images in which a tumor was found. The automated segmentation algorithms consistently achieved scores equal to or better than G2’s manual segmentations. G2’s low F1/DICE and precision scores indicate poor agreement between the two physicians’ manual contours.
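For binary masks, the Dice coefficient and F1 score coincide, which is why the abstract writes "F1/DICE". A minimal numpy implementation of the two reported scores:

```python
import numpy as np

def dice_and_precision(pred: np.ndarray, truth: np.ndarray):
    """Dice/F1 and precision for binary masks (prediction vs. G1 ground truth)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()       # true-positive pixels
    denom = pred.sum() + truth.sum()
    dice = 2 * tp / denom if denom else 1.0      # Dice == F1 for binary masks
    precision = tp / pred.sum() if pred.sum() else 0.0
    return dice, precision
```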
Conclusion
There is a need for robust and accurate rectal tumor segmentation algorithms, since manual segmentation of these tumors is susceptible to significant inter-observer variability. The deep learning-based segmentation algorithms proposed in this study were more efficient and achieved higher agreement with our manual ground truth segmentations than a second expert annotator did. Future studies will investigate how to train deep learning models on multiple ground truth annotations to prevent learning observer biases.
Journal Articles
2020
Mao, Ximeng; Pineau, Joelle; Keyes, Roy; Enger, Shirin A.
RapidBrachyDL: Rapid Radiation Dose Calculations in Brachytherapy Via Deep Learning Journal Article
In: International Journal of Radiation Oncology, Biology, Physics, vol. 108, no. 3, pp. 802–812, 2020, ISSN: 1879-355X.
Tags: Brachytherapy, Colon, Computer, Computer-Assisted, Deep Learning, Female, Humans, Iridium Radioisotopes, Male, Monte Carlo Method, Neural Networks, Organs at Risk, Prostate, Prostatic Neoplasms, Radiotherapy Dosage, Radiotherapy Planning, Rectum, Retrospective Studies, Sigmoid, Urinary Bladder, Uterine Cervical Neoplasms
@article{mao_rapidbrachydl_2020,
title = {RapidBrachyDL: Rapid Radiation Dose Calculations in Brachytherapy Via Deep Learning},
author = {Ximeng Mao and Joelle Pineau and Roy Keyes and Shirin A. Enger},
doi = {10.1016/j.ijrobp.2020.04.045},
issn = {1879-355X},
year = {2020},
date = {2020-11-01},
journal = {International Journal of Radiation Oncology, Biology, Physics},
volume = {108},
number = {3},
pages = {802--812},
abstract = {PURPOSE: Detailed and accurate absorbed dose calculations from radiation interactions with the human body can be obtained with the Monte Carlo (MC) method. However, the MC method can be too slow for the time-sensitive clinical workflow. The aim of this study was to provide a solution to the accuracy-time trade-off for ¹⁹²Ir-based high-dose-rate brachytherapy by using deep learning.
METHODS AND MATERIALS: RapidBrachyDL, a 3-dimensional deep convolutional neural network (CNN) model, is proposed to predict dose distributions calculated with the MC method given a patient's computed tomography images, contours of clinical target volume (CTV) and organs at risk, and treatment plan. Sixty-one patients with prostate cancer and 10 patients with cervical cancer were included in this study, with data from 47 patients with prostate cancer being used to train the model.
RESULTS: Compared with ground truth MC simulations, the dose distributions predicted by RapidBrachyDL showed a consistent shape in the dose-volume histograms (DVHs); comparable DVH dosimetric indices, including differences of 0.73% for prostate CTV D90, 1.1% for rectum D2cc, 1.45% for urethra D0.1cc, and 1.05% for bladder D2cc; and a substantially shorter prediction time, an acceleration by a factor of 300. RapidBrachyDL also generalized well to cervical data, which were unseen during training, with differences of 1.73%, 2.46%, 1.68%, and 1.74% for CTV D90, rectum D2cc, sigmoid D2cc, and bladder D2cc, respectively.
CONCLUSION: Deep CNN-based dose estimation is a promising method for patient-specific brachytherapy dosimetry. Desired radiation quantities can be obtained with accuracies arbitrarily close to those of the source MC algorithm, but with much faster computation times. The idea behind deep CNN-based dose estimation can be safely extended to other radiation sources and tumor sites by following a similar training process.},
keywords = {Brachytherapy, Colon, Computer, Computer-Assisted, Deep Learning, Female, Humans, Iridium Radioisotopes, Male, Monte Carlo Method, Neural Networks, Organs at Risk, Prostate, Prostatic Neoplasms, Radiotherapy Dosage, Radiotherapy Planning, Rectum, Retrospective Studies, Sigmoid, Urinary Bladder, Uterine Cervical Neoplasms},
pubstate = {published},
tppubtype = {article}
}
PURPOSE: Detailed and accurate absorbed dose calculations from radiation interactions with the human body can be obtained with the Monte Carlo (MC) method. However, the MC method can be too slow for the time-sensitive clinical workflow. The aim of this study was to provide a solution to the accuracy-time trade-off for ¹⁹²Ir-based high-dose-rate brachytherapy by using deep learning.
METHODS AND MATERIALS: RapidBrachyDL, a 3-dimensional deep convolutional neural network (CNN) model, is proposed to predict dose distributions calculated with the MC method given a patient’s computed tomography images, contours of clinical target volume (CTV) and organs at risk, and treatment plan. Sixty-one patients with prostate cancer and 10 patients with cervical cancer were included in this study, with data from 47 patients with prostate cancer being used to train the model.
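The published network architecture is not reproduced here; the toy PyTorch model below only illustrates the input/output contract the abstract describes, i.e. a multi-channel 3D volume (CT plus contour and plan information) mapped to a voxel-wise dose volume. The channel count and layer sizes are chosen arbitrarily for the sketch.

```python
import torch
import torch.nn as nn

class DosePredictor3D(nn.Module):
    """Toy 3D CNN: stacked volume channels in, absorbed-dose volume out.
    Illustrative only; not the published RapidBrachyDL architecture."""
    def __init__(self, in_channels: int = 4):  # e.g. CT, CTV, OARs, dwell map
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=1),   # one dose value per voxel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = DosePredictor3D()
volume = torch.randn(1, 4, 64, 64, 64)   # one dummy patient volume
dose = model(volume)                     # (1, 1, 64, 64, 64) predicted dose
```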
RESULTS: Compared with ground truth MC simulations, the dose distributions predicted by RapidBrachyDL showed a consistent shape in the dose-volume histograms (DVHs); comparable DVH dosimetric indices, including differences of 0.73% for prostate CTV D90, 1.1% for rectum D2cc, 1.45% for urethra D0.1cc, and 1.05% for bladder D2cc; and a substantially shorter prediction time, an acceleration by a factor of 300. RapidBrachyDL also generalized well to cervical data, which were unseen during training, with differences of 1.73%, 2.46%, 1.68%, and 1.74% for CTV D90, rectum D2cc, sigmoid D2cc, and bladder D2cc, respectively.
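For readers unfamiliar with the DVH indices quoted here, D90 (minimum dose received by the hottest 90% of a structure's volume) and D2cc (minimum dose to the hottest 2 cm³) can be computed from a dose grid and a structure mask as below. These follow the standard definitions and are not the study's evaluation code.

```python
import numpy as np

def d_percent(dose: np.ndarray, mask: np.ndarray, percent: float) -> float:
    """Dx: minimum dose received by the hottest x% of the structure volume,
    e.g. percent=90 for the CTV D90."""
    voxels = np.sort(dose[mask.astype(bool)])[::-1]      # hottest first
    n = int(np.ceil(percent / 100.0 * voxels.size))
    return voxels[:n].min()

def d_cc(dose: np.ndarray, mask: np.ndarray, cc: float,
         voxel_volume_cc: float) -> float:
    """Dxcc: minimum dose to the hottest x cm^3, e.g. rectum D2cc."""
    voxels = np.sort(dose[mask.astype(bool)])[::-1]
    n = max(1, int(round(cc / voxel_volume_cc)))          # voxels in x cm^3
    return voxels[:n].min()
```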
CONCLUSION: Deep CNN-based dose estimation is a promising method for patient-specific brachytherapy dosimetry. Desired radiation quantities can be obtained with accuracies arbitrarily close to those of the source MC algorithm, but with much faster computation times. The idea behind deep CNN-based dose estimation can be safely extended to other radiation sources and tumor sites by following a similar training process.