EngerLab

Artificial Intelligence

Mission

The Artificial Intelligence Group at the EngerLab, part of Mila – Quebec AI Institute, develops advanced machine learning methods to improve cancer diagnosis, treatment, and outcome prediction. Using multimodal patient data, including diagnostic imaging, digital pathology, and clinical text, we build models for organ segmentation, dose prediction, and optimization of treatment planning, with a particular focus on automating brachytherapy workflows. In parallel, we design outcome prediction models that apply broadly across all treatment types, supporting more personalized and effective cancer care. A major initiative is the development of a province-wide, AI-enabled data platform that harmonizes imaging, molecular, and clinical data to advance precision oncology across Quebec. Through our research and training activities such as the McMedHacks summer school, we are shaping the future of AI-driven personalized medicine.

 

Members

Alana
Hossein
Juan
Yujing
Sébastien

Projects

AI-based treatment planning for brachytherapy applications

Treatment plan optimization is a routine part of both external beam radiotherapy and high dose rate brachytherapy. High dose rate brachytherapy is a mode of internal radiotherapy in which a radiation source is placed inside the tumor through hollow needles known as catheters (interstitial brachytherapy) or near the tumor through site-specific applicators (intracavitary brachytherapy). For every patient, treatment plan optimization ensures that the dose to the tumor region is sufficient while the dose to the surrounding organs at risk is minimal, making it a key step in achieving a positive radiotherapy outcome.
In interstitial high dose rate brachytherapy, treatment planning is a labor-intensive and time-consuming task. Planning cannot begin until the patient has been put under anesthesia and the catheters have been inserted, which makes the process painful for the patient and costly for the hospital. In addition, the current optimization method controls only the time for which the source dwells at each position inside a catheter; the number and location of the catheters themselves are not optimized.
Hossein investigates the use of reinforcement learning to optimize the number and position of the catheters prior to insertion, in addition to the dwell times. Because the catheters will be inserted according to an optimized plan, the quality of the treatment is expected to improve.
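As a rough illustration of the inverse-planning step that follows catheter placement, the sketch below optimizes dwell times for a toy dose model with a projected gradient method. The dose-rate matrix, prescription values, and learning rate are all hypothetical placeholders, not the group's actual algorithm.

```python
import numpy as np

# Toy dwell-time optimization sketch (illustrative only).
# Assumes a precomputed dose-rate matrix D where D[i, j] is the dose rate
# at calculation point i from dwell position j. Values are made up.
rng = np.random.default_rng(0)
n_points, n_dwells = 200, 30
D = rng.uniform(0.1, 1.0, size=(n_points, n_dwells))  # hypothetical dose rates
prescription = np.full(n_points, 10.0)                # target dose per point

t = np.zeros(n_dwells)            # dwell times, constrained non-negative
lr = 0.01
for _ in range(2000):
    residual = D @ t - prescription       # over/underdose at each point
    grad = 2 * D.T @ residual / n_points  # gradient of mean squared error
    t = np.maximum(t - lr * grad, 0.0)    # projected gradient step

achieved = D @ t
print(f"mean dose: {achieved.mean():.2f} (prescribed 10.00)")
```

Clinical objectives add organ-at-risk penalties and dose-volume constraints on top of this; the catheter number and positions that Hossein optimizes enter through the structure of the matrix itself.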

Deep Learning-Based Multimodality Treatment Outcome Prediction

Cancer is the leading cause of death in Canada, and radiotherapy is used in about half of all cancer treatments. However, despite improved protocols, imaging techniques for cancer management, and combinations of radiotherapy treatment modalities, it remains clinically challenging to predict which patients will benefit from which treatment combination. An accurate method for predicting a patient's likelihood of response could reduce unnecessary interventions, lower healthcare costs, and reduce side effects. It is therefore crucial to investigate how pre-treatment patient characteristics influence treatment efficacy as measured by post-treatment response. The aim of Yujing and Alana's work is to develop a patient-specific, machine learning-based, multimodal treatment outcome prediction model for patients with gynecological cancers. Our group has recently shown that the patient-specific radiation dose response may be influenced by inter-patient variation in tumor nuclei size. The models developed by Yujing and Alana will therefore integrate not only diagnostic images such as computed tomography, magnetic resonance, and ultrasound images, to detect high-order features that determine treatment response on a patient-specific basis, but also tumor nucleus size and cell spacing data obtained from scanned digital images of histopathology slides. The correlation between cell morphology and treatment outcome will also be investigated.
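As a toy illustration of how imaging, histopathology, and clinical features can be combined for outcome modeling, the sketch below concatenates hypothetical feature blocks and scores a linear risk model with a Cox partial likelihood (Breslow approximation). All feature names, dimensions, and data are made up for the example.

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow approximation)."""
    order = np.argsort(-time)                     # sort by descending time
    risk, event = risk[order], event[order]
    log_cumsum = np.log(np.cumsum(np.exp(risk)))  # log of risk-set sums
    return -np.sum((risk - log_cumsum) * event) / max(event.sum(), 1)

rng = np.random.default_rng(1)
img = rng.normal(size=(8, 16))   # imaging features (e.g., CT/MR radiomics)
path = rng.normal(size=(8, 4))   # histopathology features (nuclear size stats)
clin = rng.normal(size=(8, 3))   # clinical covariates (age, stage, ...)
x = np.concatenate([img, path, clin], axis=1)   # simple early fusion
w = rng.normal(size=x.shape[1]) * 0.1
risk = x @ w                                    # linear risk scores
time = rng.uniform(1, 60, size=8)               # follow-up in months
event = rng.integers(0, 2, size=8)              # 1 = progression observed
loss = cox_neg_log_partial_likelihood(risk, time, event)
print(f"Cox loss: {loss:.3f}")
```

In a deep model the linear scorer is replaced by a network and this loss is minimized by gradient descent, which is the idea behind DeepSurv-style training used in the group's publications.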

Inter-observer variability of manual segmentation

Alana is investigating the inter-observer variability of the manual segmentation of tumor regions in endoscopy images and its effect on treatment outcomes. Furthermore, they are developing a deep learning-based segmentation tool that can learn from multiple observers' labels with high inter-observer variability.
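A minimal sketch of the quantities involved, assuming hypothetical masks from three observers: pairwise Dice scores quantify inter-observer variability, and averaging the masks yields a per-pixel soft label that a segmentation network could be trained against instead of a single observer's contour.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / max(a.sum() + b.sum(), 1)

# Three hypothetical observer contours of the same tumor region.
obs1 = np.zeros((64, 64), dtype=bool); obs1[20:40, 20:40] = True
obs2 = np.zeros((64, 64), dtype=bool); obs2[22:42, 22:42] = True
obs3 = np.zeros((64, 64), dtype=bool); obs3[18:38, 24:44] = True
masks = [obs1, obs2, obs3]

pairwise = [dice(a, b) for i, a in enumerate(masks) for b in masks[i + 1:]]
soft_label = np.mean(masks, axis=0)   # per-pixel agreement in [0, 1]
print(f"mean pairwise Dice: {np.mean(pairwise):.3f}")
```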

AI-based dosimetry and catheter reconstruction for brachytherapy applications

Brachytherapy is a form of radiotherapy in which a sealed radiation source (seed) is placed inside or in close proximity to the tumor. The treatment starts with a radiation oncologist inserting catheters or an applicator inside or near the clinical treatment volume. Once this is done, CT or MR scans of the area are acquired. A radiation oncologist contours the organs at risk and the tumor, and a medical physicist manually reconstructs the catheters/applicator on the image set. Finally, an optimized treatment plan is created to deliver an optimal dose to the tumor while sparing the organs at risk. Before the plan can be optimized, however, the absorbed dose contribution per second from each possible seed position along a catheter (dwell position) must be determined. This is done using precalculated dose distributions around a single seed, scaled with respect to the daily air kerma strength of the seed.

The precalculated dose used by clinical treatment planning systems is based on the American Association of Physicists in Medicine (AAPM) Task Group 43 (TG-43) formalism, which describes dose deposition around a single source centrally positioned in a spherical water phantom with unit density. The influence of patient tissue and applicator heterogeneities, intersource attenuation, and finite patient dimensions is ignored. Recently, more advanced model-based dose calculation algorithms (MBDCAs), which calculate dose to medium in medium, have been developed. In MBDCAs, dose calculations are performed using the patient's CT or MR images, which requires voxel-by-voxel assignment of tissue mass density and elemental composition. Tissue mass density is obtained from CT or synthetic CT images using a Hounsfield Unit (HU) to density calibration curve; tissue composition and nominal mass density can also be assigned to contoured organs. An exact description of the source and applicator geometry, material composition, and nominal mass density is required as well. MBDCAs such as the Monte Carlo method provide a detailed and accurate way to calculate absorbed dose in heterogeneous systems such as the human body, but they are too time-consuming to be used in a clinical workflow.
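For illustration, the TG-43 one-dimensional (point-source) dose rate can be sketched as below. The dose-rate constant and the tabulated radial dose and anisotropy values are placeholders of roughly Ir-192 magnitude, not commissioning data, and this is not a clinical dose engine.

```python
import numpy as np

# Illustrative TG-43 1D (point-source) dose-rate sketch.
S_K = 40000.0     # air kerma strength (U); hypothetical daily value
LAMBDA = 1.11     # dose-rate constant (cGy / (h * U)); placeholder
R0 = 1.0          # TG-43 reference distance (cm)

# Placeholder radial dose function g(r) and 1D anisotropy function phi_an(r),
# tabulated on a coarse grid and linearly interpolated.
r_grid  = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0])
g_tab   = np.array([1.00, 1.00, 0.99, 0.98, 0.94, 0.86])
phi_tab = np.array([0.97, 0.98, 0.98, 0.98, 0.97, 0.96])

def dose_rate(r_cm):
    """TG-43 1D dose rate (cGy/h) at distance r from a point source."""
    g = np.interp(r_cm, r_grid, g_tab)
    phi = np.interp(r_cm, r_grid, phi_tab)
    return S_K * LAMBDA * (R0 / r_cm) ** 2 * g * phi

for r in (1.0, 2.0, 5.0):
    print(f"r = {r} cm: {dose_rate(r):.0f} cGy/h")
```

The inverse-square factor dominates, which is why TG-43 is fast; what it omits, and what MBDCAs restore, is the tissue- and applicator-dependent attenuation and scatter.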

The main goal of this project is to develop a precise and automated dosimetry algorithm that takes the patient's tissue and applicator heterogeneities into account and replaces the time-consuming Monte Carlo simulations.

Incorporation of causal principles for credible predictions in precision oncology

Despite substantial interest, clinical uptake of machine learning-based precision oncology has been slow. In particular, radiotherapy is generally still prescribed using a "one size fits all" approach. We hypothesize that this is largely due to avoidable bias and a lack of interpretability in machine learning precision oncology models. We therefore aim to generate more credible and reliable predictions by incorporating causal principles: we will integrate a priori causal domain knowledge to identify tumour-type-agnostic biomarkers for the prediction of radiotherapy response, create tools for bias quantification and mitigation, and develop causally constrained, interpretable machine learning strategies for precision oncology.

 

Deep Learning-based Patient-Specific Survival Outcome Prediction using Inter-Treatment Multimodal Data

This thesis explores how artificial intelligence can be used to better predict how long cancer patients remain free of disease after treatment by combining digital pathology slides with clinical information. Instead of focusing on a single algorithm, it develops a progression of models: establishing strong multimodal baselines, making predictions more interpretable for clinicians, creating new transformer-based methods that handle the scale and complexity of pathology images, and finally using unsupervised learning to identify the most informative regions of tissue without manual labels. Taken together, the work contributes both technical advances and conceptual frameworks aimed at making survival prediction models more accurate, transparent, and ultimately usable in real-world oncology decision making.

McMedHacks

McMedHacks is an eight-week program that teaches students, researchers, and clinicians the fundamentals of medical image analysis and deep learning in Python. The program features weekly workshops, demos, and seminars conducted by leaders in the field. Our primary objective is to equip newcomers with the essential skills required for medical image analysis and deep learning, thereby fostering accelerated research in this critical domain.


Publications


2025

Duran, Juan; Zou, Yujing; Vallières, Martin; Enger, Shirin A.

Beyond single-run metrics with CP-fuse: A rigorous multi-cohort evaluation of clinico-pathological fusion for improved survival prediction in TCGA Journal Article

In: Machine Learning with Applications, vol. 22, art. no. 100789, 2025, ISSN: 2666-8270. doi:10.1016/j.mlwa.2025.100789


Accurate prediction of progression-free survival (PFS) is critical for precision oncology. However, most existing multimodal survival studies rely on single fusion strategies, one-off cross-validation runs, and focus solely on discrimination metrics, leaving gaps in systematic evaluation and calibration. We evaluated multimodal fusion approaches combining histopathology whole-slide images (via Hierarchical Image Pyramid Transformer) and clinical variables (via Feature Tokenizer-Transformer) across five TCGA cohorts: bladder cancer (BLCA), uterine corpus endometrial carcinoma (UCEC), lung adenocarcinoma (LUAD), breast cancer (BRCA), and head and neck squamous cell carcinoma (HNSC) (N=2,984). Three intermediate (marginal, cross-attention, Variational Autoencoder or VAE) and two late fusion strategies (trainable-weight, meta-learning) were trained end-to-end with DeepSurv. Our 100-repetition 10-fold cross-validation (CV) framework mitigates the variance overlooked in single-run CV evaluations. VAE fusion achieved superior PFS prediction (Concordance-index) in BLCA (0.739±0.019), UCEC (0.770±0.021), LUAD (0.683±0.018), and BRCA (0.760±0.021), while meta-learning was best for HNSC (0.686±0.022). However, Integrated Brier Score values (0.066–0.142) revealed calibration variability. Our findings highlight the importance of multimodal fusion, combined discrimination and calibration metrics, and rigorous validation for clinically meaningful survival modeling.


Quetin, Sébastien; Jafarzadeh, Hossein; Kalinowski, Jonathan; Bekerat, Hamed; Bahoric, Boris; Maleki, Farhad; Enger, Shirin A.

Automatic catheter digitization in breast brachytherapy Journal Article

In: Medical Physics, vol. 52, iss. 9, art. no. e18107, 2025, ISSN: 2473-4209. doi:10.1002/mp.18107


Background:
High dose rate (HDR) brachytherapy requires clinicians to digitize catheters manually. This process is time-consuming, complex, and depends heavily on clinical experience, especially in breast cancer cases, where catheters may be inserted at varying angles and orientations due to irregular anatomy.

Purpose:
This study is the first to automate catheter digitization specifically for breast HDR brachytherapy, emphasizing the unique challenges associated with this treatment site. It also introduces a pipeline that automatically digitizes catheters, generates dwell positions, and calculates the delivered dose for new breast cancer patients.

Methods:
Treatment data from 117 breast cancer patients treated with HDR brachytherapy were used. Pseudo-contours for the catheters were created from the treatment digitization points and divided into three classes: catheter body, catheter head, and catheter tip. An nnU-Net pipeline was trained to segment the pseudo-contours on treatment planning computed tomography images of 88 patients (training and validation). Then, pseudo-contours were digitized by separating the catheters into connected components. Predicted catheters with an unusual volume were flagged for manual review. A custom algorithm was designed to report and separate connected components containing colliding catheters. Finally, a spline was fitted to every separated catheter, and the tip was identified on the spline using the tip contour prediction. Dwell positions were placed from the created tip at a regular step size extracted from the DICOM plan file. Distance from each dwell position used during the clinical treatment to the fitted spline (shaft distance) was computed, as well as the distance from the treatment tip to the one identified by our pipeline. Dwell times from the clinical plan were assigned to the nearest generated dwell positions. TG-43 dose in water was computed analytically, and the absorbed dose in the medium was predicted using a published AI-based dose prediction model. Dosimetric comparison between the clinically delivered plan dose and the created automated plan dose was evaluated regarding dosimetric indices percent error.

Results:
Our pipeline was used to digitize 408 catheters on a test set of 29 patients. Shaft distance was on average 0.70 ± 3.91 mm and distance to the tip was on average 1.37 ± 5.25 mm. The dosimetric error between the manual and automated treatment plans was, on average, below 3% for planning target volume V100, V150, V200 and for the lung, heart, skin, and chest wall D2cc and D1cc, in both water and heterogeneous media. For D0.1cc values in all the organs at risk, the average error remained below 5%. The pipeline execution time, including auto-contouring, digitization, and dose to medium prediction, averages 118 s, ranging from 63 to 294 s. The pipeline successfully flagged all cases where digitization was not performed correctly.

Conclusions:
Our pipeline is the first to automate the digitization of catheters for breast brachytherapy, as well as the first to generate dwell positions and predict corresponding AI-based absorbed dose to medium based on automatically digitized catheters. The automatically digitized catheters are in excellent agreement with the manually digitized ones while more accurately reflecting their true anatomical shape.


Zou, Yujing; Glickman, Harry; Pelmus, Manuela; Maleki, Farhad; Bahoric, Boris; Lecavalier-Barsoum, Magali; Enger, Shirin A.

Tumour nuclear size heterogeneity as a biomarker for post-radiotherapy outcomes in gynecological malignancies Journal Article

In: Physics and Imaging in Radiation Oncology, vol. 35, art. no. 100793, 2025, ISSN: 2405-6316. doi:10.1016/j.phro.2025.100793


Background and purpose: Radiotherapy targets DNA in cancer cell nuclei. Radiation dose, however, is prescribed to a macroscopic target volume assuming uniform distribution, failing to consider microscopic variations in dose absorbed by individual nuclei. This study investigated a potential link between pre-treatment tumour nuclear size distributions and post-radiotherapy outcomes in gynecological squamous cell carcinoma (SCC).

Materials and methods: Our multi-institutional cohort consisted of 191 non-metastatic gynecological SCC patients who had received radiotherapy with diagnostic whole slide images (WSIs) available. Tumour nuclear size distribution mean and standard deviation were extracted from WSIs using deep learning, and used to predict progression-free interval (PFI) and overall survival (OS) in multivariate Cox proportional hazards (CoxPH) analysis adjusted for age and clinical stage.

Results: Multivariate CoxPH analysis revealed that a larger nuclear size distribution mean results in more favorable outcomes for PFI (HR = 0.45, 95% CI: 0.19 - 1.09, p = 0.084) and OS (HR = 0.55, 95% CI: 0.24 - 1.25, p = 0.16), and that a larger nuclear size standard deviation results in less favorable outcomes for PFI (HR = 7.52, 95% CI: 1.43 - 39.52, p = 0.023) and OS (HR = 4.67, 95% CI: 0.96 - 22.57, p = 0.063). The bootstrap-validated C-statistic was 0.56 for PFI and 0.57 for OS.

Conclusion: Despite low accuracy, tumour nuclear size heterogeneity aided prognostication over standard clinical variables and was associated with outcomes following radiotherapy in gynecological SCC. This highlights the potential importance of personalized multiscale dosimetry and warrants further large-scale pan-cancer studies.


Morén, Björn; Jafarzadeh, Hossein; Enger, Shirin A.

A data-driven approach to model spatial dose characteristics for catheter placement of high dose-rate brachytherapy for prostate cancer Journal Article

In: Computers in Biology and Medicine, vol. 190, art. no. 110020, 2025, ISSN: 1879-0534. doi:10.1016/j.compbiomed.2025.110020


Background: High dose rate brachytherapy (HDR BT) is a common treatment modality for cancer. In HDR BT, a radioactive source is placed inside or close to a tumor, aiming to give a high enough dose to the tumor, while sparing nearby healthy tissue and organs at risk. Treatment planning of HDR BT for prostate cancer consists of two types of decisions, placement of catheters, modeled with binary variables, and dwell times, modeled with continuous non-negative variables. Optimal spatial placement of catheters is important for avoiding local recurrence and complications, but such characteristics have not been modeled for the combined treatment planning problem of catheter placement and dwell time optimization.

Method: We propose a data-driven approach using linear regression, mutual information, and random forests to find convex estimates of spatial dose characteristics that correlate well with contiguous volumes receiving a too-high (hot spots) or too-low dose (cold spots). These estimates were incorporated in retrospective treatment plan optimization of 28 prostate cancer patients.

Results: The proposed hot-spot terms reduced the volume receiving twice the prescribed dose by 29% at 14 catheters. Also, the results illustrate the trade-offs between the number of catheters and spatial dose characteristics.

Conclusions: Our study demonstrates that incorporating a term for hot spots in the objective function of the treatment planning model is more effective in reducing hot spots than catheter placements that are not optimized for hot spots.


Thibodeau-Antonacci, Alana; Popovic, Marija; Ates, Ozgur; Hua, Chia-Ho; Schneider, James; Skamene, Sonia; Freeman, Carolyn; Enger, Shirin Abbasinejad; Tsui, James Man Git

Trade-off of different deep learning-based auto-segmentation approaches for treatment planning of pediatric craniospinal irradiation autocontouring of OARs for pediatric CSI Journal Article

In: Medical Physics, vol. 52, iss. 6, pp. 3541–3556, 2025, ISSN: 2473-4209. doi:10.1002/mp.17782

Abstract | Links | BibTeX

@article{nokey,
title = {Trade-off of different deep learning-based auto-segmentation approaches for treatment planning of pediatric craniospinal irradiation autocontouring of OARs for pediatric CSI},
author = {Alana Thibodeau-Antonacci and Marija Popovic and Ozgur Ates and Chia-Ho Hua and James Schneider and Sonia Skamene and Carolyn Freeman and Shirin Abbasinejad Enger and James Man Git Tsui},
url = {https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.17782},
doi = {10.1002/mp.17782},
issn = {2473-4209},
year = {2025},
date = {2025-04-01},
journal = {Medical Physics},
volume = {52},
issue = {6},
pages = {3541–3556},
abstract = {Background: As auto-segmentation tools become integral to radiotherapy, more commercial products emerge. However, they may not always suit our needs. One notable example is the use of adult-trained commercial software for the contouring of organs at risk (OARs) of pediatric patients.

Purpose: This study aimed to compare three auto-segmentation approaches in the context of pediatric craniospinal irradiation (CSI): commercial, out-of-the-box, and in-house.

Methods: CT scans from 142 pediatric patients undergoing CSI were obtained from St. Jude Children's Research Hospital (training: 115; validation: 27). A test dataset comprising 16 CT scans was collected from the McGill University Health Centre. All images underwent manual delineation of 18 OARs. LimbusAI v1.7 served as the commercial product, while nnU-Net was trained for benchmarking. Additionally, a two-step in-house approach was pursued where smaller 3D CT scans containing the OAR of interest were first recovered and then used as input to train organ-specific models. Three variants of the U-Net architecture were explored: a basic U-Net, an attention U-Net, and a 2.5D U-Net. The dice similarity coefficient (DSC) assessed segmentation accuracy, and the DSC trend with age was investigated (Mann-Kendall test). A radiation oncologist determined the clinical acceptability of all contours using a five-point Likert scale.

Results: Differences in the contours between the validation and test datasets reflected the distinct institutional standards. The lungs and left kidney displayed an increasing age-related trend of the DSC values with LimbusAI on the validation and test datasets. LimbusAI contours of the esophagus were often truncated distally and mistaken for the trachea for younger patients, resulting in a DSC score of less than 0.5 on both datasets. Additionally, the kidneys frequently exhibited false negatives, leading to mean DSC values that were up to 0.11 lower on the validation set and 0.07 on the test set compared to the other models. Overall, nnU-Net achieved good performance for body organs but exhibited difficulty differentiating the laterality of head structures, resulting in a large variation of DSC values with the standard deviation reaching 0.35 for the lenses. All in-house models generally had similar DSC values when compared against each other and nnU-Net. Inference time on the test data was between 47-55 min on a Central Processing Unit (CPU) for the in-house models, while it was 1h 21m with a V100 Graphics Processing Unit (GPU) for nnU-Net.

Conclusions: LimbusAI could not adapt well to pediatric anatomy for the esophagus and the kidneys. When commercial products do not suit the study population, the nnU-Net is a viable option but requires adjustments. In resource-constrained settings, the in-house model provides an alternative. Implementing an automated segmentation tool requires careful monitoring and quality assurance regardless of the approach.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Background: As auto-segmentation tools become integral to radiotherapy, more commercial products emerge. However, they may not always suit our needs. One notable example is the use of adult-trained commercial software for the contouring of organs at risk (OARs) of pediatric patients.

Purpose: This study aimed to compare three auto-segmentation approaches in the context of pediatric craniospinal irradiation (CSI): commercial, out-of-the-box, and in-house.

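The Dice similarity coefficient (DSC) used throughout the study above has a compact definition; here is a minimal sketch for binary masks (illustrative only, not the study's code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|), for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two overlapping 2D masks (4 and 6 voxels, 4 shared)
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1:4] = 1
print(dice_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```

The DSC ranges from 0 (no overlap) to 1 (identical masks); the both-empty convention above is one common choice.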

Morén, Björn; Thibodeau-Antonacci, Alana; Kalinowski, Jonathan; Enger, Shirin A.

Dosimetric impact of positional uncertainties and a robust optimization approach for rectal intensity-modulated brachytherapy Journal Article

In: Medical Physics, vol. 52, iss. 6, pp. 3528–3540, 2025, ISSN: 0094-2405.

Abstract | Links | BibTeX

@article{nokey,
title = {Dosimetric impact of positional uncertainties and a robust optimization approach for rectal intensity-modulated brachytherapy},
author = {Björn Morén and Alana Thibodeau-Antonacci and Jonathan Kalinowski and Shirin A. Enger},
url = {https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.17800},
doi = {10.1002/mp.17800},
issn = {0094-2405},
year = {2025},
date = {2025-03-31},
journal = {Medical Physics},
volume = {52},
issue = {6},
pages = {3528–3540},
abstract = {Background: Intensity-modulated brachytherapy (IMBT) employs rotating high-Z shields during treatment to decrease radiation in certain directions and conform the dose distribution to the target volume. Prototypes for dynamic IMBT have been proposed for prostate, cervical, and rectal cancer.

Purpose: We considered two shielded applicators for IMBT rectal cancer treatment and investigated how rotational uncertainties in the shield angle and translational uncertainties in the source position affect plan evaluation criteria.

Methods: The effect of rotational errors of 3°, 5°, and 10°, and of translational errors of 1, 2, and 3 mm on evaluation criteria was investigated for shields with 180° and 90° emission windows. Further, a robust optimization approach based on quadratic penalties that includes scenarios with errors was proposed. The extent to which dosimetric effects of positional errors can be mitigated with this model was evaluated compared to a quadratic penalty model without scenarios with errors. A retrospective rectal cancer data set of ten patients was included in this study. Treatment planning was performed using the Monte Carlo-based treatment planning system RapidBrachyMCTPS.

Results: For the largest investigated rotational error of ±10°, the clinical target volume D90 remained, on average, within 5% of the result without error, while the contralateral healthy rectal wall experienced an increase in the mean D0.1cc, D2cc, and D50 of 26%, 9%, and 1% for the 180° shield and of 32%, 9%, and 2% for the 90° shield. For translational errors of ±2 mm, there were increases in dosimetric indices for both the superior (sup) and inferior (inf) dose spill regions. Specifically, for the 180° shield, the D0.1cc, D2cc, and D50 increased by 13%, 11%, and 10%, respectively, for the sup region, and by 26%, 15%, and 11%, respectively, for the inf region. Similar results were obtained with the 90° shield. Overall, the robust and traditional models had similar results. However, the number of active dwell positions obtained with the robust model was larger, and the longest dwell time was shorter.

Conclusions: We have quantified the effect of rotational shield and translational source errors of various magnitudes on evaluation criteria for rectal IMBT. The robust optimization approach was generally not able to mitigate positional errors. However, it resulted in more homogeneous dwell times, which can be beneficial in conventional high-dose-rate brachytherapy to avoid hot spots around specific dwell positions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

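The robust objective described in the abstract above, quadratic dose penalties averaged over positional-error scenarios, can be sketched numerically. This is a toy illustration, not RapidBrachyMCTPS: the dose-rate matrices, prescription, and learning rate are invented, and each perturbed matrix stands in for a dose recomputation under a shifted source position or rotated shield.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_dwells, n_scenarios = 50, 8, 5

# One dose-rate matrix per scenario: nominal geometry plus perturbed copies
# standing in for doses recomputed under positional errors.
D_nominal = rng.uniform(0.5, 1.5, (n_voxels, n_dwells))
scenarios = [D_nominal] + [
    D_nominal * rng.normal(1.0, 0.05, (n_voxels, n_dwells))
    for _ in range(n_scenarios - 1)
]
prescription = np.full(n_voxels, 10.0)  # target dose per voxel (arbitrary units)

def robust_objective(t):
    """Quadratic dose-deviation penalty averaged over error scenarios."""
    return float(np.mean([np.sum((D @ t - prescription) ** 2) for D in scenarios]))

# Projected gradient descent on the dwell times, keeping t >= 0.
t = np.ones(n_dwells)
lr = 1e-4
for _ in range(2000):
    grad = np.mean([2 * D.T @ (D @ t - prescription) for D in scenarios], axis=0)
    t = np.maximum(t - lr * grad, 0.0)

print(round(robust_objective(t), 3))
```

Dropping all but the first (nominal) scenario recovers a traditional, non-robust quadratic penalty model of the kind the paper compares against.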

2024

Jafarzadeh, Hossein; Antaki, Majd; Mao, Ximeng; Duclos, Marie; Maleki, Farhad; Enger, Shirin A.

Penalty weight tuning in high dose rate brachytherapy using multi-objective Bayesian optimization Journal Article

In: Physics in Medicine & Biology, vol. 69, 2024.

Links | BibTeX

@article{nokey,
title = {Penalty weight tuning in high dose rate brachytherapy using multi-objective Bayesian optimization},
author = {Hossein Jafarzadeh and Majd Antaki and Ximeng Mao and Marie Duclos and Farhad Maleki and Shirin A Enger},
doi = {10.1088/1361-6560/ad4448},
year = {2024},
date = {2024-05-21},
urldate = {2024-05-21},
journal = {Physics in Medicine & Biology},
volume = {69},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

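For context on the technique named in the title above: Bayesian optimization fits a surrogate model to past (penalty weight, plan quality) evaluations and picks the next weight to try via an acquisition function. Below is a deliberately tiny one-dimensional sketch with a Gaussian-process surrogate and expected-improvement acquisition; the "plan quality" function is invented for illustration, and the paper's actual setting is multi-objective.

```python
import numpy as np
from math import erf, sqrt

def plan_quality(x):
    """Invented stand-in for dosimetric cost at log10(penalty weight) x; lower is better."""
    return (x - 1.2) ** 2 + 0.1 * np.sin(5 * x)

def rbf(a, b, length=0.5):
    a, b = np.atleast_1d(a), np.atleast_1d(b)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior(xs, ys, xq, noise=1e-6):
    """Gaussian-process posterior mean and variance at query points xq."""
    K_inv = np.linalg.inv(rbf(xs, xs) + noise * np.eye(len(xs)))
    Ks = rbf(xq, xs)
    mu = Ks @ K_inv @ ys
    var = np.maximum(1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks), 1e-12)
    return mu, var

def expected_improvement(mu, var, best):
    """EI for minimization: how much a candidate is expected to beat 'best'."""
    sd = np.sqrt(var)
    z = (best - mu) / sd
    Phi = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
    phi = np.exp(-0.5 * z**2) / sqrt(2 * np.pi)
    return sd * (z * Phi + phi)

grid = np.linspace(0.0, 3.0, 300)   # candidate log10 penalty weights
xs = [0.2, 1.5, 2.8]                # initial evaluations
ys = [plan_quality(x) for x in xs]
for _ in range(10):                 # BO loop: fit surrogate, maximize EI, evaluate
    mu, var = gp_posterior(np.array(xs), np.array(ys), grid)
    xs.append(float(grid[np.argmax(expected_improvement(mu, var, min(ys)))]))
    ys.append(plan_quality(xs[-1]))

print(f"best log10(weight) ~ {xs[int(np.argmin(ys))]:.2f}")
```

In a multi-objective setting, the scalar `plan_quality` would be replaced by several dosimetric objectives (target coverage, OAR sparing) traded off by the optimizer.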

2023

Amod, Alyssa R.; Smith, Alexandra; Joubert, Pearly; Quetin, Sebastien

2nd Place at BraTS Africa 2023 Challenge Miscellaneous

2023, (MICCAI 2023).

Links | BibTeX

@misc{nokey,
title = {2nd Place at BraTS Africa 2023 Challenge},
author = {Alyssa R. Amod and Alexandra Smith and Pearly Joubert and Sebastien Quetin},
url = {https://www.synapse.org/#!Synapse:syn51156910/wiki/622556},
year = {2023},
date = {2023-10-12},
urldate = {2023-10-12},
note = {MICCAI 2023},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}


Quetin, Sebastien; Bahoric, Boris; Maleki, Farhad; Enger, Shirin A.

Improving TG-43 dose accuracy with Deep Learning Conference

2023, (CARO-COMP 2023 Joint Scientific Meeting).

Links | BibTeX

@conference{nokey,
title = {Improving TG-43 dose accuracy with Deep Learning},
author = {Sebastien Quetin and Boris Bahoric and Farhad Maleki and Shirin A. Enger},
url = {https://caro-acro.wildapricot.org/event-5150952},
year = {2023},
date = {2023-09-21},
urldate = {2023-09-21},
note = {CARO-COMP 2023 Joint Scientific Meeting},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}

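For readers outside brachytherapy, TG-43 is the AAPM dose-calculation formalism that the deep-learning work above builds on. In the point-source approximation, the dose rate is the product of the air-kerma strength, the dose-rate constant, an inverse-square geometry factor, the radial dose function, and the anisotropy factor. The sketch below implements only that arithmetic; the table values are invented placeholders, while clinical calculations use the published TG-43 consensus datasets.

```python
import numpy as np

def tg43_point_dose_rate(r_cm, Sk, dose_rate_const, g_r, phi_an, r0=1.0):
    """TG-43 point-source approximation:
    dose_rate(r) = Sk * Lambda * (r0 / r)^2 * g(r) * phi_an(r)."""
    return Sk * dose_rate_const * (r0 / r_cm) ** 2 * g_r(r_cm) * phi_an(r_cm)

# Placeholder lookup tables; values are invented for illustration only.
radii = np.array([0.5, 1.0, 2.0, 3.0, 5.0])           # cm
g_table = np.array([1.04, 1.00, 0.93, 0.86, 0.71])    # radial dose function g(r)
phi_table = np.array([0.97, 0.96, 0.95, 0.94, 0.93])  # anisotropy factor phi_an(r)

g_r = lambda r: np.interp(r, radii, g_table)
phi_an = lambda r: np.interp(r, radii, phi_table)

# Sk = 40000 U and Lambda = 1.11 cGy / (h * U) are a typical order of
# magnitude for an HDR Ir-192 source, used here only as an example.
rate_1cm = tg43_point_dose_rate(1.0, 40000, 1.11, g_r, phi_an)
rate_2cm = tg43_point_dose_rate(2.0, 40000, 1.11, g_r, phi_an)
print(rate_1cm, rate_2cm)  # falls off roughly as 1/r^2, modulated by g(r)
```

A deep-learning dose engine aims to correct the residual error of this water-based formalism in heterogeneous patient anatomy.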

Quetin, Sebastien; Bahoric, Boris; Maleki, Farhad; Enger, Shirin A.

Artificial-Intelligence based high precision Brachytherapy dose calculation Presentation

21.06.2023, (Temerty Centre for AI Research and Education in Medicine, University of Toronto).

Links | BibTeX

@misc{nokey,
title = {Artificial-Intelligence based high precision Brachytherapy dose calculation},
author = {Sebastien Quetin and Boris Bahoric and Farhad Maleki and Shirin A. Enger},
url = {https://tcairem.utoronto.ca/event/trainee-rounds-phoenix-yu-wilkie-and-sebastien-quetin},
year = {2023},
date = {2023-06-21},
urldate = {2023-06-21},
note = {Temerty Centre for AI Research and Education in Medicine, University of Toronto},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}


Jafarzadeh, Hossein

Doctoral Internship Award Miscellaneous

2023, (Graduate and Post Doctoral Studies, McGill University).

BibTeX

@misc{nokey,
title = {Doctoral Internship Award},
author = {Hossein Jafarzadeh},
year = {2023},
date = {2023-05-20},
urldate = {2023-05-20},
note = {Graduate and Post Doctoral Studies, McGill University},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}


Quetin, Sebastien; Bahoric, Boris; Maleki, Farhad; Enger, Shirin A.

Artificial-Intelligence based high precision Brachytherapy dose calculation Presentation

13.05.2023, (The European Society for Radiotherapy and Oncology 2023 Congress).

Links | BibTeX

@misc{nokey,
title = {Artificial-Intelligence based high precision Brachytherapy dose calculation},
author = {Sebastien Quetin and Boris Bahoric and Farhad Maleki and Shirin A. Enger},
url = {https://www.estro.org/},
year = {2023},
date = {2023-05-13},
urldate = {2023-05-13},
note = {The European Society for Radiotherapy and Oncology 2023 Congress},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}


Quetin, Sebastien

Lady Davis Institute Travel Award Miscellaneous

2023, (Lady Davis Institute).

BibTeX

@misc{nokey,
title = {Lady Davis Institute Travel Award},
author = {Sebastien Quetin},
year = {2023},
date = {2023-04-01},
urldate = {2023-04-01},
note = {Lady Davis Institute},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}


2022

Quetin, Sebastien

Artificial Intelligence-based Brachytherapy Presentation

From New avenues in the non-operative management of patients with rectal cancer Conference, 14.10.2022.

BibTeX

@misc{nokey,
title = {Artificial Intelligence-based Brachytherapy},
author = {Sebastien Quetin},
editor = {New avenues in the non-operative management of patients with rectal cancer: Time for discussion},
year = {2022},
date = {2022-10-14},
urldate = {2022-10-14},
howpublished = {From New avenues in the non-operative management of patients with rectal cancer Conference},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}


Zou, Yujing

TransMedTech Excellence Scholarship (Doctoral Award) award

2022.

Links | BibTeX

@award{nokey,
title = {TransMedTech Excellence Scholarship (Doctoral Award)},
author = {Yujing Zou},
url = {https://transmedtech.org/en/training/transmedtech-institute-excellence-scholarships/},
year = {2022},
date = {2022-09-01},
urldate = {2022-09-01},
organization = {The Institut TransMedTech},
key = {award},
type = {award},
keywords = {},
pubstate = {published},
tppubtype = {award}
}


Quetin, Sebastien; Zou, Yujing

Deep Learning Framework: TensorBoard and PyTorch Lightning Workshop

2022.

Links | BibTeX

@workshop{nokey,
title = {Deep Learning Framework: TensorBoard and PyTorch Lightning},
author = {Sebastien Quetin and Yujing Zou},
url = {https://www.youtube.com/watch?v=8q09b-Yqly4&list=PLVH7T2_su-vkHLGQXJ0gHijbhjLJOCbaq&index=18
https://mcmedhacks.com/},
year = {2022},
date = {2022-07-20},
urldate = {2022-07-20},
howpublished = {from McMedHacks 2022},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}


Zou, Yujing; Alvarez, David-Santiago Ayala

ESTRO 2022: innovations in brachytherapy Journal Article

In: ESTRO Newsletter, pp. 754–767, 2022.

Links | BibTeX

@article{zou2022brachyestro,
title = {ESTRO 2022: innovations in brachytherapy},
author = {Yujing Zou and David-Santiago Ayala Alvarez },
url = {https://www.estro.org/About/Newsroom/Newsletter/Brachytheraphy/ESTRO-2022-innovations-in-brachytherapy-Brachyther},
year = {2022},
date = {2022-06-29},
urldate = {2022-06-29},
journal = {ESTRO Newsletter},
pages = {754--767},
publisher = {The European Society for Radiotherapy and Oncology},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Quetin, Sebastien

Deep Learning Framework: PyTorch Tensors and Autograd Workshop

2022.

Links | BibTeX

@workshop{nokey,
title = {Deep Learning Framework: PyTorch Tensors and Autograd},
author = {Sebastien Quetin},
url = {https://www.youtube.com/watch?v=3X0ZEfY-nuc&list=PLVH7T2_su-vkHLGQXJ0gHijbhjLJOCbaq&index=12
https://mcmedhacks.com/},
year = {2022},
date = {2022-06-29},
urldate = {2022-06-29},
howpublished = {from McMedHacks 2022},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}


Quetin, Sebastien; Bahoric, Boris; Maleki, Farhad; Enger, Shirin A.

Artificial Intelligence-based dosimetry in high dose rate brachytherapy Conference

2022, (Celebration of Research and Training in Oncology Conference).

BibTeX

@conference{nokey,
title = {Artificial Intelligence-based dosimetry in high dose rate brachytherapy},
author = {Sebastien Quetin and Boris Bahoric and Farhad Maleki and Shirin A. Enger},
year = {2022},
date = {2022-06-21},
urldate = {2022-06-21},
note = {Celebration of Research and Training in Oncology Conference},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}


Quetin, Sebastien; Zou, Yujing

Introduction to medical image processing with Python: DICOM and histopathology images Workshop

2022.

Links | BibTeX

@workshop{nokey,
title = {Introduction to medical image processing with Python: DICOM and histopathology images},
author = {Sebastien Quetin and Yujing Zou},
url = {https://www.youtube.com/watch?v=oazONk9JpFg&list=PLVH7T2_su-vkHLGQXJ0gHijbhjLJOCbaq&index=7
https://mcmedhacks.com/},
year = {2022},
date = {2022-06-17},
urldate = {2022-06-17},
howpublished = {from McMedHacks 2022},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}


Zou, Yujing

Fonds de recherche du Québec - Nature et technologies (FRQNT) doctoral training scholarship award

2022, ($84,000 for 2022 - 2026; Declined due to acceptance of FRQS).

Links | BibTeX

@award{nokey,
title = {Fonds de recherche du Québec - Nature et technologies (FRQNT) doctoral training scholarship},
author = {Yujing Zou},
url = {https://repertoire.frq.gouv.qc.ca/offres/rechercheOffres.do?methode=afficher},
year = {2022},
date = {2022-06-02},
urldate = {2022-06-02},
organization = {Fonds de recherche du Québec},
note = {$84,000 for 2022 - 2026; Declined due to acceptance of FRQS},
keywords = {},
pubstate = {published},
tppubtype = {award}
}


Zou, Yujing

Fonds de recherche du Québec - Santé (FRQS) doctoral training scholarship award

2022, ($84,000 for 2022 - 2026).

Links | BibTeX

@award{nokey,
title = {Fonds de recherche du Québec - Santé (FRQS) doctoral training scholarship},
author = {Yujing Zou},
url = {https://repertoire.frq.gouv.qc.ca/offres/rechercheOffres.do?methode=afficher},
year = {2022},
date = {2022-06-02},
urldate = {2022-06-02},
organization = {Fonds de recherche du Québec},
note = {$84,000 for 2022 - 2026},
keywords = {},
pubstate = {published},
tppubtype = {award}
}


Zou, Yujing; Lecavalier-Barsoum, Magali; Pelmus, Manuela; Maleki, Farhad; Enger, Shirin A.

Predictive modeling of post radiation-therapy recurrence for gynecological cancer patients using clinical and histopathology imaging features Conference

Curietherapies 2022.

Abstract | Links | BibTeX

@conference{nokey,
title = {Predictive modeling of post radiation-therapy recurrence for gynecological cancer patients using clinical and histopathology imaging features},
author = {Yujing Zou and Magali Lecavalier-Barsoum and Manuela Pelmus and Farhad Maleki and Shirin A. Enger},
url = {https://www.researchgate.net/publication/361436138_Predictive_modeling_of_post_radiation-therapy_recurrence_for_gynecological_cancer_patients_using_clinical_and_histopathology_imaging_features},
year = {2022},
date = {2022-05-23},
urldate = {2022-05-23},
organization = {Curietherapies},
abstract = {Purpose: To build a machine-learning (ML) classifier to predict the clinical endpoint of post-Radiation-Therapy (RT) recurrence of gynecological cancer patients, while exploring the outcome predictability of cell spacing and nuclei size pre-treatment histopathology image features and clinical variables. Materials and Methods: Thirty-six gynecological (i.e., cervix, vaginal, and vulva) cancer patients (median age at diagnosis = 59.5 years) with a median follow-up time of 25.7 months, nine of which (event rate of 25%) experienced post-RT recurrence, were included in this analysis. Patient-specific nuclei size and cell spacing distributions from cancerous and non-tumoral regions of pre-treatment hematoxylin and eosin (H&E) stained digital histopathology Whole-Slide-Images (WSI) were extracted. The mean and standard deviation of these distributions were computed as imaging features for each WSI. Clinical features of clinical and radiological stage at the time of radiation, p16 status, age at diagnosis, and cancer type were also obtained. Uniquely, a Tree-based Pipeline Optimization Tool (TPOT) AutoML approach, including hyperparameter tuning, was implemented to find the best performing pipeline for this class-imbalanced and small dataset. A Radial Basis Function Kernel (RBF) sampler (gamma = 0.25) was applied to combined imaging and clinical input variables for training. The resulting features were fed into an XGBoost (ie., eXtreme gradient-boosting) classifier (learning rate = 0.1). Its outputs were propagated as “synthetic features” followed by polynomial feature transforms. All raw and transformed features were trained with a decision tree classification algorithm. Results of model evaluation metrics from a 10-fold stratified shuffle split cross-validation were averaged. A permutation test (n=1000) was performed to validate the significance of the classification scores. 
Results: Our model achieved a 10-fold stratified shuffle split cross-validation scores of 0.87 for mean accuracy, 0.92 for mean balanced accuracy, 0.78 for precision, 1 for recall, 0.85 for F1 score, and 0.92 for Area Under the Curve of Receiver Operating Characteristics Curve, to predict our patient cohort’s post-RT recurrence binary outcome. A p-value of 0.036 was obtained from the permutation test. This implies real dependencies between our combined imaging and clinical features and outcomes which were learned by the classifier, and the primising model performance was not by chance. Conclusions: Despite the small dataset and low event rate, as a proof of concept, we showed that a decision-tree-based ML classification algorithm using an XGBoost algorithm is able to utilize combined (cell spacing & nuclei size) imaging and clinical features to predict post-RT outcomes for gynecological cancer patients.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}

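The feature pipeline described in the abstract above (an RBF kernel approximation feeding a gradient-boosted classifier, evaluated with stratified shuffle-split cross-validation) can be outlined with scikit-learn. This is a structural sketch on synthetic data: `GradientBoostingClassifier` stands in for XGBoost, and the feature values are random, not the study's histopathology or clinical data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.kernel_approximation import RBFSampler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

rng = np.random.default_rng(42)
# 36 "patients", 9 imaging + clinical features, imbalanced binary outcome (25% events)
X = rng.normal(size=(36, 9))
y = np.array([1] * 9 + [0] * 27)

model = make_pipeline(
    RBFSampler(gamma=0.25, random_state=0),        # approximate RBF kernel feature map
    GradientBoostingClassifier(learning_rate=0.1, random_state=0),
)
cv = StratifiedShuffleSplit(n_splits=10, test_size=0.25, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="balanced_accuracy")
print(scores.mean())
```

With random features, mean balanced accuracy should hover near chance; the study's reported scores come from real, informative features.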

Zou, Yujing; Lecavalier-Barsoum, Magali; Pelmus, Manuela; Maleki, Farhad; Enger, Shirin A.

Young Investigator Competition Winner at the Curietherapies Conference award

2022.

Abstract | Links | BibTeX

@award{nokey,
title = {Young Investigator Competition Winner at the Curietherapies Conference },
author = {Yujing Zou and Magali Lecavalier-Barsoum and Manuela Pelmus and Farhad Maleki and Shirin A. Enger },
url = {https://www.researchgate.net/publication/360979157_SP-0014_McMedHacks_Deep_learning_for_medical_image_analysis_workshops_and_Hackathon_in_radiation_oncology},
year = {2022},
date = {2022-05-23},
urldate = {2022-05-23},
organization = {Curietherapies},
abstract = {Purpose/Objective: The McMedHacks workshop and presentation series was created to teach individuals from various backgrounds about deep learning (DL) for medical image analysis in May, 2021. Material/Methods: McMedHacks is a free and student-led 8-week summer program. Registration for the event was open to everyone, including a form to survey participants’ area of expertise, country of origin, level of study, and level of programming skills. The weekly workshops were instructed by 8 students and experts assisted by 20 mentors who provided weekly tutorials. Recent developments in DL and medical physics were highlighted by 21 leaders from industry and academia. A virtual grand challenge Hackathon took place at the end of the workshop series. All events were held virtually and recorded on Zoom to accommodate all time zones and locations. The workshops were designed as interactive coding demos and shared through Google Colab notebooks. Results: McMedHacks gained 356 registrations from participants of 38 different countries (Fig. 1) from undergraduates, to PhDs and MDs. A vast number of disciplines and professions were represented, dominated by medical physics students, academic, and clinical medical physicists (Fig. 2). Sixty-nine participants earned a certificate of completion by having engaged with at least 12 of all 14 events. The program received participant feedback average scores of 4.768, 4.478, 4.579, 4.292, 4.84 out of five for the qualities of presentation, workshop session, tutorial and mentor, assignments, and course delivery, respectively. The eight-week long workshop’s duration allowed participants to digest the taught materials in a continuous manner as opposed to bootcamp-style conference workshops. Conclusion: The overwhelming interest and engagement for the McMedHacks workshop series from the Radiation Oncology (RadOnc) community illustrates a demand for Artificial Intelligence (AI) education in RadOnc. 
The future of RadOnc clinics will inevitably integrate AI. Therefore, current RadOnc professionals, and student and resident trainees should be prepared to understand basic AI principles and its applications to troubleshoot, innovate, and collaborate. McMedHacks set an excellent example of promoting open and multidisciplinary education, scientific communication, and leadership for integrating AI education into the RadOnc community on an international level. Therefore, we advocate for implementation of AI curriculums in professional education programs such as Commission on Accreditation of Medical Physics Education Programs (CAMPEP). Furthermore, we encourage experts from around the world in the field of AI, or RadOnc, or both, to take initiatives like McMedHacks to collaborate and push forward AI education in their departments and lead practical workshops, regardless of their levels of education.},
keywords = {},
pubstate = {published},
tppubtype = {award}
}

Close

Purpose/Objective: The McMedHacks workshop and presentation series was created to teach individuals from various backgrounds about deep learning (DL) for medical image analysis in May, 2021. Material/Methods: McMedHacks is a free and student-led 8-week summer program. Registration for the event was open to everyone, including a form to survey participants’ area of expertise, country of origin, level of study, and level of programming skills. The weekly workshops were instructed by 8 students and experts assisted by 20 mentors who provided weekly tutorials. Recent developments in DL and medical physics were highlighted by 21 leaders from industry and academia. A virtual grand challenge Hackathon took place at the end of the workshop series. All events were held virtually and recorded on Zoom to accommodate all time zones and locations. The workshops were designed as interactive coding demos and shared through Google Colab notebooks. Results: McMedHacks gained 356 registrations from participants of 38 different countries (Fig. 1) from undergraduates, to PhDs and MDs. A vast number of disciplines and professions were represented, dominated by medical physics students, academic, and clinical medical physicists (Fig. 2). Sixty-nine participants earned a certificate of completion by having engaged with at least 12 of all 14 events. The program received participant feedback average scores of 4.768, 4.478, 4.579, 4.292, 4.84 out of five for the qualities of presentation, workshop session, tutorial and mentor, assignments, and course delivery, respectively. The eight-week long workshop’s duration allowed participants to digest the taught materials in a continuous manner as opposed to bootcamp-style conference workshops. Conclusion: The overwhelming interest and engagement for the McMedHacks workshop series from the Radiation Oncology (RadOnc) community illustrates a demand for Artificial Intelligence (AI) education in RadOnc. 
The future of RadOnc clinics will inevitably integrate AI. Therefore, current RadOnc professionals, and student and resident trainees should be prepared to understand basic AI principles and its applications to troubleshoot, innovate, and collaborate. McMedHacks set an excellent example of promoting open and multidisciplinary education, scientific communication, and leadership for integrating AI education into the RadOnc community on an international level. Therefore, we advocate for implementation of AI curriculums in professional education programs such as Commission on Accreditation of Medical Physics Education Programs (CAMPEP). Furthermore, we encourage experts from around the world in the field of AI, or RadOnc, or both, to take initiatives like McMedHacks to collaborate and push forward AI education in their departments and lead practical workshops, regardless of their levels of education.


Zou, Yujing

Biological & Biomedical Engineering PhD Recruitment Award award

2022, (These awards are designed to help recruit top students to our program and are offered to applicants wishing to start in Fall or Winter. The standard recruitment awards are $10,000/year for three years for Doctoral students.).


@award{nokey,
title = {Biological & Biomedical Engineering PhD Recruitment Award },
author = {Yujing Zou},
url = {https://www.mcgill.ca/bbme/programs/funding#BME-Recruitment-Award},
year = {2022},
date = {2022-05-10},
urldate = {2022-05-10},
organization = {McGill Biological & Biomedical Engineering department },
note = {These awards are designed to help recruit top students to our program and are offered to applicants wishing to start in Fall or Winter. The standard recruitment awards are $10,000/year for three years for Doctoral students.},
keywords = {},
pubstate = {published},
tppubtype = {award}
}


Jafarzadeh, Hossein

Biological & Biomedical Engineering PhD Recruitment Award award

2022.


@award{nokey,
title = {Biological & Biomedical Engineering PhD Recruitment Award },
author = {Hossein Jafarzadeh },
url = {https://www.mcgill.ca/bbme/programs/funding#BME-Recruitment-Award},
year = {2022},
date = {2022-05-10},
keywords = {},
pubstate = {published},
tppubtype = {award}
}


Zou, Yujing

Graduate Research Enhancement and Travel Awards (GREAT Awards) award

2022, (In 2009, Graduate and Postdoctoral Studies introduced the Graduate Research Enhancement and Travel awards (GREAT awards) program in consultation with the Faculty Deans and Associate Deans. These awards cover dissemination of research through graduate student presentations at conferences, and other graduate student research-enhancement activities, such as travel for fieldwork, archival inquiry and extra-mural collaborative research. GREAT budgets are allocated to Faculties each year, and as such, are managed directly by the Associate Dean's office.).


@award{nokey,
title = {Graduate Research Enhancement and Travel Awards (GREAT Awards)},
author = {Yujing Zou},
url = {https://www.mcgill.ca/gps/funding/fac-staff/awards/great},
year = {2022},
date = {2022-04-14},
note = {In 2009, Graduate and Postdoctoral Studies introduced the Graduate Research Enhancement and Travel awards (GREAT awards) program in consultation with the Faculty Deans and Associate Deans. These awards cover dissemination of research through graduate student presentations at conferences, and other graduate student research-enhancement activities, such as travel for fieldwork, archival inquiry and extra-mural collaborative research. GREAT budgets are allocated to Faculties each year, and as such, are managed directly by the Associate Dean's office.},
keywords = {},
pubstate = {published},
tppubtype = {award}
}


Weishaupt, Luca L.; Sayed, Hisham Kamal; Mao, Ximeng; Choo, Richard; Stish, Bradley J; Enger, Shirin A; Deufel, Christopher

Approaching automated applicator digitization from a new angle: Using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy Journal Article

In: Brachytherapy, 2022.


@article{weishaupt2022approaching,
title = {Approaching automated applicator digitization from a new angle: Using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy},
author = {Luca L. Weishaupt and Hisham Kamal Sayed and Ximeng Mao and Richard Choo and Bradley J Stish and Shirin A Enger and Christopher Deufel},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Brachytherapy},
publisher = {Elsevier},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jafarzadeh, Hossein; Mao, Ximeng; Enger, Shirin A.

Bayesian Optimization in Treatment Planning of High Dose Rate Brachytherapy Proceedings Article

In: Medical Physics, vol. 49, no. 6, pp. E200–E200, 2022.


@inproceedings{jafarzadeh2022bayesian,
title = {Bayesian Optimization in Treatment Planning of High Dose Rate Brachytherapy},
author = { Hossein Jafarzadeh and Ximeng Mao and Shirin A. Enger},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {MEDICAL PHYSICS},
volume = {49},
number = {6},
pages = {E200--E200},
organization = {WILEY 111 RIVER ST, HOBOKEN 07030-5774, NJ USA},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
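The entry above applies Bayesian optimization to treatment planning in high dose rate brachytherapy. As a generic illustration of the technique only — the linear dose model, the penalty weights, and the lower-confidence-bound acquisition below are invented stand-ins, not the authors' implementation — a minimal Gaussian-process loop over a single dwell-weight parameter might look like:

```python
import numpy as np

def plan_cost(w):
    """Toy planning objective for a single dwell-weight scalar w.

    The linear dose model and penalty weights are invented stand-ins
    for illustration: squared miss of the target prescription plus a
    one-sided penalty when the organ-at-risk (OAR) dose exceeds its limit.
    """
    target_dose, oar_dose = 1.5 * w, 0.8 * w
    return (target_dose - 1.0) ** 2 + max(0.0, oar_dose - 0.5) ** 2

def rbf(a, b, length_scale=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def bayes_opt(f, n_init=4, n_iter=12, seed=0):
    """Minimal Bayesian optimization: a zero-mean GP surrogate with a
    lower-confidence-bound acquisition, minimized over a fixed grid."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 201)
    X = rng.uniform(0.0, 1.0, n_init)            # initial random designs
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        K = rbf(X, X) + 1e-6 * np.eye(len(X))    # jitter for stability
        Ks = rbf(X, grid)
        mu = Ks.T @ np.linalg.solve(K, y)        # posterior mean on grid
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
        sd = np.sqrt(np.maximum(var, 1e-12))     # posterior std. dev.
        x_next = grid[np.argmin(mu - 2.0 * sd)]  # explore + exploit
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

A clinical planner optimizes a vector of dwell times per catheter under dose-volume constraints; the scalar `w` here only keeps the sketch short.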


Zou, Yujing; Lecavalier-barsoum, Magali; Pelmus, Manuela; Enger, Shirin A.

Patient-Specific Nuclei Size and Cell Spacing Distribution Extraction From Histopathology Whole Slide Images for Treatment Outcome Prediction Modelling Proceedings Article

In: Medical Physics, vol. 49, no. 6, pp. E266–E266, 2022.


@inproceedings{zou2022patient,
title = {Patient-Specific Nuclei Size and Cell Spacing Distribution Extraction From Histopathology Whole Slide Images for Treatment Outcome Prediction Modelling},
author = {Yujing Zou and Magali Lecavalier-barsoum and Manuela Pelmus and Shirin A. Enger},
url = {https://w4.aapm.org/meetings/2022AM/programInfo/programAbs.php?sid=10686&aid=66642},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {MEDICAL PHYSICS},
volume = {49},
number = {6},
pages = {E266--E266},
organization = {WILEY 111 RIVER ST, HOBOKEN 07030-5774, NJ USA},
abstract = {MO-C930-IePD-F5-1 (Monday, 7/11/2022) 9:30 AM - 10:00 AM [Eastern Time (GMT-4)]

Exhibit Hall | Forum 5

Purpose: To deliver a fully automated and generalizable approach extracting patient-specific nuclei size (ns) and cell spacing (cs) distributions from cancerous and non-tumoral regions of hematoxylin and eosin (H&E) stained digital histopathology Whole-Slide-Images (WSI) for gynecological cancer multiscale treatment outcome modelling.

Methods: Each pre-treatment gigapixel H&E WSI, digitized at 40 x magnification (0.2482 microns/pixel), was divided into 5000 x 5000-pixel patches. Within each patch, the nucleus centers were identified with a difference-of-Gaussians blob detection algorithm, and Delaunay triangulations and Voronoi diagrams were computed to provide the cs radius. The ns radius was computed from stained pixels dominated by hematoxylin content with an automatic thresholding algorithm. Using multiprocessing CPUs on a PC for each WSI, eight feature types were calculated, preserving the biopsies' tissue heterogeneity: the mean and standard deviation of the cs and ns distributions concatenated from all patches, for cancerous and non-tumoral regions. This method was applied to 40 patients (1 WSI per patient) with treatment outcomes of post radiation-therapy (RT) recurrence (n = 9) and death (n = 8).

Results: The WSI cancerous region cs distribution mean among patients without post-RT recurrence has a median of 6.64 microns, and those with post-RT recurrence with a median of 7.11 microns. This indicates the potential of utilizing such distribution features in treatment outcome prognosis modelling. Furthermore, at the third quartile, the WSI non-tumoral region ns distribution standard deviation among patients without post-RT recurrence has a value of 2.16 microns, and 1.46 microns for those with post-RT recurrence.

Conclusion: Our approach derives patient-specific microscopic data distributions from histopathology WSI that can be directly associated with retrospective patient outcomes. They are complementary and spatially orthogonal to information served by other medical imaging modalities such as CT, MR, and Ultrasound. Therefore, it has the unique potential to augment treatment outcome model inference when properly fused with radiological scans.

Funding Support, Disclosures, and Conflict of Interest: CIHR grant number 103548 and Canada Research Chairs Program (grant #252135)

Keywords

Image Analysis, Feature Extraction, Radiation Therapy

Taxonomy

IM/TH- Image Analysis (Single Modality or Multi-Modality): Imaging biomarkers and radiomics

Contact Email

yujing.zou@mail.mcgill.ca},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
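The abstract above builds outcome-model features from the geometry of detected nucleus centers. As a sketch of the feature-extraction step — using plain nearest-neighbour distances as a stand-in for the paper's Delaunay/Voronoi construction, and assuming nucleus centers have already been found by an upstream blob detector — the per-region mean/std cell-spacing features could be computed as:

```python
import numpy as np

def nn_spacings(centers):
    """Nearest-neighbour distance for each nucleus center.

    centers is an (N, 2) array of coordinates in microns, assumed to
    come from an upstream blob detector (e.g. difference of Gaussians).
    Plain nearest-neighbour distances stand in for the abstract's
    Delaunay/Voronoi construction.
    """
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # a nucleus is not its own neighbour
    return d.min(axis=1)

def region_features(patches):
    """Concatenate per-patch spacings for one region (cancerous or
    non-tumoral) and return the (mean, std) feature pair."""
    spacings = np.concatenate([nn_spacings(c) for c in patches])
    return spacings.mean(), spacings.std()
```

Concatenating spacings across all patches before summarizing, as above, is what preserves the whole-slide heterogeneity the abstract describes.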


Zou, Yujing; Weishaupt, Luca; Enger, Shirin A.

SP-0014 McMedHacks: Deep learning for medical image analysis workshops and Hackathon in radiation oncology Journal Article

In: Radiotherapy and Oncology, vol. 170, pp. S4–S5, 2022.


@article{zou2022sp,
title = {SP-0014 McMedHacks: Deep learning for medical image analysis workshops and Hackathon in radiation oncology},
author = {Yujing Zou and Luca Weishaupt and Shirin A. Enger},
url = {https://www-sciencedirect-com.proxy3.library.mcgill.ca/science/article/pii/S0167814022038695?via%3Dihub},
doi = {10.1016/S0167-8140(22)03869-5},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Radiotherapy and Oncology},
volume = {170},
pages = {S4--S5},
publisher = {Elsevier},
abstract = {Purpose/Objective: The McMedHacks workshop and presentation series was created to teach individuals from various backgrounds about deep learning (DL) for medical image analysis in May, 2021. Material/Methods: McMedHacks is a free and student-led 8-week summer program. Registration for the event was open to everyone, including a form to survey participants’ area of expertise, country of origin, level of study, and level of programming skills. The weekly workshops were instructed by 8 students and experts assisted by 20 mentors who provided weekly tutorials. Recent developments in DL and medical physics were highlighted by 21 leaders from industry and academia. A virtual grand challenge Hackathon took place at the end of the workshop series. All events were held virtually and recorded on Zoom to accommodate all time zones and locations. The workshops were designed as interactive coding demos and shared through Google Colab notebooks. Results: McMedHacks gained 356 registrations from participants of 38 different countries (Fig. 1) from undergraduates, to PhDs and MDs. A vast number of disciplines and professions were represented, dominated by medical physics students, academic, and clinical medical physicists (Fig. 2). Sixty-nine participants earned a certificate of completion by having engaged with at least 12 of all 14 events. The program received participant feedback average scores of 4.768, 4.478, 4.579, 4.292, 4.84 out of five for the qualities of presentation, workshop session, tutorial and mentor, assignments, and course delivery, respectively. The eight-week long workshop’s duration allowed participants to digest the taught materials in a continuous manner as opposed to bootcamp-style conference workshops. Conclusion: The overwhelming interest and engagement for the McMedHacks workshop series from the Radiation Oncology (RadOnc) community illustrates a demand for Artificial Intelligence (AI) education in RadOnc. 
The future of RadOnc clinics will inevitably integrate AI. Therefore, current RadOnc professionals, and student and resident trainees should be prepared to understand basic AI principles and its applications to troubleshoot, innovate, and collaborate. McMedHacks set an excellent example of promoting open and multidisciplinary education, scientific communication, and leadership for integrating AI education into the RadOnc community on an international level. Therefore, we advocate for implementation of AI curriculums in professional education programs such as Commission on Accreditation of Medical Physics Education Programs (CAMPEP). Furthermore, we encourage experts from around the world in the field of AI, or RadOnc, or both, to take initiatives like McMedHacks to collaborate and push forward AI education in their departments and lead practical workshops, regardless of their levels of education.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Weishaupt, Luca L.; Vuong, Te; Thibodeau-Antonacci, Alana; Garant, A; Singh, K; Miller, C; Martin, A; Schmitt-Ulms, F; Enger, Shirin A.

PO-1325 Automated rectal tumor segmentation with inter-observer variability-based uncertainty estimates Journal Article

In: Radiotherapy and Oncology, vol. 170, pp. S1120–S1121, 2022.


@article{weishaupt2022po,
title = {PO-1325 Automated rectal tumor segmentation with inter-observer variability-based uncertainty estimates},
author = {Luca L. Weishaupt and Te Vuong and Alana Thibodeau-Antonacci and A Garant and K Singh and C Miller and A Martin and F Schmitt-Ulms and Shirin A. Enger},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Radiotherapy and Oncology},
volume = {170},
pages = {S1120--S1121},
publisher = {Elsevier},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Weishaupt, Luca L; Vuong, Te; Thibodeau-Antonacci, Alana; Garant, A; Singh, KS; Miller, C; Martin, A; Enger, Shirin A.

A121 QUANTIFYING INTER-OBSERVER VARIABILITY IN THE SEGMENTATION OF RECTAL TUMORS IN ENDOSCOPY IMAGES AND ITS EFFECTS ON DEEP LEARNING Journal Article

In: Journal of the Canadian Association of Gastroenterology, vol. 5, no. Supplement_1, pp. 140–142, 2022.


@article{weishaupt2022a121,
title = {A121 QUANTIFYING INTER-OBSERVER VARIABILITY IN THE SEGMENTATION OF RECTAL TUMORS IN ENDOSCOPY IMAGES AND ITS EFFECTS ON DEEP LEARNING},
author = {Luca L Weishaupt and Te Vuong and Alana Thibodeau-Antonacci and A Garant and KS Singh and C Miller and A Martin and Shirin A. Enger},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Journal of the Canadian Association of Gastroenterology},
volume = {5},
number = {Supplement_1},
pages = {140--142},
publisher = {Oxford University Press US},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Thibodeau-Antonacci, Alana; Vuong, Te; Liontis, B; Rayes, F; Pande, S; Enger, Shirin A.

Development of a Novel MRI-Compatible Applicator for Intensity Modulated Rectal Brachytherapy Proceedings Article

In: Medical Physics, vol. 49, no. 6, pp. E240–E240, 2022.


@inproceedings{thibodeau2022development,
title = {Development of a Novel MRI-Compatible Applicator for Intensity Modulated Rectal Brachytherapy},
author = {Alana Thibodeau-Antonacci and Te Vuong and B Liontis and F Rayes and S Pande and Shirin A. Enger},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {MEDICAL PHYSICS},
volume = {49},
number = {6},
pages = {E240--E240},
organization = {WILEY 111 RIVER ST, HOBOKEN 07030-5774, NJ USA},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}


Thibodeau-Antonacci, Alana; Enger, Shirin A.; Bekerat, Hamed; Vuong, Te

Gafchromic film and scintillator detector measurements in phantom with a novel intensity-modulated brachytherapy endorectal shield Proceedings Article

In: Medical Physics, vol. 49, no. 8, pp. 5688–5689, 2022.


@inproceedings{thibodeau2022gafchromic,
title = {Gafchromic film and scintillator detector measurements in phantom with a novel intensity-modulated brachytherapy endorectal shield},
author = {Alana Thibodeau-Antonacci and Shirin A. Enger and Hamed Bekerat and Te Vuong},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {MEDICAL PHYSICS},
volume = {49},
number = {8},
pages = {5688--5689},
organization = {WILEY 111 RIVER ST, HOBOKEN 07030-5774, NJ USA},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}


2021

Weishaupt, Luca L.

T I Gurman Prize in Physics award

2021.


@award{Weishaupt2021,
title = {T I Gurman Prize in Physics},
author = {Luca L. Weishaupt},
url = {http://scholarships.studentscholarships.org/t_i_gurman_prize_2236.php},
year = {2021},
date = {2021-09-01},
urldate = {2021-09-01},
organization = {McGill University},
abstract = {Established in 1997 by friends and family of T.I. Gurman in honour of his 95th birthday. Awarded by the Faculty of Science Scholarships Committee on the recommendation of the Department of Physics to a student with high academic standing entering the final year in a Major program in Physics.},
howpublished = {McGill University},
keywords = {},
pubstate = {published},
tppubtype = {award}
}


Thibodeau-Antonacci, Alana

Canada Graduate Scholarship – Doctoral Program award

2021.


@award{Thibodeau-Antonacci2021d,
title = {Canada Graduate Scholarship – Doctoral Program},
author = {Alana Thibodeau-Antonacci},
url = {https://www.nserc-crsng.gc.ca/students-etudiants/pg-cs/cgsd-bescd_eng.asp},
year = {2021},
date = {2021-09-01},
organization = {NSERC},
keywords = {},
pubstate = {published},
tppubtype = {award}
}


Weishaupt, Luca L.; Thibodeau-Antonacci, Alana; Garant, Aurelie; Singh, Kelita; Miller, Corey; Vuong, Té; Enger, Shirin A.

Deep learning based tumor segmentation of endoscopy images for rectal cancer patients Presentation

ESTRO Annual meeting, 27.08.2021.


@misc{Weishaupt2021b,
title = {Deep learning based tumor segmentation of endoscopy images for rectal cancer patients},
author = {Luca L. Weishaupt and Alana Thibodeau-Antonacci and Aurelie Garant and Kelita Singh and Corey Miller and Té Vuong and Shirin A. Enger},
url = {https://www.estro.org/Congresses/ESTRO-2021/610/posterdiscussion34-deep-learningforauto-contouring/3710/deeplearning-basedtumorsegmentationofendoscopyimag},
year = {2021},
date = {2021-08-27},
urldate = {2021-08-27},
abstract = {Purpose or Objective
The objective of this study was to develop an automated rectal tumor segmentation algorithm from endoscopy images. The algorithm will be used in a future multimodal treatment outcome prediction model. Currently, treatment outcome prediction models rely on manual segmentations of regions of interest, which are prone to inter-observer variability. To quantify this human error and demonstrate the feasibility of automated endoscopy image segmentation, we compare three deep learning architectures.

Material and Methods
A gastrointestinal physician (G1) segmented 550 endoscopy images of rectal tumors into tumor and non-tumor regions. To quantify the inter-observer variability, a second gastrointestinal physician (G2) contoured 319 of the images independently.

The 550 images and annotations from G1 were divided into 408 training, 82 validation, and 60 testing sets. Three deep learning architectures were trained; a fully convolutional neural network (FCN32), a U-Net, and a SegNet. These architectures have been used for robust medical image segmentation in previous studies.

All models were trained on a CPU supercomputing cluster. Data augmentation in the form of random image transformations, including scaling, rotation, shearing, Gaussian blurring, and noise addition, was used to improve the models' robustness.

The neural networks' output went through a final layer of noise removal and hole filling before evaluation. Finally, the segmentations from G2 and the neural networks' predictions were compared against the ground truth labels from G1.

Results
The FCN32, U-Net, and SegNet had average segmentation times of 0.77, 0.48, and 0.43 seconds per image, respectively. The average segmentation time per image for G1 and G2 were 10 and 8 seconds, respectively.

All the ground truth labels contained tumors, but G2 and the deep learning models did not always find tumors in the images. The scores are based on the agreement of tumor contours with G1’s ground truth and were thus only computed for images in which tumor was found. The automated segmentation algorithms consistently achieved equal or better scores than G2's manual segmentations. G2's low F1/DICE and precision scores indicate poor agreement between the manual contours.

Conclusion
There is a need for robust and accurate segmentation algorithms for rectal tumor segmentation since manual segmentation of these tumors is susceptible to significant inter-observer variability. The deep learning-based segmentation algorithms proposed in this study are more efficient and achieved a higher agreement with our manual ground truth segmentations than a second expert annotator. Future studies will investigate how to train deep learning models on multiple ground truth annotations to prevent learning observer biases.},
howpublished = {ESTRO Annual meeting},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}
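The abstract above scores both the automated and the second-observer contours by F1/Dice agreement with G1's ground truth, computed only for images in which tumor was found. A minimal version of that metric (a sketch of the standard overlap score, not the study's exact scoring pipeline) is:

```python
import numpy as np

def dice_f1(pred, truth):
    """F1/Dice agreement between two binary tumor masks.

    Returns None when the prediction contains no tumor pixels,
    mirroring the abstract's convention of scoring only images in
    which tumor was found.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    if not pred.any():
        return None                       # "no tumor found": not scored
    tp = (pred & truth).sum()             # overlapping tumor pixels
    return 2.0 * tp / (pred.sum() + truth.sum())
```

For binary masks F1 and Dice coincide, which is why the abstract reports them as a single F1/DICE score.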


Thibodeau-Antonacci, Alana; Jafarzadeh, Hossein; Carroll, Liam; Weishaupt, Luca L.

Mitacs Globalink Research Award award

2021.


@award{Thibodeau-Antonacci2021c,
title = {Mitacs Globalink Research Award},
author = {Alana Thibodeau-Antonacci and Hossein Jafarzadeh and Liam Carroll and Luca L. Weishaupt},
url = {https://www.mitacs.ca/en/programs/globalink/globalink-research-award},
year = {2021},
date = {2021-07-01},
urldate = {2021-07-01},
organization = {MITACS},
abstract = {The Mitacs Globalink Research Award (GRA) supports research collaborations between Canada and select partner organizations and eligible countries and regions. It was awarded to Alana Thibodeau-Antonacci, Hossein Jafarzadeh, Liam Carroll and Luca L. Weishaupt.

Under the joint supervision of a home and host professor, successful senior undergraduate students, graduate students, as well as postdoctoral fellows will receive a $6,000 research award to conduct a 12- to 24-week research project in the other country. Awards are offered in partnership with Mitacs’s Canadian academic partners (and, in some cases, with Mitacs’s international partners) and are subject to available funding. },
howpublished = {Mitacs},
keywords = {},
pubstate = {published},
tppubtype = {award}
}


Weishaupt, Luca L.; Thibodeau-Antonacci, Alana; Garant, Aurelie; Singh, Kelita; Miller, Corey; Vuong, Té; Enger, Shirin A.

Inter-Observer Variability and Deep Learning in Rectal Tumor Segmentation from Endoscopy Images Presentation

The COMP Annual Scientific Meeting 2021, 22.06.2021.


@misc{Weishaupt2021c,
title = {Inter-Observer Variability and Deep Learning in Rectal Tumor Segmentation from Endoscopy Images},
author = {Luca L. Weishaupt and Alana Thibodeau-Antonacci and Aurelie Garant and Kelita Singh and Corey Miller and Té Vuong and Shirin A. Enger},
year = {2021},
date = {2021-06-22},
urldate = {2021-06-22},
abstract = {Purpose
To develop an automated rectal tumor segmentation algorithm from endoscopy images.

Material/Methods
A gastrointestinal physician (G1) segmented 2005 endoscopy images into tumor and non-tumor
regions. To quantify the inter-observer variability, a second gastrointestinal physician (G2)
contoured the images independently.

Three deep-learning architectures used for robust medical image segmentation in previous
studies were trained: a fully convolutional neural network (FCN32), a U-Net, and a SegNet.
Since the majority of the images did not contain tumors, two methods were compared for
training. Models were trained using only tumor images (M1) and all images (M2). G1’s images
and annotations were divided into 408 training, 82 validation, and 60 testing sets for M1, 1181
training, 372 validation, and 452 testing sets for M2.
Finally, segmentations from G2 and neural networks' predictions were compared against ground
truth labels from G1, and F1 scores were computed for images where both physicians found
tumors.

Results
The deep-learning segmentation took less than 1 second, while manual segmentation took
approximately 10 seconds per image.
The M1’s models consistently achieved equal or better scores (SegNet F1:0.80±0.08) than G2's
manual segmentations (F1:0.68±0.25). G2's low F1/DICE and precision scores indicate poor
agreement between the manual contours. Models from M2 achieved lower scores than G2 and
M1’s models since they demonstrated a strong bias towards predicting no tumor for all images.

Conclusion
Future studies will investigate training on an equal number of images with/without tumor, using
ground truth contours from multiple experts simultaneously.},
howpublished = {The COMP Annual Scientific Meeting 2021},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}
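
The inter-observer agreement in the abstract above is reported as F1/Dice scores between binary tumor masks. As a minimal illustration (this is not the study's code, and the toy masks below are invented for the example):

```python
import numpy as np

def f1_score(pred, truth):
    """Dice/F1 overlap between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()  # true-positive pixels
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * tp / denom

# Toy 4x4 masks: one observer's contour vs. the reference contour
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(f1_score(pred, truth), 3))  # → 0.857
```

The empty-mask convention matters in practice: as the abstract notes, most endoscopy frames contain no tumor, so how empty predictions are scored strongly affects averages.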

Morcos, Marc; Antaki, Majd; Thibodeau-Antonacci, Alana; Kalinowski, Jonathan; Glickman, Harry; Enger, Shirin A.

RapidBrachyMCTPS: An open-source dose calculation and optimization tool for brachytherapy research Presentation

COMP, 01.06.2021.

@misc{Morcos2021c,
title = {RapidBrachyMCTPS: An open-source dose calculation and optimization tool for brachytherapy research},
author = {Marc Morcos and Majd Antaki and Alana Thibodeau-Antonacci and Jonathan Kalinowski and Harry Glickman and Shirin A. Enger},
year = {2021},
date = {2021-06-01},
howpublished = {COMP},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}

Thibodeau-Antonacci, Alana; Vuong, Té; Bekerat, Hamed; Liang, Liheng; Enger, Shirin A.

Development of a Dynamic Shielding Intensity-Modulated Brachytherapy Applicator for the Treatment of Rectal Cancer award

2021.

@award{Thibodeau-Antonacci2021b,
title = {Development of a Dynamic Shielding Intensity-Modulated Brachytherapy Applicator for the Treatment of Rectal Cancer},
author = {Alana Thibodeau-Antonacci and Té Vuong and Hamed Bekerat and Liheng Liang and Shirin A. Enger},
url = {https://curietherapi.es/},
year = {2021},
date = {2021-05-23},
urldate = {2021-05-23},
organization = {Curietherapies},
abstract = {Oral presentation given online at the annual congress of Curietherapies https://curietherapi.es/},
howpublished = {Annual Congress of Curietherapies},
keywords = {},
pubstate = {published},
tppubtype = {award}
}

Weishaupt, Luca L.; Torres, Jose; Camilleri-Broët, Sophie; Rayes, Roni F.; Spicer, Jonathan D.; Maldonado, Sabrina Côté; Enger, Shirin A.

Deep learning-based tumor segmentation on digital images of histopathology slides for microdosimetry applications Journal Article

In: arXiv:2105.01824 [physics], 2021, (arXiv: 2105.01824).

@article{weishaupt_deep_2021,
title = {Deep learning-based tumor segmentation on digital images of histopathology slides for microdosimetry applications},
author = {Luca L. Weishaupt and Jose Torres and Sophie Camilleri-Broët and Roni F. Rayes and Jonathan D. Spicer and Sabrina Côté Maldonado and Shirin A. Enger},
url = {http://arxiv.org/abs/2105.01824},
year = {2021},
date = {2021-05-01},
urldate = {2021-09-08},
journal = {arXiv:2105.01824 [physics]},
abstract = {Purpose: The goal of this study was (i) to use artificial intelligence to automate the traditionally labor-intensive process of manual segmentation of tumor regions in pathology slides performed by a pathologist and (ii) to validate the use of a well-known and readily available deep learning architecture. Automation will reduce the human error involved in manual delineation, increase efficiency, and result in accurate and reproducible segmentation. This advancement will alleviate the bottleneck in the workflow in clinical and research applications due to a lack of pathologist time. Our application is patient-specific microdosimetry and radiobiological modeling, which builds on the contoured pathology slides. Methods: A U-Net architecture was used to segment tumor regions in pathology core biopsies of lung tissue with adenocarcinoma stained using hematoxylin and eosin. A pathologist manually contoured the tumor regions in 56 images with binary masks for training. Overlapping patch extraction with various patch sizes and image downsampling were investigated individually. Data augmentation and 8-fold cross-validation were used. Results: The U-Net achieved accuracy of 0.91±0.06, specificity of 0.90±0.08, sensitivity of 0.92±0.07, and precision of 0.8±0.1. The F1/DICE score was 0.85±0.07, with a segmentation time of 3.24±0.03 seconds per image, achieving a 370±3 times increased efficiency over manual segmentation. In some cases, the U-Net correctly delineated the tumor's stroma from its epithelial component in regions that were classified as tumor by the pathologist.
Conclusion: The U-Net architecture can segment images with a level of efficiency and accuracy that makes it suitable for tumor segmentation of histopathological images in fields such as radiotherapy dosimetry, specifically in the subfields of microdosimetry.},
note = {arXiv: 2105.01824},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
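
The overlapping patch extraction mentioned in the abstract above (used to feed whole-slide regions to the U-Net) can be sketched as follows; the patch size and stride here are arbitrary illustration values, not the study's settings:

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Overlapping square patches from a 2-D image (grayscale sketch).

    Smaller strides give more overlap and more training samples
    from the same slide, at the cost of redundancy.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches)

# 8x8 toy image, 4x4 patches, stride 2 -> 3x3 grid of 9 patches
img = np.arange(64.0).reshape(8, 8)
print(extract_patches(img, 4, 2).shape)  # → (9, 4, 4)
```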

Thibodeau-Antonacci, Alana; Vuong, Té; Bekerat, Hamed; Childress, Lilian; Enger, Shirin A.

OC-0112 development of a dynamic-shielding intensity modulated endorectal brachytherapy applicator Presentation

Radiotherapy and Oncology, 01.05.2021, ISSN: 0167-8140, 1879-0887.

@misc{Thibodeau-Antonacci2021,
title = {OC-0112 development of a dynamic-shielding intensity modulated endorectal brachytherapy applicator},
author = {Alana Thibodeau-Antonacci and Té Vuong and Hamed Bekerat and Lilian Childress and Shirin A. Enger},
url = {https://www.thegreenjournal.com/article/S0167-8140(21)06316-7/fulltext},
doi = {10.1016/S0167-8140(21)06316-7},
issn = {0167-8140, 1879-0887},
year = {2021},
date = {2021-05-01},
abstract = {www.thegreenjournal.com},
howpublished = {Radiotherapy and Oncology},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}

Zou, Yujing; Lecavalier-Barsoum, Magali; Enger, Shirin A.

Treatment outcome prediction for gynecological cancer patients with a machine learning model using pre/post diagnostic image modalities and digital histopathology images Presentation

CRUK RadNet Manchester AI for Optimising Radiotherapy Outcomes Workshop, 10.02.2021.

@misc{Zou2021,
title = {Treatment outcome prediction for gynecological cancer patients with a machine learning model using pre/post diagnostic image modalities and digital histopathology images},
author = {Yujing Zou and Magali Lecavalier-Barsoum and Shirin A. Enger },
year = {2021},
date = {2021-02-10},
urldate = {2021-02-10},
abstract = {Oral Presentation (1 min fire-up pitch)},
howpublished = {CRUK RadNet Manchester AI for Optimising Radiotherapy Outcomes Workshop},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}

Weishaupt, Luca L.

Fire-Up - Radiation Treatment Outcome Prediction Presentation

Fire-Up Presentation, 09.02.2021.

@misc{luca_fireup,
title = {Fire-Up - Radiation Treatment Outcome Prediction},
author = {Luca L. Weishaupt},
year = {2021},
date = {2021-02-09},
urldate = {2021-02-09},
howpublished = {Fire-Up Presentation},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}

Weishaupt, Luca L.; Sayed, Hisham Kamal; Mao, Ximeng; Choo, Chunhee; Stish, Bradley; Enger, Shirin A.; Deufel, Christopher

Approaching automated applicator digitization from a new angle: using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy Journal Article

In: 2021 ABS Annual Meeting, 2021, (Type: Journal Article).

@article{weishaupt_approaching_2021-1,
title = {Approaching automated applicator digitization from a new angle: using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy},
author = {Luca L. Weishaupt and Hisham Kamal Sayed and Ximeng Mao and Chunhee Choo and Bradley Stish and Shirin A. Enger and Christopher Deufel},
year = {2021},
date = {2021-01-01},
journal = {2021 ABS Annual Meeting},
note = {Type: Journal Article},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Deufel, Christopher; Weishaupt, Luca L.; Sayed, Hisham Kamal; Choo, Chunhee; Stish, Bradley

Deep learning for automated applicator reconstruction in high-dose-rate prostate brachytherapy Journal Article

In: World Congress of Brachytherapy 2021, 2021, (Type: Journal Article).

@article{deufel_deep_2021,
title = {Deep learning for automated applicator reconstruction in high-dose-rate prostate brachytherapy},
author = {Christopher Deufel and Luca L. Weishaupt and Hisham Kamal Sayed and Chunhee Choo and Bradley Stish},
url = {https://www.estro.org/Congresses/WCB-2021/811/poster-physics/3229/deeplearningforautomatedapplicatorreconstructionin},
year = {2021},
date = {2021-01-01},
journal = {World Congress of Brachytherapy 2021},
note = {Type: Journal Article},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

2020

Mao, Ximeng; Pineau, Joelle; Keyes, Roy; Enger, Shirin A.

RapidBrachyDL: Rapid Radiation Dose Calculations in Brachytherapy Via Deep Learning Journal Article

In: International Journal of Radiation Oncology, Biology, Physics, vol. 108, no. 3, pp. 802–812, 2020, ISSN: 1879-355X.

@article{mao_rapidbrachydl_2020,
title = {RapidBrachyDL: Rapid Radiation Dose Calculations in Brachytherapy Via Deep Learning},
author = {Ximeng Mao and Joelle Pineau and Roy Keyes and Shirin A. Enger},
doi = {10.1016/j.ijrobp.2020.04.045},
issn = {1879-355X},
year = {2020},
date = {2020-11-01},
journal = {International Journal of Radiation Oncology, Biology, Physics},
volume = {108},
number = {3},
pages = {802--812},
abstract = {PURPOSE: Detailed and accurate absorbed dose calculations from radiation interactions with the human body can be obtained with the Monte Carlo (MC) method. However, the MC method can be slow for use in the time-sensitive clinical workflow. The aim of this study was to provide a solution to the accuracy-time trade-off for 192Ir-based high-dose-rate brachytherapy by using deep learning.
METHODS AND MATERIALS: RapidBrachyDL, a 3-dimensional deep convolutional neural network (CNN) model, is proposed to predict dose distributions calculated with the MC method given a patient's computed tomography images, contours of clinical target volume (CTV) and organs at risk, and treatment plan. Sixty-one patients with prostate cancer and 10 patients with cervical cancer were included in this study, with data from 47 patients with prostate cancer being used to train the model.
RESULTS: Compared with ground truth MC simulations, the predicted dose distributions by RapidBrachyDL showed a consistent shape in the dose-volume histograms (DVHs); comparable DVH dosimetric indices including 0.73% difference for prostate CTV D90, 1.1% for rectum D2cc, 1.45% for urethra D0.1cc, and 1.05% for bladder D2cc; and substantially smaller prediction time, acceleration by a factor of 300. RapidBrachyDL also demonstrated good generalization to cervical data with 1.73%, 2.46%, 1.68%, and 1.74% difference for CTV D90, rectum D2cc, sigmoid D2cc, and bladder D2cc, respectively, which was unseen during the training.
CONCLUSION: Deep CNN-based dose estimation is a promising method for patient-specific brachytherapy dosimetry. Desired radiation quantities can be obtained with accuracies arbitrarily close to those of the source MC algorithm, but with much faster computation times. The idea behind deep CNN-based dose estimation can be safely extended to other radiation sources and tumor sites by following a similar training process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
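
The DVH indices compared in the abstract above (e.g., CTV D90, rectum D2cc) are read off the sorted dose distribution within a structure. A minimal sketch, assuming a regular dose grid with a known voxel volume (function name and toy numbers are invented for illustration, not the RapidBrachyDL implementation):

```python
import numpy as np

def dvh_metrics(dose, mask, voxel_volume_cc):
    """D90 and D2cc for one structure from a dose grid and binary mask."""
    d = np.sort(dose[mask.astype(bool)])[::-1]  # structure doses, descending
    # D90: minimum dose received by the hottest 90% of the structure volume
    n90 = int(np.ceil(0.9 * d.size))
    d90 = float(d[n90 - 1])
    # D2cc: minimum dose received by the hottest 2 cm^3 of the structure
    n2cc = max(1, int(round(2.0 / voxel_volume_cc)))
    d2cc = float(d[min(n2cc, d.size) - 1])
    return d90, d2cc

# Toy grid: doses 1..100 Gy over 100 voxels of 0.1 cc each (2 cc = 20 voxels)
dose = np.arange(1.0, 101.0).reshape(10, 10)
mask = np.ones((10, 10))
print(dvh_metrics(dose, mask, 0.1))  # → (11.0, 81.0)
```

Comparing such indices between Monte Carlo and predicted dose grids is what yields the percent differences quoted in the abstract.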

Zou, Yujing

Graduate Excellence Fellowship award

2020.

@award{nokey,
title = {Graduate Excellence Fellowship},
author = {Yujing Zou},
year = {2020},
date = {2020-09-01},
urldate = {2020-09-01},
organization = {McGill Medical Physics Unit},
keywords = {},
pubstate = {published},
tppubtype = {award}
}
