
Luca Weishaupt
B.Sc. Student
Physics & Computer Science
Artificial Intelligence Group
+1 (914) 486-4460
Bio
Luca was born and raised in Germany and completed the DIAB (the German Abitur combined with a US high school diploma) at the German International School New York after moving to the United States. He is in the final year of his undergraduate degree at McGill University, majoring in Physics with a minor in Computer Science. He joined the Enger Lab in 2018, in the first year of his undergraduate studies, and has been working on deep learning-based medical image analysis and treatment planning optimization.
Current Projects
Multimodal Treatment Outcome Prediction

Tumor segmentation in endoscopy images

McMedHacks – Medical Image Analysis and Deep Learning in Python

2022
Weishaupt, Luca L.; Sayed, Hisham Kamal; Mao, Ximeng; Choo, Richard; Stish, Bradley J; Enger, Shirin A; Deufel, Christopher
Approaching automated applicator digitization from a new angle: Using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy Journal Article
In: Brachytherapy, 2022.
@article{weishaupt2022approaching,
title = {Approaching automated applicator digitization from a new angle: Using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy},
author = {Luca L. Weishaupt and Hisham Kamal Sayed and Ximeng Mao and Richard Choo and Bradley J Stish and Shirin A Enger and Christopher Deufel},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Brachytherapy},
publisher = {Elsevier},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Weishaupt, Luca L.; Vuong, Te; Thibodeau-Antonacci, Alana; Garant, A; Singh, K; Miller, C; Martin, A; Schmitt-Ulms, F; Enger, Shirin A.
PO-1325 Automated rectal tumor segmentation with inter-observer variability-based uncertainty estimates Journal Article
In: Radiotherapy and Oncology, vol. 170, pp. S1120–S1121, 2022.
@article{weishaupt2022po,
title = {PO-1325 Automated rectal tumor segmentation with inter-observer variability-based uncertainty estimates},
author = {Luca L. Weishaupt and Te Vuong and Alana Thibodeau-Antonacci and A Garant and K Singh and C Miller and A Martin and F Schmitt-Ulms and Shirin A. Enger},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Radiotherapy and Oncology},
volume = {170},
pages = {S1120--S1121},
publisher = {Elsevier},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2021
Weishaupt, Luca L.
T I Gurman Prize in Physics award
2021.
@award{Weishaupt2021,
title = {T I Gurman Prize in Physics},
author = {Luca L. Weishaupt},
url = {http://scholarships.studentscholarships.org/t_i_gurman_prize_2236.php},
year = {2021},
date = {2021-09-01},
urldate = {2021-09-01},
organization = {McGill University},
abstract = {Established in 1997 by friends and family of T.I. Gurman in honour of his 95th birthday. Awarded by the Faculty of Science Scholarships Committee on the recommendation of the Department of Physics to a student with high academic standing entering the final year in a Major program in Physics.},
howpublished = {McGill University},
keywords = {},
pubstate = {published},
tppubtype = {award}
}
Weishaupt, Luca L.; Thibodeau-Antonacci, Alana; Garant, Aurelie; Singh, Kelita; Miller, Corey; Vuong, Té; Enger, Shirin A.
Deep learning based tumor segmentation of endoscopy images for rectal cancer patients Presentation
ESTRO Annual meeting, 27.08.2021.
@misc{Weishaupt2021b,
title = {Deep learning based tumor segmentation of endoscopy images for rectal cancer patients},
author = {Luca L. Weishaupt and Alana Thibodeau-Antonacci and Aurelie Garant and Kelita Singh and Corey Miller and Té Vuong and Shirin A. Enger},
url = {https://www.estro.org/Congresses/ESTRO-2021/610/posterdiscussion34-deep-learningforauto-contouring/3710/deeplearning-basedtumorsegmentationofendoscopyimag},
year = {2021},
date = {2021-08-27},
urldate = {2021-08-27},
abstract = {Purpose or Objective
The objective of this study was to develop an automated rectal tumor segmentation algorithm from endoscopy images. The algorithm will be used in a future multimodal treatment outcome prediction model. Currently, treatment outcome prediction models rely on manual segmentations of regions of interest, which are prone to inter-observer variability. To quantify this human error and demonstrate the feasibility of automated endoscopy image segmentation, we compare three deep learning architectures.
Material and Methods
A gastrointestinal physician (G1) segmented 550 endoscopy images of rectal tumors into tumor and non-tumor regions. To quantify the inter-observer variability, a second gastrointestinal physician (G2) contoured 319 of the images independently.
The 550 images and annotations from G1 were divided into training (408 images), validation (82 images), and testing (60 images) sets. Three deep learning architectures were trained: a fully convolutional neural network (FCN32), a U-Net, and a SegNet. These architectures have been used for robust medical image segmentation in previous studies.
All models were trained on a CPU supercomputing cluster. Data augmentation in the form of random image transformations, including scaling, rotation, shearing, Gaussian blurring, and noise addition, was used to improve the models' robustness.
The neural networks' output went through a final layer of noise removal and hole filling before evaluation. Finally, the segmentations from G2 and the neural networks' predictions were compared against the ground truth labels from G1.
Results
The FCN32, U-Net, and SegNet had average segmentation times of 0.77, 0.48, and 0.43 seconds per image, respectively. The average segmentation times per image for G1 and G2 were 10 and 8 seconds, respectively.
All the ground truth labels contained tumors, but G2 and the deep learning models did not always find tumors in the images. The scores are based on the agreement of tumor contours with G1’s ground truth and were thus only computed for images in which a tumor was found. The automated segmentation algorithms consistently achieved equal or better scores than G2's manual segmentations. G2's low F1/DICE and precision scores indicate poor agreement between the manual contours.
Conclusion
There is a need for robust and accurate segmentation algorithms for rectal tumor segmentation since manual segmentation of these tumors is susceptible to significant inter-observer variability. The deep learning-based segmentation algorithms proposed in this study are more efficient and achieved a higher agreement with our manual ground truth segmentations than a second expert annotator. Future studies will investigate how to train deep learning models on multiple ground truth annotations to prevent learning observer biases.},
howpublished = {ESTRO Annual meeting},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}
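The abstract above describes a post-processing layer of noise removal and hole filling applied to the networks' output before evaluation. The following is a minimal sketch of such a step, not the study's actual code: the function name clean_mask, the 50% threshold, and the minimum component size are illustrative assumptions, using scipy.ndimage.

import numpy as np
from scipy import ndimage

def clean_mask(pred, threshold=0.5, min_size=64):
    """Binarize a probability map, drop small speckles, fill enclosed holes."""
    mask = pred >= threshold                      # binarize the network output
    labels, n = ndimage.label(mask)               # label connected components
    if n:
        # Keep only components at least min_size pixels large (noise removal)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = np.isin(labels, np.flatnonzero(sizes >= min_size) + 1)
    return ndimage.binary_fill_holes(mask)        # fill holes inside tumor regions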
Thibodeau-Antonacci, Alana; Jafarzadeh, Hossein; Carroll, Liam; Weishaupt, Luca L.
Mitacs Globalink Research Award award
2021.
@award{Thibodeau-Antonacci2021c,
title = {Mitacs Globalink Research Award},
author = {Alana Thibodeau-Antonacci and Hossein Jafarzadeh and Liam Carroll and Luca L. Weishaupt},
url = {https://www.mitacs.ca/en/programs/globalink/globalink-research-award},
year = {2021},
date = {2021-07-01},
urldate = {2021-07-01},
organization = {MITACS},
abstract = {The Mitacs Globalink Research Award (GRA) supports research collaborations between Canada and select partner organizations and eligible countries and regions. It was awarded to Alana Thibodeau-Antonacci, Hossein Jafarzadeh, Liam Carroll and Luca L. Weishaupt.
Under the joint supervision of a home and host professor, successful senior undergraduate students, graduate students, as well as postdoctoral fellows will receive a $6,000 research award to conduct a 12- to 24-week research project in the other country. Awards are offered in partnership with Mitacs’s Canadian academic partners (and, in some cases, with Mitacs’s international partners) and are subject to available funding. },
howpublished = {Mitacs},
keywords = {},
pubstate = {published},
tppubtype = {award}
}
Weishaupt, Luca L.; Thibodeau-Antonacci, Alana; Garant, Aurelie; Singh, Kelita; Miller, Corey; Vuong, Té; Enger, Shirin A.
Inter-Observer Variability and Deep Learning in Rectal Tumor Segmentation from Endoscopy Images Presentation
The COMP Annual Scientific Meeting 2021, 22.06.2021.
@misc{Weishaupt2021c,
title = {Inter-Observer Variability and Deep Learning in Rectal Tumor Segmentation from Endoscopy Images},
author = {Luca L. Weishaupt and Alana Thibodeau-Antonacci and Aurelie Garant and Kelita Singh and Corey Miller and Té Vuong and Shirin A. Enger},
year = {2021},
date = {2021-06-22},
urldate = {2021-06-22},
abstract = {Purpose
To develop an automated rectal tumor segmentation algorithm from endoscopy images.
Material/Methods
A gastrointestinal physician (G1) segmented 2005 endoscopy images into tumor and non-tumor regions. To quantify the inter-observer variability, a second gastrointestinal physician (G2) contoured the images independently.
Three deep-learning architectures used for robust medical image segmentation in previous studies were trained: a fully convolutional neural network (FCN32), a U-Net, and a SegNet.
Since the majority of the images did not contain tumors, two training methods were compared: training on tumor images only (M1) and training on all images (M2). G1’s images and annotations were divided into 408 training, 82 validation, and 60 testing images for M1, and 1181 training, 372 validation, and 452 testing images for M2.
Finally, segmentations from G2 and the neural networks' predictions were compared against ground truth labels from G1, and F1 scores were computed for images where both physicians found tumors.
Results
The deep-learning segmentation took less than 1 second per image, while manual segmentation took approximately 10 seconds per image.
M1’s models consistently achieved equal or better scores (SegNet F1: 0.80±0.08) than G2's manual segmentations (F1: 0.68±0.25). G2's low F1/DICE and precision scores indicate poor agreement between the manual contours. Models from M2 achieved lower scores than G2 and M1’s models since they demonstrated a strong bias towards predicting no tumor for all images.
Conclusion
Future studies will investigate training on an equal number of images with and without tumors, using ground truth contours from multiple experts simultaneously.},
howpublished = {The COMP Annual Scientific Meeting 2021},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}
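The F1/DICE and precision scores quoted in this abstract are standard overlap measures between binary masks (for binary masks, F1 and Dice coincide). A minimal illustrative computation follows; overlap_scores and its argument names are assumptions, not the study's code.

import numpy as np

def overlap_scores(pred, truth):
    """F1/Dice, precision, and sensitivity between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # pixels both call tumor
    fp = np.sum(pred & ~truth)      # predicted tumor, ground truth non-tumor
    fn = np.sum(~pred & truth)      # tumor pixels the prediction missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0
    return {"f1_dice": dice, "precision": precision, "sensitivity": sensitivity}

As in the study, such scores would only be computed for images in which both observers found a tumor.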
Weishaupt, Luca L.; Torres, Jose; Camilleri-Broët, Sophie; Rayes, Roni F.; Spicer, Jonathan D.; Maldonado, Sabrina Côté; Enger, Shirin A.
Deep learning-based tumor segmentation on digital images of histopathology slides for microdosimetry applications Journal Article
In: arXiv:2105.01824 [physics], 2021, (arXiv: 2105.01824).
@article{weishaupt_deep_2021,
title = {Deep learning-based tumor segmentation on digital images of histopathology slides for microdosimetry applications},
author = {Luca L. Weishaupt and Jose Torres and Sophie Camilleri-Broët and Roni F. Rayes and Jonathan D. Spicer and Sabrina Côté Maldonado and Shirin A. Enger},
url = {http://arxiv.org/abs/2105.01824},
year = {2021},
date = {2021-05-01},
urldate = {2021-09-08},
journal = {arXiv:2105.01824 [physics]},
abstract = {\textbf{Purpose:} The goal of this study was (i) to use artificial intelligence to automate the traditionally labor-intensive process of manual segmentation of tumor regions in pathology slides performed by a pathologist and (ii) to validate the use of a well-known and readily available deep learning architecture. Automation will reduce the human error involved in manual delineation, increase efficiency, and result in accurate and reproducible segmentation. This advancement will alleviate the bottleneck in the workflow in clinical and research applications due to a lack of pathologist time. Our application is patient-specific microdosimetry and radiobiological modeling, which builds on the contoured pathology slides. \textbf{Methods:} A U-Net architecture was used to segment tumor regions in pathology core biopsies of lung tissue with adenocarcinoma stained using hematoxylin and eosin. A pathologist manually contoured the tumor regions in 56 images with binary masks for training. Overlapping patch extraction with various patch sizes and image downsampling were investigated individually. Data augmentation and 8-fold cross-validation were used. \textbf{Results:} The U-Net achieved accuracy of 0.91±0.06, specificity of 0.90±0.08, sensitivity of 0.92±0.07, and precision of 0.8±0.1. The F1/DICE score was 0.85±0.07, with a segmentation time of 3.24±0.03 seconds per image, achieving a 370±3 times increased efficiency over manual segmentation. In some cases, the U-Net correctly delineated the tumor's stroma from its epithelial component in regions that were classified as tumor by the pathologist. \textbf{Conclusion:} The U-Net architecture can segment images with a level of efficiency and accuracy that makes it suitable for tumor segmentation of histopathological images in fields such as radiotherapy dosimetry, specifically in the subfields of microdosimetry.},
note = {arXiv: 2105.01824},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
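The abstract above mentions overlapping patch extraction with various patch sizes. A minimal sketch of such extraction is shown below; the patch and stride values are placeholders, and this version deliberately does not pad the right/bottom borders.

import numpy as np

def extract_patches(image, patch=256, stride=128):
    """Yield overlapping square patches in row-major order."""
    h, w = image.shape[:2]
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            yield image[y:y + patch, x:x + patch]

Choosing a stride smaller than the patch size lets each pixel appear in several contexts, and predictions over the overlapping patches can be averaged when the full-size mask is reassembled.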
Weishaupt, Luca L.
Fire-Up - Radiation Treatment Outcome Prediction Presentation
Fire-Up Presentation, 09.02.2021.
@misc{luca_fireup,
title = {Fire-Up - Radiation Treatment Outcome Prediction},
author = {Luca L. Weishaupt},
year = {2021},
date = {2021-02-09},
urldate = {2021-02-09},
howpublished = {Fire-Up Presentation},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}
Deufel, Christopher; Weishaupt, Luca L.; Sayed, Hisham Kamal; Choo, Chunhee; Stish, Bradley
Deep learning for automated applicator reconstruction in high-dose-rate prostate brachytherapy Journal Article
In: World Congress of Brachytherapy 2021, 2021, (Type: Journal Article).
@article{deufel_deep_2021,
title = {Deep learning for automated applicator reconstruction in high-dose-rate prostate brachytherapy},
author = {Christopher Deufel and Luca L. Weishaupt and Hisham Kamal Sayed and Chunhee Choo and Bradley Stish},
url = {https://www.estro.org/Congresses/WCB-2021/811/poster-physics/3229/deeplearningforautomatedapplicatorreconstructionin},
year = {2021},
date = {2021-01-01},
journal = {World Congress of Brachytherapy 2021},
note = {Type: Journal Article},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Weishaupt, Luca L.; Sayed, Hisham Kamal; Mao, Ximeng; Choo, Chunhee; Stish, Bradley; Enger, Shirin A.; Deufel, Christopher
Approaching automated applicator digitization from a new angle: using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy Journal Article
In: 2021 ABS Annual Meeting, 2021, (Type: Journal Article).
@article{weishaupt_approaching_2021-1,
title = {Approaching automated applicator digitization from a new angle: using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy},
author = {Luca L. Weishaupt and Hisham Kamal Sayed and Ximeng Mao and Chunhee Choo and Bradley Stish and Shirin A. Enger and Christopher Deufel},
year = {2021},
date = {2021-01-01},
journal = {2021 ABS Annual Meeting},
note = {Type: Journal Article},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2020
Weishaupt, Luca L.; Sayed, Hisham Kamal; Mao, Ximeng; Choo, Chunhee; Stish, Bradley; Enger, Shirin A.; Deufel, Christopher
Approaching automated applicator digitization from a new angle - Using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy Presentation
ESTRO 2021, 12.06.2020, (Type: Journal Article).
@misc{weishaupt_approaching_2021,
title = {Approaching automated applicator digitization from a new angle - Using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy},
author = {Luca L. Weishaupt and Hisham Kamal Sayed and Ximeng Mao and Chunhee Choo and Bradley Stish and Shirin A. Enger and Christopher Deufel},
url = {https://www.postersessiononline.eu/173580348_eu/congresos/WCB2021/aula/preposter_542171716_3.png},
year = {2020},
date = {2020-06-12},
urldate = {2021-01-01},
journal = {ESTRO 2021},
note = {Type: Journal Article},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}
Weishaupt, Luca L.
Math And Physics Class Of 1965 Prize award
2020.
@award{Weishaupt2020,
title = {Math And Physics Class Of 1965 Prize},
author = {Luca L. Weishaupt},
year = {2020},
date = {2020-06-01},
urldate = {2020-06-01},
organization = {McGill University},
abstract = {Luca received the award for his academic excellence and research activities in Medical Physics. The prize was established in 2016 by the Math and Physics Class of 1965 and is awarded by the McGill Faculty of Science.
Luca is an international student from Germany who has been conducting research activities in my lab for 3 years, since his first month at McGill University. He is an active researcher as an undergraduate student. },
keywords = {},
pubstate = {published},
tppubtype = {award}
}
Weishaupt, Luca L.; Torres, Jose; Camilleri-Broët, Sophie; Maldonado, Sabrina Côté; Enger, Shirin A.
Classification and Segmentation of Tumor Cells and Nuclei On Biopsy Slides Using Deep Learning for Microdosimetry Applications Journal Article
In: 2020 Joint AAPM | COMP Virtual Meeting, 2020, (Type: Journal Article).
@article{weishaupt_classification_2020,
title = {Classification and Segmentation of Tumor Cells and Nuclei On Biopsy Slides Using Deep Learning for Microdosimetry Applications},
author = {Luca L. Weishaupt and Jose Torres and Sophie Camilleri-Broët and Sabrina Côté Maldonado and Shirin A. Enger},
url = {https://w3.aapm.org/meetings/2020AM/programInfo/programAbs.php?sid=8797&aid=51830},
year = {2020},
date = {2020-01-01},
journal = {2020 Joint AAPM | COMP Virtual Meeting},
abstract = {Purpose:
To automate the classification and segmentation of tumor cells in images of biopsy slides using deep learning to minimize manual labor, the time required, and human error. The segmented tumor cells and nuclei will be used for patient-specific microdosimetry studies.
Methods:
A pathologist manually contoured, on a pixel-by-pixel basis, images of 57 pathology core biopsies in TIFF format, each containing 3750x3750 pixels at a resolution of 248 nm per pixel. The contoured pixels were used as the ground truth for a three-dimensional deep convolutional neural network model based on a U-Net architecture, implemented using Keras and TensorFlow. Forty-eight of the core images were used to train the model with data augmentation, using binary cross-entropy as the loss function, on a 120 GB GPU cluster for 12 hours. The remaining nine core images were used for testing. Testing was done by applying a 50% confidence threshold to the model’s prediction and comparing the results with the manual contours.
Results:
The average time for the pathologist to contour a core image was 20 minutes. The model was able to segment three images per minute with an accuracy of 90.9%, specificity of 91.2%, sensitivity of 90.0%, precision of 73.0%, and a dice coefficient of 80.6%. The model’s predictions were visually similar to the manual segmentation. The model’s predictions were more confident about the center of the tumor regions than the edges.
Conclusion:
The proposed model can closely and consistently replicate tumor cell contours made by a pathologist 60 times faster than manual contouring. It can autonomously and efficiently generate large amounts of contoured pathology data that can be used for further research, such as microdosimetry performed on patient-specific tumor nuclei and cells. Future studies will investigate the accuracy and consistency of the manually contoured data, which was used as the ground truth.},
note = {Type: Journal Article},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
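The abstract above describes a U-Net built with Keras and TensorFlow and trained with binary cross-entropy. Below is a compact 2D sketch of such a model; the depth, filter counts, and input size are illustrative assumptions, not the study's configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_unet(input_shape=(512, 512, 3), base_filters=16):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    # Encoder: two 3x3 convolutions per level, then 2x2 max pooling
    for i in range(3):
        f = base_filters * 2 ** i
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    # Bottleneck
    x = layers.Conv2D(base_filters * 8, 3, padding="same", activation="relu")(x)
    # Decoder: upsample, concatenate the matching skip connection, convolve
    for i in reversed(range(3)):
        f = base_filters * 2 ** i
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[i]])
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    # Sigmoid output gives a per-pixel tumor probability; the study applied
    # a 50% confidence threshold to such an output before evaluation.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model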