Deep Learning Model Helps Target Prostate Cancer Treatments to Individual Patients
Approximately 250,000 men in the U.S. receive a prostate cancer diagnosis each year. While overall morbidity and mortality rates for this type of cancer are low, a subset of cases requires aggressive treatment.
A machine-learning model developed by researchers at the University of Washington evaluates microscopy images to provide 3D segmentation of the glandular tissue structures used for prostate cancer risk assessment. The deep learning-based gland segmentation model could help guide critical treatment decisions for patients with prostate cancer and accelerate future research on how to optimize treatment for individual patients.
To train a model for direct 3D segmentation of prostate glands, professor Jonathan T. C. Liu and his team used nnU-Net, an open-source 3D segmentation framework designed to handle diverse biomedical imaging datasets.
The researchers trained nnU-Net directly on 3D prostate gland segmentation data obtained from a multistep, hybrid deep learning and computer vision-based pipeline previously developed by the team. The researchers had already generated hundreds of 3D segmentation masks in an annotation-free manner using this pipeline. They used these segmentation masks to train an end-to-end deep learning model for single-step gland segmentation.
The inputs to the model were 3D pathology datasets of prostate biopsies stained with an inexpensive fluorescent analog of hematoxylin and eosin. The researchers acquired the 3D datasets through open-top light-sheet microscopes that they developed.
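The team's exact data-handling setup is not described in this article, but for readers curious about the mechanics, the sketch below shows how a 3D dataset of this kind is typically laid out for nnU-Net training, following nnU-Net v2's documented conventions. The dataset name, file format, case count, and label scheme are illustrative assumptions rather than the team's actual configuration.

```python
import json
from pathlib import Path

# Hypothetical raw-dataset location, following nnU-Net v2's expected layout:
# DatasetXXX_Name/{imagesTr,labelsTr}/ plus a dataset.json descriptor.
dataset_dir = Path("nnUNet_raw/Dataset101_ProstateGlands")  # name is illustrative
(dataset_dir / "imagesTr").mkdir(parents=True, exist_ok=True)
(dataset_dir / "labelsTr").mkdir(parents=True, exist_ok=True)

# Each training case pairs a 3D image volume (e.g. case_0000_0000.nii.gz) with a
# label volume (case_0000.nii.gz) whose integer values encode the tissue classes.
dataset_json = {
    "channel_names": {"0": "fluorescent_HE_analog"},  # single imaging channel
    "labels": {"background": 0, "stroma": 1, "epithelium": 2, "lumen": 3},
    "numTraining": 100,  # placeholder; set to the actual number of annotated volumes
    "file_ending": ".nii.gz",  # assumes volumes are exported as NIfTI
}
with open(dataset_dir / "dataset.json", "w") as f:
    json.dump(dataset_json, f, indent=2)

# With the data in place, training runs through nnU-Net's standard commands, e.g.:
#   nnUNetv2_plan_and_preprocess -d 101 --verify_dataset_integrity
#   nnUNetv2_train 101 3d_fullres 0
```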
The outputs from the model were 3D semantic segmentation masks of the gland epithelium, the gland lumen, and the surrounding stromal compartments within the tissue. The model generated these segmentations accurately and efficiently.
Microscopic glands of the prostate are segmented (colored) with the new deep learning pipeline. The image shows a prostate cancer tissue volume, measuring roughly 1 x 1 x 2 mm in size. Orange regions represent the lumen (interior) of the glands, blue regions represent the epithelium (edges) of the glands, and gray regions are the surrounding stroma. Courtesy of Rui Wang, University of Washington.
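Once a 3D semantic mask is available, volumetric summaries of the three tissue compartments can be computed directly. The following is a minimal sketch, assuming the predicted mask is stored as a NIfTI volume using the illustrative label values from the previous sketch; the file path and voxel size are placeholders.

```python
import numpy as np
import nibabel as nib  # assumes masks are stored as NIfTI volumes

# Hypothetical path to one predicted segmentation mask; label values follow the
# illustrative scheme above (0 background, 1 stroma, 2 epithelium, 3 lumen).
mask = nib.load("predictions/case_0000.nii.gz").get_fdata().astype(np.int64)

voxel_volume_um3 = 1.0  # placeholder; use the microscope's real voxel size
counts = np.bincount(mask.ravel(), minlength=4)

tissue_voxels = counts[1:].sum()  # ignore background
for name, label in [("stroma", 1), ("epithelium", 2), ("lumen", 3)]:
    fraction = counts[label] / tissue_voxels
    volume_um3 = counts[label] * voxel_volume_um3
    print(f"{name}: {fraction:.1%} of tissue, {volume_um3:.0f} um^3")
```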
The 3D gland segmentations produced by nnU-Net provided valuable insights into tissue composition. “Our results indicate nnU-Net’s remarkable accuracy for 3D segmentation of prostate glands even with limited training data, offering a simpler and faster alternative to our previous 3D gland-segmentation methods,” Liu said. “Notably, it maintains good performance with lower-resolution inputs, potentially reducing resource requirements.”
Currently, treatment approaches for prostate cancer are determined primarily through the Gleason score, which evaluates prostate gland appearance based on histology slides. The interpretation of the slides can vary, leading to both undertreatment and overtreatment. Also, only a small fraction of the biopsy is viewed in 2D, making crucial details easier to miss. Interpretations of complex 3D glandular structures can be ambiguous when viewed on 2D tissue sections.
Tissue destruction, in which valuable tissue material is no longer available for downstream assays, is a further disadvantage of conventional histology. Nondestructive 3D pathology can enable complete imaging and analysis of biopsy specimens, providing volumetric visualization and quantification of diagnostically significant microstructures while maintaining entire tissue specimens for downstream assays.
By enabling accurate characterization of glandular structures, the deep learning-based, 3D segmentation model could lead to more effective treatment approaches, ultimately improving patient outcomes. The model underscores the potential of computational approaches to improve medical diagnostics and target interventions to individual patient needs.
nnU-Net is a powerful tool for accurate, efficient 3D gland segmentation within prostate biopsies. With the 3D gland segmentations generated by this tool, the researchers may be able to extract a wide range of quantitative 3D glandular features to train machine classifiers that enhance prostate cancer risk stratification and treatment decisions. The model's speed and accuracy will simplify and accelerate future research.
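The specific features and classifiers the team might use are not detailed here, but the general idea can be sketched: compute per-biopsy features from the 3D masks and fit an off-the-shelf classifier against outcome labels. The features, outcome encoding, and choice of a random forest below are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def glandular_features(mask: np.ndarray) -> np.ndarray:
    """Toy per-biopsy feature vector from a 3D semantic mask (labels as above)."""
    lumen = mask == 3
    epithelium = mask == 2
    tissue = mask > 0

    # Connected lumen components serve as a crude proxy for gland count/fragmentation.
    _, n_lumina = ndimage.label(lumen)

    lumen_frac = lumen.sum() / max(tissue.sum(), 1)
    epi_frac = epithelium.sum() / max(tissue.sum(), 1)
    epi_to_lumen = epithelium.sum() / max(lumen.sum(), 1)
    return np.array([n_lumina, lumen_frac, epi_frac, epi_to_lumen])

def fit_risk_classifier(masks: list[np.ndarray], outcomes: list[int]) -> RandomForestClassifier:
    """Fit a classifier on per-biopsy features; outcomes might encode, e.g., indolent vs. aggressive disease."""
    X = np.stack([glandular_features(m) for m in masks])
    y = np.asarray(outcomes)
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```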
The research was published in the Journal of Biomedical Optics (www.doi.org/10.1117/1.JBO.29.3.036001).