Using AI, pathology laboratories can perform histology-based diagnostics that are more efficient, repeatable, cost-effective, and widely accessible.
Nir Pillar, Bijie Bai, and Aydogan Ozcan, University of California, Los Angeles (UCLA)
A virtually stained kidney tissue image showing compact nests and sheets of cells with clear cytoplasm and distinct cell membranes. Courtesy of the Ozcan Lab/UCLA.
Histology is the microscopic examination of stained and sectioned cells and tissue. Traditional histological examination requires a sequence of essential sample processing steps, including fixation, embedding, sectioning, and staining. In medicine, histological studies are employed to diagnose disease, prognosticate its development, and forecast treatment response. Because histological sections are typically 2 to 10 µm thick, they are essentially transparent under brightfield microscopy unless they are stained; staining highlights important features of the tissue and enhances its contrast.
In recent years, a new paradigm has emerged that applies computational methods to streamline this involved, and often costly, process. Digitally staining label-free tissue samples using AI significantly reduces labor, hazardous staining reagents and chemicals, and sample evaluation time. Pathologists can potentially not only achieve swifter and more efficient diagnoses but also extract more consistent information from each tissue slide.
The most frequently used histochemical stain, hematoxylin and eosin (H&E), has been in use for more than a century and is essential for recognizing various tissue types and highlighting the morphological changes that form the basis of pathology. In the modern age of histology, significant improvements have been made in histological stains and techniques, and additional histochemical stains that label specific tissue components have been introduced. For example, Masson trichrome stain is used for connective tissue, periodic acid-Schiff stain is used for carbohydrates, and immunohistochemistry (IHC)-based stains rely on unique antibodies that each target a specific protein.
Regardless of the type of stain used, histological staining is a time-consuming process that must be performed within a designated lab infrastructure by trained technicians due to the toxicity of most chemical staining reagents. Furthermore, manual staining processes and the use of different chemical reagents lead to high variability in sample preparation, which frequently causes diagnostic challenges. In addition, the staining process alters the tissue, rendering it unavailable for subsequent analysis. Tissue biopsies are becoming progressively smaller, while the demand for molecular and genetic tests on these small biopsies is increasing, so it is crucial to minimize the number of tissue sections consumed by staining.
In 2018, the Ozcan Lab at UCLA introduced a deep learning-based method to computationally transform images of unlabeled histological tissue sections into their histochemically stained counterparts, eliminating the need for chemical staining1,2. This technology, named virtual tissue staining, was developed to leverage the speed and computational power of deep learning to improve century-old histochemical staining techniques (Figure 1). The team used a deep convolutional neural network (CNN), trained using the concept of generative adversarial networks (GANs), to learn the accurate transformation from a label-free, unstained autofluorescence input image to the corresponding brightfield image of the same sample after histochemical staining.
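To make this training concept concrete, below is a minimal sketch of GAN-based image-to-image training in PyTorch. The toy generator and discriminator, tensor shapes, and loss weighting are illustrative placeholders, not the team's actual architecture, which relies on far deeper networks and carefully coregistered image pairs.

    # Minimal sketch of GAN-based virtual staining training (PyTorch).
    # The tiny generator/discriminator below are toy stand-ins for the
    # deep CNNs used in practice; shapes and loss weights are illustrative.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps a label-free autofluorescence image to a virtual stain (RGB)."""
        def __init__(self, in_ch=1, out_ch=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, out_ch, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        """Scores image patches as real (chemically stained) or generated."""
        def __init__(self, in_ch=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-level logits
            )

        def forward(self, x):
            return self.net(x)

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # One training step on a registered pair: label-free autofluorescence
    # input and its chemically stained brightfield ground truth (dummy data).
    af = torch.rand(4, 1, 256, 256)  # autofluorescence patches
    he = torch.rand(4, 3, 256, 256)  # matched H&E brightfield patches

    # Discriminator step: tell real stained patches from generated ones.
    fake = G(af).detach()
    pred_real, pred_fake = D(he), D(fake)
    d_loss = adv_loss(pred_real, torch.ones_like(pred_real)) + \
             adv_loss(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator while staying pixel-wise
    # close (L1 term) to the registered ground-truth stain.
    fake = G(af)
    pred_fake = D(fake)
    g_loss = adv_loss(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1_loss(fake, he)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

In practice, the quality of such a model depends less on the adversarial details shown here and more on the accuracy of the registration between the autofluorescence inputs and their chemically stained ground-truth images.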
Figure 1. A comparison of standard histological staining versus deep learning-based virtual staining. Regardless of the type of stain used, standard histological staining is a time-consuming, laborious process that must be performed within a designated lab infrastructure by trained technicians/histotechnologists. In contrast, using the pretrained virtual staining neural networks, the images of label-free tissue specimens can be digitally transformed into different desired stain types, closely matching their histochemically stained counterparts without requiring any chemical staining procedures. Courtesy of the Ozcan Lab/UCLA.
In contrast to brightfield imaging, which measures the light transmitted through a sample and therefore yields almost no contrast when the sample is not stained, autofluorescence imaging reveals sample contrast by illuminating the biological sample with excitation light and measuring the resulting emission from endogenous biomolecules. These autofluorescence emission signatures of biological samples carry rich information about their metabolic state and pathological condition. It was recognized that this virtual staining technology, using autofluorescence microscopic images as input, could generate highly accurate stains across a wide variety of tissue and stain types (Figure 1). For example, the virtual staining technique was trained and tested on liver, lung, kidney, and salivary gland tissues, which were digitally stained with H&E, Masson trichrome, and periodic acid-Schiff.
This work became the foundation for numerous virtual staining projects that used a similar methodology for different tissue-stain combinations and has already been commercialized by Pictor Labs, a spinoff company from the Ozcan Lab at UCLA.
Subsequently, the development and improvement of virtual staining technology and the discovery of new applications have continued. One direction the team pursued was stain-to-stain transformation (Figure 2)3. While H&E staining is performed using a streamlined procedure, special stains (such as Masson trichrome and periodic acid-Schiff) often require longer preparation times, manual effort, and monitoring or supervision by a technician, which increases costs and production time. Reducing the stain turnaround time is especially relevant in several clinical scenarios, such as evaluating transplanted organ rejection and classifying rapidly growing tumors.
For this task of stain-to-stain transformation, the UCLA team obtained pairs of matched (unlabeled) autofluorescence images and brightfield images of H&E-, Masson trichrome-, and periodic acid-Schiff-stained tissue. Next, they performed supervised training of the stain transformation network using pairs of perfectly registered training images created by label-free virtual staining. Because the training starting point was the same label-free tissue autofluorescence for each of the three stains, any stain-to-stain image misalignments in this training data were eliminated. This feature significantly improved the reliability and accuracy of the learned stain-to-stain transformations, enabling faster preliminary diagnoses that require special stains while also providing significant cost savings3.
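This registration-by-construction idea can be illustrated with a short sketch. The function below assumes hypothetical pretrained virtual staining networks (here named g_he, g_masson, and g_pas), one per stain type, each taking the same autofluorescence image as input; the names and shapes are placeholders.

    # Sketch of generating perfectly registered cross-stain training pairs.
    # `g_he`, `g_masson`, and `g_pas` stand in for pretrained label-free
    # virtual staining networks (hypothetical names), one per stain type.
    import torch

    def make_registered_pairs(af_batch, g_he, g_masson, g_pas):
        """Virtually stain one autofluorescence batch with every network.

        All outputs derive from the same input pixels, so the resulting
        H&E/special-stain pairs are registered by construction and need
        no cross-stain alignment before supervised training.
        """
        with torch.no_grad():
            he, masson, pas = g_he(af_batch), g_masson(af_batch), g_pas(af_batch)
        # Input/target pairs for training an H&E -> special-stain network.
        return [(he, masson), (he, pas)]

    # Example usage with placeholder "networks" and dummy data:
    to_rgb = lambda x: x.repeat(1, 3, 1, 1)  # identity placeholder
    pairs = make_registered_pairs(torch.rand(2, 1, 256, 256), to_rgb, to_rgb, to_rgb)

Because every output image is generated from identical input pixels, no physical reslicing or image warping is needed to build the supervised training set.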
Motivated by the transformative potential of virtual staining technology, the team decided to tackle another challenge in traditional staining and turned its focus to virtual immunohistochemical (IHC) staining. IHC has become an indispensable tool for pathologists in everyday practice and in basic research for elucidating the pathophysiology of various diseases. However, compared with standard histochemical staining, IHC has many possible causes of poor or nondiagnostic staining results, most of which are rooted in antibody and reagent variations.
One commonly performed IHC stain is HER2 (human epidermal growth factor receptor 2), which is applied to most breast cancer specimens and plays a significant role in disease diagnosis, prognosis prediction, and treatment optimization. The HER2 IHC staining procedure is delicate and requires accurate control of time, temperature, and the concentrations of the HER2 antibody and reagents at each tissue staining step. In fact, the process often fails to generate high-quality outcomes, and HER2 stain failure can cause treatment delays and aggravate patient stress and anxiety. To address these limitations in HER2 IHC staining, a deep learning model was trained to generate virtual HER2 images from the autofluorescence images of unlabeled tissue sections, matching the brightfield images captured after standard IHC staining (Figure 2)4. The virtual HER2 staining method was rapid, reproducible, and simpler to perform than the actual HER2 IHC staining. Another advantage was its capability to generate highly consistent and repeatable staining results, minimizing the technical variations commonly observed in standard HER2 IHC staining. These results constituted the first demonstration of label-free virtual IHC staining and opened new potential avenues for various applications in life sciences and biomedical diagnostics, such as the digital creation of novel stains that may better highlight cellular structures and organelles, as well as the blending of multiple existing stains on one slide to highlight different subregions.
In fact, many other available microscopic imaging modalities can be incorporated into virtual staining to bring contrast to transparent, label-free, thin tissue sections. For example, phase contrast microscopy, darkfield microscopy, quantitative phase imaging (QPI) microscopy, and nonlinear imaging modalities are also feasible inputs for label-free virtual staining networks. In 2019, the Ozcan Lab showed that QPI of label-free tissue sections can be used for virtual staining5. Using QPI as the input, this virtual staining technology accurately generated different stain types, including H&E, Jones, and Masson trichrome stains, matching their histochemically stained counterparts (Figure 2). These results demonstrate emerging opportunities created by deep learning for label-free QPI and further expand the scope of virtual tissue staining.
Figure 2. Demonstrations of label-free virtual staining and stain-to-stain transformation. Examples include virtual hematoxylin and eosin (H&E), Jones, and Masson trichrome staining using label-free autofluorescence images; virtual H&E, Jones, and Masson trichrome staining using label-free quantitative phase imaging (QPI); multiplexed H&E, Jones, and Masson trichrome staining using a single network with autofluorescence images and a digital staining matrix used as input; virtual immunohistochemistry (IHC) HER2 (human epidermal growth factor receptor 2) staining using label-free autofluorescence images; virtual acetic acid and H&E staining using in vivo reflectance confocal microscopy (RCM) images; and transformation from H&E staining into virtual Jones, Masson trichrome, and periodic acid-Schiff staining. Courtesy of the Ozcan Lab/UCLA.
The virtual staining technique was recently harnessed to provide a noninvasive method for rapidly diagnosing skin tumors, allowing earlier diagnosis of skin cancer. The team introduced the virtual biopsy concept, a technology that bypasses reliance on skin biopsies, which are invasive, cumbersome, and time-consuming; due to tissue processing, for example, it can take days to receive the results of a biopsy6. The deep learning-based virtual biopsy framework uses a CNN to convert in vivo images of unstained skin obtained using reflectance confocal microscopy (RCM), an FDA-cleared imaging modality used by dermatologists to discriminate benign from malignant lesions, into virtually stained 3D images with microscopic resolution.
Although RCM is a valuable diagnostic tool, its use requires specialized training for image interpretation. Furthermore, RCM does not show the nuclear features of skin cells in the same fashion as traditional histological evaluation. Using this 3D virtual staining technology, label-free RCM images were transformed into images that resemble the H&E-stained sections familiar to dermatologists and pathologists (Figure 2). The researchers demonstrated that the trained CNN could rapidly transform RCM images captured from intact skin into virtually stained 3D microscopic images, and the method was successful on normal skin, basal cell carcinoma, and melanocytic nevi6. This work has the potential to improve clinicians’ abilities to perform bedside diagnoses of various pathological skin conditions. This innovative technology could also enable dermatologists to diagnose patients remotely through telemedicine and prevent “loss to follow-up,” in which the patient does not return for follow-up appointments, by diagnosing diseases immediately and removing diseased tissue if necessary.
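As a simplified illustration of the inference side of such a pipeline, the sketch below applies a trained 2D virtual staining network slice by slice to an RCM z-stack to assemble a 3D virtual H&E volume. The network name, shapes, and toy model are placeholders; the published framework may handle volumetric context differently.

    # Simplified sketch: slice-wise virtual staining of an in vivo RCM z-stack.
    # `net` stands in for a trained virtual staining network (hypothetical);
    # the real framework may use 3D volumetric context rather than 2D slices.
    import torch

    def virtually_stain_stack(rcm_stack, net):
        """Convert a (depth, H, W) reflectance stack to a (depth, 3, H, W) RGB volume."""
        net.eval()
        stained = []
        with torch.no_grad():
            for z in range(rcm_stack.shape[0]):
                sl = rcm_stack[z].unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
                stained.append(net(sl).squeeze(0))           # (3, H, W)
        return torch.stack(stained)  # virtual H&E volume, one slice per depth

    # Example with a toy network and dummy data (16 depths of 128 x 128 pixels):
    toy_net = torch.nn.Sequential(torch.nn.Conv2d(1, 3, 3, padding=1), torch.nn.Sigmoid())
    volume = virtually_stain_stack(torch.rand(16, 128, 128), toy_net)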
The evolution of these methods could revolutionize the histopathology staining workflow, with deep learning-enabled virtual staining achieving faster, more affordable, and more accurate tissue diagnoses and helping clinicians better manage patient care. Virtual staining technology will likely be easy to integrate into existing labs that already have a digital pathology infrastructure, and eliminating histochemical staining from the slide preparation steps could introduce substantial cost savings.
Meet the authors
Nir Pillar, a postdoctoral scholar at the Ozcan Lab in the Department of Electrical and Computer Engineering at the University of California, Los Angeles, focuses on the development of virtual histology applications. He earned his M.D. from Ben Gurion University and his Ph.D. in molecular biology from Tel Aviv University, both located in Israel. He then completed a surgical pathology residency at Hadassah Medical Center in Jerusalem; email: [email protected].
Bijie Bai, Ph.D., received her Bachelor of Science degree in measurement, control technology, and instrumentation from Tsinghua University, Beijing, China, in 2018. In 2023, she received her Ph.D. from the Electrical and Computer Engineering Department at the University of California, Los Angeles. Her research focuses on computational imaging for biomedical applications, machine learning, and optics; email: [email protected].
Aydogan Ozcan, Ph.D., is the Chancellor’s Professor and the Volgenau Chair for Engineering Innovation at UCLA and an HHMI Professor with the Howard Hughes Medical Institute. He is also the Associate Director of the California NanoSystems Institute. Ozcan is an elected Fellow of the National Academy of Inventors (NAI), Optica, AAAS, SPIE, IEEE, AIMBE, RSC, APS, and the Guggenheim Foundation. Ozcan is also listed as a Highly Cited Researcher by Web of Science, Clarivate. To commercialize virtual staining technology, Ozcan cofounded Pictor Labs, a spinoff company from his lab at UCLA, where he led the company formation, fundraising, and technology transfer from UCLA; email: [email protected].
References
1. Y. Rivenson et al. (2018). Deep learning-based virtual histology staining using auto-fluorescence of label-free tissue. arXiv, published online March 30, https://arxiv.org/abs/1803.11293.
2. Y. Rivenson et al. (2019). Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat Biomed Eng, Vol. 3, No. 6, pp. 466-477.
3. K. de Haan et al. (2021). Deep learning-based transformation of H&E stained tissues into special stains. Nat Commun, Vol. 12, No. 1, p. 4884.
4. B. Bai et al. (2022). Label-free virtual HER2 immunohistochemical staining of breast tissue using deep learning. BME Front, Vol. 2022, Article ID 9786242.
5. Y. Rivenson et al. (2019). PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning. Light Sci Appl, Vol. 8, p. 23.
6. J. Li et al. (2021). Biopsy-free in vivo virtual histology of skin using deep learning. Light Sci Appl, Vol. 10, No. 1, p. 233.