
Interoperability standards are needed for deep learning in medical imaging

The life sciences, and health care in particular, are witnessing a renaissance of innovation in the application of machine learning techniques, including the now-ubiquitous use of deep learning to assist in the classification and data mining of medical imagery. While these developments are exciting, a closer look at the opportunities for making promising solutions widely available reveals a substantial barrier: a lack of standards for how both data and applications are curated.

A cliché in the medical informatics community is that "the nice thing about standards is that there are so many of them from which to choose." This is exactly the conundrum toward which deep learning deployments intended for generalized use across diverse environments are headed, unless the data science and clinical informatics fields can meet the challenge with cogent solutions.

There are actually two barriers: the lack of coordinated and standardized data representation, and the lack of a standardized application programming interface.

Coordinated and standardized data representation will ensure that all players in the space recognize atomic data elements as carrying the same meaning, which is essential to ensuring that algorithms operate with consistently defined input variables. The data representation challenge is further compounded by the coexistence of multiple units of measurement.

For example, in Europe and Canada, clinical lab values are reported in the International System of Units (SI), whereas in the U.S. they are not. An algorithm trained and subsequently validated on one measurement system will fail if it is presented with raw numerical data on a different scale or in different units of measure. Many current algorithms are numerical in nature, so the input scale is critically important.
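To make the failure mode concrete, consider the following minimal sketch. The threshold and readings are purely illustrative (not clinical guidance), and the function name is hypothetical; the point is that a decision rule calibrated on U.S. conventional units (mg/dL) quietly misclassifies the same measurement expressed in SI units (mmol/L), with no error raised.

# Hypothetical illustration: a decision rule calibrated on mg/dL data.
# Threshold and readings are illustrative only, not clinical guidance.
GLUCOSE_THRESHOLD_MG_DL = 125.0

def flag_elevated_glucose(value: float) -> bool:
    """Return True if the value exceeds the mg/dL-calibrated threshold."""
    return value > GLUCOSE_THRESHOLD_MG_DL

us_reading_mg_dl = 180.0           # a single physiological state...
si_reading_mmol_l = 180.0 / 18.0   # ...expressed in SI units (1 mmol/L of glucose is about 18 mg/dL)

print(flag_elevated_glucose(us_reading_mg_dl))   # True: flagged as expected
print(flag_elevated_glucose(si_reading_mmol_l))  # False: silently missed, no error raised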

A standardized programming interface will ensure that any developed software functions as intended on any target computational platform; such standardization is fundamental to generalizability. The absence of a single, consistent approach among working groups is tantamount to having no solution at all. For example, a deep learning-based algorithm designed and developed at one institution to detect micrometastases would require extensive onboarding (mapping inbound primary image data formats and output result metrics) before it could be used elsewhere.
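As a sketch of what such an interface contract could look like, the following outline pins down the input format, the meaning of each output field, and the units and coordinate conventions of the results. The class and field names are hypothetical and do not correspond to any published standard; they simply illustrate the kind of agreement that would let one institution's implementation drop into another's pipeline unchanged.

# Hypothetical sketch of a standardized inference contract; names are
# illustrative and are not drawn from any existing standard.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class DetectionResult:
    label: str            # term from an agreed controlled vocabulary, e.g., "micrometastasis"
    confidence: float     # probability in [0, 1]
    region_px: tuple      # (x, y, width, height) in pixels at the stated magnification
    magnification: float  # objective power the coordinates refer to

class SlideAnalyzer(ABC):
    """Contract that every implementation would satisfy unchanged."""

    @abstractmethod
    def analyze(self, slide_path: str) -> list[DetectionResult]:
        """Accept a whole-slide image in an agreed format and return results
        in the agreed representation, regardless of the underlying model."""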

In software engineering, we already have analogous methods for realizing this level of interoperability, by utilizing standard data and application conventions such as JSON, XML, and associated data representation ontologies. What is missing, it would appear, is a common nucleating event in the field of life sciences AI, for both industry and academia, to convene the critically needed first congress to draft a workable standard. Technically, creating such a standard would certainly be challenging, but not impossible. For individual organizations acting alone, it has so far proved unattainable.
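For instance, a shared result schema could be serialized as ordinary JSON so that any consuming system can parse it without site-specific mapping. The field names and codes below are hypothetical placeholders, included only to show the kind of self-describing payload such a standard might define.

# Hypothetical example of a standardized, self-describing result payload.
# Field names and codes are illustrative; a real standard would fix them formally.
import json

result = {
    "schema_version": "0.1",
    "task": "micrometastasis-detection",
    "input": {"format": "DICOM-WSI", "units": "SI"},
    "findings": [
        {"label": "micrometastasis", "confidence": 0.92,
         "region_px": [10240, 4096, 512, 512], "magnification": 40.0}
    ],
}
print(json.dumps(result, indent=2))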


So, where do organizations stand with respect to all this? Professional groups such as the American Association for Clinical Chemistry (AACC), the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), the American Society for Clinical Pathology (ASCP), the Association for Pathology Informatics (API), and the Digital Pathology Association (DPA), to name just a few, do have adjudicative bodies ostensibly charged with creating standards for data representation. So far, however, no collective effort has been organized toward setting up a single collaborative working group, and standardization of clinically centered machine learning algorithms has not yet entered these groups' agendas sufficiently to merit targeted exploration.

However, in the specific case of standards for AI applied to whole-slide images of histology, one of the subject areas in which standardization of both the algorithm and orchestration layers of deep learning deployments is most critically needed, two organizations stand out as the de facto torchbearers of this goal: the API and the DPA. They have both the membership expertise and the standards committees needed to at least begin drafting an initial standard that addresses data representation and programmatic orchestration.

Perhaps it is time for these largely image-centric organizations to initiate a first effort in this space, hopefully paving the way for other organizations and groups to make further inroads toward a single AI-imaging and computational pipeline standard.

Meet the author

Ulysses G.J. Balis, M.D., is a board-certified pathologist and clinical informaticist in the University of Michigan Department of Pathology, where he serves as medical director of the Division of Pathology Informatics. He has maintained a long-standing interest in medical interoperability standards, having served as one of the principal authors of the original Digital Imaging and Communications in Medicine (DICOM) Visible Light Image Object Definition (IOD); email: [email protected].




The views expressed in ‘Biopinion’ are solely those of the authors and do not necessarily represent those of Photonics Media. To submit a Biopinion, send a few sentences outlining the proposed topic to [email protected]. Accepted submissions will be reviewed and edited for clarity, accuracy, length, and conformity to Photonics Media style.

Published: December 2021
