AI and Medical Diagnosis

Artificial intelligence (AI) can be trained to recognize whether a tissue image contains a tumor.  However, exactly how it makes its decision has remained a mystery until now.  A team from the Research Center for Protein Diagnostics (PRODI) at Ruhr-Universität Bochum is developing a new approach that renders an AI’s decision transparent and thus trustworthy.  The researchers describe the approach in the journal Medical Image Analysis.[1]

For the study, experts from the Ruhr-Universität’s St. Josef Hospital in Bochum, Germany, developed a neural network, i.e., an AI, that can classify whether a tissue sample contains a tumor or not.  To this end, they fed the AI a large number of microscopic tissue images, some of which contained tumors, while others were tumor-free.  “Neural networks are initially a black box: it’s unclear which identifying features a network learns from the training data,” explains one researcher.  Unlike human experts, they lack the ability to explain their decisions.  “However, for medical applications in particular, it’s important that the AI is capable of explanation and thus trustworthy,” adds a bioinformatics scientist who collaborated on the study.
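The article does not publish the team’s code, but the basic setup it describes, a convolutional network trained on labeled microscopy images to output “tumor” or “tumor-free”, can be sketched as follows.  This is a minimal, hypothetical illustration in PyTorch; the architecture, input size, and label convention are assumptions, not the PRODI model.

```python
# Minimal sketch (not the PRODI team's actual model): a binary CNN classifier
# for microscopic tissue images, assuming 3-channel inputs of size 224x224.
import torch
import torch.nn as nn

class TissueClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        # Two output classes: tumor vs. tumor-free (assumed label convention)
        self.classifier = nn.Linear(128, 2)

    def forward(self, x):
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

model = TissueClassifier()
dummy_batch = torch.randn(4, 3, 224, 224)     # four hypothetical tissue patches
logits = model(dummy_batch)
print(logits.argmax(dim=1))                   # 0 = tumor-free, 1 = tumor (assumed)
```

In practice, such a model would have to be trained on many thousands of annotated tissue patches before its predictions carry any diagnostic meaning.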

AI Is Based on Falsifiable Hypotheses

The team’s explainable AI is therefore based on the only kind of meaningful statements known to science: falsifiable hypotheses.  If a hypothesis is false, this fact must be demonstrable through an experiment.  Artificial intelligence usually follows the principle of inductive reasoning: using concrete observations, i.e., the training data, the AI creates a general model on the basis of which it evaluates all further observations.

The underlying problem was described by the philosopher David Hume 250 years ago and can be easily illustrated: no matter how many white swans we observe, we can never conclude from this data that all swans are white and that no black swans exist.  Science therefore makes use of so-called deductive logic.  In this approach, a general hypothesis is the starting point.  For example, the hypothesis that all swans are white is falsified when a black swan is spotted.

Activation Map Shows Where the Tumor Is Detected

“At first glance, inductive AI and the deductive scientific method seem almost incompatible,” says a physicist who also contributed to the study.  But the researchers found a way.  Their novel neural network not only provides a classification of whether a tissue sample contains a tumor or is tumor-free, it also generates an activation map of the microscopic tissue image.
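One common way to derive such an activation map from a classification network is a class activation map (CAM), which weights the last convolutional feature maps by the output layer’s weights for the class of interest.  The sketch below is a generic CAM illustration under an assumed architecture and input size, not the team’s published method.

```python
# Minimal sketch of a class activation map (CAM), in the spirit of the activation
# maps described above; a generic illustration, not the PRODI network.
# Assumes a CNN whose last convolutional feature maps are reduced by global
# average pooling and fed to a single linear layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAMNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        fmap = self.conv(x)                      # (B, 64, H/4, W/4) feature maps
        pooled = fmap.mean(dim=(2, 3))           # global average pooling
        return self.fc(pooled), fmap

def class_activation_map(model, image, class_idx):
    """Weight the feature maps by one class's linear-layer weights and upsample
    to the input resolution, yielding a per-pixel activation map."""
    model.eval()
    with torch.no_grad():
        logits, fmap = model(image.unsqueeze(0))
        weights = model.fc.weight[class_idx]                  # (64,)
        cam = (weights[:, None, None] * fmap[0]).sum(dim=0)
        cam = torch.relu(cam)                                 # keep positive evidence
        cam = cam / (cam.max() + 1e-8)                        # normalize to [0, 1]
        cam = F.interpolate(cam[None, None], size=image.shape[1:],
                            mode="bilinear", align_corners=False)[0, 0]
    return logits, cam

model = CAMNet()
tissue_patch = torch.randn(3, 224, 224)          # hypothetical tissue image
logits, cam = class_activation_map(model, tissue_patch, class_idx=1)
print(logits.shape, cam.shape)                   # (1, 2) and (224, 224)
```

The resulting map can be overlaid on the tissue image and compared against independently determined tumor regions, which is exactly the falsifiable check described next.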

The activation map is based on a falsifiable hypothesis, namely that the activation derived from the neural network corresponds exactly to the tumor regions in the sample.  Site-specific molecular methods can be used to test this hypothesis.  “Thanks to the interdisciplinary structures at PRODI,[2] we have the best prerequisites for incorporating the hypothesis-based approach into the development of trustworthy biomarker AI in the future, for example to be able to distinguish between certain therapy-relevant tumor subtypes,” concludes the research team. 

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments, or assistance, please contact the office directly at 1-844-492-7225, or feedback@wapacklabs.com.

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://attendee.gotowebinar.com/register/5504229295967742989

[1] https://www.eurasiareview.com/04092022-how-artificial-intelligence-can-explain-its-decisions/

[2] http://www.prodi.rub.de/en/
