Explainable AI’s pros ‘not what they appear’ while its cons are ‘worth highlighting’

AIIN, 16/07/2021

"Before healthcare AI can achieve widespread translation from lab to clinic, it will need to overcome its proclivity for producing biased outputs that worsen social disparities.

Consensus on this among experts and watchdogs is readily observable.

Increasingly, so is consensus on a cure: explainable healthcare AI.

Not so fast, warns an international and multidisciplinary team of academics.

The latter consensus, the group writes in a paper published by Science July 16, “both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.”

The paper’s lead author is Boris Babic, JD, PhD (philosophy), MSc (statistics), of the University of Toronto. Senior author is I. Glenn Cohen, JD, director of Harvard Law School’s Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics.

En route to fleshing out their argument, the authors supply a helpful primer on the difference between explainable and interpretable AI (and machine learning).

Interpretable AI/ML, which is peripheral to the paper’s thrust, “uses a transparent [‘white-box’] function that is in an easy-to-digest form,” they write.

By contrast, explainable AI/ML “finds an interpretable function that closely approximates the outputs of the black box. … [T]he opaque function of the black box remains the basis for the AI/ML decisions, because it is typically the most accurate one.”
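
To make the contrast concrete, here is a minimal sketch of the two approaches in Python. The choice of scikit-learn, a gradient-boosted black box, a decision-tree surrogate and synthetic data are all illustrative assumptions; the paper names no particular tools or models.

# Interpretable vs. explainable AI/ML, as the authors define the terms.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Interpretable ("white-box") AI/ML: the transparent, easy-to-digest
# function is itself the function making the decisions.
white_box = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Explainable AI/ML: an opaque black box makes the actual decisions...
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# ...and a separate interpretable surrogate is fit post hoc to closely
# approximate the black box's outputs. The black box remains the basis
# for the decisions, because it is typically the more accurate model.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate's "explanations" track the black box.
print("surrogate fidelity:",
      accuracy_score(black_box.predict(X), surrogate.predict(X)))

Note that even a high-fidelity surrogate only approximates the black box’s outputs; it is not the function actually making the decisions, which is the distinction the authors press on.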

From there Babic and colleagues walk the reader through several reasons explainable AI/ML algorithms are probably incapable of keeping their implied promise within healthcare to facilitate user understanding, build trust and support accountability..." Read more