Health-related artificial intelligence needs rigorous evaluation and guardrails

STAT News, 17/03/2022

Shared by:

Équipe Beesens

" algorithms can augment human decision-making by integrating and analyzing more data, and more kinds of data, than a human can comprehend. But to realize the full potential of artificial intelligence (AI) and machine learning (ML) for patients, researchers must foster greater confidence in the accuracy, fairness, and usefulness of clinical AI algorithms.

Getting there will require guardrails — along with a commitment from AI developers to use them — that ensure consistency and adherence to the highest standards when creating and using clinical AI tools. Such guardrails would not only improve the quality of clinical AI but would also instill confidence among patients and clinicians that all tools deployed are reliable and trustworthy." Read more