How to turn machine learning on its head to protect health data privacy

OWKIN - Marion Oberhuber, 25/02/2021

Shared by: Beesens TEAM

"This first post opens our dedicated series of three articles on federated learning applied to healthcare. Discover its impact on health data privacy and confidentiality now, and catch the next articles on collaboration and traceability in the coming weeks!
Access to large volumes of high-quality clinical data is currently one of the biggest bottlenecks for machine learning in healthcare. Health data is extremely sensitive, tightly regulated, and must be handled with particular care. This article explores how that challenge can be overcome through federated learning, a machine learning approach that drastically reduces privacy concerns by keeping patient data stored securely onsite during model training. Owkin brings this privacy-preserving technology to healthcare stakeholders, unlocking the potential for safer, better and more effective medical research.
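To make the mechanics concrete, here is a minimal sketch of the federated averaging idea behind this kind of onsite training, using a toy linear model and synthetic NumPy data. The names here (hospital_data, local_update) are illustrative assumptions for this post, not Owkin's actual implementation: each site fits the shared model on its own data, and only the updated weights, never the patient records, leave the site.

```python
# A minimal sketch of federated averaging on a toy linear model.
# Assumption: each "hospital" holds its data locally; only weights travel.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local datasets: features X and targets y never leave the site.
hospital_data = [
    (rng.normal(size=(100, 5)), rng.normal(size=100))
    for _ in range(3)
]

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One site's training round: gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

weights = np.zeros(5)
for round_ in range(10):
    # Each site trains on its own data and returns only updated weights.
    local_weights = [local_update(weights, X, y) for X, y in hospital_data]
    # The coordinating server aggregates: a plain average of the site updates.
    weights = np.mean(local_weights, axis=0)

print(weights)
```

In a real deployment the average would typically be weighted by each site's sample count and the exchange would be encrypted; the point of the sketch is only that raw data never crosses the site boundary.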
The potential of machine learning (ML) in healthcare is currently limited by fundamental data-access challenges, such as data ‘siloed’ across many different hospitals. Concerns about the limited transparency of ML systems and about inadequate privacy safeguards for highly sensitive health data make access to big data even more difficult.
Federated learning (FL) is on track to be ‘the next big thing’ in advancing medical research without compromising health data privacy. It allows an ML algorithm to learn from different datasets without moving the data from where it is stored. Hospitals and research centers keep control over data governance and GDPR compliance. Additional privacy-preserving measures such as differential privacy and secure aggregation open up new ways to protect data in FL.
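As a toy illustration of those two add-on measures, the sketch below is an assumption-laden miniature, not a real differential-privacy accountant or secure-aggregation protocol: each site clips and noises its update before sharing it (the core move of differential privacy), and sites exchange pairwise random masks that cancel in the sum, so the server only ever sees the aggregate (the core move of secure aggregation).

```python
# Toy sketch of differential privacy and secure aggregation on model updates.
# Illustrative only: no privacy budget accounting, no real key exchange.
import numpy as np

rng = np.random.default_rng(1)
n_sites, dim = 3, 5
updates = [rng.normal(size=dim) for _ in range(n_sites)]  # local model updates

def privatize(u, clip=1.0, noise_scale=0.5):
    """Clip the update's norm, then add Gaussian noise (the DP idea)."""
    u = u * min(1.0, clip / np.linalg.norm(u))
    return u + rng.normal(scale=noise_scale, size=u.shape)

private = [privatize(u) for u in updates]

# Secure aggregation: each pair of sites (i, j) agrees on a random mask;
# site i adds it and site j subtracts it, so masks cancel in the sum.
masked = [p.copy() for p in private]
for i in range(n_sites):
    for j in range(i + 1, n_sites):
        mask = rng.normal(size=dim)
        masked[i] += mask
        masked[j] -= mask

# Individual masked updates look random, but their sum is the true aggregate.
assert np.allclose(sum(masked), sum(private))
print(sum(masked) / n_sites)
```

The assertion at the end checks the defining property of the masking scheme: no single site's update is visible to the server, yet the averaged model is unchanged.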