Big data is central to medicine and health. Consequently, the medical field is becoming increasingly accepting of AI and machine learning (ML) applications in tasks such as (but not limited to) medical image analysis, disease screening, drug discovery and drug-target screening, patient experience, healthcare data management, prognostics and diagnostics, decision support, surgical robotics, virtual health assistants, and personal healthcare. The increasing reliance of such applications on AI/ML is continuously driving software innovation in the medical AI business, which is currently valued globally at around $5 billion and projected to reach approximately $50 billion by 2026. One reason is that the biggest tech giants (including Google, Apple, IBM, Samsung, Microsoft, NVIDIA, and Amazon) are investing in health/medical AI.
It is well known through the Collingridge dilemma in the social sciences that technological breakthroughs such as (generative) AI/ML rapidly amplify society's desire to apply them in day-to-day work without a proportionate appreciation of their adverse consequences. Put differently, it is extremely difficult to judge in advance the negative societal consequences of a disruptive, path-breaking technology that positively affects (or seduces) every sphere of business, engineering, and governance for the social good. One such negative consequence concerns cyber assurance: medical AI companies try to convince health industry stakeholders (including consumers) that their products and services are far more cyber ethical and trustworthy than the risks they pose to society would suggest, when in reality nothing could be farther from the truth.
In this article, we lay down five important challenges to ensuring cyber assurance in the medical AI business from the contexts of both the supply side (i.e., the designers of medical AI technology) and the demand side (i.e., the clients that use medical AI applications sold by the supply side), grounding our arguments solely in factors affecting society's trust in medical AI.
The performance of medical AI relies greatly on the quality of the data it gets to work with. The most common sources of medical data include (but are not limited to) research literature, electronic health records (EHRs), data from clinical trials, and data obtained from modern mobile and intelligent wearables (including fitness applications). If the data from such sources do not originate from real human subjects, they could
1. Generate wrong diagnoses, resulting in poor medical care if used to train medical AI models (e.g., as in the case of IBM Watson's oncology prediction system, which has often been trained on synthetic oncology data and is known to have generated erroneous and unsafe recommendations), and