In the dynamic landscape of digital health, one undeniable truth emerges: artificial intelligence isn't just a buzzword—it's a transformative force reshaping how we approach healthcare. From computerised interpretation of graphs and images to forecasting diagnoses, AI is not an empty promise from technology developers or merely a story told by public health officials. AI is here to stay, and over time experts expect it to be integrated throughout the healthcare journey. But while this excites technology enthusiasts and gives us reason to be cautiously optimistic, this “game-changer” that promises to improve health is continually undermined by a relentless antagonist: bias. The implications of biased AI are profound, critically affecting individual health outcomes and the equitable distribution of healthcare resources globally. Bias also undermines efforts to automate processes and increase efficiency across healthcare settings.
The problem of data equity
AI's effectiveness depends heavily on the data it is trained on and, as the National Institute of Standards and Technology notes, on the institutional and societal factors that shape how algorithms are developed. Bias can stem from limited or unrepresentative data sets or from subconscious bias, and it is found throughout society. For example, financial AI systems in the US have perpetuated historical biases through algorithms that inadvertently uphold discriminatory practices like redlining. This has pressured the Consumer Financial Protection Bureau to review whether the shortcomings of AI models and algorithms fall under the unfairness standard of the Consumer Financial Protection Act.
A similar problem is found in healthcare, where studies have shown systemic biases affecting everything from patient waiting times to diagnosis. A study published in the academic journal Manufacturing & Service Operations Management reported racial bias in medical appointment scheduling, with black patients waiting approximately 30% longer than non-black patients. Other studies have revealed racial and ethnic data bias in commercial AI algorithms that produced inaccurate health-risk assessments for black patients, as well as algorithmic underdiagnosis of underserved populations, particularly Hispanic female patients in chest x-ray data sets. One reason is the lack of diversity in the data sets themselves. Automating such processes could perpetuate disparities in resource allocation and diagnosis among underrepresented communities.
In turn, this creates a significant problem for attempts to automate certain decision-making processes, such as risk assessments and diagnosis, merely to increase efficiency. It is risky to deploy these tools without evaluating bias both in the data sets used to train an algorithm and in the algorithmic design itself. A study published in the Journal of the American Medical Association found that “even in controlled settings, without the usual pressures on time, clinicians favoured automated decision-making systems, relying on the AI-based tool, despite the presence of contradictory or clinically nonsensical information.” With time pressure to see patients an inherent challenge of the medical profession, errors will accrue, and mistakes will cost lives.
Investing in equitable AI solutions
Unfortunately, there is no prescriptive, uniform solution. Rather, addressing these challenges requires a multifaceted approach. Enhancing data infrastructure and design is essential. Stakeholders need to collaborate to create data infrastructures that promote racial equity through cross-sector data sharing on factors that shape health holistically, such as the social determinants of health. This would also require training and incentivising data collection to ensure reporting consistency, and it may involve modifying health IT systems to ensure interoperability and training staff to handle data appropriately. Beyond data collection, it is critical to consider different types of bias, including subconscious bias, as part of the algorithmic design process.
Additionally, establishing local, national and global collaborative networks can help develop sensitive policies and guidelines that reflect the diversity of populations. This will require transparency to gain the trust of the public and of medical professionals, and to mitigate potential adverse events such as inaccurate diagnoses or discriminatory practices. A report by Economist Impact on advancing health and technology integration emphasised that auditing AI systems for bias is crucial to maintaining public trust. Incorporating community input and transparently acknowledging data limitations are critical steps towards collecting and interpreting data equitably.
Looking forward
The first step towards progress is recognising the problem and identifying ways society can collaborate on sustainable and innovative solutions. The journey towards unbiased AI in healthcare is full of challenges, but immediate action is essential for the equitable advancement of global health, because lives are at risk. By investing in equitable data practices, improving infrastructure and promoting collaboration, the global health community can harness AI's automated potential effectively and ethically, ensuring that AI is an effective tool that does not perpetuate existing disparities.