From helping doctors diagnose patients to suggesting advanced treatments, Artificial Intelligence (AI) is transforming health and medicine. But AI has been developed primarily by men, based on data sets that prioritize men’s bodies and health needs. This means that many AI models are loaded with sex and gender biases – which pose health risks to women as well as non-binary patients.
With these biases in medicine coming into the spotlight in recent years, will AI exacerbate existing health care disparities – or can it be used to help close the gap?
Biased Data
The potential of AI depends entirely on the quality of the large data sets that are fed into the underlying machine learning algorithms within its software programs.
If data either excludes or under-represents relevant segments of the global population, ill-informed AI can pose serious health risks – ranging from misdiagnosis, to compromised interpretation of medical imaging, to incorrect intervention recommendations.
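For readers who want a concrete picture, the sketch below (in Python, with a hypothetical file name, column name, and threshold – none of them from any cited study) shows the kind of representation audit that can flag such gaps before a model is ever trained.

```python
# A minimal sketch, assuming a hypothetical tabular data set with a "sex"
# column, of a pre-training audit that checks whether any group is
# under-represented. File name, column name, and threshold are illustrative.
import pandas as pd

df = pd.read_csv("clinical_records.csv")  # hypothetical training data

# Share of each group in the training data
group_share = df["sex"].value_counts(normalize=True)
print(group_share)

THRESHOLD = 0.30  # illustrative cut-off, not a clinical standard
under_represented = group_share[group_share < THRESHOLD]
if not under_represented.empty:
    print("Warning: under-represented groups:")
    print(under_represented)
```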
The problems start with implicit gender biases in the coding of AI language software.
Masculine stereotypes have infiltrated AI – from the apparently unconscious default to the male pronoun “he” when options are unclear, to dangerous health care applications that jeopardize diagnosis and treatment.
For example, in the field of psychiatry, when men describe symptoms of trauma, they are more likely to be diagnosed with post-traumatic stress disorder (PTSD), while women describing similar symptoms are at greater risk of being diagnosed with a personality disorder instead.
This type of gender bias can (and often does) affect women’s access to health care or their management within the health care system – and this bias appears to be replicated in AI models.
A 2020 US study found that natural language processing AI models used in psychiatry exhibit significant gender bias.
The research paper, published in PLoS One, warns that AI models that screen for psychopathology or suicide risk will make mistakes if they are trained on data written primarily by white men, because language takes gendered forms – for example, men and women express suicidal distress differently.
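One simple way to probe for this kind of bias is a counterfactual “gender-swap” test: score the same sentence before and after exchanging gendered words, and see whether the model’s output shifts. The sketch below is purely illustrative – its toy scorer stands in for a real classifier and is deliberately biased so the test has something to detect; it is not the method used in the PLoS One study.

```python
# A minimal sketch of a counterfactual gender-swap test. The scoring
# function is a hypothetical stand-in for a trained model's probability
# output, not any real screening tool.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def gender_swap(text: str) -> str:
    # Exchange gendered words so only the gender signal changes
    return " ".join(SWAPS.get(w, w) for w in text.lower().split())

def score_distress(text: str) -> float:
    # Toy scorer standing in for a classifier; it is deliberately given
    # a gendered cue ("she") so the test below has a bias to detect.
    cues = {"hopeless": 0.4, "alone": 0.2, "she": 0.15}
    words = text.split()
    return min(1.0, sum(v for w, v in cues.items() if w in words))

sentence = "she feels hopeless and alone"
original = score_distress(sentence)
swapped = score_distress(gender_swap(sentence))
# A non-zero gap means the model scores identical symptoms differently
# depending on gendered wording.
print(f"original={original:.2f} swapped={swapped:.2f} "
      f"gap={abs(original - swapped):.2f}")
```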
Importantly, awareness of these types of issues is growing and initiatives to prevent bias are emerging – often led by women, such as Bioinfo4Women-B4W, a program of the Barcelona Supercomputing Centre.
This example also reminds us that ideas about bias and gender in AI must extend beyond the English language to be relevant to AI development around the world.
Inclusive Design Opportunities
But the concerns don’t stop at the level of language. What if something as basic as our body structure is not considered when developing AI?
As the use of AI increases in safety product design, we have an unprecedented opportunity to create better products by crafting features that properly accommodate our human bodies – female and male.
There are proportional differences between the average female and male bodies; we can’t simply scale from one to the other.
This point was driven home during the Covid pandemic, when wearing personal protective equipment (PPE) became mandatory.
Despite approximately 70 percent of global health care workers being women, PPE is designed around the male body. A Canadian survey showed that poorly fitting PPE not only failed to provide adequate protection, but that oversized and ill-fitting gear also increased the risk of accidents.
More study is needed on this topic, but researchers have already proposed the creation of AI-designed PPE. Ensuring that sex-specific characteristics are considered in PPE design can be expected to improve protection.
Moving in the Right Direction
The accuracy of AI-assisted clinical diagnosis depends entirely on the robustness of the underlying data set. Without actively accounting for sex and gender bias in historical data sets, AI may contribute to missed diagnoses or misdiagnoses.
Fortunately, adjusting for such biases leads to better health care outcomes for women.
For example, the traditional risk assessment score for heart attacks, the Global Registry of Acute Coronary Events (GRACE), was updated in 2022 to include an AI predictive model that takes into account sex-specific disease characteristics.
This update has revolutionized the performance of this assessment tool. The breakthrough stems from separate analysis of male and female data – guiding more female patients to life-saving early intervention, helping to overcome structural biases in patient management.
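The underlying technique – sex-stratified modeling, in which one model is fitted per sex so each learns its own risk patterns rather than letting the pooled majority dominate – can be sketched in a few lines. The example below uses synthetic data and illustrative features; it is not the actual GRACE 3.0 model.

```python
# A minimal sketch of sex-stratified risk modeling on synthetic data.
# Feature meanings are hypothetical illustrations, not GRACE variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, weight):
    # Synthetic cohort: outcome depends on two features, with a
    # sex-specific signal strength on the first feature
    X = rng.normal(size=(n, 2))  # e.g. age (scaled), troponin (scaled)
    p = 1 / (1 + np.exp(-(weight * X[:, 0] + X[:, 1])))
    y = rng.binomial(1, p)
    return X, y

# Different signal strengths per sex (synthetic, for illustration only)
X_f, y_f = make_cohort(500, weight=1.5)  # female cohort
X_m, y_m = make_cohort(500, weight=0.5)  # male cohort

# Fit one model per sex so each learns its own disease patterns
models = {
    "female": LogisticRegression().fit(X_f, y_f),
    "male": LogisticRegression().fit(X_m, y_m),
}

patient = np.array([[1.2, -0.3]])
for sex, model in models.items():
    print(sex, "risk:", round(model.predict_proba(patient)[0, 1], 3))
```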
A practical example of an AI model designed to address and reduce gender bias is the SmartHealth Pregnancy GPT. The tool, developed by The George Institute for Global Health, aims to improve access to guideline-based pregnancy advice for women living in rural and remote communities in India.
The concept was to develop a large language model chatbot that would be contextually sensitive and medically accurate – and avoid perpetuating harmful stereotypes.
The George Institute team worked closely with community health workers, physicians, and women living in rural communities to co-create and refine the tool’s algorithms. Clinicians also rated AI-generated answers on accuracy, suitability for community health workers, completeness and risk of bias, which helped improve the chatbot’s responses.
The chatbot demonstrates the potential of AI in building health care worker capacity and enhancing health education in resource-limited settings – while avoiding bias and promoting women’s rights.
Gender-sensitive AI development could similarly improve countless other medical technologies that rely on data diversity and integrity for accuracy: for example, tailoring personalized treatments; predicting treatment responses; performing robot-assisted surgeries; monitoring patients remotely; delivering virtual health care; and accelerating drug discovery.
Initiatives to advance sex and gender equity in health care have also begun to emerge in recent years. These include the newly launched Australian Center for Sex and Gender Equity in Health and Medicine and the UK’s Medical Science Sex and Gender Equity (MESSAGE) project.
These programs are actively advocating for the regular consideration of sex and gender in everything from discovery to translational research, including AI applications, to ensure scientific rigor as a strong foundation for advancing health and medical care.
AI is the future of health care, and we cannot afford to repeat past mistakes of health disparity by ignoring sex and gender. It’s time to program AI to chart a new course toward an ethical destination.
(Disclaimer: The information provided in this article is provided for educational purposes and should not be taken as medical advice.)
(Authors: Dr Sue Haupt is a Senior Research Fellow in the Center for Sex and Gender Equity in Health and Medicine at The George Institute for Global Health at UNSW, an Honorary Senior Research Fellow at Deakin University, and at the Sir Peter MacCallum Department of Oncology at the University of Melbourne. Professor Bronwyn Graham is Director of the Center for Sex and Gender Equity in Health and Medicine at The George Institute for Global Health and Professor in the School of Psychology, UNSW. Professor Jane Hirst is Program Director of Women’s Health at The George Institute for Global Health, School of Public Health, Imperial College London.)
(Originally published by 360info under Creative Commons)