TY - JOUR
T1 - Bias Discovery in Machine Learning Models for Mental Health
AU - Mosteiro, Pablo
AU - Kuiper, Jesse
AU - Masthoff, Judith
AU - Scheepers, Floortje
AU - Spruit, Marco
N1 - Funding Information:
Funding: This research was funded by the COVIDA project, which in turn is funded by the Strategic Alliance TU/E, WUR, UU and UMC Utrecht.
Publisher Copyright:
© 2022 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2022/5
Y1 - 2022/5
N2 - Fairness and bias are crucial concepts in artificial intelligence, yet they are relatively ignored in machine learning applications in clinical psychiatry. We computed fairness metrics and present bias mitigation strategies using a model trained on clinical mental health data. We collected structured data related to the admission, diagnosis, and treatment of patients in the psychiatry department of the University Medical Center Utrecht. We trained a machine learning model to predict future administrations of benzodiazepines on the basis of past data. We found that gender plays an unexpected role in the predictions—this constitutes bias. Using the AI Fairness 360 package, we implemented reweighing and discrimination-aware regularization as bias mitigation strategies, and we explored their implications for model performance. This is the first application of bias exploration and mitigation in a machine learning model trained on real clinical psychiatry data.
KW - artificial intelligence
KW - bias
KW - fairness
KW - health
KW - machine learning
KW - mental health
KW - psychiatry
UR - http://www.scopus.com/inward/record.url?scp=85130635723&partnerID=8YFLogxK
U2 - 10.3390/info13050237
DO - 10.3390/info13050237
M3 - Article
AN - SCOPUS:85130635723
VL - 13
SP - 1
EP - 15
JO - Information (Switzerland)
JF - Information (Switzerland)
IS - 5
M1 - 237
ER -