TY - JOUR
T1 - Topic Modeling for Interpretable Text Classification From EHRs
AU - Rijcken, Emil
AU - Kaymak, Uzay
AU - Scheepers, Floortje
AU - Mosteiro, Pablo
AU - Zervanou, Kalliopi
AU - Spruit, Marco
N1 - Funding Information:
We acknowledge the COmputing VIsits DAta (COVIDA) funding provided by the strategic alliance of TU/e, WUR, UU, and UMC Utrecht.
Publisher Copyright:
Copyright © 2022 Rijcken, Kaymak, Scheepers, Mosteiro, Zervanou and Spruit.
PY - 2022/5
Y1 - 2022/5
N2 - Clinical notes in electronic health records offer many possibilities for predictive text classification tasks. In the clinical domain, the interpretability of these classification models is critical for decision making. Using topic models for the text classification of electronic health records allows topics to serve as features, making the classification more interpretable. However, selecting the most effective topic model is not trivial. In this work, we propose considerations for selecting a suitable topic model based on both predictive performance and an interpretability measure for text classification. We compare 17 different topic models in terms of both interpretability and predictive performance on an inpatient violence prediction task using clinical notes. We find no correlation between interpretability and predictive performance. In addition, our results show that although no model outperforms the others on both variables, our proposed fuzzy topic modeling algorithm (FLSA-W) performs best in most settings for interpretability, whereas two state-of-the-art methods (ProdLDA and LSI) achieve the best predictive performance.
AB - Clinical notes in electronic health records offer many possibilities for predictive text classification tasks. In the clinical domain, the interpretability of these classification models is critical for decision making. Using topic models for the text classification of electronic health records allows topics to serve as features, making the classification more interpretable. However, selecting the most effective topic model is not trivial. In this work, we propose considerations for selecting a suitable topic model based on both predictive performance and an interpretability measure for text classification. We compare 17 different topic models in terms of both interpretability and predictive performance on an inpatient violence prediction task using clinical notes. We find no correlation between interpretability and predictive performance. In addition, our results show that although no model outperforms the others on both variables, our proposed fuzzy topic modeling algorithm (FLSA-W) performs best in most settings for interpretability, whereas two state-of-the-art methods (ProdLDA and LSI) achieve the best predictive performance.
KW - electronic health records
KW - explainability
KW - information extraction
KW - interpretability
KW - natural language processing
KW - psychiatry
KW - text classification
KW - topic modeling
UR - http://www.scopus.com/inward/record.url?scp=85130549041&partnerID=8YFLogxK
U2 - 10.3389/fdata.2022.846930
DO - 10.3389/fdata.2022.846930
M3 - Article
C2 - 35600326
SN - 2624-909X
VL - 5
SP - 1
EP - 11
JO - Frontiers in Big Data
JF - Frontiers in Big Data
M1 - 846930
ER -