TY - JOUR
T1 - Towards responsible surveillance in preventive health data-AI research
AU - Muller, Sam H.A.
AU - van Delden, Johannes J.M.
AU - van Thiel, Ghislaine J.M.W.
AU - Hypermarker Consortium
N1 - Publisher Copyright:
© 2025 Muller et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
PY - 2025/12/29
Y1 - 2025/12/29
N2 - The integration of artificial intelligence (AI) into health data research promises to transform precision medicine, especially by managing complex and chronic conditions like hypertension through decision support. Yet health AI also furthers surveillance, with serious ethical and social impact. Nevertheless, surveillance in health, in particular data-AI research and innovation, is understudied. This paper provides a conceptual analysis of health data-AI surveillance using the Hypermarker research project as a case study. We trace the evolution of surveillance within medicine, public health, data-driven research, and the proliferation of digital health technologies, before examining how the development of AI technologies amplifies and transforms these existing practices. We analyse health data-AI surveillance’s implications of pervasiveness and unobtrusiveness, hypercollection and function creep, hypervisibility and profiling, informational power, and the formation of a surveillant assemblage, followed by an assessment of the safeguards and measures implemented by the Hypermarker project. Our analysis exposes several key challenges for responsible surveillance practices in health data-AI research: strengthening trustworthiness through fairness and equity, ensuring accountability through transparency, and fostering public control and oversight. To this end, we recommend advancing responsible governance by implementing arrangements such as community advisory panels, independent review boards and oversight bodies, data-AI justice frameworks and dialogues, transparency dashboards and public AI portals, stewardship committees, accountability assemblies, and open oversight cycles.
UR - https://www.scopus.com/pages/publications/105026725689
U2 - 10.1371/journal.pdig.0001146
DO - 10.1371/journal.pdig.0001146
M3 - Article
AN - SCOPUS:105026725689
SN - 2767-3170
VL - 4
JO - PLOS Digital Health
JF - PLOS Digital Health
IS - 12
M1 - e0001146
ER -