Abstract
In 2016, advancements in AI technology generated much excitement, and it was
recognized that AI systems carry significant potential for image-driven medical fields
such as radiology and pathology. The promise of AI for these medical domains seemed
so great that some academics and computer scientists started to voice the expectation of
replacing healthcare professionals with advanced AI. Geoffrey Hinton even claimed that
radiologists should not be trained anymore, believing that AI would become better at
reading X-rays, CT scans, and MRIs than radiologists in five years and would make human
experts obsolete.
Nevertheless, radiologists and pathologists themselves generally seemed less inclined
to join the hype, remaining cautiously optimistic about AI's promise. Their caution
seemed justified, as, in 2020, four years after Hinton’s prediction and when this PhD
research started, there was still little AI involvement within diagnostic practice, and more
attention was being paid to the safety and desirability of AI technologies. AI faced several
technical challenges, such as application scalability and dataset bias. More fundamentally,
it became clear that social and ethical concerns also challenged the use of AI in medicine.
For instance, medical practitioners’ caution towards AI’s potential effects pointed
to the necessity of determining how AI can and should impact medical expertise in image-
driven medicine. In their interaction with AI, medical professionals must renegotiate
their position and consider what expertise they are willing to outsource to AI systems in
decision-making processes and which (new) competencies and tasks should depend on
human expertise alone. The outcome of these negotiations will likely define the roles and
responsibilities AI may take up in image-driven medicine, requiring proactive scrutiny and
reflection.
This PhD thesis addresses the lack of ethical clarity about AI’s potential and desired impact
on medical expertise in image-driven medicine. The central aim
of this PhD thesis is twofold: to identify how AI relates to human expertise in image-driven
medicine and to ethically evaluate how a desirable impact on human expertise can be
achieved. The main aim is divided into four research questions: (I) What insights can be
gained from analyzing medical expertise, especially in image-driven medicine, that are
relevant to the implementation of AI? (II) What are the expectations of and experiences
with AI in image-driven medicine? (III) How can the desirability of AI’s impact on expertise
in image-driven medicine be evaluated? (IV) What methodology should ground (future)
ethical reflection on using AI in image-driven medicine? These four questions correspond
with the four parts of this thesis.
Original language | English
---|---
Award date | 7 Jan 2025
Print ISBNs | 978-94-6473-663-2
Publication status | Published - 7 Jan 2025
Keywords
- Medical AI
- digitalization
- ethics of biomedical technology
- ethics and epistemology of AI
- image-driven medicine
- expertise
- ethnographic methods