Abstract
Aim: Expert radiologists are excellent image interpreters. Unfortunately, image interpretation errors are frequent even among experienced radiologists, and little is known about which factors lead to expertise. Increasing assessment quality can improve radiological performance. Progress tests can monitor expertise development over time and identify factors related to the development of radiology expertise, which in turn can help in designing successful training programs. Improved tests can detect diagnostic errors and direct training. The aims of this thesis were to contribute to the development of high-quality assessment of radiological expertise and to add to our knowledge of its development.

Methods: Several studies were conducted to evaluate the validity of tests of knowledge and image interpretation skill in radiology, using Messick's model of construct validity. In one study, the results of nine administrations of the Dutch radiology progress test (DRPT), from 2005 to 2009, were analysed. In a second study, the effect of a don't-know option (DKO) and formula scoring (i.e. penalizing incorrect answers) on the validity of progress test results was investigated during a DRPT administration. Two other studies investigated the effect of increased test authenticity on validity. This test model included volumetric radiological images, such as CT scans, which can be stack-viewed and digitally manipulated in a way that is representative of clinical practice. Indications of reliability, difficulty, authenticity and the external aspect of construct validity of the volumetric image tests were obtained by analysing test scores and questionnaire responses of participants. In addition, scores were correlated with scores on human cadaver anatomy tests. Another study investigated test items on different components of the image interpretation process. To gain insight into the development of radiological expertise, DRPT results between 2005 and 2010 were modelled and predictors of development were investigated.

Results: DRPT scores were found to be reliable and valid measures of the development of radiological knowledge and image interpretation skill. Scores on image questions were higher than on theoretical knowledge questions. Participants with less risk-taking test behaviour were disadvantaged by progress tests with a DKO, which threatened construct validity despite the higher reliability obtained with a DKO. Volumetric image questions improved test quality on all investigated aspects of validity compared with traditional 2D image questions. Questions assessing components of image interpretation contributed to test quality by uncovering errors and partial knowledge. Radiological expertise development shows a rapid initial increase, levels off towards the end of training, and is also affected by the training hospital.

Discussion: Progress tests are reliable and valid tools to monitor radiological expertise development, but may need to include items on volumetric image interpretation. The use of a DKO introduces a bias and is not recommended: the reliability benefit does not outweigh the validity threat. Our findings call for further studies, e.g. focussing on different expertise levels and evaluating additional aspects of test quality. In-depth analysis of training environment variables will help to identify success factors for expertise development in radiology training.
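For reference, formula scoring typically corrects for guessing by subtracting a fraction of the incorrect answers from the number of correct answers. A standard formulation is shown below as an illustration only; the abstract does not specify the exact scoring rule used in the DRPT.

$$S = R - \frac{W}{k - 1}$$

Here $R$ is the number of correct answers, $W$ the number of incorrect answers (items answered with the DKO count as neither), and $k$ the number of answer options per item. Under this rule, pure random guessing has an expected score of zero, which is why omission behaviour (and thus risk-taking disposition) can affect scores.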
| Original language | English |
| --- | --- |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 24 Mar 2016 |
| Publisher | |
| Print ISBNs | 978-94-6169-829-2 |
| Publication status | Published - 24 Mar 2016 |
Keywords
- Assessment
- Expertise
- Radiology
- Image interpretation
- Test validation
- Progress testing
- Education