Prioritisation criteria for the selection of new diagnostic technologies for evaluation

Abstract

Background

Currently there is no framework for those involved in the identification, evaluation and prioritisation of new diagnostic technologies. We therefore aimed to develop prioritisation criteria for the assessment of new diagnostic technologies by gaining international consensus not only on which criteria should be used, but also on their relative importance.

Methods

A two-round Delphi process was used to generate consensus amongst an international panel of twenty-six experts on priority criteria for diagnostic health technology assessment. Participants represented a range of health care and related professions, including government, industry, health services and academia.

Results

Based on the responses to the first questionnaire, 18 criteria were placed into three categories: high, intermediate and moderate priority. For 16 of the 18 criteria, agreement with this categorisation was ≥ 70% (10 had agreement ≥ 80%). A further questionnaire and a panel discussion reduced the list to 16 criteria in two categories: seven were classified as high priority and nine as intermediate priority.

Conclusions

This study proposes an objective structure of prioritisation criteria, based on an expert consensus process, for use when assessing new diagnostic technologies. The value of these criteria lies in ensuring that no single criterion is used as the decisive driver for the prioritisation of new diagnostic technologies for adoption in healthcare settings. Future studies should be directed at establishing the value of these prioritisation criteria across a range of healthcare settings.

Background

A correct diagnosis is central to guiding the choice of treatment, monitoring the effects of that treatment, and determining prognosis. Diagnosis involves clinical skills (e.g. physical examination) and techniques to measure physiological parameters (e.g. blood pressure), as well as biochemical, haematological, other pathological and radiological investigations. New diagnostic technologies are continually being developed and marketed, while technologies currently used in hospital or traditional laboratory settings are increasingly being repackaged as point of care devices. However, improvements in diagnostic technologies do not inevitably translate into benefits for patient care [1]. For example, a new test may not have a clearly defined role in an existing diagnostic pathway, or may simply add to diagnostic uncertainty. Unlike randomised trials of interventions, which have a control arm, most studies of diagnostic accuracy do not compare the outcome of using the new test with that of existing tests [2].

In order for health care purchasers and providers to assess the importance and role of a new diagnostic technology in the diagnostic pathway, it is vital to have a process that can identify which technologies require more detailed or formal assessment, such as technology assessments or evidence-based summary reports. Simply using the efficacy data of a new diagnostic technology derived from a clinical study to determine adoption in the healthcare setting is inadequate [3]. This approach does not take into account all the factors influencing adoption of a test, such as the clinical setting, disease prevalence and proposed use.

Prioritising health technologies for further assessment, particularly for therapeutic interventions, is well established among international health technology assessment (HTA) bodies. Priority setting has involved both quantitative methods or scoring systems [4] and consensus guidelines. Quantitative models include the method developed by the Institute of Medicine in the USA to calculate priority scores based on criterion weights, the scores proposed by the Committee on Priorities for Assessment and Reassessment of Health Care Technologies [4], and the Technology Assessment Priority-Setting System (TAPSS) developed by the Council on Health Care Technologies [5]. In Europe, the EEC-funded EUR-ASSESS project for the coordination of HTA activities produced a set of guidelines for the priority setting of HTA projects [6]. Indeed, a recent systematic review identified 12 different priority setting frameworks used among 11 international HTA agencies, with a total of 59 different priority setting criteria [7]. Although the existing frameworks are intended to apply to all technologies, there is a paucity of guidance on methods for prioritising diagnostic technologies in particular.

There are several reasons why diagnostic technologies require prioritisation criteria distinct from existing frameworks. Diagnosis usually involves a pathway in which a new technology may play a variety of roles, such as replacing an existing test, triaging, or acting as an add-on test. In addition, diagnostic accuracy may vary widely between clinical settings and populations owing to variations in the prevalence and spectrum of disease. Key issues in the evaluation of new diagnostic technologies have been proposed previously; these fall into three broad domains: those related to the disease or target condition, those related to the new diagnostic technology itself, and those related to the impact of the diagnostic technology [8]. To address this important gap in the prioritisation of diagnostic technologies, we aimed to build on these criteria by refining them and including additional criteria. We also aimed to assess the criteria systematically by developing international consensus not only on which criteria should be used to prioritise diagnostic technologies, but also on their relative importance.

Methods

We used the Delphi method, an objective process that gathers consensus opinion from a panel of experts through an iterative questionnaire process interspersed with controlled opinion feedback [9]. Panels generally involve 10 to 50 experts, who are anonymous in that other panel members do not know their identity at the time of data collection. The Delphi method has been used extensively in developing criteria frameworks [10, 11]. No ethical approval was required for this study.

We identified an international group of experts as follows: (1) membership of early awareness and alert networks (e.g. the International Information Network on New and Emerging Health Technologies [EuroScan]), (2) recommendations from researchers in the field of health technology assessment, (3) contacts in diagnostic technology industries, (4) contacts in government bodies tasked with health technology assessment, and (5) recommendations from researchers and providers in the field of primary health care and diagnosis. Experts, who came from a variety of health care sectors and related professional disciplines (Table 1), were contacted by email and invited to contribute to the study using either email or a web-based format.

Table 1 The Panel: Sectors and Main Professional Roles

The first questionnaire used prioritisation criteria developed by Summerton [8], which were compiled following preliminary discussions amongst a small group of experts. It consisted of 18 criteria, grouped into those pertaining to (1) the disease or target condition, (2) the new diagnostic technology, and (3) the impact of the diagnostic technology (Table 2). Participants were asked to rate the importance of, or their level of agreement with, each criterion on a seven-point Likert scale, where 1 indicated low and 7 high importance or agreement. Open comments or clarifications on each item were solicited, as were general comments and suggestions regarding items overlooked in the questionnaire. In total, 26 respondents completed the first questionnaire.

Table 2 Criteria appraised by experts: Round 1 Questionnaire

Responses to the first questionnaire were categorised according to the proportion of respondents who ranked a criterion as 6 or 7 on the Likert scale. Based on this analysis, three priority groups were created: (1) high priority, at least 70% of respondents ranked the criterion 6 or 7; (2) intermediate priority, 50-69% of respondents; and (3) moderate priority, less than 50% of respondents.
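
To illustrate, this grouping rule can be expressed in a few lines of Python. This is a minimal sketch, not the analysis actually used in the study: the criterion labels and ratings below are hypothetical, and only the 70% and 50% thresholds come from the description above.

    from statistics import mean

    # Hypothetical 7-point Likert ratings from the panel, keyed by an
    # abbreviated criterion label (values are illustrative only).
    ratings = {
        "impact on morbidity/mortality": [7, 6, 7, 5, 6, 7, 6, 7, 6, 6],
        "relevance to health policy": [2, 4, 6, 1, 3, 7, 5, 2, 4, 3],
    }

    def priority_group(scores, high=0.70, intermediate=0.50):
        """Classify a criterion by the share of respondents rating it 6 or 7."""
        share = sum(s >= 6 for s in scores) / len(scores)
        if share >= high:
            return "high"
        if share >= intermediate:
            return "intermediate"
        return "moderate"

    for criterion, scores in ratings.items():
        print(f"{criterion}: {priority_group(scores)} (mean {mean(scores):.1f})")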

A second questionnaire was then developed, placing the criteria in these three priority groups. The experts were informed of the method used to group the criteria and were asked to indicate whether they agreed or disagreed with the placement of each criterion in its respective priority group. If they disagreed, they were asked to suggest which category the criterion should be placed in (high, intermediate or moderate) and to provide comments clarifying their opinion. General comments were also invited. The final questionnaire responses were reviewed at a focus panel meeting of a smaller group of experts.

Results

The 26 respondents represented a broad cross-section of the health care sector: government (10%), industry (24%), academia (28%) and health services (38%), with diverse professional roles (Table 1). In some instances participants had more than one professional role and/or worked in different sectors (e.g. health care services and academia).

Round 1 Questionnaire

For 5 of the 18 criteria in the first questionnaire (Table 2; criteria 1, 3, 9, 15, 16), there was a high level of consensus on their importance for prioritisation. For example, 88% of respondents assigned a high importance to "The potential that the technology will have an impact on morbidity and/or mortality of the disease" and 85% to "The new technology reduces the number of people falsely diagnosed with the disease or target condition". Although 73% of respondents ranked "The disease or target condition to which the diagnostic technology will be applied can be clearly defined" as a 6 or 7 on the scale of importance, 13%, primarily from the academic sector, considered this criterion unimportant, ranking it as 1 or 2 on the Likert scale.

Seven criteria (Table 2; criteria 2, 4, 6, 8, 14, 17, 18) showed substantial variation in the ranking of their importance. For example, 46% of respondents assigned a low level of importance to "The relevance of the disease or target condition to current regional or national health policies and/or priorities", with 23% scoring it 1 or 2, whereas 23% rated this criterion as important, scoring it 6 or 7. Respondents ranking this criterion as being of low importance (1 or 2 on the Likert scale) came from the academic, industry and health services sectors, whereas respondents from the government sector gave it more importance (4 to 6 on the Likert scale). Reasons given for rating this criterion low included: regional and national policies are sometimes out of date, priorities can be skewed by political pressure, and what matters most is how much morbidity and mortality can be avoided rather than whether something is a government priority. However, some respondents from industry and academia ranked this criterion as a 7. Regarding "The prevalence or incidence of the disease or target condition", respondents commented that uncommon conditions could still be important to consider for research. "It would be feasible to change current practice to incorporate this technology (e.g. additional training, infrastructure, or quality control)" attracted the most rankings of 5, indicating that this criterion was deemed somewhat, but not highly, important.

"The potential that the technology will have an impact on morbidity and/or mortality of the disease" scored the highest mean Likert value of 6.5 and the criterion that "The relevance of the disease or target condition to current regional or national health policies and/or priorities" had the lowest mean value of 4.2. Based on the responses to the round 1 questionnaire, we categorised the criteria into high (70% or more respondents ranked criteria as 6 or 7 on the scale of importance), intermediate (50-69% ranked criteria as 6 or 7 on the scale of importance) and moderate priorities (less than 50% ranked criteria as 6 or 7 on the scale of importance) for the second round questionnaire.

Round 2 Questionnaire

A total of 24 (92%) of those who participated in the first questionnaire responded to the second questionnaire. For 16 of the 18 criteria, agreement with the categorisation into the high, intermediate and moderate groups was high at ≥ 70% (10 had agreement ≥ 80%). There was low agreement for two criteria: "The prevalence or incidence of the disease or target condition" (54% agreement) and "There is variation in treatment or patient outcomes resulting from current diagnostic variability" (67% agreement). For the latter, a number of respondents commented that it should be moved from the intermediate to the high priority group, since practice variation is an important driver of prioritisation and variation in outcomes is a clear sign that the current diagnostic process is not fit for purpose. For "The prevalence or incidence of a disease or target condition", several experts disagreed with its placement in the intermediate category and suggested it should have high priority, mainly on the grounds that technologies affecting highly prevalent diseases would have the greatest impact within the health care system. Other experts, however, felt this criterion should be moved to moderate priority, as they considered it would not be useful in itself if the technology did not meet any of the other criteria.
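
To make the agreement measure concrete, the sketch below recomputes the two reported agreement levels from hypothetical vote tallies. The counts are chosen only to reproduce the percentages quoted above (13/24 ≈ 54%, 16/24 ≈ 67%), and treating 70% as the agreement threshold mirrors the round 1 cut-off; it is our assumption, not a rule stated in the study.

    # Hypothetical round 2 votes: True = agreed with the criterion's placement.
    # Counts are chosen to match the reported percentages, not real data.
    round2_votes = {
        "prevalence or incidence of disease": [True] * 13 + [False] * 11,
        "variation in treatment or outcomes": [True] * 16 + [False] * 8,
    }

    for criterion, votes in round2_votes.items():
        agreement = sum(votes) / len(votes)
        status = "low agreement - revisit" if agreement < 0.70 else "retained"
        print(f"{criterion}: {agreement:.0%} ({status})")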

Several experts also suggested moving the criteria placed into the "moderate priority" category into either the high or intermediate priority categories. Based on the subsequent panel discussion, there did not appear to be a clear reason to differentiate between intermediate and moderate priority categories, so these were combined. Two criteria were removed from the final list as several respondents commented that they overlapped with other criteria. These were the criteria relating to cost-effectiveness and workload, which were already addressed by "The new technology would enhance diagnostic efficiency or be more cost effective than the current diagnostic approach". The latter criterion was then moved to high priority, in line with expert suggestions. The final list consisted of 16 criteria, seven of which were considered high priority and nine of intermediate priority (Table 3).

Table 3 Proposed Criteria for the Prioritisation of Diagnostic Technologies

Discussion

In this study we have developed prioritisation criteria for the evaluation of new diagnostic technologies. With the aid of a two-round Delphi consensus method, which sought the opinions of 26 experts, 16 criteria were agreed, of which seven were classified as high priority and nine as intermediate priority. To our knowledge this is the first study to address prioritisation criteria for diagnostic technologies using the Delphi method.

This study was designed to canvass opinions from a range of experts from industry, academia, government and health services. Such a process aims to obtain a range of opinions from diverse perspectives and, unlike quantitative surveys, does not rely on a large sample to determine outputs with confidence. Although our list of experts is not exhaustive, the group provided a wide range of views and represented several different sectors. The prioritisation criteria presented are therefore not dependent on the views of professionals from one specific constituency or working in a particular health care system or country. However, we acknowledge that the generalisability of the findings of the Delphi consensus approach may be limited by the relatively small number of participants, who may have specific views or agendas. Further studies to verify and refine our results in different or extended groups of participants should be considered.

A wide range of quantitative and qualitative prioritisation criteria for health technologies have been published by committees and organisations involved in health care [5, 7]. Generally the criteria lists share three main elements: (1) clinical impact, (2) economic impact and (3) budget impact. Wilson et al developed a weighted benefit score, coupled with cost, for prioritising health technologies at the primary care trust level [12]. The authors applied this score to six proposed services and found it practical; ultimately, however, the primary care trust was not able to use the results of the score as the sole criterion for deciding which services to fund. Indeed, two of the prioritised services did not receive funding, indicating that the criteria were inadequate. A systematic review of 12 priority-setting frameworks from 11 agencies in 10 countries highlighted differences across HTA agencies in the categorisation, scoring and weighting of criteria [7]. The review showed that quantitative rating methods and cost-benefit considerations for priority setting were seldom used.

Although criteria have been developed for priority setting of new health technologies for early assessment, these are generally applied to novel therapeutic agents and interventions [13]. Selection criteria also differ significantly depending on the early awareness programme, and prioritisation is frequently implicit and undocumented [13, 14]. The requirements for diagnostic technologies are somewhat different. For example, while the prevalence or incidence of a disease is a primary criterion listed in many of the existing prioritisation frameworks, the consensus emerging from our study indicates that, in terms of diagnostic technologies, this criterion carries less weight: a test for a relatively uncommon disease (e.g. pancreatic cancer) may still be very important. Diagnostic technologies may also have different outcomes, for example either ruling in or ruling out a disease. Here it emerged that ruling out a disease is of higher priority in diagnosis, although this may not be the case in all clinical settings: in high acuity situations, such as critical care, ruling in a disease may be more important.

Although in some cases there was disagreement amongst the panel regarding the placement of criteria into their respective categories, overall the level of consensus was high. The criteria set out in this study should be relevant to those involved in the identification, evaluation and prioritisation of new diagnostic technologies at national, regional or local levels. Our aim is to provide a framework for the selection of new diagnostic technologies for in-depth assessment or implementation, based on how many of the listed criteria are satisfied, the extent to which they are satisfied, and whether they fall into the high or intermediate priority category. This could be achieved by assessing which criteria are met using a checklist (Table 3), as sketched below. One strategic use of such a checklist, adopted by our Diagnostic Technology Horizon Scanning Centre [15], is to highlight areas where there is a lack of evidence and where further research is required. Different specialities may also apply different weights to the criteria, depending on their priorities. These criteria could be adopted by the 'Evaluation Pathway Programme for Medical Technologies' recommended in the NHS Next Stage Review and currently being established by the UK National Institute for Clinical Excellence, to which some diagnostic technologies will be subject. They would potentially also be applicable to the Agency for Healthcare Research and Quality's (AHRQ) proposed new Horizon Scanning System in the USA [16]. Future studies should be directed at establishing the value of these prioritisation criteria in such settings. An important challenge will be to identify the supporting evidence base required to assess the high priority criteria in particular.
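
A checklist of this kind lends itself to a simple tally. The sketch below is a hypothetical illustration of how the Table 3 criteria might be applied to a candidate technology: the abbreviated labels and the example assessment are ours, not part of the published framework, which lists seven high and nine intermediate priority criteria in full.

    # Abbreviated stand-ins for the Table 3 criteria (the real framework
    # has seven high and nine intermediate priority criteria).
    HIGH_PRIORITY = [
        "impact on morbidity/mortality",
        "reduces false diagnoses",
        "enhances efficiency or cost effectiveness",
    ]
    INTERMEDIATE_PRIORITY = [
        "feasible to change current practice",
        "addresses variation in current diagnosis",
    ]

    def checklist_summary(criteria_met):
        """Tally how many criteria in each priority group a technology meets."""
        high = sum(c in criteria_met for c in HIGH_PRIORITY)
        inter = sum(c in criteria_met for c in INTERMEDIATE_PRIORITY)
        return (f"{high}/{len(HIGH_PRIORITY)} high priority, "
                f"{inter}/{len(INTERMEDIATE_PRIORITY)} intermediate priority criteria met")

    # Hypothetical assessment of a candidate point of care test.
    print(checklist_summary({"reduces false diagnoses",
                             "feasible to change current practice"}))

Weighting, as suggested above for different specialities, could be added by attaching a numeric weight to each criterion rather than using a simple membership test.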

Conclusions

The prioritisation criteria presented in this study provide an objective structure for the assessment of new and emerging diagnostic technologies. Their value lies in ensuring that no single criterion is used as the decisive driver for the prioritisation of new diagnostic technologies for adoption in healthcare settings, irrespective of the context. We encourage diagnostic technology programmes to evaluate the value of these criteria further.

Appendix 1: List of expert participants

Anne Mackie, National Screening Committee, UK

Anthony Harnden, Department of Primary Health Care, University of Oxford, UK

Anthony James, NHS Institute for Innovation and Improvement, UK

Birgitte Bonnevie, Danish Centre for Evaluation and Health Technology Assessment, Denmark

Brian Shine, Department of Clinical Biochemistry, University of Oxford, UK

Carl Heneghan, General Practice and Department of Primary Health Care, University of Oxford, UK

Christopher P Price, Department of Primary Health Care, University of Oxford, UK

Danielle Freedman, Royal College of Pathologists, UK

David Horne, Inverness Medical, UK

David Mant, General Practice and Department of Primary Health Care, University of Oxford, UK

Doris-Ann Williams, British In Vitro Diagnostics Association, UK

George Zajicek, Axis-Shield Diagnostics Ltd, UK

Hanns Christian Müller, Roche Diagnostics Ltd, Switzerland

Iñaki Gutiérrez Ibarluzea, Basque Office for Health Technology Assessment, Spain

Jag Grewal, Beckman Coulter United, UK

Janet Hiller, Adelaide Health Technology Assessment, Australia

Jeremy Moss, Roche Diagnostics Ltd, UK

Johan Wallin, Swedish Council on Technology Assessment in Health Care, Sweden

John Clarkson, Atlas Genetics Ltd, UK

Matthew Helbert, Department of Immunology, Directorate of Laboratory Medicine, Manchester, UK

Paul Glasziou, Centre for Evidence Based Medicine, University of Oxford, UK

Philip Wood, Consultant to the Diagnostics Industry

Richard Mayon-White, Department of Primary Health Care and Public Health, University of Oxford, UK

Susannah Fleming, Department of Engineering Science, University of Oxford, UK

Tammy Clifford, Canadian Agency for Drugs and Technologies in Health, Canada

Thierry Buclin, Department of Medicine, University Hospital of Lausanne, Switzerland

References

  1. Jarvik JG: Study design for the new millennium: changing how we perform research and practice medicine. Radiology. 2002, 222 (3): 593-594. 10.1148/radiol.2223011621.

  2. Bossuyt PM, Irwig L, Craig J, Glasziou P: Comparative accuracy: assessing new tests against existing diagnostic pathways. BMJ. 2006, 332 (7549): 1089-1092. 10.1136/bmj.332.7549.1089.

  3. Whiteley G: Bringing diagnostic technologies to the clinical laboratory: Rigor, regulation, and reality. Proteomics Clin Appl. 2008, 2 (10-11): 1378-1385. 10.1002/prca.200780170.

  4. Donaldson MS, Sox HC (Eds): Setting priorities for health technology assessment: a model process. 1992, Washington D.C.: National Academy Press.

  5. Eddy DM: Selecting technologies for assessment. Int J Technol Assess Health Care. 1989, 5: 485-501. 10.1017/S0266462300008424.

  6. Henshall C, Oortwijn W, Stevens A, Granados A, Banta D: Priority setting for health technology assessment. Theoretical considerations and practical approaches. Priority setting Subgroup of the EUR-ASSESS Project. Int J Technol Assess Health Care. 1997, 13 (2): 144-185. 10.1017/S0266462300010357.

  7. Noorani HZ, Husereau DR, Boudreau R, Skidmore B: Priority setting for health technology assessments: a systematic review of current practical approaches. Int J Technol Assess Health Care. 2007, 23 (3): 310-315. 10.1017/S026646230707050X.

  8. Summerton N: Evaluating diagnostic tests: Selecting diagnostic tests for evaluation. BMJ. 2008, 336 (7646): 683. 10.1136/bmj.39525.550764.3A.

  9. Adler M, Ziglio E: Gazing into the oracle: Delphi method and its application to social policy and public health. 1996, Bristol, PA: Jessica Kingsley Publishers.

  10. Elwyn G, O'Connor A, Stacey D, Volk R, Edwards A, Coulter A, Thomson R, Barratt A, Barry M, Bernstein S, et al: Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. BMJ. 2006, 333 (7565): 417. 10.1136/bmj.38926.629329.AE.

  11. Verhagen AP, de Vet HC, de Bie RA, Kessels AG, Boers M, Bouter LM, Knipschild PG: The Delphi list: a criteria list for quality assessment of randomized clinical trials for conducting systematic reviews developed by Delphi consensus. J Clin Epidemiol. 1998, 51 (12): 1235-1241. 10.1016/S0895-4356(98)00131-0.

  12. Wilson E, Sussex J, Macleod C, Fordham R: Prioritizing health technologies in a Primary Care Trust. J Health Serv Res Policy. 2007, 12 (2): 80-85. 10.1258/135581907780279495.

  13. Douw K, Vondeling H: Selection of new health technologies for assessment aimed at informing decision making: A survey among horizon scanning systems. Int J Technol Assess Health Care. 2006, 22 (2): 177-183. 10.1017/S0266462306050999.

  14. Wild C, Langer T: Emerging health technologies: informing and supporting health policy early. Health Policy. 2008, 87 (2): 160-171. 10.1016/j.healthpol.2008.01.002.

  15. Oxford Diagnostic Horizon Scanning Centre. [http://www.madox.org]

  16. US Department of Health and Human Services Agency for Healthcare Research and Quality. [http://www.ahrq.gov/]


Acknowledgements

The authors would like to thank Richard Stevens for helpful discussions. This study was funded by the National Institute for Health Research, UK programme grant 'Development and implementation of new diagnostic processes and technologies in primary care'.

Author information

Corresponding author

Correspondence to Carl Heneghan.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors conceived of the study, participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Plüddemann, A., Heneghan, C., Thompson, M. et al. Prioritisation criteria for the selection of new diagnostic technologies for evaluation. BMC Health Serv Res 10, 109 (2010). https://doi.org/10.1186/1472-6963-10-109
