
Developing quality indicators for cross-sectoral psycho-oncology in Germany: combining the RAND/UCLA appropriateness method with a Delphi technique

Abstract

Background

Internationally, the need for appropriately structured, high-quality care in psycho-oncology is increasingly recognized, and quality-oriented care is to be established. Quality indicators are becoming increasingly important for the systematic development and improvement of the quality of care. The aim of this study was to develop a set of quality indicators for a new form of care, a cross-sectoral psycho-oncological care program in the German health care system.

Methods

The widely established RAND/UCLA Appropriateness Method was combined with a modified Delphi technique. A systematic literature review was conducted to identify existing indicators. All identified indicators were evaluated and rated in a two-round Delphi process. Expert panels embedded in the Delphi process assessed the indicators in terms of relevance, data availability and feasibility. An indicator was accepted by consensus if at least 75% of the ratings corresponded to category 4 or 5 on a five-point Likert scale.

Results

Of the 88 potential indicators derived from a systematic literature review and other sources, 29 were deemed relevant in the first Delphi round. After the first expert panel, 28 of the dissented indicators were re-rated and added. Of these 57 indicators, 45 were found to be feasible in terms of data availability in the second expert panel round. In total, 22 indicators were transferred into a quality report, implemented and tested within the care networks for participatory quality improvement. In the second Delphi round, the embedded indicators were tested for their practicability. The final set includes 16 indicators that were operationalized in care practice and rated by the expert panel as relevant, comprehensible, and suitable for care practice.

Conclusion

The developed set of quality indicators has proven in practical testing to be a valid quality assurance tool for internal and external quality management. The study findings could contribute to traceable high quality in cross-sectoral psycho-oncology by providing a valid and comprehensive set of quality indicators.

Trial registration

The sub-project “Entwicklung eines Qualitätsmanagementsystems in der integrierten, sektorenübergreifenden Psychoonkologie” (development of a quality management system in integrated, cross-sectoral psycho-oncology), part of the main study “integrierte, sektorenübergreifende Psychoonkologie (isPO)”, was registered in the German Clinical Trials Register (DRKS) (DRKS-ID: DRKS00021515) on 3 September 2020. The main project was registered on 30 October 2018 (DRKS-ID: DRKS00015326).


Background

The incidence of cancer is increasing significantly worldwide [1, 2], with nearly 500,000 new cases diagnosed in Germany each year [3]. Cancer patients are affected by emotional distress and often by psychological disorders [4,5,6,7,8,9]. In Germany, the implementation of cross-sectoral psycho-oncological support is considered an important strategy to improve the quality of cancer care for patients of all cancer entities. The German National Cancer Plan (NCP) strongly recommends further developing "oncological care structures and quality assurance” [10, 11]. The implementation of psycho-oncological care structures along with quality management is not only complex and considered an integral part of oncology care [12, 13], but must also meet the demands for needs-based and accessible care, while being subject to legally binding quality assurance terms [14].

Monitoring and improving quality in health care is of crucial importance, even if quality itself is not directly observable and measurable. Therefore, quality-related indicators are employed to make health care measurable [13, 14]. A quality indicator is a quantitative measure that can be used to monitor and assess the quality of governance and management, as well as clinical and support functions that impact patient outcomes in a process of care. Quality indicators do not measure quality directly; rather, they are a performance assessment tool that can draw attention to potential performance issues that may require more intense review within an organization [15]. Quality indicators have notably gained momentum because they systematically point out potential for improvement in a functioning quality management system [16, 17]. Indicators in health care are often applied for quality measurement and improvement (e.g., plan-do-check-act (PDCA) cycles), but also, for example, for comparison with other service providers (e.g., benchmarking), public disclosure (e.g., quality reports), or quality-based remuneration of services (pay for performance), as well as for research purposes [18,19,20,21].

To improve routine care of cancer patients in cancer centres in Germany, an intervention called “new form of care integrated, cross-sectoral psycho-oncology” (nFC-isPO) has been developed and piloted. In Germany, the health care system is divided into an inpatient and an outpatient sector. Treatment and diagnostics conducted during a hospital stay belong to the inpatient sector, whereas all treatment and rehabilitation activities outside of the hospital belong to the outpatient sector. “New forms of care” (nFC) are care models that improve cross-sectoral care, optimise intersectoral interfaces, or overcome the separation of sectors [22,23,24]. In the inpatient sector, psycho-oncological care is often provided in acute hospitals and oncological rehabilitation facilities. Although cancer counselling centres are well established in the outpatient sector, they will not be able to meet the demand in the medium term due to the demographic trends and the associated increase in the number of new cases [3, 25, 26]. A detailed examination of the psycho-oncological care structures in Germany by the Federal Ministry of Health (2018) showed that the degree of coverage of inpatient psycho-oncological care by psycho-oncological services in Germany can vary considerably depending on the sector and region. For example, more than half of the regions in the outpatient sector and about 40% in the inpatient sector have a coverage of less than 50% [27].

The nFC-isPO has bridged the gap from bench to bedside by providing a high quality, translational psycho-oncological care program for cancer patients [22, 28,29,30,31]. To ensure that care is delivered as stipulated, an appropriate and reliable set of quality indicators was needed for comprehensive quality management [22, 32]. The aim of this study was to develop, implement and evaluate a set of suitable indicators to systematically measure, manage, and improve the quality of care for a cross-sectoral psycho-oncological care program for cancer patients in routine care in several cancer centres in Germany.

Methods

Design

A procedure linking the RAND/UCLA Appropriateness Method (RAM) with elements of the Delphi technique was used to develop a set of quality indicators to measure the quality of care regarding structures, processes, and outcomes of a cross-sectoral psycho-oncology care program [33, 34]. This methodology made it possible to combine scientific evidence with expert opinion obtained through a consensus technique. The iterative approach included a systematic literature review, a two-stage anonymous survey (Delphi rounds), a questionnaire-based reassessment of indicators and a face-to-face expert panel discussion (see Fig. 1) [17, 34, 35]. This project was registered in the German Clinical Trials Register (DRKS) (DRKS-ID: DRKS00021515) on 3 September 2020.

Fig. 1 Modified process of developing quality indicators for cross-sectoral psycho-oncology

Systematic literature search and selection of potential quality indicators

In June 2018, a systematic literature search was conducted to identify an initial set of quality indicators and domains of quality of care for cancer patients with emotional distress or mental disorders. Initially, six databases (PubMed, PsycINFO, Livivo, PSYNDEX, SpringerLink, Cochrane Library) were systematically searched for scientific articles. A predefined search strategy was used (see Additional file 1). In addition, bibliographies of relevant secondary publications and grey literature (e.g., reports on quality assurance projects), websites of relevant organizations that have developed or were using quality indicators (e.g., medical societies), and evidence-based guidelines recommending quality indicators were reviewed by hand search. The authors also identified indicators from the four care networks cooperating in the project. Study selection and screening were performed independently by two researchers (LD and CL). Duplicate indicators were removed (see Additional file 2). The identification of potential indicators was done by consensus between the two authors (LD and CL). Subsequently, the results were categorised based on Donabedian’s quality dimensions (structure, process, and outcome quality) [33, 36] and the quality criteria recommended by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) (accessibility, appropriateness, continuity, efficiency, efficacy, patient perspective, safety, timeliness) [37, 38]. A preliminary set of indicators was selected to start the expert consensus and rating process (phase 1).

Selection of survey participants and panel members

In a Delphi study, the selection of the panel is based on the members' knowledge of the particular topic. Therefore, a purposive sampling strategy was used to select the experts [39, 40]. The inclusion criterion was that the panel members had to be involved in the nFC-isPO team, as they had the background information on the development, implementation and testing of the nFC-isPO. The participants needed to be able to assess the project-specific requirements of the nFC-isPO for the development of indicators and for the availability of data. The Delphi rounds involved participants from different fields of health services research and psycho-oncological care who all operated in the care program (e.g., health care professionals, health insurance companies, patient representatives). Although the first and second Delphi rounds (phases 2 and 5) used a closed group of participants, individuals who had not taken part in the first round were allowed to join the second round (e.g., due to staff turnover) [41]. The multidisciplinary expert panel consisted of five nFC-isPO representatives from the fields of psycho-oncology, quality management, health services research and medical statistics.

Rating the indicators

Based on the results of the literature search (phase 1), the survey items for the first Delphi round (phase 2) were developed and set up in the online survey tool “LimeSurvey”, before being tested for functionality and comprehensibility. In order to assess the relevance and comprehensibility of the indicators, two assessment questions were developed for each indicator instead of a single global rating. First, participants were asked to rate the relevance on a verbally anchored five-point Likert scale (5 = relevant, 4 = rather relevant, 3 = partly relevant, 2 = rather not relevant, 1 = not relevant). Relevance was defined as the extent to which the characteristics of the indicator are appropriate for the concept being assessed [34]. Second, the authors asked about the comprehensibility of the indicators, i.e. the clarity of their definition, using a binary question (yes/no). Additional free-text options enabled the participants to comment on the need for changes in definition or to suggest missing indicators based on their professional judgement. This structure allowed the authors to consider specific adjustments when revising and optimizing the indicators in the subsequent process. Phase 2 resulted in an overview of consented and dissented indicators. The results of the Delphi rounds were made available to the participants in the quality circles and the quality workshops.

Based on the results of the first Delphi round, the expert panel re-evaluated the dissented indicators individually with regard to relevant additions, taking special account of the free-text comments using an assessment sheet. The panel members then individually discussed and rated the operability and feasibility (i.e., data availability) of the preliminary indicator set using a short assessment sheet [42]. The face-to-face discussion took place at the “Centrum für Integrierte Onkologie (CIO)” at the University Hospital of Cologne (phase 3).
In phase 4, the indicators assessed as feasible were operationalised and systematically implemented into practice. The testing took place over a period of at least six months in four different health care networks.

The implemented and tested indicators were re-rated in the second Delphi round (phase 5) with regard to their practical suitability for assessing and managing the quality of care in the care program. In addition to the rating on the 5-point Likert scale, the participants had the opportunity to leave comments in a free-text box.

Definition of consensus and statistical analysis

The consensus rule for assessing agreement and disagreement on the indicators in the Delphi process was established a priori. For the descriptive statistical analysis, the authors used the proportion of ratings within a defined range [43]. The threshold for consensus was set at 75% agreement, summed over categories 4 and 5 (agreement) or categories 1 and 2 (disagreement) [44].

  • An indicator reached "moderate consensus" ( +) if at least 75% of all valid responses rated it "relevant (5)" or "rather relevant (4)".

  • An indicator reached "strong consensus" (+ +) if at least 90% of all valid responses rated it "relevant (5)" or "rather relevant (4)".

  • An indicator was evaluated as "moderate rejection" (−) if at least 75% of all valid responses rated it "not relevant (1)" or "rather not relevant (2)".

  • An indicator was evaluated as "strong rejection" (− −) if at least 90% of all valid responses rated it "not relevant (1)" or "rather not relevant (2)".

  • All other indicators, with no consensual group response (neither agreement nor rejection), were classified as dissent.
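The classification rule above can be expressed as a small decision function. The following is an illustrative sketch only, not the authors' actual analysis code; the function name and return labels are hypothetical:

```python
def classify_indicator(ratings):
    """Classify one indicator from its five-point Likert ratings.

    Sketch of the a-priori consensus rule: agreement pools categories
    4 and 5, disagreement pools categories 1 and 2; the thresholds are
    75% (moderate) and 90% (strong) of all valid responses.
    """
    valid = [r for r in ratings if r in (1, 2, 3, 4, 5)]
    if not valid:
        return "no valid responses"
    agree = sum(1 for r in valid if r >= 4) / len(valid)   # categories 4 + 5
    reject = sum(1 for r in valid if r <= 2) / len(valid)  # categories 1 + 2
    if agree >= 0.90:
        return "strong consensus"
    if agree >= 0.75:
        return "moderate consensus"
    if reject >= 0.90:
        return "strong rejection"
    if reject >= 0.75:
        return "moderate rejection"
    return "dissent"  # no group response in either direction
```

For example, an indicator rated 5, 5, 5, 4, 4, 4, 3, 3 yields an agreement proportion of 6/8 = 75% and would be classified as moderate consensus, whereas 1, 1, 2, 3, 5 reaches neither threshold and counts as dissent.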

Results

An overview of the identified and evaluated indicators can be seen in Fig. 2.

Fig. 2 Results of the modified RAM procedure

Participation in the study and characteristics of participants

For the first Delphi round, 49 participants were invited. All 49 took part (100% response rate), and 27 properly completed the questionnaire (55% completion rate). For the second round, the care network teams in particular were asked to share the survey in-house with the relevant individuals in the nFC-isPO. A total of 51 people participated, of whom 35 (68.6%) completed the second survey in full; 6 records (11.8%) were missing. 21 (41.2%) of the second-round participants had also taken part in the first round, while 24 (47.1%) had not. The participants covered a variety of occupational fields related to psycho-oncological care. Table 1 describes the characteristics of the participants.

Table 1 Participant characteristics

Consensus after round 1

Participants reached a strong consensus for 9 of 88 (10.2%) indicators and a moderate consensus for another 20 (22.7%) indicators regarding relevance, i.e. the significance of the quality characteristic captured by the quality indicator for the care system. A total of 29 indicators were thus classified as relevant for the psycho-oncological care program. There was dissent for 58 (67.0%) indicators. No indicator was rejected. Data were missing for one indicator due to a technical error. 10 (11.4%) indicators were fully understandable and clearly defined for all survey participants. The remaining 78 (88.6%) indicators were rated comprehensible and clearly defined by at least 82.4% of the respondents. The results can be seen in Additional file 3.

Results of the expert panels

In total, 28 indicators were added to the set by the authors while rechecking the indicators rated with dissent. In its second meeting, the expert panel met in person under the guidance of a moderator and discussed and evaluated the identified and newly added indicators with regard to data availability. A total of 57 indicators (29 strongly or moderately endorsed plus 28 additions) were evaluated. In 12 cases, implementation in health care practice was rejected due to a lack of data availability for technical and legal reasons (e.g., data protection, lack of documentation). Subsequently, the preliminary set was adjusted to reflect care reality: several indicators were combined and their definitions were sharpened by the authors, so that 45 (79%) indicators were merged into 27. Five of these 27 were deferred as recommendations due to legal and technical uncertainties. The expert panel concluded its work with 22 indicators to be implemented and tested in care practice.

Implementation and piloting

The final set of 22 indicators has been operationalised in the information technology (IT)-supported documentation and assistance system (CAPSYS). CAPSYS was developed to record core data of patient care and contractual service provision, and to support the planning, management and monitoring of the pathway-guided and quality-assured patient care in the nFC-isPO [22]. In addition, a quality management module was developed within CAPSYS. Based on the documented data in CAPSYS, the quality indicators could be calculated and queried as a structured and standardised quality report. The quality report could be generated and retrieved internally for any selectable time period. Before the second round of the Delphi survey, the indicators included in the quality report were tested in practice for at least six months in four care networks in internal and cross-facility quality management.

Consensus after round 2

In the second round, participants were asked to assess the 22 indicators in terms of their practicality. Consensus was reached for 16 (72.7%) indicators, of which 6 (27.3%) reached strong consensus and 10 (45.5%) moderate consensus. There was dissent on 6 (27.3%) indicators. No indicator was rejected. Table 2 shows an overview of the results.

Table 2 Overview of results for the assessed quality indicators for Delphi round 2

Discussion

In this study, a feasible and practical set of quality indicators was developed, operationalized in a quality report and pilot-tested for a cross-sectoral psycho-oncological care program in the nFC-isPO setting. To date, few indicators related to cross-sectoral care of cancer patients have been integrated into the context of psycho-oncological routine care in Germany [11, 12]. The development of practice guidelines began internationally around 2008. In Germany, since around 2014, every institution has been obliged to develop and implement a written concept for psycho-oncological patient care as a quality feature [10, 45,46,47]. Although there have been important milestones in the last decade, the road from evidence to implementation is still challenging [13, 26, 27, 48].

This research demonstrates the development, piloting, and final definition of 16 trackable quality indicators. These 16 indicators reflect a relevant and comprehensive set covering psycho-oncology care across sectors, as well as Donabedian’s quality dimensions and numerous quality criteria according to JCAHO. A particular challenge was to overcome the sectoral boundaries in a shared set. In Germany, many cross-sectoral care programs are coordinated, e.g., through shared diagnostics or to save resources. This makes it difficult to apply quality indicators across sectors [49]. To avoid performance measurement for individual providers in the nFC-isPO and to ensure a holistic understanding, nFC-isPO quality indicators are always collected for an entire care network consisting of outpatient and inpatient providers. The nFC-isPO indicator set therefore emphasizes the psycho-oncological care program as a whole. Similar to Großimlinghaus [50], the indicator set consists of cross-sectoral and diagnosis-specific aspects. Despite the diagnosis-specific aspects, many of the indicators, such as “average number of consultations per patient” or “time to receive services”, could be transferred to other disease patterns with mentally distressed patients and similar organizational care structures. The set allows adaptations for different diagnoses, contextual differences, or even for different countries [50, 51].

Although no indicator was unanimously rejected, some aspects were perceived as considerably less relevant (rejection between 20 and 30%). In particular, indicators that go beyond the services provided by the nFC-isPO (e.g., “regular attendance of self-help group”, “number of relatives’ consultations”) and indicators related to documentation (e.g., “average time between data collection and documentation”) were rejected more strongly. One point of discussion in the expert panel was the relevance of the indicators for psycho-oncological care in general compared with their relevance for the concrete nFC-isPO. Because some of the assessed quality demands were inherent in the structure of the nFC-isPO, it would be pointless to operationalize them in this setting. For example, “information availability for patients” would be unnecessary to record, as patient information is automatically given to the patient in the form of a supplementary sheet at enrolment in the nFC-isPO. Nevertheless, it might generally be an important measure of the quality of psycho-oncological care. As another example, the expert panel seemed to lean towards emphasizing the relevance of the indicator “number of relatives’ consultations” in the discussion, but voted only 55% in favour and 27% against (mean 3.41, standard deviation 1.476). The wide dispersion suggests that the indicator might be relevant in general but not important for the nFC-isPO due to its structural organization. These aspects need to be considered when revising and adjusting the set for other settings.

The results of this study contribute to national and international demands for improving psycho-oncological care structures. Defining and operationalizing psycho-oncological variables in pursuit of uniform, cross-sectoral documentation goes far beyond the seven defined core variables of the first German evidence-based guideline on psycho-oncological diagnosis, counselling and treatment of adult cancer patients [12]. In this respect, the results support the goals of integrated and high-quality psycho-oncological care [10, 11, 52]. The quality indicators developed can quantitatively cover the formulated goals of the NCP: the identification of psychosocial support needs as well as mental disorders in cancer patients, and the provision of the necessary psycho-oncological care in inpatient and outpatient settings [11]. In particular, supportive measures for coping with cancer (e.g., number of consultations, isPO-onco-guide counselling), relief of psychological and psychosomatic symptoms (e.g., mean difference of HADS total scores) and treatment adherence (e.g., reasons for withdrawal, time between services) are reflected in the set. In the medium term, there are considerations to supplement the set with indicators related to psycho-oncological care for relatives, quality of life and social reintegration. The feasibility of data collection and analysis was also tested area-wide as part of the nFC-isPO, as required by the NCP [11]. By including inpatient and outpatient caregivers as well as cancer self-help groups (isPO-onco-guide), the set is cross-sectoral and might improve out-of-hospital psycho-oncological care by quantifying process and outcome quality (e.g., isPO-onco-guide consultations and patient satisfaction with isPO-onco-guide consultations) [11].

Quality assurance through quality indicators can indirectly contribute to improving quality of care by making effects and outcomes visible [45, 53, 54]. As the lack of integration of indicators into information systems can be an immediate barrier to everyday use [55], this study aimed to link applicable quality indicators with standardized electronic documentation. Großimlinghaus et al. emphasize that the more use is made of existing, electronically available documents that can be extracted and evaluated with as little effort as possible, the better the feasibility of indicators [16, 55]. A strength of this research was that electronic implementation was part of the development process, i.e., evaluating data availability (phase 3) and testing validity in the form of a quality report for at least six months (phase 4). Großimlinghaus et al. also emphasize that uniform data collection beyond the data already collected for billing purposes is essential for indicator projects. Therefore, the computerised documentation and assistance program (CAPSYS) [22], developed specifically for the nFC-isPO, serves, among other things, as a standardized documentation system. Particularly with regard to the numerous cross-sectoral sites at which the nFC-isPO is carried out, standardized, consistent (electronic) documentation appears to be useful for recording quality-relevant care data [16, 17]. The consented quality indicators were integrated into CAPSYS in the form of a quality report and enable quality comparisons [22]. By embedding the indicators digitally, the results can be accessed flexibly regardless of location and time. Thus, potential for quality improvement can be identified and acted upon quickly. The rapid transferability of quality assessments into practice and the linkage with quality improvement measures have been realized, which is important for a systematic approach to continuous quality improvement [56].

Team size was limited by the nature of the project, and there was inevitable turnover in the teams over the four years of the research. Participants were selected on the basis of their knowledge of the topic. Willingness to participate was assumed, as all participants were project partners and already committed to the study. The clear inclusion criteria resulted in a relatively small pool of participants with high response rates but low completion rates (55% and 68.6%, respectively). Several studies have shown that the response rates for web surveys are much lower than for traditional surveys [34, 57, 58] and that the higher the number of items, the lower the completion rate [59]. This may explain why many experts abandoned the time-consuming web-based survey, especially in the first round. However, preliminary work on the size of expert panels has shown that a minimum of 20 participants is statistically relevant and can produce a valid expert opinion [60, 61]. In addition, recent studies have shown that small panels can produce reliable results and stable responses, especially when only a limited number of experts are available in a field [62,63,64]. The high response rates of the small-sized panel in this study are consistent with those observed in previous studies, owing to direct contact with participants [59].

Although consensus on the correct standard of methodological rigour is still lacking, the methodological changes may partially compromise the validity of this study [65]. The authors are aware that the specific sample of participants may threaten the external validity. Internal validity may be affected by the selection of the panel experts and the fact that the results are not necessarily replicable with other comparable groups [61]. In addition, the successive rounds of the survey resulted in ‘natural losses’ due to respondents dropping out. For pragmatic research reasons, dropouts and changes in the expert panel were inevitable as people left their jobs and the research project and/or new positions were filled. The professional heterogeneity of the panel is seen as a strength, as the participation of multi-perspective stakeholders is recommended and can increase the acceptability of quality indicators [34]. In contrast to a classical Delphi approach, only 41% of the participants in the second Delphi round had also been present in the first round. Similar to the findings of Boulkedid et al. (2011), this may be equivalent to conducting distinct Delphi procedures, in which case it may be difficult to reach consensus.

Although the methodological design had to be modified due to the clinical practice setting, this study was developed and reported according to several guidelines and recommendations [34, 66,67,68]. Studies have shown that the selection of quality indicators based on consensus techniques is subject to great methodological variability [34, 69], and to date there is no 'one-size-fits-all' approach to identifying quality indicators for different settings. This study follows the methodological approach of the RAND/UCLA Appropriateness Method, combined with a modified consensus method, which is the first choice for identifying credible and valid indicators based on the opinion and experience of stakeholders with knowledge of the issue [34]. However, the further applicability and scientific evidence of the set of quality indicators should be demonstrated in subsequent studies to validate and update them in different care settings [61].

Limitation

This study has several limitations. The indicator set was developed and applied specifically for a cross-sectoral psycho-oncological care program in the nFC-isPO setting. These indicators proved feasible and appropriate for this purpose. With regard to the transferability of the indicator set to other settings, some adjustments will certainly need to be made, but synergies are possible, especially for diseases with mental distress and a cross-sectoral care approach [34, 51, 70]. Although no fixed reference ranges were defined at the beginning, initial empirical values for the indicators, as observed in everyday clinical practice, could be determined. These values, in addition to evidence-based ones, can serve as an initial guide for setting a preliminary target range in the course of continuous revision of the set [16, 49]. Because of the SARS-CoV-2 pandemic, direct patient involvement was not possible, but patient representatives were included. The research team tried to minimize the additional psychological burden and increased risk of infection for patients by reducing and postponing scheduled face-to-face interviews. The results of the patient interviews are still pending but will be included in the set in the future [71]. Due to the small sample size and the low completion rate, this study lacks generalisability. Another limitation influencing the panellists’ ratings is the level of evidence available for the indicators [55]. Because the potential indicators in this study were retrieved from very different sources, the level of evidence was not presented to the participants, who came from widely diverse work contexts, in order to avoid bias. Although the lack of a high level of evidence might reduce the generalizability of the findings [55], this is widespread in many health care settings and is the reason for using an expert panel methodology [72]. In addition, the lack of a gold standard for indicator development has been noted in several comparable studies [34, 73]. To counteract this, the established RAM procedure provides a certain methodological quality by combining several systematic methods and concrete quality criteria [74]. This method yields indicators that are valid and described in sufficient detail so that their results are reproducible, comprehensive and classifiable. The development and use of indicators should be understood as a process, even though an important milestone has been reached by creating a set of quality indicators for cross-sectoral psycho-oncological care. Nevertheless, continuous further development is necessary [56].

Conclusion

This study contributes to improving quality in cross-sectoral psycho-oncological care by providing a valid, comprehensive, and feasible set of 16 quality indicators for cancer patients affected by mental disorders and emotional distress. Operationalizing the theoretical concept of quality into a set of quality indicators and integrating it into a standardized, digitized quality management system makes it possible to go beyond a purely descriptive presentation of performance. The practical test has shown that quality assurance and controlling based on a set covering cross-entity and entity-specific aspects of care is successful in this specific psycho-oncological setting. Further work is needed to continuously improve the set and to assess whether these indicators can be transferred to similar settings.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary files.

Abbreviations

NCP:

German National Cancer Plan

PDCA:

Plan-Do-Check-Act cycle

nFC-isPO:

New form of care integrated, cross-sectoral psycho-oncology

RAM:

RAND/UCLA Appropriateness Method

JCAHO:

Joint Commission on Accreditation of Healthcare Organizations

CIO:

Centrum für Integrierte Onkologie

HADS:

Hospital Anxiety and Depression Scale

References

  1. World Health Organization. WHO report on cancer: setting priorities, investing wisely and providing care for all. Genf; 2020.

  2. Torre LA, Bray F, Siegel RL, Ferlay J, Lortet-Tieulent J, Jemal A. Global cancer statistics, 2012. CA Cancer J Clin. 2015;65:87–108. https://doi.org/10.3322/caac.21262.

  3. Erdmann F, Spix C, Katalinic A, Christ M, Folkerts J, Hansmann J, et al. Krebs in Deutschland für 2017/2018: Robert Koch-Institut; 2021.

  4. Mehnert A, Hartung TJ, Friedrich M, Vehling S, Brähler E, Härter M, et al. One in two cancer patients is significantly distressed: prevalence and indicators of distress. Psychooncology. 2018;27:75–82. https://doi.org/10.1002/pon.4464.

  5. Oberoi DV, White VM, Seymour JF, Prince HM, Harrison S, Jefford M, et al. Distress and unmet needs during treatment and quality of life in early cancer survivorship: a longitudinal study of haematological cancer patients. Eur J Haematol. 2017;99:423–30. https://doi.org/10.1111/ejh.12941.

  6. Hinz A, Krauss O, Hauss JP, Höckel M, Kortmann RD, Stolzenburg JU, Schwarz R. Anxiety and depression in cancer patients compared with the general population. Eur J Cancer Care (Engl). 2010;19:522–9. https://doi.org/10.1111/j.1365-2354.2009.01088.x.

  7. Mehnert A, Vehling S, Scheffold K, Ladehoff N, Schön G, Wegscheider K, et al. Prävalenz von Anpassungsstörung, Akuter und Posttraumati-scher Belastungsstörung sowie somatoformen Störungen bei Krebspatienten. [Prevalence of adjustment disorder, acute and posttraumatic stress disorders as well as somatoform disorders in cancer patients]. Psychother Psychosom Med Psychol. 2013;63:466–72. https://doi.org/10.1055/s-0033-1347197.

  8. Mitchell AJ, Chan M, Bhatti H, Halton M, Grassi L, Johansen C, Meader N. Prevalence of depression, anxiety, and adjustment disorder in oncological, haematological, and palliative-care settings: a meta-analysis of 94 interview-based studies. Lancet Oncol. 2011;12:160–74. https://doi.org/10.1016/S1470-2045(11)70002-X.

  9. Oberoi D, White VM, Seymour JF, Miles Prince H, Harrison S, Jefford M, et al. The influence of unmet supportive care needs on anxiety and depression during cancer treatment and beyond: a longitudinal study of survivors of haematological cancers. Support Care Cancer. 2017;25:3447–56. https://doi.org/10.1007/s00520-017-3766-9.

  10. European Partnership for Action against Cancer. National Cancer Plans: Germany. http://www.epaac.eu/from_heidi_wiki/Germany_Working_Document_on_NCP_German_4.1.2012.pdf. Accessed 6 May 2021.

  11. Bundesministerium für Gesundheit. Nationaler Krebsplan. https://www.bundesgesundheitsministerium.de/fileadmin/Dateien/3_Downloads/N/Nationaler_Krebsplan/UEbersicht_Ziele_des_Nationalen_Krebsplans_2020.pdf. Accessed 6 May 2021.

  12. Deutsche Krebsgesellschaft, Deutsche Krebshilfe, AWMF (Leitlinienprogramm Onkologie). Psychoonkologische Diagnostik, Beratung und Behandlung von erwachsenen Krebspatienten: Langversion 1.1, AWMF-Registernummer: 032/051OL. 2014. https://www.leitlinienprogramm-onkologie.de/fileadmin/user_upload/Downloads/Leitlinien/Psychoonkologieleitlinie_1.1/LL_PSO_Langversion_1.1.pdf. Accessed 6 May 2022.

  13. Holland J, Watson M, Dunn J. The IPOS new International Standard of Quality Cancer Care: integrating the psychosocial domain into routine care. Psychooncology. 2011;20:677–80. https://doi.org/10.1002/pon.1978.

  14. Deutscher Bundestag. Sozialgesetzbuch (SGB) Fünftes Buch (V) - Gesetzliche Krankenversicherung - (Artikel 1 des Gesetzes v. 20. Dezember 1988, BGBl. I S. 2477) § 135a Verpflichtung der Leistungserbringer zur Qualitätssicherung; 20.12.1988.

  15. Joint Commission on Accreditation of Healthcare Organizations (JCAHO). Primer on indicator Development and Application: Measuring Quality in Health Care. 1st ed. Oakbrook Terrace; 1990.

  16. Großimlinghaus I, Falkai P, Gaebel W, Hasan A, Jänner M, Janssen B, et al. Erhebung von Qualitätsindikatoren anhand von Routinedaten: Darstellung eines Machbarkeitstests in 10 Fachkliniken für Psychiatrie und Psychotherapie. [Assessment of quality indicators with routine data: Presentation of a feasibility test in ten specialist clinics for psychiatry and psychotherapy]. Nervenarzt. 2015;86:1393–9. https://doi.org/10.1007/s00115-015-4357-y.

  17. Campbell SM, Braspenning J, Hutchinson A, Marshall M. Research methods used in developing and applying quality indicators in primary care. Qual Saf Health Care. 2002;11:358–64. https://doi.org/10.1136/qhc.11.4.358.

  18. Reiter A, Fischer B, Kötting J, Geraedts M, Jäckel WH, Döbler K. QUALIFY: Ein Instrument zur Bewertung von Qualitätsindikatoren. [QUALIFY – a tool for assessing quality indicators]. Z Arztl Fortbild Qualitatssich. 2007;101:683–8. https://doi.org/10.1016/j.zgesun.2007.11.003.

  19. Döbler K, Schrappe M, Kuske S, Schmitt J, Sens B, Boywitt D, et al. Eignung von Qualitätsindikatorensets in der Gesundheitsversorgung für verschiedene Einsatzgebiete – Forschungs- und Handlungsbedarf. Gesundheitswesen. 2019;81:781–7. https://doi.org/10.1055/a-1007-0811.

  20. van Overveld LFJ, Braspenning JCC, Hermens RPMG. Quality indicators of integrated care for patients with head and neck cancer. Clin Otolaryngol. 2017;42:322–9. https://doi.org/10.1111/coa.12724.

  21. Oostra DL, Nieuwboer MS, Olde Rikkert MGM, Perry M. Development and pilot testing of quality improvement indicators for integrated primary dementia care. BMJ Open Qual. 2020. https://doi.org/10.1136/bmjoq-2020-000916.

  22. Kusch M, Labouvie H, Schiewer V, Talalaev N, Cwik JC, Bussmann S, et al. Integrated, cross-sectoral psycho-oncology (isPO): a new form of care for newly diagnosed cancer patients in Germany. BMC Health Serv Res. 2022;22:543. https://doi.org/10.1186/s12913-022-07782-0.

  23. Gemeinsamer Bundesausschuss. Neue Versorgungsformen. https://innovationsfonds.g-ba.de/. Accessed 15 Mar 2023.

  24. Bundesministerium für Gesundheit. The German healthcare system: Strong. Reliable. Proven. 2020. https://www.bundesgesundheitsministerium.de/fileadmin/Dateien/5_Publikationen/Gesundheit/Broschueren/200629_BMG_Das_deutsche_Gesundheitssystem_EN.pdf. Accessed 15 Mar 2023.

  25. Heckl U, Singer S, Wickert M, Weis J. Aktuelle Versorgungsstrukturen in der Psychoonkologie. Nervenheilkunde. 2011;30:124–30. https://doi.org/10.1055/s-0038-1627780.

  26. Mehnert A, Koranyi S. Psychoonkologische Versorgung: eine Herausforderung. [Psychooncological Care: A Challenge]. Dtsch Med Wochenschr. 2018;143:316–23. https://doi.org/10.1055/s-0043-107631.

  27. Schulz H, Bleich C, Bokemeyer C, Koch Gromus U, Härter M. Psychoonkologische Versorgung in Deutschland: Bundesweite Bestandsaufnahme und Analyse. 2018. https://www.bundesgesundheitsministerium.de/fileadmin/Dateien/5_Publikationen/Gesundheit/Berichte/PsoViD_Gutachten_BMG_19_02_14_gender.pdf. Accessed 24 May 2022.

  28. Dieng M, Cust AE, Kasparian NA, Mann GJ, Morton RL. Economic evaluations of psychosocial interventions in cancer: a systematic review. Psychooncology. 2016;25:1380–92. https://doi.org/10.1002/pon.4075.

  29. Foster C. The need for quality self-management support in cancer care. BMJ Qual Saf. 2021:bmjqs-2021–013366. https://doi.org/10.1136/bmjqs-2021-013366.

  30. Faller H, Schuler M, Richard M, Heckl U, Weis J, Küffner R. Effects of psycho-oncologic interventions on emotional distress and quality of life in adult patients with cancer: systematic review and meta-analysis. J Clin Oncol. 2013;31:782–93. https://doi.org/10.1200/JCO.2011.40.8922.

  31. Myrhaug HT, Mbalilaki JA, Lie N-EK, Hansen T, Nordvik JE. The effects of multidisciplinary psychosocial interventions on adult cancer patients: a systematic review and meta-analysis. Disabil Rehabil. 2020;42:1062–70. https://doi.org/10.1080/09638288.2018.1515265.

  32. Salm S, Cecon N, Jenniches I, Pfaff H, Scholten N, Dresen A, Krieger T. Conducting a prospective evaluation of the development of a complex psycho-oncological care programme (isPO) in Germany. BMC Health Serv Res. 2022;22:531. https://doi.org/10.1186/s12913-022-07951-1.

  33. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260:1743–8. https://doi.org/10.1001/jama.260.12.1743.

  34. Boulkedid R, Abdoul H, Loustau M, Sibony O, Alberti C. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS One. 2011;6:e20476. https://doi.org/10.1371/journal.pone.0020476.

  35. Campbell SM, Cantrill JA. Consensus methods in prescribing research. J Clin Pharm Ther. 2001;26:5–14. https://doi.org/10.1111/j.1365-2710.2001.00331.x.

  36. Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q. 2005;83:691–729. https://doi.org/10.1111/j.1468-0009.2005.00397.x.

  37. Joint Commission on Accreditation of Healthcare Organizations (JCAHO). Guide to Quality Assurance. Chicago: JCAHO; 1998.

  38. Ärztliches Zentrum für Qualität in der Medizin (ÄZQ). 8 Qualitätskriterien und Qualitätsindikatoren. 20.01.2020. https://www.aezq.de/aezq/kompendium_q-m-a/8-qualitaetskriterien-und-qualitaetsindikatoren. Accessed 25 Apr 2022.

  39. Etikan I. Comparison of Convenience Sampling and Purposive Sampling. AJTAS. 2016;5:1. https://doi.org/10.11648/j.ajtas.20160501.11.

  40. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32:1008–15. https://doi.org/10.1046/j.1365-2648.2000.t01-1-01567.x.

  41. Trotter RT. Qualitative research sample design and sample size: resolving and unresolved issues and inferential imperatives. Prev Med. 2012;55:398–400. https://doi.org/10.1016/j.ypmed.2012.07.003.

  42. Kallus WK. Erstellung von Fragebogen. 1st ed. Wien: facultas.wuv; 2010.

  43. Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, Wales PW. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67:401–9. https://doi.org/10.1016/j.jclinepi.2013.12.002.

  44. Hermes-Moll K, Klein G, Buschmann-Maiworm RE, Baumann W, Otremba B, Lebahn H, et al. WINHO-Qualitätsindikatoren für die ambulante onkologische Versorgung in Deutschland. [WINHO quality indicators for outpatient oncology care in Germany]. Z Evid Fortbild Qual Gesundhwes. 2013;107:548–59. https://doi.org/10.1016/j.zefq.2013.09.002.

  45. Grassi L, Watson M. Psychosocial care in cancer: an overview of psychosocial programmes and national cancer plans of countries within the International Federation of Psycho-Oncology Societies. Psychooncology. 2012;21:1027–33. https://doi.org/10.1002/pon.3154.

  46. Jacobsen PB. Clinical practice guidelines for the psychosocial care of cancer survivors: current status and future prospects. Cancer. 2009;115:4419–29. https://doi.org/10.1002/cncr.24589.

  47. Turner J. The changing landscape of cancer care - the impact of psychosocial clinical practice guidelines. Psychooncology. 2015;24:365–70. https://doi.org/10.1002/pon.3803.

  48. Pirl WF, Greer JA, Gregorio SW-D, Deshields T, Irwin S, Fasciano K, et al. Framework for planning the delivery of psychosocial oncology services: An American psychosocial oncology society task force report. Psychooncology. 2020;29:1982–7. https://doi.org/10.1002/pon.5409.

  49. Egidi G. Das Methodenpapier des AQUA-Institutes zu sektorenübergreifenden Qualitätsindikatoren - eine Kritik. Zeitschrift für Allgemeinmedizin. 2010.

  50. Großimlinghaus I, Falkai P, Gaebel W, Janssen B, Reich-Erkelenz D, Wobrock T, Zielasek J. Entwicklungsprozess der DGPPN-Qualitätsindikatoren. [Developmental process of DGPPN quality indicators]. Nervenarzt. 2013;84:350–65. https://doi.org/10.1007/s00115-012-3705-4.

  51. Marshall MN, Shekelle PG, McGlynn EA, Campbell S, Brook RH, Roland MO. Can health care quality indicators be transferred between countries? Qual Saf Health Care. 2003;12:8–12. https://doi.org/10.1136/qhc.12.1.8.

  52. Adler NE, Page AEK. Cancer Care for the Whole Patient: Meeting Psychosocial Health Needs. In: Institute of Medicine (US) Committee on Psychosocial Services to Cancer Patients/Families in a Country Setting, editor. Washington, DC: National Academies Press (US); 2008.

  53. Hoekstra RA, Heins MJ, Korevaar JC. Health care needs of cancer survivors in general practice: a systematic review. BMC Fam Pract. 2014;15:94. https://doi.org/10.1186/1471-2296-15-94.

  54. Stelzer D, Graf E, Köster I, Ihle P, Günster C, Dröge P, et al. Assessing the effect of a regional integrated care model over ten years using quality indicators based on claims data - the basic statistical methodology of the INTEGRAL project. BMC Health Serv Res. 2022;22:247. https://doi.org/10.1186/s12913-022-07573-7.

  55. Byrne M, O’Malley L, Glenny A-M, Campbell S, Tickle M. A RAND/UCLA appropriateness method study to identify the dimensions of quality in primary dental care and quality measurement indicators. Br Dent J. 2020;228:83–8. https://doi.org/10.1038/s41415-020-1200-z.

  56. Backhouse A, Ogunlayi F. Quality improvement into practice. BMJ. 2020;368:m865. https://doi.org/10.1136/bmj.m865.

  57. Sammut R, Griscti O, Norman IJ. Strategies to improve response rates to web surveys: a literature review. Int J Nurs Stud. 2021;123:104058. https://doi.org/10.1016/j.ijnurstu.2021.104058.

  58. Tricco AC, Zarin W, Antony J, Hutton B, Moher D, Sherifali D, Straus SE. An international survey and modified Delphi approach revealed numerous rapid review methods. J Clin Epidemiol. 2016;70:61–7. https://doi.org/10.1016/j.jclinepi.2015.08.012.

  59. Gargon E, Crew R, Burnside G, Williamson PR. Higher number of items associated with significantly lower response rates in COS Delphi surveys. J Clin Epidemiol. 2019;108:110–20. https://doi.org/10.1016/j.jclinepi.2018.12.010.

  60. Cuhls K. Die Delphi-Methode – eine Einführung. In: Niederberger M, Renn O, editors. Delphi-Verfahren in den Sozial- und Gesundheitswissenschaften. Wiesbaden: Springer Fachmedien Wiesbaden; 2019. p. 3–31. https://doi.org/10.1007/978-3-658-21657-3_1.

  61. Keeney S, Hasson F, McKenna H. The Delphi technique in nursing and health research. Chichester: Wiley-Blackwell; 2011.

  62. Akins RB, Tolson H, Cole BR. Stability of response characteristics of a Delphi panel: application of bootstrap data expansion. BMC Med Res Methodol. 2005;5:37. https://doi.org/10.1186/1471-2288-5-37.

  63. Alizadeh S, Maroufi SS, Sohrabi Z, Norouzi A, Dalooei RJ, Ramezani G. Large or small panel in the Delphi study? Application of bootstrap technique. J Evol Med Dent Sci. 2020;9:1267–71. https://doi.org/10.14260/jemds/2020/275.

  64. Vogel C, Zwolinsky S, Griffiths C, Hobbs M, Henderson E, Wilkins E. A Delphi study to build consensus on the definition and use of big data in obesity research. Int J Obes (Lond). 2019;43:2573–86. https://doi.org/10.1038/s41366-018-0313-9.

  65. Hsu CC, Sandford BA. Minimizing Non-Response in The Delphi Process: How to Respond to Non-Response. Pract Assess Res Eval. 2007;12(17):1–6.

  66. Spranger J, Homberg A, Sonnberger M, Niederberger M. Reporting guidelines for Delphi techniques in health sciences: A methodological review. Z Evid Fortbild Qual Gesundhwes. 2022;172:1–11. https://doi.org/10.1016/j.zefq.2022.04.025.

  67. Veugelers R, Gaakeer MI, Patka P, Huijsman R. Improving design choices in Delphi studies in medicine: the case of an exemplary physician multi-round panel study with 100% response. BMC Med Res Methodol. 2020;20:156. https://doi.org/10.1186/s12874-020-01029-4.

  68. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. https://doi.org/10.1136/bmj.n71.

  69. Sinha IP, Smyth RL, Williamson PR. Using the Delphi technique to determine which outcomes to measure in clinical trials: recommendations for the future based on a systematic review of existing studies. PLoS Med. 2011;8:e1000393. https://doi.org/10.1371/journal.pmed.1000393.

  70. Steel N. Developing quality indicators for older adults: transfer from the USA to the UK is feasible. Qual Saf Health Care. 2004;13:260–4. https://doi.org/10.1136/qshc.2004.010280.

  71. Jenniches I, Lemmen C, Cwik JC, Kusch M, Labouvie H, Scholten N, et al. Evaluation of a complex integrated, cross-sectoral psycho-oncological care program (isPO): a mixed-methods study protocol. BMJ Open. 2020;10:e034141. https://doi.org/10.1136/bmjopen-2019-034141.

  72. McGory ML, Kao KK, Shekelle PG, Rubenstein LZ, Leonardi MJ, Parikh JA, et al. Developing quality indicators for elderly surgical patients. Ann Surg. 2009;250:338–47. https://doi.org/10.1097/SLA.0b013e3181ae575a.

  73. Kötter T, Schaefer F, Blozik E, Scherer M. Die Entwicklung von Qualitätsindikatoren - Hintergrund, Methoden und Probleme. [Developing quality indicators: background, methods and problems]. Z Evid Fortbild Qual Gesundhwes. 2011;105:7–12. https://doi.org/10.1016/j.zefq.2010.11.002.

  74. Jandhyala R. Delphi, non-RAND modified Delphi, RAND/UCLA appropriateness method and a novel group awareness and consensus methodology for consensus measurement: a systematic literature review. Curr Med Res Opin. 2020;36:1873–87. https://doi.org/10.1080/03007995.2020.1816946.


Acknowledgements

The authors would like to thank all consortium partners of the project who contributed to this study as participants in the modified Delphi process and the expert panel. We also thank PD Dr. Michael Kusch (University Hospital Cologne), Hildegard Labouvie (University Hospital Cologne), and Prof. Dr. Peter Haas and his team (FH Dortmund) for the development and transition of the quality indicators into a quality report in CAPSYS. We would also like to thank the four care networks for implementing the trial and including the quality report in the quality development process in nFC-isPO.

Funding

Open Access funding enabled and organized by Projekt DEAL. This study was conducted as part of the interdisciplinary joint project "integrated, cross-sectoral psycho-oncology (isPO)" (Innovation Fund of the Joint Federal Committee of Germany; funding code: 01NVF17022), led by PD Dr. Michael Kusch, within the sub-project "Entwicklung eines Qualitätsmanagementsystems in der integrierten, sektorenübergreifenden Psychoonkologie – AP 'Qualitätsmanagement und Versorgungsmanagement'" (Development of a quality management system in integrated, cross-sectoral psycho-oncology – WP 'Quality management and care management'), which was led by Prof. Dr. med. Stephanie Stock. We acknowledge support for the Article Processing Charge from the DFG (German Research Foundation, 491454339).

Author information

Contributions

Conceptualization, LD and CL.; methodology, LD and CL; formal analysis, LD; writing—original draft preparation, LD; writing—review and editing, CL; supervision, CL and SSt; project administration, LD; funding acquisition, DS and SSt. All the authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Lisa Derendorf.

Ethics declarations

Ethics approval and consent to participate

The sub-project "Entwicklung eines Qualitätsmanagementsystems in der integrierten, sektorenübergreifenden Psychoonkologie – AP 'Qualitätsmanagement und Versorgungsmanagement'" (Development of a quality management system in integrated, cross-sectoral psycho-oncology – WP 'Quality management and care management') of the study "integrierte, sektorenübergreifende Psychoonkologie (isPO)" was registered in the German Clinical Trials Register (DRKS) (DRKS-ID: DRKS00021515) on 3 September 2020. The main project was registered on 30 October 2018 (DRKS-ID: DRKS00015326). The project was approved by the Ethics Commission of the Faculty of Medicine of Cologne University (18-092) on 15 October 2018. Informed consent was obtained from all subjects and/or their legal guardian(s). The participants agreed to take part in the modified Delphi process at the start of phase 2. All methods were carried out in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Derendorf, L., Stock, S., Simic, D. et al. Developing quality indicators for cross-sectoral psycho-oncology in Germany: combining the RAND/UCLA appropriateness method with a Delphi technique. BMC Health Serv Res 23, 599 (2023). https://doi.org/10.1186/s12913-023-09604-3
