
Global core indicators for measuring WHO’s paediatric quality-of-care standards in health facilities: development and expert consensus

Abstract

Background

There are currently no global recommendations on a parsimonious and robust set of indicators that can be measured routinely or periodically to monitor quality of hospital care for children and young adolescents. We describe a systematic methodology used to prioritize and define a core set of such indicators and their metadata for progress tracking, accountability, learning, and improvement at facility, subnational, national, and global levels.

Methods

We used a deductive methodology that took the World Health Organization Standards for improving the quality of care for children and young adolescents in health facilities as the organizing framework for indicator development. The process involved nine complementary steps, including a rapid literature review of available evidence, the application of a peer-reviewed systematic algorithm for indicator systematization and prioritization, and multiple iterative expert consultations to establish consensus on the proposed indicators and their metadata.

Results

We derived a robust set of 25 core indicators and their metadata, representing all 8 World Health Organization quality standards, 40 quality statements, and 520 quality measures. Most of the indicators are process-related (64%), while 20% are outcome/impact indicators. A large proportion (84%) of the indicators were proposed for measurement at both outpatient and inpatient levels. Because this is a parsimonious set, and given the stringent criteria for prioritizing indicators with "quality measurement" attributes, the recommended set is not evenly distributed across the 8 quality standards.

Conclusions

To support ongoing global and national initiatives around paediatric quality-of-care programming at country level, the recommended indicators can be adopted using a tiered approach that considers indicator measurability in the short-, medium-, and long-terms, within the context of the country’s health information system readiness and maturity. However, there is a need for further research to assess the feasibility of implementing these indicators across contexts, and the need for their validation for global common reporting.


Background

Globally, 60% of preventable deaths are due to poor-quality care, and it has been estimated that one in three patients across low- and middle-income countries (LMICs) reports suboptimal client-centred care [1, 2]. Although coverage of lifesaving interventions for many priority health conditions—including child health—has improved globally, this has not consistently translated into improved survival from preventable health conditions [3]. Providing health services without guaranteeing quality is ineffective, wasteful, and unethical [2].

In 2015, the World Health Organization (WHO) and partners articulated a vision in which “Every woman, newborn, child and adolescent receives quality health services throughout the continuum of their life course and level of care” [4]. To accompany this vision, WHO developed Quality Standards (QSs) for Maternal and Newborn Health (MNH) in health facilities in 2016 [5]. This was followed by the publication of WHO’s Paediatric and Young Adolescent QSs in 2018 (hereinafter referred to as paediatric QSs) [6], and the WHO Small and Sick Newborn QSs in 2020 [7]. All these QSs encompass both the provision and experience of care as key dimensions of quality and define eight domains of quality (Fig. 1) that should be assessed, improved, and monitored across health system levels [6,7,8].

Fig. 1

Structure of the WHO QSs for improving the quality of paediatric and young adolescent care in health facilities

There is currently a plethora of paediatric QoC indicators available in both the published and grey literature. However, there are as yet no global recommendations for a robust set of core indicators that can be measured routinely or periodically in health facilities to track and compare progress and to drive improvement and accountability at every level of the health system [2, 8,9,10]. In this paper, we describe the systematic methodology we used to prioritize and define a robust set of core paediatric and young adolescent QoC indicators (hereinafter referred to as “core indicators”) to support global efforts around paediatric and young adolescent health service quality improvement, progress tracking, and accountability across the health system.

Given the current gaps in the standardization, availability, and comprehensiveness of paediatric and young adolescent health information in most LMICs, we aimed to develop core indicators that would measure different aspects of the quality standards, without regard to feasibility of measurement across health information systems. The proposed set of core indicators therefore contains tiers of indicators that can be measured either immediately or in future depending on the maturity and readiness of health information systems in different countries. As such, we also make a case for transformative efforts to reform national health information systems and technologies to accommodate more robust and essential quality indicators, as opposed to promoting the adoption and use of “convenient-to-measure” quality indicators. We argue that the proposed set of core indicators would allow for the measurement of critical input, process, outcome, and impact dimensions of care which can serve as high level “signals” of paediatric and young adolescent QoC in health facilities, while retaining utility at (sub)national, national, and global levels.

Methods

The organizing framework for indicator development

Figure 1 shows the structure of WHO’s paediatric QSs. The 8 QSs describe what should be provided to achieve high-quality health care for children and young adolescents across the 8 quality domains (QDs). The 40 corresponding quality statements (see Additional File 1) are designed to drive continuous improvement towards positive care outcomes and a positive experience of care. Five hundred and twenty (520) quality measures (QMs) were identified from the 8 QSs as a means of measuring whether specific aspects of quality are provided or achieved.

The structure and content of the MNH QSs and paediatric QSs were adopted in the maternal, newborn, and child health (MNCH) QoC monitoring framework as the organizing framework for developing core QoC indicators [11]. The MNCH QoC monitoring framework recognizes the different QoC measurement needs of stakeholders across levels of the health system and proposes three QoC measurement components:

  • Core (common) indicators: a small set of prioritized input, process, outcome, and impact indicators for use by all stakeholders at every level of the health system to track and compare progress across and within regions and countries;

  • Quality improvement (QI) indicator catalogue: a flexible menu of indicators supporting QI at facility and subnational levels, led—respectively—by facility-based QI teams and district or regional health authorities, to improve and sustain QoC in health facilities; and

  • Implementation milestones: measures that help track the progress of country-specific or subnational implementation activities to develop and maintain quality improvement programmes. Unlike the first two components, these are not directly linked to the QSs but rather to national QoC implementation roadmaps, policies, and strategies.

Core indicators and the QI indicator catalogue are directly linked to and measure the QSs. The focus of this article is core indicators, and we describe how they were selected from the QI indicator catalogue to maintain the linkages with the QSs.

Approach

The proposed core indicators were developed using a deductive approach. This approach is typically built upon a specific conceptual framework and links the indicator to clinical and/or patient-centered inputs, processes, and outcomes [12]. The proposed core indicators were therefore linked to the specific QSs and QMs. To derive these indicators, we followed nine complementary steps (Fig. 2) which allowed for an iterative process of indicator development. These steps included a rapid literature review, development of a methodology, the application of a systematic algorithm for indicator systematization and prioritization, and multiple iterative expert consultations.

Fig. 2

A stepwise process used to develop the core indicators

Step 1

A gold-standard methodology for QoC indicator development is yet to be defined [13]. We therefore first conducted a rapid review of the published and grey literature to scope available methodological approaches for developing core QoC indicators. The most commonly used approaches identified were: reviewing existing health information systems to identify and adopt or adapt already-available QoC indicators [14]; using guideline-based approaches to systematically align QoC indicators to specific clinical and non-clinical guidelines [15]; deriving QoC indicators from the available evidence base (e.g., indicators that have been validated in the literature as good measures of quality); combining expert consensus with evidence review; or a combination of these approaches [13, 16]. Using a peer-review process, we analysed and synthesized the strengths and weaknesses of the available approaches and their relevance to LMIC settings, and used the results to develop a technical protocol outlining our methodology for selecting core indicators.

Steps 2&3

Following the development of the methodology document in Step 1, WHO convened a global expert consultative meeting (Step 2a) to review and reach consensus on the proposed methodology and process for developing the quality indicators. Participants included 24 independent technical experts and 10 experts from WHO, USAID, and UNICEF, who represented a range of expertise in paediatric QoC programming and measurement, paediatric research, health informatics, paediatric care, and child rights. During the consultation, experts were placed in small heterogeneous groups based on their expertise to discuss and make recommendations on the methodology. Each group’s recommendations were presented in a plenary session for discussion and consensus building, and the resulting recommendations were then presented in a separate consultative process (Step 2b) to the Child Health Accountability Tracking (CHAT) Technical Advisory Group [17]. The recommendations from the initial expert meeting and CHAT were used to revise the methodology before implementation (Step 3).

Step 4

The expert panel recommended the use of indicator development criteria and principles (described later) that recognize the complexity of measuring the quality of clinical care and patient- or caregiver-reported experience of care. These principles guided the generation of the most critical input, process, output, outcome, and impact indicators that can be used as “signals of quality”, thereby eliminating the need to measure every aspect of service provision and related outcomes. The principles were applied to the 520 QMs using a criterion- and score-based indicator prioritization algorithm, which we built in an interactive Microsoft Excel spreadsheet (see Additional File 2). The algorithm first allowed for the prioritization of a non-prescriptive menu of QMs linked to the QSs; from this set, the core indicators most aligned with the QSs were then selected. The indicator prioritization algorithm is illustrated in Fig. 3.

Fig. 3

Flow diagram of core indicator prioritization and development process (*System categories: each QM was systematized by the QoC element it measured: input, process (adherence to evidence-based practices, non-evidence-based or harmful practices), or a related outcome/impact. Input measures were further systematized by input category: a) medicines, supplies, equipment, and infrastructure; b) clinical guidelines, protocols, and job aids; c) operational guidelines and protocols; d) trained human resources; e) availability of health services; f) financing; g) health information system; h) organization of care processes; i) oversight and management. **Importance: this criterion was used to prioritize all measurement subdomains but, as shown in Table 1, was framed differently under each measurement domain or subdomain. #Clinical content area: the list of prioritized clinical content areas used to select QMs is provided in Additional File 3)

Development of QI indicator catalogue

As shown in Fig. 3, the QI indicator catalogue was generated from the 520 QMs. All QMs were first categorised into 3 measurement domains and 11 measurement subdomains as shown in Table 1. The objective of classifying QMs by measurement domains and measurement subdomains was to identify potential measurement areas to which distinct prioritization criteria should be applied. For example, the prioritization criterion for measurement domain 1 (adherence to evidence-based practices) aimed to identify areas for which there is strong evidence that links the care process to a desired health outcome. The same criterion, however, could not be applied to the QMs for child- and family-centered practices (measurement domain 3) which required consideration of patients’ rights, involvement in care, and elimination of harmful practices.

Table 1 Measurement domains and subdomains used to systematize QM

Two categories of prioritization criteria were used: a) criteria applied to all QMs to prioritize valid, relevant, actionable, and feasible QMs; and b) additional criteria applied to the resultant QMs in different measurement domains or subdomains to select QI indicators for the catalogue. As described in Table 2, these criteria were built into a standardized Microsoft Excel template, which facilitated a systematic and sequential approach to selecting specific QMs based on a scoring algorithm (Additional File 2) developed using Visual Basic for Applications in Microsoft Excel. The final set included only QMs that met the minimum cut-off score at both prioritization steps. These QMs were then assigned non-prescriptive numerators and denominators to constitute the QI indicator catalogue, which formed the basis for prioritizing core indicators.

Table 2 Prioritization steps, criteria and scoring mechanism used to prioritize QI catalogue and core indicators
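The two-step, score-based selection described above can be sketched in code. This is a minimal illustration only: the criterion names, ratings, weights, and cut-off values below are hypothetical, not those of the actual WHO Excel tool, whose criteria and scoring are given in Table 2 and Additional File 2.

```python
# Hypothetical sketch of a two-step, criterion- and score-based QM
# prioritization. All names, weights, and cut-offs are illustrative.

def weighted_score(qm, criteria, weights):
    """Weighted sum of per-criterion ratings for one quality measure (QM)."""
    return sum(weights[c] * qm["ratings"][c] for c in criteria)

def prioritize(qms, shared_criteria, domain_criteria, weights, cutoff1, cutoff2):
    """Step a: criteria applied to all QMs; step b: additional domain- or
    subdomain-specific criteria applied only to QMs that passed step a."""
    passed_step_a = [q for q in qms
                     if weighted_score(q, shared_criteria, weights) >= cutoff1]
    return [q for q in passed_step_a
            if weighted_score(q, domain_criteria[q["domain"]], weights) >= cutoff2]

# Illustrative data: one strong and one weak QM in measurement domain MD-1.
qms = [
    {"name": "QM-A", "domain": "MD-1",
     "ratings": {"validity": 3, "relevance": 3, "actionability": 2,
                 "feasibility": 2, "evidence_link": 3}},
    {"name": "QM-B", "domain": "MD-1",
     "ratings": {"validity": 1, "relevance": 1, "actionability": 1,
                 "feasibility": 1, "evidence_link": 1}},
]
shared_criteria = ["validity", "relevance", "actionability", "feasibility"]
domain_criteria = {"MD-1": ["evidence_link"]}
weights = {c: 1 for c in shared_criteria + ["evidence_link"]}

selected = prioritize(qms, shared_criteria, domain_criteria, weights,
                      cutoff1=8, cutoff2=2)
print([q["name"] for q in selected])  # only QM-A meets both cut-offs
```

The sequential filtering mirrors the design choice described above: a single criterion set cannot meaningfully score both evidence-based clinical practices and child- and family-centred practices, so domain-specific criteria are applied only after the shared criteria have been met.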

Development of core indicators

The following criteria were applied to all indicators in the QI indicator catalogue to prioritize a smaller set of core indicators: usefulness, impact, and international comparability (see Table 2 for the detailed rationale of each criterion). These criteria helped prioritize indicators for measuring the QoC of paediatric services that are: 1) useful for guiding decisions around resource allocation and programming, especially at national and global levels; 2) sensitive to change in QoC interventions at the service delivery level for priority paediatric conditions; and 3) aligned, to the extent possible, with standardized and validated global child health care indicators to enable comparisons.

These criteria and associated scores were applied to the QI indicator catalogue by a measurement expert (TC), who drafted a small set of core indicators for further review by two other experts (WW & MM). Details of the scoring mechanisms and cut-off scores are provided in Table 2.

Steps 5–9

During Step 5, the draft set of core indicators generated in Step 4 was reviewed internally by a small group of three experts from WHO and University Research Co (TC, WW, MM) to determine the extent to which the indicators fulfilled the selection criteria. Once the review was completed, the draft core indicators went through a global expert consultation process (Step 6) to generate feedback on the content, clarity, definitions, prioritization, and other elements of the indicator metadata. Participants included programme officials responsible for child health in various WHO country offices and ministries of health across the three participating WHO regional offices (the Regional Office for Africa, the Regional Office for South-East Asia, and the Regional Office for Europe); individual child health QoC programming and measurement experts from academic and research institutions; WHO’s implementing partners at country level; global technical working groups; and technical focal points from other UN and multilateral agencies. As part of Step 7, all resulting recommendations from the review were synthesized by a small panel of experts (TC, WW, MM) and integrated into the draft core indicators.

The WHO expert team made final decisions on the core indicator selection based on the following guiding principles derived from the literature and expert recommendations:

  • Alignment with the QS and system categories: The recommended core indicator measures at least one QS.

  • Focus on impact: The recommended core indicator can assess the clinical or QI interventions that would have the highest impact on child health or child and family-centered outcomes such as mortality, morbidity, respectful care, etc.

  • Emphasis on child- and family-centered practices: The recommended core indicator can help to inform the development of interventions and practices that improve both child and family-centered care.

  • Guiding QI actions at all levels: While collected from each health facility, aggregated data from the recommended core indicator can provide strategic and timely information to be used across all levels of the health system (district, region, national, global levels) for comparable analysis to guide decision-making and planning for QI.

  • Provider or health system control: The recommended core indicator can measure attributes of service delivery and outcomes which are within the control of the health system or the provider.

  • Sample size adequacy: The recommended core outcome and impact indicators should typically generate enough data to allow for subgroup analysis and statistical testing of whether differences in performance levels are greater than would be expected by chance.

  • Relationship with quality: For recommended input and process indicators, there is sufficient evidence, or a reasonable assumption, of their correlation with the outcome(s) of interest, even when evidence on the context-specificity or summative effects of these inputs and processes on those outcomes is insufficient.

During Step 8, the small expert group (TC, WW, MM) developed comprehensive metadata and an indicator dictionary for each recommended core indicator. Finally (Step 9), these indicators and metadata were prepared for eventual field testing across different settings.

Results

In Round 1 of the prioritization process, 295 of the 520 quality measures were pre-selected. Review and prioritization of these measures in Round 2 resulted in a smaller set of 172 measures (Fig. 3), all of which were retained and defined to constitute the QI indicator catalogue (not the focus of this paper). Round 3 then generated 19 core indicators which, following the final expert consultation, were increased to 25 (Tables 3 & 4); the six additional indicators were recommended to account for the respectful care and experience of care components.

Table 3 Distribution of core indicators by MD, QS, indicator classification, and service level
Table 4 List of recommended core indicators and their definitions

The final recommended set of core indicators is not evenly distributed across QSs, MDs, system categories, and service levels. Most indicators are process-related (64%), followed by outcome/impact indicators (20%). A large proportion of the core indicators (84%) can be measured at both outpatient and inpatient levels. Fourteen core indicators related to MD-1, measuring prioritized clinical content areas along with unnecessary or harmful practices, patient safety, and rational use of medications. The impact indicator “Institutional Child Mortality Rate” cuts across different MDs and QSs; for simplicity, it was mapped to MD-1. Four core indicators were mapped to MD-2: two relate to QS-2, and the other two to QS-7 and QS-8. Lastly, seven core indicators reflected MD-3 on child- and family-centred practices and experiences of care; three of these were mapped to QS-4 and two to QS-5. The distribution of the parent QI catalogue indicators across different measurement typologies is presented in Additional File 4.
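As a quick arithmetic check, the percentages reported for the final set follow from the underlying counts over the 25 core indicators. The counts below are inferred from the percentages and mappings stated in the text, not taken from Tables 3 & 4 directly:

```python
# Consistency check on the reported distribution of the 25 core indicators.
# Counts are inferred from the percentages and MD mappings given in the text.
total = 25
process_related = 16      # "process-related (64%)"
outcome_impact = 5        # "outcome/impact indicators (20%)"
both_service_levels = 21  # measurable at outpatient and inpatient levels (84%)
md_counts = {"MD-1": 14, "MD-2": 4, "MD-3": 7}  # mappings reported in the text

assert round(100 * process_related / total) == 64
assert round(100 * outcome_impact / total) == 20
assert round(100 * both_service_levels / total) == 84
assert sum(md_counts.values()) == total
print("distribution figures are internally consistent")
```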

Discussion

Paediatric QoC measurement has long been a complex area, with limited global guidance and recommendations on how to develop paediatric QoC indicators. In many settings, there are general challenges to measuring QoC, including those related to data sources and quality. There are also challenges specific to measuring QoC among children, owing to their different needs and capabilities. At a minimum, a well-balanced set of paediatric QoC indicators should include measures of preventive care; treatment of acute conditions; treatment of diseases and disabilities requiring long-term care; and different indicator typologies, including input, process, and outcome indicators. Furthermore, paediatric QoC indicators should include measures of experience of care from both the child and the caregiver perspective.

In response to the current gaps in normative guidance around paediatric QoC measurement, we make recommendations for a core set of QoC indicators for measuring and monitoring paediatric QoC in health facilities, and reporting at all levels for learning, accountability, as well as progress tracking.

The recommended indicators in context

We make recommendations for core indicators that are mapped to the paediatric QSs for health facilities. Almost all QSs are represented by at least one core indicator. However, certain QSs have greater representation than others because they are more complex in terms of the number of quality dimensions to be considered and their relative importance in influencing health care outcomes. This approach resulted in the prioritization of more provision-of-care and experience-of-care indicators than indicators measuring the availability of inputs. While important, inputs alone do not guarantee that a service is provided correctly and consistently.

Our goal was to develop a parsimonious set of robust indicators that provide high-level insights into the quality of paediatric care in health facilities, as opposed to a comprehensive suite of indicators providing detailed measurement. Less emphasis was thus placed on overall feasibility of measurement in the short term, especially in LMICs, for two main reasons.

Firstly, QoC measurement is a relatively new area in most LMICs, and many countries are yet to institutionalise it within their mainstream health information systems [18]. Developing paediatric QoC indicators with a strong emphasis on feasibility of measurement would have yielded an even more imbalanced set of indicators, in terms of both their ability to measure the QSs from which they were derived and the indicator metadata that defines critical attributes for QoC indicators. We therefore recognize that not all the recommended core indicators can be measured across all settings in the short term. Globally, the ability of countries and institutions to measure, collect, report, and use data for most health care indicators varies greatly, including in high-income settings [19,20,21]. Our argument is therefore that the audience for these indicators—countries and partners working in the paediatric QoC programming and QI space—should, in the short term, select and use the recommended indicators for which data are available in their health information systems. Users can then continue to improve their data systems to accommodate indicators that are not currently reported in standardized medical documentation, are more complex to measure, or require additional data elements to complete the full set of core indicators. We also note that even in high-income settings, introducing new indicators in hospitals may come with administrative challenges: existing health information systems may require changes such as the adaptation of patient administration forms and facility registers, and an increase in the administrative burden of data collection, collation, and reporting [22].

Secondly, the recommended set is a response to urgent programmatic needs on a global scale in paediatric QoC implementation and measurement. Field-testing of indicators takes time and resources, and there are methodological weaknesses in assessing the feasibility and sustainability of measuring and monitoring newly developed indicators outside research contexts [23, 24]. Field-testing of these indicators is the next step, which WHO and its partners will lead in multiple countries.

Strength and limitations

The implementation of expert recommendations on the methodology to develop the core indicators was not without challenges. The health status of, and risk factors for illness among, young infants, infants, children under 5, and young adolescents differ and change over time. This created some challenges in prioritizing core indicators that are reflective of all age categories. The age categories where quality measurement is most crucial were thus prioritized, which introduced some imbalances. Furthermore, the indicators recommended for measuring experience of care will likely require collecting the views of children and/or their caregivers, as appropriate, and the content of the information provided will depend on several individual and contextual factors, including the age of the child and local cultural norms regarding care, age of majority, and parental rights. For example, meaningful participation and input from children can vary based on age and condition. Similarly, child health rights, especially for young adolescents, may vary from one country to another, which would render global reporting a challenge. In some cases, the experiences of the caregiver and the child may differ, and future work will be needed to understand how to measure these both together and separately.

A systematic review of methods used to develop quality indicators showed that patient participation in the development of quality-of-care indicators remains uncommon, and there is no standardized methodology for how best to involve patients and communities in the process [27]. To address this limitation, organizations and experts working on paediatric experience of care, patient care advocacy, and child rights were included in the development of the experience of care standards and the indicator prioritization processes. Furthermore, during the field-based evaluation of the proposed indicators, WHO will seek to involve children and their caregivers to test the smaller set of indicators that focus on respectful care and experience of care in various settings.

Some clinical indicators are compound metrics which require computation from several data elements, in some instances drawn from different data sources. This was inevitable given the focus on robust measurement of quality as opposed to convenience of measurement. While compound indicators provide a high-level, comprehensive understanding of QoC at the national and global levels, they require further disaggregation, or additional catalogue indicators, at the subnational and facility levels to identify and address the root causes of poor-quality services.

Furthermore, we acknowledge that the indicator prioritization algorithm we used was complex and, in places, involved subjective criteria due to a lack of scientific evidence to guide the process. For example, the criteria used to prioritize some input measures involved weights which are arguably subjective.

Similarly, because of the goal of keeping the list of core indicators short, there are a few areas (e.g., referral systems) that are not included in the core list but could be part of a future research agenda. Despite these challenges, however, the iterative peer-review process with various expert groups added to the robustness of the proposed suite of indicators. Furthermore, the criteria used to prioritize the indicators are evidence-based and presented an opportunity to select and define indicators that are core to paediatric QoC measurement in the health facilities where the QSs are supposed to be implemented.

Lastly, the Excel tool we developed to operationalize the indicator prioritization methodology allows the user to add indicators, change assigned scores and prioritization weights, and designate cut-off scores, after which the tool automatically selects the core and catalogue indicators. These features, along with the ability to systematize and sort the measures by various categories, are particularly important in the context of constantly changing evidence and emerging local or global priorities.

Implications for research and practice

The proposed suite of indicators and their metadata are part of WHO’s living normative guidance on MNCH QoC measurement. They are not a final product and may be refined or revised based on the findings of future research on their validity in various contexts, especially in LMICs. An important next step will therefore be a large-scale, multicountry study to determine how these indicators can best be measured in different contexts. Immediate utilization of these indicators in practice and on a global scale will require a tiered approach that considers varying country contexts. Countries will need to decide which indicators they can measure and report on in the short term, and which require further health information system adaptation in the medium to long term.

Conclusions

The deductive, multistep approach we used to develop core paediatric QoC indicators yielded a robust set of indicators representing all 8 QSs. The recommended indicators can be adopted at country level using a tiered approach to support urgent paediatric and young adolescent quality-of-care programming and improvement work. However, further research is needed to assess the feasibility of their measurement across settings, as well as their standardization and validation for future global reporting.

Disclaimer

The authors alone are responsible for the views expressed in this article and they do not necessarily represent the views, decisions, or policies of the institutions with which they are affiliated.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary information files.

Abbreviations

LMICs:

Low- and middle-income countries

WHO:

World Health Organization

QSs:

Quality Standards

MNH:

Maternal and Newborn Health

QoC:

Quality of Care

QDs:

Quality Domains

QMs:

Quality Measures

MNCH:

Maternal, Newborn, and Child Health

QI:

Quality Improvement

CHAT:

Child Health Accountability Tracking

USAID:

United States Agency for International Development

UNICEF:

United Nations Children's Fund

SL:

Service Level

SC:

System Category

CA:

Clinical Area

MD:

Measurement Domain

MSD:

Measurement Sub-Domain

References

  1. Boerma T, Requejo J, Victora CG, Amouzou A, George A, Agyepong I, et al. Countdown to 2030: tracking progress towards universal coverage for reproductive, maternal, newborn, and child health. Lancet. 2018;391(10129):1538–48.

  2. Kruk ME, Gage AD, Arsenault C, Jordan K, Leslie HH, Roder-DeWan S, et al. High-quality health systems in the Sustainable Development Goals era: time for a revolution. Lancet Glob Health. 2018;6(11):e1196–252.

  3. Kruk ME, Gage AD, Joseph NT, Danaei G, García-Saisó S, Salomon JA. Mortality due to low-quality health systems in the universal health coverage era: a systematic analysis of amenable deaths in 137 countries. Lancet. 2018;392(10160):2203–12.

  4. Tunçalp Ӧ, Were WM, MacLennan C, Oladapo OT, Gülmezoglu AM, Bahl R, et al. Quality of care for pregnant women and newborns: the WHO vision. BJOG Int J Obstet Gynaecol. 2015;122(8):1045–9.

  5. World Health Organization. Standards for improving quality of maternal and newborn care in health facilities. Geneva: World Health Organization; 2016.

  6. World Health Organization. Standards for improving the quality of care for children and young adolescents in health facilities. Geneva: World Health Organization; 2018.

  7. World Health Organization. Standards for improving the quality of care for small and sick newborns in health facilities. Geneva: World Health Organization; 2020.

  8. Chitashvili T, et al. Effectiveness and cost-effectiveness of quality improvement interventions for integrated management of newborn and childhood illness in Northern Uganda. Rockville: University Research Co., LLC (URC); 2020.

  9. Cherkezishvili E, Mutanda P, Nalwadda G, Kauder S. USAID Health Care Improvement Project. Rockville: University Research Co., LLC (URC); 2020.

  10. World Health Organization. Ending preventable child deaths from pneumonia and diarrhoea by 2025: the integrated Global Action Plan for Pneumonia and Diarrhoea (GAPPD). Geneva: World Health Organization; 2013.

  11. World Health Organization. Quality of care for maternal and newborn health: a monitoring framework for the network countries. 2019. Available from: https://www.who.int/docs/default-source/mca-documents/advisory-groups/quality-of-care/quality-of-care-for-maternal-and-newborn-health-a-monitoring-framework-for-network-countries.pdf?sfvrsn=b4a1a346_2

  12. Stelfox HT, Straus SE. Measuring quality of care: considering measurement frameworks and needs assessment to guide quality indicator development. J Clin Epidemiol. 2013;66(12):1320–7.

  13. Kötter T, Blozik E, Scherer M. Methods for the guideline-based development of quality indicators: a systematic review. Implement Sci. 2012;7(1):21.

  14. van der Ploeg E, Depla M, Shekelle P, Rigter H, Mackenbach JP. Developing quality indicators for general practice care for vulnerable elders; transfer from US to The Netherlands. BMJ Qual Saf. 2008;17(4):291–5.

  15. Shield T, Campbell S, Rogers A, Worrall A, Chew-Graham C, Gask L. Quality indicators for primary care mental health services. BMJ Qual Saf. 2003;12(2):100–6.

  16. Mainz J. Developing evidence-based clinical indicators: a state of the art methods primer. Int J Qual Health Care. 2003;15(Suppl 1):i5–11.

  17. Strong K, Requejo J, Agweyu A, McKerrow N, Schellenberg J, Agbere DA, et al. Child Health Accountability Tracking: extending child health measurement. Lancet Child Adolesc Health. 2020;4(4):259–61.

  18. Kruk ME, Kelley E, Syed SB, Tarp F, Addison T, Akachi Y. Measuring quality of health-care services: what is known and where are the gaps? Bull World Health Organ. 2017;95(6):389.

  19. Campbell SM, Kontopantelis E, Hannon K, Burke M, Barber A, Lester HE. Framework and indicator testing protocol for developing and piloting quality indicators for the UK quality and outcomes framework. BMC Fam Pract. 2011;12(1):85.

  20. Winslade N, Taylor L, Shi S, Schuwirth L, Van der Vleuten C, Tamblyn R. Monitoring community pharmacists’ quality of care: a feasibility study of using pharmacy claims data to assess performance. BMC Health Serv Res. 2011;11(1):12.

  21. Peña A, Virk SS, Shewchuk RM, Allison JJ, Dale Williams O, Kiefe CI. Validity versus feasibility for quality of care indicators: expert panel results from the MI-Plus study. Int J Qual Health Care. 2010;22(3):201–9.

  22. Rubin HR, Pronovost P, Diette GB. From a process of care to a measure: the development and testing of a quality indicator. Int J Qual Health Care. 2001;13(6):489–96.

  23. Madaj B, Smith H, Mathai M, Roos N, Van Den Broek N. Developing global indicators for quality of maternal and newborn care: a feasibility assessment. Bull World Health Organ. 2017;95(6):445.

  24. Ntoburi S, Hutchings A, Sanderson C, Carpenter J, Weber M, English M. Development of paediatric quality of inpatient care indicators for low-income countries: a Delphi study. BMC Pediatr. 2010;10(1):1–11.

  25. World Health Organization, UNICEF. Global strategy for women’s, children’s and adolescents’ health (2016–2030). Geneva: World Health Organization; 2016.

  26. World Health Organization. Global reference list of 100 core health indicators (plus health-related SDGs). Geneva: World Health Organization; 2018.

  27. Kötter T, Blozik E, Scherer M. Methods for the guideline-based development of quality indicators: a systematic review. Implement Sci. 2012;7(1):21. https://doi.org/10.1186/1748-5908-7-21
Acknowledgements

Special thanks go to different experts and expert groups that contributed to this work including program officials responsible for child health in various WHO country offices and ministries of health across three WHO regional offices; the Child Health Task Force; individual experts from various academic and research institutions; and CHAT advisory group. The authors would also like to recognize and thank Patricia Jodrey from USAID for her careful review of and helpful contributions to this article.

Funding

This work was made possible by the United States Agency for International Development (USAID) through a grant to the World Health Organization (grant number US GH/MCHN HL6 PED QOC).

Author information

Authors and Affiliations

Authors

Contributions

MM, TC, and WW conceptualized the methodology for developing the core indicators, TC was the lead implementer of the methodology, AC prepared the first draft, MM coordinated the review and write up of subsequent iterative drafts, all authors were involved in the review of the methodology and the proposed indicators and commented on iterative drafts of this article. All authors have approved the final version.

Corresponding author

Correspondence to Moise Muzigaba.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethics approval and consent to participate

Not Applicable.

Consent for publication

Not Applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Supplementary file 1.

 Summary of standards and quality statements

Additional file 2: Supplementary file 2.

 Prioritization algorithm

Additional file 3: Supplementary file 3.

List of prioritized clinical content areas for the selection of measures

Additional file 4: Supplementary file 4.

Distribution of QI catalogue indicators by MDs, QSs, indicator classification, and service level

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Muzigaba, M., Chitashvili, T., Choudhury, A. et al. Global core indicators for measuring WHO’s paediatric quality-of-care standards in health facilities: development and expert consensus. BMC Health Serv Res 22, 887 (2022). https://doi.org/10.1186/s12913-022-08234-5
