Study sites and participants
Eight rural district hospitals from four of Kenya's eight provinces were purposively chosen to represent rural district hospitals in Kenya; they have been described in full elsewhere. Neither the Ministry of Health nor the hospitals had any defined procedures for implementing new clinical guidelines prior to the study. The Kenya Medical Research Institute National Ethics and Scientific Review Committees approved the study.
Randomization and masking
Before randomization, meetings were held with the management of the eight shortlisted hospitals at which the study design, mode of data collection, duration of the study, and the intervention were discussed. Each hospital held internal discussions, after which the study team sought assent regarding the hospital's participation in the study. After obtaining assent from all eight hospitals, the hospitals were allocated to either the full intervention or the partial intervention using restricted randomization. Hospitals coded H1-H4 received the full intervention while hospitals H5-H8 received the partial intervention (control). Of the 70 possible ways of allocating eight hospitals to two groups of four, seven gave relatively well-balanced groups in terms of hospital-level covariates, and one of these was randomly chosen using a "blind draw" procedure. It was not possible to mask the treatment allocation of participating hospitals, but details of group allocations were not publicly disseminated, geographical distances between hospitals were relatively large, and there is typically little formal effort to transfer knowledge and practice between hospitals.
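As an illustration of this restricted randomization procedure, the Python sketch below enumerates the 70 possible allocations of eight hospitals to two arms of four, retains those balanced on a hospital-level covariate, and draws one at random. The covariate values and the balance tolerance are hypothetical placeholders; the study's actual balance criteria are not reproduced here.

```python
import itertools
import random

# Hypothetical hospital-level covariate (illustrative values only):
# annual pediatric admissions for hospitals H1-H8.
admissions = {"H1": 1200, "H2": 950, "H3": 1400, "H4": 800,
              "H5": 1100, "H6": 1000, "H7": 1300, "H8": 900}
hospitals = sorted(admissions)

# Enumerate all C(8, 4) = 70 possible allocations of four hospitals
# to the full-intervention arm.
allocations = list(itertools.combinations(hospitals, 4))
assert len(allocations) == 70


def arm_mean(arm):
    return sum(admissions[h] for h in arm) / len(arm)


# Retain allocations whose arms are balanced on the covariate, here an
# absolute difference in mean admissions below an assumed tolerance.
TOLERANCE = 100  # assumed balance criterion, not from the paper
balanced = [
    a for a in allocations
    if abs(arm_mean(a) - arm_mean([h for h in hospitals if h not in a]))
    < TOLERANCE
]

# "Blind draw": select one balanced allocation at random.
intervention = random.choice(balanced)
control = [h for h in hospitals if h not in intervention]
print("Intervention:", intervention, "Control:", control)
```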
The intervention was delivered over an 18-month period (from September 2006 to April 2008) and aimed to improve the quality of pediatric admission care through implementation of best-practice guidelines and local efforts to tackle organizational constraints. The partial intervention delivered in control hospitals comprised a 1.5-day lecture-based introductory seminar explaining the evidence-based clinical practice guidelines, dissemination of these guidelines and accompanying job aids, and hospital performance assessment surveys conducted every six months followed by written feedback. The intervention hospitals additionally received 5.5 days of training on ETAT+, a local hospital facilitator responsible for on-site problem solving who received external supervisory support by telephone from the implementation team, and face-to-face feedback of survey findings at the end of each survey (a full description is given elsewhere). This package was delivered in addition to the written feedback and the dissemination of clinical guidelines and job aids provided to all hospitals.
Data relevant to this study were collected at baseline, six months post-baseline, and at the end of the intervention (18 months) in both control and intervention hospitals. Data collection teams received three weeks of training, including a pilot survey, prior to baseline data collection; further details of procedures are supplied elsewhere. At baseline and in each subsequent round, up to four data collection teams working concurrently spent two weeks at each hospital collecting retrospective data from a random sample of medical records of children discharged over the preceding six months. During each survey, one team member was assigned to collect prospective data on the process of care for all children present on the wards at the start of the survey and for every child admitted during the two weeks that followed. The retrospective dataset therefore covers children admitted during the six-month period preceding the survey, while the prospective dataset covers children admitted during the two weeks in which data collection was taking place. The aim was to enroll 50 cases per survey per hospital, based on estimated admission rates for hospitals of the size studied. Data were abstracted onto standardized forms from medical records and other supporting documents, such as nursing charts and laboratory requests, with clarification sought from health workers or children's caretakers as needed. For quality assurance purposes, team leaders assessed data quality for all cases and independently re-evaluated a 10% sample of retrospective and prospective case records during data collection. Ethical approval was granted for confidential abstraction of data from case records without individual consent.
The primary outcome was the change in quality of pediatric care, measured using 13 process-of-care indicators, in intervention versus control hospitals. These process indicators, the same as those used for the retrospective data analysis, were derived from evidence-based clinical guideline recommendations for the management of pneumonia, malaria, and diarrhea and/or dehydration. Three additional indicators, focusing on key policy recommendations for pediatric care, were vitamin A administration on admission, provider-initiated HIV testing, and identification of missed opportunities for vaccination. Mortality was not a primary outcome.
All 13 process indicators were dichotomous variables indicating whether patient assessment, treatment, and supportive care were implemented according to guidelines. An overall assessment score was calculated representing the proportion of relevant assessment tasks completed for each child. This score, constrained between zero and one, was derived from the maximum possible number of assessment indicators for each child: 5 indicators applying to all children, with extra indicators for children diagnosed with malaria (4), pneumonia (4), and diarrhea/dehydration (2) (Additional file 1).
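A short sketch of this score calculation follows. The indicator counts match the description above, while the function and variable names are hypothetical placeholders rather than the study's actual indicator list.

```python
# Per-child assessment score: completed tasks divided by the maximum
# number of applicable tasks, so the score lies between 0 and 1.

BASE_TASKS = 5  # assessment indicators applicable to every child
EXTRA_TASKS = {"malaria": 4, "pneumonia": 4, "diarrhea/dehydration": 2}


def assessment_score(diagnoses, tasks_completed):
    """Proportion of applicable assessment tasks documented (0 to 1)."""
    applicable = BASE_TASKS + sum(EXTRA_TASKS[d] for d in diagnoses)
    return tasks_completed / applicable


# A child with malaria and diarrhea/dehydration has 5 + 4 + 2 = 11
# applicable tasks; if 8 were documented, the score is 8/11 ~ 0.73.
print(assessment_score(["malaria", "diarrhea/dehydration"], 8))
```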
The data collection period for each hospital in each survey was restricted to two weeks, limiting the number of prospectively observed patient episodes to approximately 50 cases per hospital per survey given the workload in the selected hospitals. Since the number of prospective observations was limited, we explored the ability to detect important effect sizes assuming conventional values of 80% power and 95% confidence, with only 4 clusters per arm and intra-class correlation coefficients (ICCs) derived from the retrospective data analysis (Additional file 2). For example, assuming 25 malaria cases per site at the 18-month survey and 50% correct management in control hospitals, the difference between intervention and control arms that would seem an unlikely chance finding ranged from greater than 9% to greater than 20% for ICC values of 0.008 and 0.226, respectively (see Additional file 2). Pooling data across the two post-intervention surveys would allow identification of smaller apparent differences (although this takes no account of multiple comparisons).
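To make the role of the ICC concrete, the sketch below computes an approximate detectable risk difference using a standard design-effect inflation of the variance. This is a simplified illustration only; it will not exactly reproduce the figures in Additional file 2, which were derived with the study's own methods.

```python
from math import sqrt

from scipy.stats import norm

z_alpha = norm.ppf(0.975)  # two-sided 5% significance
z_beta = norm.ppf(0.80)    # 80% power


def detectable_difference(p_control, m, clusters_per_arm, icc):
    """Approximate detectable risk difference for a clustered design.

    m: cases per cluster. The design effect 1 + (m - 1) * icc inflates
    the variance relative to simple random sampling.
    """
    n = m * clusters_per_arm           # individuals per arm
    deff = 1 + (m - 1) * icc           # design effect
    n_eff = n / deff                   # effective sample size per arm
    se = sqrt(2 * p_control * (1 - p_control) / n_eff)
    return (z_alpha + z_beta) * se


# 25 malaria cases per hospital, 4 hospitals per arm, 50% correct
# management in control hospitals:
for icc in (0.008, 0.226):
    print(icc, round(detectable_difference(0.5, 25, 4, icc), 3))
```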
Data from the retrospective study were a subset of the data from the four surveys (including one at 12 months) used by Ayieko et al. For the prospective study, data entry was conducted independently by two clerks using Microsoft Access databases and then verified. For both studies (prospective and retrospective), data analyses were conducted using Stata version 11. Descriptive sample characteristics were calculated at the hospital level for each survey period, using medians and inter-quartile ranges (IQR) for skewed continuous variables, and means or proportions with 95% confidence intervals (95% CI) for other continuous and binary categorical variables, respectively.
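Hospital-level summaries of this kind could be computed as in the following sketch, given here in Python rather than the Stata used in the study; the column names and data values are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical child-level data for illustration.
df = pd.DataFrame({
    "hospital": ["H1", "H1", "H1", "H2", "H2", "H2"],
    "age_months": [7, 30, 12, 14, 52, 9],   # skewed continuous variable
    "antibiotic_given": [1, 0, 1, 1, 1, 0],  # binary indicator
})

# Median and inter-quartile range for a skewed variable, by hospital.
iqr = df.groupby("hospital")["age_months"].quantile([0.25, 0.5, 0.75]).unstack()


def prop_ci(x):
    """Proportion with a normal-approximation 95% CI."""
    p = x.mean()
    half = 1.96 * np.sqrt(p * (1 - p) / len(x))
    return pd.Series({"prop": p, "lo": p - half, "hi": p + half})


print(iqr)
print(df.groupby("hospital")["antibiotic_given"].apply(prop_ci))
```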
The effect of the intervention was assessed in two ways for both the retrospective and the prospective data. The first set of analyses combined data from surveys 2 and 4 (post-intervention) for each hospital and, for each process indicator, used an unpaired t-test to compare hospital-level summary measures across the 4 intervention and 4 control hospitals. Summary measures of each indicator were used to obtain unadjusted risk differences and risk ratios for the effect of the intervention. Adjusted risk ratios and risk differences were calculated using covariate-adjusted cluster residuals, whereby a logistic regression model is fitted using only personal (child-level) factors and a residual is computed for each of the 8 hospitals as the difference between the observed and predicted hospital means. These hospital residuals were then compared using an unpaired t-test across the 4 intervention and 4 control hospitals.
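The covariate-adjusted cluster-residual steps can be sketched as follows, again in Python rather than Stata. The synthetic data, column names, and covariates are hypothetical stand-ins for the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import ttest_ind

# Synthetic child-level data for illustration only.
rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "hospital": rng.choice([f"H{i}" for i in range(1, 9)], n),
    "age_months": rng.integers(1, 60, n),
    "sex": rng.choice(["M", "F"], n),
    "indicator": rng.integers(0, 2, n),  # 0/1 process indicator
})
df["arm"] = np.where(df["hospital"].isin(["H1", "H2", "H3", "H4"]),
                     "intervention", "control")

# Step 1: logistic regression on child-level covariates only
# (no intervention or hospital terms).
model = smf.logit("indicator ~ age_months + C(sex)", data=df).fit()
df["predicted"] = model.predict(df)

# Step 2: one residual per hospital, the difference between the
# observed and model-predicted hospital means.
by_hosp = df.groupby(["arm", "hospital"]).agg(
    observed=("indicator", "mean"),
    predicted=("predicted", "mean"),
)
by_hosp["residual"] = by_hosp["observed"] - by_hosp["predicted"]

# Step 3: unpaired t-test of the 4 intervention vs 4 control residuals.
print(ttest_ind(by_hosp.loc["intervention", "residual"],
                by_hosp.loc["control", "residual"]))
```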
A second analysis used a multi-level logistic regression model for each process indicator, taking into account clustering at the hospital level. The multi-level model used data from all three surveys, without adjusting for individual-level or hospital-level characteristics, to obtain crude odds ratios (OR); the impact of the intervention was assessed as the interaction between study arm and survey period (surveys 2 and 4 pooled versus baseline). This was reported as ratios of odds ratios for the dichotomous indicators and as a difference in differences for the assessment score. A bar chart comparing adjusted risk ratios for the prospective and retrospective datasets was plotted to provide a visual indication of how well the results from the two datasets agree.
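A sketch of this interaction model is shown below, once more in Python for illustration. statsmodels offers a Bayesian random-intercept logistic model (`BinomialBayesMixedGLM`), which is structurally analogous to, though not the same as, the frequentist multilevel model fitted in Stata; all variable names and data are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Synthetic data; `arm` = 1 for intervention hospitals,
# `post` = 1 for surveys 2 and 4 pooled, 0 for baseline.
rng = np.random.default_rng(2)
n = 1200
df = pd.DataFrame({
    "hospital": rng.choice([f"H{i}" for i in range(1, 9)], n),
    "post": rng.integers(0, 2, n),
    "indicator": rng.integers(0, 2, n),
})
df["arm"] = df["hospital"].isin(["H1", "H2", "H3", "H4"]).astype(int)

# Random-intercept logistic model: the arm:post interaction captures
# the intervention effect (a ratio of odds ratios on the odds scale).
model = BinomialBayesMixedGLM.from_formula(
    "indicator ~ arm * post",
    {"hospital": "0 + C(hospital)"},  # random intercept per hospital
    data=df,
)
result = model.fit_vb()
print(result.summary())

# exp(coefficient on arm:post) estimates the ratio of odds ratios
# comparing the baseline-to-post change in intervention vs control.
```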