Research article | Open Access | Open Peer Review
Assessment of paediatric inpatient care during a multifaceted quality improvement intervention in Kenyan District Hospitals – use of prospectively collected case record data
BMC Health Services Research, volume 14, Article number: 312 (2014)
In assessing quality of care in developing countries, retrospectively collected data are usually used because they are readily available. Retrospective data, however, are subject to biases such as recall bias and non-response bias. Comparing results obtained from prospectively and retrospectively collected data helps validate the use of the readily available retrospective data for assessing quality of care in past and future studies.
Prospective and retrospective datasets were obtained from a cluster randomized trial of a multifaceted intervention, conducted in eight rural Kenyan district hospitals, aimed at improving paediatric inpatient care by improving the management of children admitted with pneumonia, malaria, and diarrhoea and/or dehydration. Four hospitals received a full intervention and four a partial intervention. Data were collected through three two-week surveys conducted at baseline and after 6 and 18 months. Retrospective data were sampled from the paediatric medical records of patients discharged during the six months preceding each survey, while prospective data were collected from patients discharged during the two-week period of each survey. Risk differences for 16 quality of care indicators during the post-intervention period were analyzed separately for the prospective and retrospective datasets and then plotted side by side for comparison.
For the prospective data there was strong evidence of an intervention effect for 8 indicators and weaker evidence for one further indicator, with effect sizes ranging from a 23% to a 60% difference. For the retrospective data, 10 process indicators (including the 8 found to be statistically significant in the prospective analysis) showed statistically significant differences, with effect sizes ranging from 10% to 42%. A bar graph comparing results from the prospective and retrospective datasets showed similarity in magnitude of effects and statistical significance for all except two indicators.
Multifaceted interventions can help improve adoption of clinical guidelines and hence improve the quality of care. The similar inference reached after analyses based on prospective assessment of case management is a useful finding as it supports the utility of work based on examination of retrospectively assembled case records allowing longer time periods to be studied while constraining costs.
Current Controlled Trials ISRCTN42996612. Trial registration date: 20/11/2008
Effective organization and provision of care for children admitted to hospital with acute illness in developing countries is advocated through World Health Organization and Kenyan guidance [1–4]. The effectiveness and cost-effectiveness of one approach to their implementation in Kenya has been reported. As in other studies, that evaluation was based on retrospective examination of case records [6, 7], which is typically more feasible than examining inpatient care prospectively by observation. The appropriateness of using medical records in epidemiological research depends on the accuracy of transfer of information from patient to clinician, the accuracy and level of detail with which the clinician records that information in the medical record, and how well information is abstracted from the records. The quality of data obtained from medical records can be compromised by unclearly specified research questions, vague specification of variables, poorly designed abstraction tools, poor understanding of the data by abstractors, and incompleteness of the records themselves. A comparison of self-reported data with data abstracted from health records on initial treatment of prostate cancer showed good agreement between the two sources, while a comparison of prospectively and retrospectively collected data on risk factors for coronary artery disease showed that medical records are less accurate and less complete than prospectively collected data. Eder et al. discuss strategies for enhancing the reliability of data abstracted from medical records, including establishing an order of priority among multiple sources of information and using standardized terminology.
Abstracting data from medical records can be preferable to prospective designs because it is less resource intensive, can easily be used to explore possible associations, and can be performed at the researchers' convenience. However, such data are subject to numerous sources of bias, often have substantial missing data where documentation of care is poor, and make it difficult to establish true causal relationships. Here we use prospectively collected case data to examine the effects of an intervention aimed at improving paediatric practices in Kenya and contrast these findings with those previously reported based on retrospectively collected data. Our aim is both to examine the intervention effects using an arguably more robust dataset and to triangulate the findings with those based on retrospective case record review.
Study sites and participants
Eight rural district hospitals from four of Kenya's eight provinces were purposively chosen to represent rural district hospitals in Kenya and have been described in full elsewhere. Neither the Ministry of Health nor the hospitals had any defined procedures for implementing new clinical guidelines prior to the study. The Kenya Medical Research Institute National Ethics and Scientific Review Committees approved the study.
Randomization and masking
Before randomization, meetings were held with the management of the eight shortlisted hospitals at which the study design, mode of data collection, duration of the study, and the intervention were discussed. Each hospital held internal discussions, after which the study team sought assent regarding the hospital's participation in the study. After obtaining assent from all eight hospitals, the hospitals were allocated to either the full or the partial intervention using restricted randomization. Hospitals coded H1–H4 received the full intervention while hospitals H5–H8 received the partial intervention (control). Of the 70 possible allocations to the intervention and control groups, seven gave relatively balanced groups in terms of hospital-level covariates, and one of these was randomly chosen using a "blind draw" procedure. It was not possible to mask the treatment allocation of participating hospitals, but details of group allocations were not publicly disseminated, the geographical distance between hospitals was relatively large, and there is typically little formal effort to transfer knowledge and practice between hospitals.
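The restricted randomization procedure described above can be sketched in a few lines. This is an illustrative reconstruction only: the hospital codes follow the paper, but the admission figures, the single balancing covariate, and the balance tolerance are invented for the example; the trial balanced on several hospital-level covariates not listed here.

```python
import itertools
import random

# Hypothetical baseline covariate (annual paediatric admissions) per hospital.
# These values are invented for illustration; the real criteria are not given.
admissions = {"H1": 900, "H2": 1200, "H3": 800, "H4": 1100,
              "H5": 950, "H6": 1000, "H7": 1150, "H8": 850}

hospitals = sorted(admissions)
total = sum(admissions.values())

# All 70 ways of choosing 4 of the 8 hospitals for the full intervention arm.
allocations = list(itertools.combinations(hospitals, 4))

# Restriction: keep only allocations where the arms are roughly balanced
# on the covariate (arm total within 100 admissions of an even split).
balanced = [a for a in allocations
            if abs(sum(admissions[h] for h in a) - total / 2) <= 100]

# "Blind draw": pick one balanced allocation at random.
random.seed(2006)
intervention_arm = random.choice(balanced)
print(intervention_arm)
```

The key design point is that randomness enters only at the final draw; the restriction step deterministically rules out badly imbalanced allocations.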
The intervention was delivered over an 18-month period (September 2006 to April 2008) and aimed to improve the quality of paediatric admission care through implementation of best practice guidelines and local efforts to tackle organizational constraints. The partial intervention delivered in control hospitals comprised a 1.5-day lecture-based introductory seminar explaining the evidence-based clinical practice guidelines, followed by dissemination of these guidelines and accompanying job aids, and regular hospital performance assessment surveys conducted every 6 months followed by written feedback. The intervention hospitals additionally received 5.5 days of training on ETAT+, a local hospital facilitator responsible for on-site problem solving who received external supervisory support by telephone from the implementation team, and face-to-face feedback of survey findings at the end of each survey (described in full elsewhere). This package was delivered in addition to the written feedback and the dissemination of clinical guidelines and job aids.
Data relevant to this study were collected at baseline, six months post baseline, and at the end of the intervention (18 months) in both control and intervention hospitals. Data collection teams received three weeks' training, including a pilot survey, prior to baseline data collection, with further details of procedures supplied elsewhere. At baseline and in each subsequent round, up to four data collection teams working concurrently spent two weeks at each hospital collecting retrospective data from a random sample of medical records of children discharged over the preceding six months. During each survey one team member was assigned to collect prospective data on processes of care for all children present on the wards at the start of the survey and for every child admitted during the two weeks that followed. The retrospective dataset therefore covers children admitted during the six months preceding each survey, while the prospective dataset covers children admitted during the two weeks of data collection. The aim was to enroll 50 cases per survey per hospital, based on estimated admission rates for hospitals of the size studied. Data were abstracted onto standardized forms from medical records and other supporting documents, such as nursing charts and laboratory requests, with clarification sought from health workers or children's caretakers as needed. For quality assurance, team leaders assessed data quality for all cases and independently re-evaluated a 10% sample of retrospective and prospective case records during data collection. Ethical approval was granted for confidential abstraction of data from case records without individual consent.
The primary outcome was change in the quality of paediatric care, measured using 13 process of care indicators, in intervention versus control hospitals. These process indicators, the same as those used for the retrospective data analysis, were derived from evidence-based clinical guideline recommendations for the management of pneumonia, malaria and diarrhoea and/or dehydration. Three additional indicators focusing on key policy recommendations for paediatric care were vitamin A administration on admission, provider-initiated HIV testing, and identification of missed opportunities for vaccination. Mortality was not a primary outcome.
All the 13 process indicators were dichotomous variables indicating whether the patient assessment, treatment and supportive care were implemented according to guidelines. An overall score for assessment was calculated representing the proportion of relevant assessment tasks completed for each child. This score, constrained between zero and one, was derived from a maximum possible number of assessment indicators for each child, with 5 indicators for all children, and extra indicators for children diagnosed with malaria (4), pneumonia (4) and diarrhea/dehydration (2) (Additional file 1).
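The per-child assessment score described above can be sketched as follows. The task names and groupings here are illustrative placeholders, not the actual indicators, which are listed in Additional file 1; only the counts (5 common tasks, plus 4, 4, and 2 diagnosis-specific tasks) follow the text.

```python
# Score = number of documented assessment tasks / number of applicable tasks,
# so it is constrained between zero and one for every child.

COMMON_TASKS = ["weight", "temperature", "respiratory_rate",
                "pallor", "consciousness"]                       # 5 for all children
DIAGNOSIS_TASKS = {
    "malaria":   ["convulsions", "ability_to_drink", "stiff_neck", "jaundice"],  # +4
    "pneumonia": ["cough_duration", "chest_indrawing", "grunting", "cyanosis"],  # +4
    "diarrhoea": ["skin_pinch", "sunken_eyes"],                                  # +2
}

def assessment_score(documented_tasks, diagnoses):
    """Proportion of applicable assessment tasks documented (0 to 1)."""
    applicable = list(COMMON_TASKS)
    for d in diagnoses:
        applicable += DIAGNOSIS_TASKS.get(d, [])
    done = sum(1 for t in applicable if t in documented_tasks)
    return done / len(applicable)

# A child admitted with malaria has 9 applicable tasks; 6 are documented:
score = assessment_score(
    {"weight", "temperature", "respiratory_rate",
     "convulsions", "ability_to_drink", "jaundice"}, ["malaria"])
print(round(score, 2))  # 6/9 -> 0.67
```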
The data collection period for each hospital in each survey was restricted to two weeks, limiting the number of prospectively observed patient episodes to approximately 50 per hospital per survey given the workload in the selected hospitals. Since the number of prospective observations was limited, we explored the ability to detect important effect sizes assuming 80% power, a 5% significance level, and only 4 clusters per arm, with intra-class correlation coefficients (ICCs) derived from the retrospective data analysis (Additional file 2). For example, assuming 25 malaria cases per site at the 18-month survey and 50% correct management in control hospitals, the differences between intervention and control arms that would seem an unlikely chance finding ranged from greater than 9% to greater than 20% for ICC values of 0.008 and 0.226 respectively (see Additional file 2). Pooling data across the two post-intervention surveys would allow identification of smaller apparent differences (though taking no account of multiple comparisons).
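A generic calculation of this kind can be sketched with the simple design-effect inflation for cluster randomization. This is a deliberately crude approximation (both arms evaluated at the control proportion, normal rather than t critical values), so it will not reproduce the paper's exact figures, which rest on the cluster-level methods of Hayes and Moulton; it does show how the detectable difference grows with the ICC.

```python
import math

def detectable_difference(p_control, m, clusters_per_arm, icc,
                          z_alpha=1.96, z_beta=0.84):
    """Approximate smallest detectable difference in proportions between arms.

    m: individuals per cluster; design effect = 1 + (m - 1) * icc
    deflates the nominal sample size to an effective sample size per arm.
    """
    n_eff = clusters_per_arm * m / (1 + (m - 1) * icc)
    # Two-sample comparison of proportions, both arms approximated at p_control:
    # d^2 = (z_a + z_b)^2 * 2 * p * (1 - p) / n_eff
    d2 = (z_alpha + z_beta) ** 2 * 2 * p_control * (1 - p_control) / n_eff
    return math.sqrt(d2)

# 25 malaria cases per hospital, 4 hospitals per arm, 50% correct management:
for icc in (0.008, 0.226):
    print(icc, round(detectable_difference(0.5, 25, 4, icc), 3))
```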
Data from the retrospective study were a subset of the data from four surveys (including one at 12 months) used by Ayieko et al. For the prospective study, data entry was conducted independently by two clerks using Microsoft Access databases and verified. For both studies, data analyses were conducted using Stata version 11. Descriptive sample characteristics were calculated at the hospital level for each survey period, using medians and inter-quartile ranges (IQR) for skewed continuous variables, and means or proportions with 95% confidence intervals (95% CI) for other continuous and binary categorical variables respectively.
The effect of the intervention was assessed in two ways for both the retrospective and the prospective data. The first set of analyses combined data from surveys 2 and 4 (post-intervention) for each hospital and, for each process indicator, used an unpaired t-test comparing hospital-level summary measures (4 intervention vs. 4 control) to assess the effect of the intervention. Summary measures of each indicator were used to obtain unadjusted risk differences and risk ratios for the effect of the intervention. Adjusted risk ratios and risk differences were calculated using covariate-adjusted cluster residuals: a logistic regression model was fitted using only individual (child-level) factors, and a residual was computed for each of the 8 hospitals as the difference between the observed and predicted hospital means. These hospital residuals were then compared using an unpaired t-test across the 4 intervention and 4 control hospitals.
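The covariate-adjusted cluster residual analysis can be sketched on simulated data. The paper used Stata; this Python sketch uses a hand-rolled logistic fit, and the covariate (age), effect sizes, and cluster sizes are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated child-level data: hospitals 0-3 = intervention, 4-7 = control.
n_per = 50
hosp = np.repeat(np.arange(8), n_per)
arm = (hosp < 4).astype(float)                      # 1 = intervention
age = rng.uniform(2, 59, hosp.size)                 # child-level covariate (months)
logit = -1.0 + 0.01 * age + 1.0 * arm               # simulated intervention effect
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))       # indicator: correct management

# Step 1: logistic regression on child-level covariates only (no intervention
# term), fitted by Newton-Raphson.
X = np.column_stack([np.ones_like(age), age])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    H = (X * (p * (1 - p))[:, None]).T @ X          # observed information
    beta += np.linalg.solve(H, X.T @ (y - p))
pred = 1 / (1 + np.exp(-X @ beta))

# Step 2: one residual per hospital: observed mean minus covariate-predicted mean.
resid = np.array([y[hosp == h].mean() - pred[hosp == h].mean() for h in range(8)])

# Step 3: unpaired t-test of the 4 intervention vs 4 control residuals (df = 6).
r1, r0 = resid[:4], resid[4:]
sp2 = (r1.var(ddof=1) + r0.var(ddof=1)) / 2         # pooled variance, equal n
t = (r1.mean() - r0.mean()) / (sp2 * (1 / 4 + 1 / 4)) ** 0.5
print(round(t, 2))
```

Reducing each hospital to a single residual is what makes the comparison robust with so few clusters: the t-test operates on 8 independent observations rather than on correlated child-level data.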
A second analysis used a multilevel logistic regression model for each process indicator, taking into account clustering at the hospital level. The multilevel model used data from all three surveys, without adjusting for individual-level or hospital-level characteristics, to obtain crude odds ratios (OR). The impact of the intervention was assessed as the interaction between intervention status and survey period (surveys 2 and 4 pooled versus baseline), reported as a ratio of odds ratios for dichotomous indicators and as a difference in differences for the assessment score. A bar chart comparing adjusted risk ratios for the prospective and retrospective datasets was plotted to provide a visual indication of how well the results from the two datasets agree.
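The ratio-of-odds-ratios interaction measure can be illustrated with hypothetical counts for one dichotomous indicator (the numbers below are invented, not taken from the trial):

```python
# (arm, period) -> (children managed per guideline, children seen).
# Illustrative counts only.
counts = {
    ("intervention", "baseline"): (20, 100),
    ("intervention", "post"):     (70, 100),
    ("control",      "baseline"): (22, 100),
    ("control",      "post"):     (40, 100),
}

def odds(successes, total):
    return successes / (total - successes)

def odds_ratio(arm):
    """Within-arm odds ratio: post-intervention period vs. baseline."""
    return odds(*counts[(arm, "post")]) / odds(*counts[(arm, "baseline")])

# Interaction effect: how much more the odds improved in the intervention
# arm than in the control arm over the same period.
ror = odds_ratio("intervention") / odds_ratio("control")
print(round(ror, 2))  # -> 3.95
```

A ratio of odds ratios above 1 indicates greater improvement in the intervention arm than in the control arm, net of any secular trend common to both; in the full model this interaction is estimated with a hospital-level random effect rather than from raw pooled counts.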
All eight participating hospitals completed the study, providing 1295 admission episodes followed up prospectively: 505 from the pre-intervention period and 790 from the post-intervention period (413 at 6 months and 377 at 18 months). A total of 6302 retrospective case records were available for analysis from the same surveys. The characteristics of children whose admissions were evaluated prospectively were similar to those in the retrospective dataset (Table 1). Full descriptions of the process indicators for the prospective and retrospective datasets, provided in Additional files 3 and 4 respectively, show improvement for many indicators between baseline and follow-up at 6 and 18 months. No hospital received additional training from external sources or other intervention components during the study period.
The adjusted risk differences and risk ratios between intervention and control hospitals for the process indicators, obtained from the pooled (6-month plus 18-month) prospective data and from the retrospective data pooled over the same periods, are shown in Table 2 (unadjusted results are shown in Table 3). The magnitude of the intervention effect varied across the process indicators. For the prospective data there was strong evidence of an intervention effect for 8 indicators and weaker evidence for one further indicator, with effect sizes varying from a 23% to a 60% difference. For the retrospective data, 10 process indicators had statistically significant differences, with effect sizes varying from 10% to 42%. The mean risk differences between intervention and control hospitals, using data collected at the 6- and 18-month surveys only, are displayed graphically for both datasets in Figure 1. This illustrates their similarity in magnitude of effects and statistical significance for all except two indicators: the proportion of patients receiving quinine who received an overdose and the proportion of patients receiving gentamicin who received an underdose. For both indicators the retrospective dataset gave more favorable results. These discrepancies were possibly caused by the small number of patients with either a gentamicin or quinine prescription on which to calculate these dosage error indicators, particularly in the prospective dataset.
The results from multilevel models comparing the effect of the intervention during the post-intervention period to that at baseline for both the prospective and retrospective datasets are presented in Table 4. The prospective data showed statistically significant differences between baseline and post-intervention periods for 8 process indicators. These differences were also observed in the retrospective dataset, except for HIV testing, which showed favorable results in the retrospective data only. It was not possible to calculate a ratio of odds ratios for one indicator (correct fluid prescription for patients presenting with diarrhoea and/or dehydration) in the prospective dataset due to the small number of observations. Analysis of the retrospective data suggested a favorable effect of the intervention for this indicator.
We evaluated a multifaceted approach to implementation of clinical guidelines aimed at treatment of the illnesses that cause most deaths in Kenyan district hospitals. We used data collected by personnel present in the hospital for two-week survey periods prior to the intervention and 6 and 18 months after it. The data showed that use of guideline-recommended practices for treatment of children with severe illness was poor at baseline. They further showed marked improvement in the adoption of guideline-recommended practices in both partial and full intervention groups, with improvements more marked in the full intervention group. It is worth noting that improvements were sustained between 6 and 18 months after initial training despite very high staff turnover among the junior clinicians responsible for much of the care. Indeed, of 109 clinical staff responsible for attending to the patients sampled as part of the retrospective data analysis in survey 4 in the intervention hospitals, only nine (8.3%) had received any specific formal or ad hoc guideline-related training.

Retrospective data are more efficiently collected than prospective data and can cover a wider time period, reducing possible effects of seasonal or other temporal variation. However, problems of missing data may be greater when using retrospective case records, a data quality problem potentially overcome by prospective collection. As demonstrated in Figure 1, the analysis of prospectively collected data gave results similar to those obtained using retrospective data. This consistency helps validate, through triangulation, the methods used for retrospective collection of case record data in assessing quality of care. The results provide reassurance that assessing practice performance using retrospective sampling is of value.
The underlying hypothesis behind these analyses is that the assessment of quality of care will be similar for retrospective and prospective patients. The analysis recommended by Hayes is robust and simple, which makes it well suited to hospital surveys in countries with limited resources. In this paper we had individual-level data, allowing the analysis to adjust for confounders that may be associated with the quality of care these children received. The analysis shows that the retrospective and prospective patients gave similar results for the impact of the intervention on quality of care. In the multilevel analysis, the impact of the intervention is assessed through an interaction effect between survey time (6 and 18 months vs. baseline at 0 months) and the intervention. While this may have theoretical advantages, it may be more sensitive to small numbers, especially in the estimation of the interaction, and the results were more varied than with the simpler analysis. The distributional assumptions of multilevel models are difficult to verify with few clusters, as in our case, but these models allow simultaneous examination of the intervention effect and the effects of other covariates, which is not possible using the method proposed by Hayes.
The main limitation of the study design was that hospitals were not selected at random from a list of all eligible Kenyan hospitals. While the inclusion of only a few hospitals (clusters) was primarily for logistical reasons, it undermines notions of generalizability and balance even if random selection and random allocation are used. Masking was clearly not possible, and this could have introduced bias during data collection, arguably particularly during prospective data collection. We tried to minimize such bias through extensive training in survey methods and the use of standard operating procedures. This report also illustrates how we have tried to triangulate findings using different datasets likely subject to different potential biases. Finally, as there were only four hospitals in each treatment group, efforts to adjust for baseline characteristics may not have been as successful as we would have liked.
This study helps strengthen the growing evidence [17, 18] that multifaceted interventions can help improve adoption of guidelines and more generally the quality of pediatric care. Carrying out studies of similar intervention packages to improve paediatric care in other low income settings would help strengthen the evidence base most applicable to these settings and provide opportunities for understanding the influence of different contexts on effectiveness . While earlier reports based on retrospectively collected data supported the effectiveness of the intervention examined, the analysis of prospectively collected data described here serves to support those findings while overcoming potential bias inherent to the use of retrospective case record review. The similar inference reached after analyses based on prospective assessment of case management is a useful finding as it supports the utility of work based on examination of retrospectively assembled case records allowing longer time periods to be studied while constraining costs. Further we have been able to develop and apply a suite of analytical methods for assessing the impact of interventions aimed at improving paediatric hospital care that should inform the design and conduct of future studies in this field.
ETAT: Emergency triage assessment and treatment
English M, Ntoburi S, Wagai J, Mbindyo P, Opiyo N, Ayieko P, Opondo C, Migiro S, Wamae A, Irimu G: An intervention to improve paediatric and newborn care in Kenyan district hospitals: understanding the context. Implement Sci. 2009, 4: 42-10.1186/1748-5908-4-42.
Irimu G, Wamae A, Wasunna A, Were F, Ntoburi S, Opiyo N, Ayieko P, Peshu N, English M: Developing and introducing evidence based clinical practice guidelines for serious illness in Kenya. Arch Dis Child. 2008, 93 (9): 799-804. 10.1136/adc.2007.126508.
Molyneux E: Paediatric emergency care in developing countries. Lancet. 2001, 357 (9250): 86-87. 10.1016/S0140-6736(00)03536-4.
Tamburlini G, Di Mario S, Maggi RS, Vilarim JN, Gove S: Evaluation of guidelines for emergency triage assessment and treatment in developing countries. Arch Dis Child. 1999, 81 (6): 478-482. 10.1136/adc.81.6.478.
Barasa EW, Ayieko P, Cleary S, English M: A multifaceted intervention to improve the quality of care of children in district hospitals in Kenya: a cost-effectiveness analysis. PLoS Med. 2012, 9 (6): e1001238-10.1371/journal.pmed.1001238.
Ayieko P, Ntoburi S, Wagai J, Opondo C, Opiyo N, Migiro S, Wamae A, Mogoa W, Were F, Wasunna A, Fegan G, Irimu G, English M: A multifaceted intervention to implement guidelines and improve admission paediatric care in Kenyan district hospitals: a cluster randomised trial. PLoS Med. 2011, 8 (4): e1001018-10.1371/journal.pmed.1001018.
Irimu GW, Gathara D, Zurovac D, Kihara H, Maina C, Mwangi J, Mbori-Ngacha D, Todd J, Greene A, English M: Performance of health workers in the management of seriously sick children at a Kenyan tertiary hospital: before and after a training intervention. PLoS One. 2012, 7 (7): e39964-10.1371/journal.pone.0039964.
Schwartz RJ, Panacek EA: Basics of research (Part 7): Archival data research. Air Med J. 1996, 15 (3): 119-124. 10.1016/S1067-991X(96)90037-1.
Allison JJ, Wall TC, Spettell CM, Calhoun J, Fargason CA, Kobylinski RW, Farmer R, Kiefe C: The art and science of chart review. Jt Comm J Qual Improv. 2000, 26 (3): 115-136.
Clegg LX, Potosky AL, Harlan LC, Hankey BF, Hoffman RM, Stanford JL, Hamilton AS: Comparison of self-reported initial treatment with medical records: results from the prostate cancer outcomes study. Am J Epidemiol. 2001, 154 (6): 582-587. 10.1093/aje/154.6.582.
Nagurney JT, Brown DF, Sane S, Weiner JB, Wang AC, Chang Y: The accuracy and completeness of data collected by prospective and retrospective methods. Acad Emerg Med. 2005, 12 (9): 884-895. 10.1111/j.1553-2712.2005.tb00968.x.
Eder C, Fullerton J, Benroth R, Lindsay SP: Pragmatic strategies that enhance the reliability of data abstracted from medical records. Appl Nurs Res. 2005, 18 (1): 50-54. 10.1016/j.apnr.2004.04.005.
Nzinga J, Ntoburi S, Wagai J, Mbindyo P, Mbaabu L, Migiro S, Wamae A, Irimu G, English M: Implementation experience during an eighteen month intervention to improve paediatric and newborn care in Kenyan district hospitals. Implementation Sci. 2009, 4: 45-10.1186/1748-5908-4-45.
StataCorp: Stata Statistical Software. Release 11 edn. 2009, College Station, TX: StataCorp LP
Hayes RJ, Moulton LH: Cluster randomised trials. 2009, London: CRC Press
English M, Schellenberg J, Todd J: Assessing health system interventions: key points when considering the value of randomization. Bull World Health Organ. 2011, 89 (12): 907-912. 10.2471/BLT.11.089524.
Althabe F, Buekens P, Bergel E, Belizán JM, Campbell MK, Moss N, Hartwell T, Wright LL: A behavioral intervention to improve obstetrical care. N Engl J Med. 2008, 358 (18): 1929-1940. 10.1056/NEJMsa071456.
Scales DC, Dainty K, Hales B, Pinto R, Fowler RA, Adhikari NK, Zwarenstein M: A multifaceted intervention for quality improvement in a network of intensive care units: a cluster randomized trial. JAMA. 2011, 305 (4): 363-372. 10.1001/jama.2010.2000.
Mackenzie M, O'Donnell C, Halliday E, Sridharan S, Platt S: Do health improvement programmes fit with MRC guidance on evaluating complex interventions? BMJ. 2010, 340.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/14/312/prepub
The work of a large number of colleagues in the KEMRI-Wellcome Trust Research Programme, individuals in hospitals and the Ministry of Health in Kenya supported the surveys drawn on in this report and their contribution is gratefully acknowledged. We also acknowledge the Director of KEMRI, with whose permission this work is published.
ME is supported by funds from The Wellcome Trust (#076827 and currently #097170). Additional funds from a Wellcome Trust Strategic Award (#084538) and a Wellcome Trust core grant awarded to the KEMRI-Wellcome Trust Research Programme (#092654) made this work possible. PA is supported by a Post-Doctoral Fellowship awarded by the Consortium for National Health Research (Kenya). The Wellcome Trust and other funders had no role in developing this manuscript or in the decision to submit for publication.
The authors declare that they have no competing interests.
PM performed statistical analysis and drafted the manuscript. PA participated in study design, statistical analysis and drafting of the manuscript. JT was involved in statistical analysis. ME conceived the study, obtained the funding for the project, participated in study design and helped to draft the manuscript. All authors read and approved the final manuscript.