Research article | Open | Open Peer Review
Training health care professionals in root cause analysis: a cross-sectional study of post-training experiences, benefits and attitudes
BMC Health Services Research, volume 13, Article number: 50 (2013)
Root cause analysis (RCA) originated in the manufacturing engineering sector but has been adapted for routine use in healthcare to investigate patient safety incidents and facilitate organizational learning. Despite the limitations of the RCA evidence base, healthcare authorities and decision makers in NHS Scotland, as elsewhere internationally, have invested heavily in developing training programmes to build local capacity and capability, and this is a cornerstone of many organizational policies for investigating safety-critical issues. However, to our knowledge there has been no systematic attempt to follow up and evaluate the post-training experiences of RCA-trained staff in Scotland. Given the significant investment in people, time and funding, we aimed to capture and learn from the reported experiences, benefits and attitudes of RCA-trained staff and the perceived impact on healthcare systems and safety.
We adapted a questionnaire used in a published Australian research study to undertake a cross-sectional online survey of healthcare professionals (e.g. nursing & midwifery, medical doctors and pharmacists) formally trained in RCA by a single territorial health board region in NHS Scotland.
A total of 228 of 469 invited staff completed the survey (48%). A majority of respondents had yet to participate in a post-training RCA investigation (n=127, 55.7%). Of RCA-experienced staff, 71 had assumed a lead investigator role (70.3%) on one or more occasions. A clear majority indicated that their improvement recommendations were generally or partly implemented (82%). The top three barriers to RCA success were cited as: lack of time (54.6%), unwilling colleagues (34%) and inter-professional differences (31%). Differences in agreement levels between RCA-experienced and inexperienced respondents were noted on whether a follow-up session would be beneficial after conducting RCA (65.3% v 39.4%) and if peer feedback on RCA reports would be of educational value (83.2% v 37.0%). Comparisons with the previous research highlighted significant differences, such as fewer reported difficulties within RCA teams (P<0.001) and a greater proportion of respondents taking on RCA leadership roles in this study (P<0.001).
This study adds to our knowledge and understanding of the need to improve the effectiveness of RCA training and frontline practices in healthcare settings. The overall evidence points to a potential organisational learning need to provide RCA-trained staff with continuous development opportunities and performance feedback. Healthcare authorities may wish to look more critically at whom they train in RCA, and how this is delivered and supported educationally to maximize cost-benefits, organizational learning and safer patient care.
Background
Root cause analysis (RCA) is a structured approach to the investigation of patient safety incidents that is commonly applied in many modern health systems worldwide, particularly in acute hospital settings. The RCA technique originated in the engineering industry as a method of identifying latent systems-based issues that contributed to underperformance, variations or design failures in mechanical production processes. Its inherent principles have been adapted in many high reliability organisations – such as the petro-chemical, nuclear power, aerospace and aviation industries – to systematically uncover and improve underlying systems problems, and ergonomic and cultural issues identified as contributory factors in work-related accidents and incidents [3, 4].
In healthcare, safety-based RCA investigations (or variants of this approach) were first introduced in the 1990s to facilitate organisational learning. There is general consensus that RCA utilises a ‘toolbox rather than a single method’, with team-led investigations typically attempting to ascertain the ‘what, how and why’ of identified patient safety incidents. To achieve this, a small multi-disciplinary team of appointed investigators often draws on a range of analytical and problem-solving techniques [4–6] – such as brainstorming, Pareto analysis, the five-whys technique and fault tree analysis – following recommended step-wise processes (Table 1). Once ‘root causes’ are established by investigators, different levels of local team-based and wider organizational learning needs are determined and a series of improvement recommendations formulated which, if implemented, should minimize the risk of incident recurrence [2–5]. It should be noted that, broadly speaking, in general medical practice settings, particularly in the United Kingdom (UK), significant event analysis (a less rigorous investigative method based on reflective learning theory) rather than RCA is the routinely applied technique of choice for historical and feasibility reasons.
The evidence base underpinning the effectiveness of RCA in healthcare as a method to gain an in-depth understanding of safety issues and facilitate improvements to prevent future incidents is equivocal [9, 10]. Controlled trials to test the efficacy of the RCA framework are lacking. Some individual studies of single incident investigations report positive evidence, such as the implementation of ‘strong’ corrective actions to prevent recurrence of events, as judged by the authors of a recent RCA review in medicine.
However, in an evaluation of 445 RCAs undertaken in New South Wales to identify and theme learning needs related to patient, human (staff) and systems factors, the authors concluded that the effectiveness of RCA as a means by which staff can achieve the recommended improvements in patient care was limited. A recent literature review of RCA effectiveness by Percarpio et al. (2008) identified a small number of formal published studies of relevance. They highlighted ‘numerous theoretical problems with the analytical framework’ and called for more research ‘at the system level and cost-benefits analysis…to determine the effectiveness of RCA’. Additionally, Vincent (2004) describes the RCA methodological approach as ‘misleading’ and suggests that “incident analysis, properly understood, is not a retrospective search for root causes” but should be framed in terms of the incident acting as a ‘window’ on the ‘gaps and inadequacies’ of the healthcare system.
Despite this, it is evident that healthcare authorities and decision-makers have high expectations for the transferability of RCA as an improvement tool, particularly since it seems to be successfully established in non-healthcare industries for investigating safety-critical issues, albeit in arguably more linear and less complex institutional settings [12–15]. Moreover, a range of external healthcare bodies with regulatory, accreditation or quality management responsibilities expect care provider organisations to have transparent incident reporting and investigation mechanisms in place. In the past decade the National Patient Safety Agency in England and Wales has strongly promoted the adoption of RCA and developed an in-depth training programme and online educational resources to build capacity in this area and support healthcare organisations and staff in explicit efforts to make patient care safer [16, 17].
Similarly, in the National Health Service in Scotland (NHSiS) many territorial health authorities have invested heavily in providing internal or external training in RCA methods to a range of staff groups, and this is a cornerstone of their organizational policies for investigating patient safety incidents. However, to our knowledge there has been no systematic attempt to follow up and evaluate the post-training experiences, benefits and attitudes of NHSiS staff who have received formal instruction in RCA methods and then put this knowledge into practice in the workplace when required to by their employing organisation. Given this significant investment in people, time and funding, it is important to capture and learn from the reported outcomes of this type of educational intervention, and its subsequent impact on the healthcare system and on improving patient safety.
In this study we aimed to:
Measure the extent to which respondents had subsequently participated in formal RCA investigations and also assumed a lead investigator role.
Determine the extent to which improvement recommendations arising from RCA investigations were actually implemented.
Ascertain if respondents encountered a range of barriers to RCA practice previously cited in the literature.
Establish participants’ perceptions of the adequacy of RCA training provided and their attitudes to the value of the tool as an improvement method in the healthcare workplace.
Compare relevant aspects of our findings with those reported in a previous Australian study as one way of gauging progress with RCA training and impact in a different setting and at a different point in time.
Methods
Design, participants and setting
A cross-sectional design was used, utilizing an online questionnaire survey of healthcare professionals (e.g. medical and nursing & midwifery) based in a single (anonymised) territorial health board region in NHSiS who had attended RCA training, either internally run or externally sanctioned by the clinical risk department, in the preceding 36-month period. We left a 2-month gap to allow newly trained staff the opportunity to experience involvement in RCA investigations.
We adapted a questionnaire used in the aforementioned Australian study by Braithwaite et al. (2006) and piloted this with six colleagues who had previously attended RCA training. Minor alterations to questionnaire wording and style were then made to suit local circumstances. For example, we altered the rating scale for two items – ‘RCAs should be conducted by colleagues with a clinical background and not by staff outwith your department’ and ‘patients and relatives should be part of the RCA team’ – to free-text responses to provide opportunities for more detailed answers from respondents. We added a statement, ‘when you were involved in an RCA(s), to what extent did you encounter interference from internal/external sources’, to reflect a recent research finding. Unlike the Australian survey, we included those healthcare professionals who were trained in RCA but did not subsequently participate in or lead an incident investigation, and compared their responses with those who had.
We identified staff names and email addresses from the organizational database of all those who had attended RCA training in the chosen study period. During May and June 2011, we emailed a cover note explaining the study purpose and the online link to the web-based survey tool (QuestBack) to all participants. Non-respondents were followed up on three occasions via automatically generated email reminders.
Data were collected from respondents on: participation rates and leadership roles in RCA investigations; the extent to which RCA improvement recommendations were implemented; encountered barriers to conducting RCA; and attitudes to the adequacy of RCA training and the analytical process. A range of Likert-type scales was used to assess attitudinal strength. Free-text responses were thematically analysed by the authors independently, with consensus reached over any discrepancies.
The data were coded in Microsoft Excel and exported to SPSS version 17.0. Characteristics of respondents, including gender, professional group, healthcare sector, job experience and RCA training details, were summarized using simple descriptive statistics. We divided respondents into two groups: Group One consisted of those who had led or participated in one or more RCA investigations since training; Group Two consisted of those who had done neither. We compared group responses to attitudinal statements using chi-square analyses to determine statistical differences, and Fisher’s Exact Test where necessary. Levene’s test was used to confirm the assumption of equal variance between groups. Differences in perceptions and characteristics between groups were considered statistically significant if p<0.05. We also calculated differences in proportions of responses to selected questionnaire items, along with 95% confidence intervals, and made direct comparisons with the findings of Braithwaite et al. (2006).
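The group comparisons described here rest on standard contingency-table tests. As a minimal illustrative sketch (not the authors' actual SPSS analysis, and using made-up counts), the chi-square statistic for a 2x2 table with Yates' continuity correction can be computed directly:

```python
# Hypothetical 2x2 table: rows = Group One / Group Two,
# columns = agree / do-not-agree with an attitudinal statement.
a, b = 70, 31   # Group One: agree, do not agree (made-up counts)
c, d = 50, 77   # Group Two: agree, do not agree (made-up counts)

n = a + b + c + d
# Chi-square for a 2x2 table with Yates' continuity correction
chi2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (
    (a + b) * (c + d) * (a + c) * (b + d)
)
# A 2x2 table has 1 degree of freedom; the 5% critical value is 3.84
significant = chi2 > 3.84
print(f"chi-square = {chi2:.2f}, significant at p<0.05: {significant}")
```

When any expected cell count is small (conventionally below 5), Fisher's exact test is the usual fallback, as the paper notes.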
The study was pre-screened by the West of Scotland Research Ethics Committee but did not require formal ethical approval.
Results
Response rate, respondent characteristics and demographics
A total of 228 of the 469 invited healthcare professionals completed the survey (48%). Table 2 outlines details of respondent characteristics and demographics together with background information on when they attended RCA training, the type of training received and how long the training lasted. The great majority of participants were female (176, 77.2%), with the largest group of respondents coming from the nursing & midwifery professions (99, 43.4%), and those based in the acute sector (91, 39.9%). Most participants attended a one-day in-house RCA training event (181, 81.1%).
We compared the professional characteristics and demographic details of Group One and Group Two respondents (Table 2). Three factors were statistically associated with a greater likelihood of involvement in or leading an RCA: being based in the acute sector or NHS Board headquarters (P<0.001); increasing duration of time since training (P<0.05); and belonging to the management or nursing & midwifery professional groups (P<0.05). Other factors, such as the type and duration of training and respondents’ gender and job experience, were not statistically associated with RCA involvement or leadership.
Post-training involvement in RCA investigations
The majority of respondents reported no post-training involvement in any RCA investigation since being trained (127, 55.7%) and cited the following reasons: ‘no opportunity’ to do so (101, 86.6%); ‘lack of support’ (5, 3.9%); and ‘inadequate training’ (1, 0.8%).
Table 3 displays a numerical breakdown of respondents’ reported levels of RCA involvement. A total of 101 respondents (44.3%) indicated that they had participated in an RCA investigation, with 71 assuming a lead investigator role (70.3%) on one or more occasions. Of this group, around 41% had led one RCA investigation, with almost 20% reporting a leadership role in five or more investigations since undergoing training.
RCA recommendations: implementation, benefits and barriers
A clear majority of respondents (83, 82%) indicated that the improvement recommendations made as part of their RCA investigations were generally implemented, or partly implemented. Table 4 outlines selected examples of the perceived benefits of participating in the RCA process and other comments about RCA practices that were reported by study participants.
Table 5 outlines a range of commonly known barriers to conducting RCA investigations. In descending order, the top three barriers cited as ‘always’ or ‘sometimes’ encountered were lack of time (54, 54.6%), unwilling colleagues (33, 34%) and inter-professional differences (30, 31%). Just under 40% of respondents indicated that outside interference from internal or external sources was also a barrier to some degree during their RCA investigations.
Attitudes to RCA training and the RCA process: group comparisons
A clear majority of all respondents (178, 76.8%) indicated ‘yes’ or ‘partly’ to the question on whether they had sufficient understanding/confidence by the end of the training to conduct an RCA (Table 6). A smaller majority ticked ‘yes’ or ‘partly’ (119, 51.3%) when reporting whether their work practices regarding safety and reporting errors had changed since attending the RCA training course. Overall, respondents agreed that they were now: better trained in methods of dealing with incidents (76.5%); more able to improve work processes for the provision of safer clinical care; and of the belief that RCA training can contribute to the advancement of safety in healthcare. There were no clear statistical differences in responses to statements between the two groups, apart from the (unsurprising) statement on whether the training provided respondents with the skills to be involved in or lead RCA in the workplace, with a greater proportion of those who had actually participated in RCA in agreement (P=0.008).
Respondents with post-training experience of RCA investigations were more likely than those without to agree that the benefits associated with training are worth the investment (73.2% v 65.4%) and that conducting RCA was a good use of staff time (86.2% v 77.6%), although overall agreement with the statements was high (Table 7). Similar differences in levels of agreement between the two groups were noted when asked if a follow-up session after conducting RCA would be beneficial (65.3% v 39.4%) and if receiving confidential peer feedback on the RCA report would benefit their learning (83.2% v 37.0%), although significant minorities were unsure in both instances. In terms of who should be involved in RCA investigations, respondents had divided views on whether these should be conducted by clinical staff only, with well over half disagreeing or indicating that they were unsure about this (116/179, 64.8%). A small majority of respondents (101/178, 56.7%) answered ‘agree’ or ‘maybe’ to the statements on whether patients or relatives should be part of the RCA team (see Table 4 for related comments).
Comparison with selected study findings of Braithwaite et al. (2006)
Similar to the previous Australian study, we found no differences in the professional characteristics of those who had conducted a post-training RCA and those who had not, while those staff in both studies with RCA experience had undertaken multiple investigations. However, we found that a greater proportion of respondents in our study had led a post-training RCA investigation compared with the earlier Australian study (76/101, 75.2% v 133/252, 52.8%, diff=22.4%, 95% CI 11.4 to 32.1%, P<0.001). We also noted that our respondents cited (‘always’ and ‘sometimes’) the following barriers less frequently than their Australian counterparts: ‘lack of time’ (54/99, 54.6% v 189/252, 75.0%, diff=20.5%, 95% CI 9.4 to 31.4%, P<0.001); ‘difficulty within RCA teams’ (4/93, 4.3% v 86/252, 34.2%, diff=29.8%, 95% CI 21.5 to 36.4%, P<0.001); ‘unwilling colleagues’ (33/97, 34.0% v 112/252, 44.5%, diff=10.4%, 95% CI −1.1 to 21.1%, P=0.077); and ‘lack of feedback and data’ (19/94, 20.2% v 96/251, 38.3%, diff=18.1%, 95% CI 7.2 to 27.3%, P=0.002). A further significant difference was highlighted in terms of RCA improvement recommendations, with a greater proportion of Scottish respondents reporting that these were implemented or partly implemented (83/98, 84.5% v 175/252, 69.4%, diff=15.1%, 95% CI 5.3 to 23.6%, P=0.004).
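The between-study comparisons above report differences in proportions with 95% confidence intervals. A minimal sketch of this calculation using the simple Wald approximation is shown below, applied to the lead-investigator comparison; the paper's slightly different intervals suggest another method (e.g. Newcombe's) may have been used, so this is illustrative rather than a reproduction of the authors' analysis.

```python
import math

def diff_in_proportions(x1, n1, x2, n2, z=1.96):
    """Wald difference between two independent proportions with ~95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Lead-investigator comparison from the text:
# 76/101 (this study) v 133/252 (Braithwaite et al.)
diff, lo, hi = diff_in_proportions(76, 101, 133, 252)
print(f"diff = {diff:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

This yields a difference of about 22.5% with a Wald interval of roughly 12.0% to 32.9%, in the same ballpark as the reported 22.4% (11.4 to 32.1%).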
Discussion
The main findings provide a measure of the extent to which healthcare staff trained in RCA actually participated in subsequent investigations in the workplace. It is clear in this study that a majority have yet to do so, with the greatest proportion of trained but inexperienced staff being based outwith the acute care sector. This is a concern given the resources committed to RCA training and the implications in terms of lost opportunities to improve patient safety and organizational learning, as well as the potentially discouraging impact on staff morale and attitudes. Previous research has reported concerns from RCA-trained staff that they may lack ‘personal control’ over participation in subsequent incident investigations, which is likely, for many, to be a decision for local managers in their organisations and heavily dependent upon workload priorities [16–18, 20].
The ‘failure of work schedules’ to provide staff with protected time for RCA investigations was cited as the major difficulty by Braithwaite et al. Edmondson (2004) suggests that leadership has a critical role in this regard in developing an environment of ‘psychological safety’ to encourage a greater willingness for transparency, questioning and sharing of concerns, and also in ‘supporting and empowering’ team learning across the organisation. Nicolini et al. (2011) also suggest that leaders need to openly endorse RCA (or a variant) as an improvement method and the staff who have been trained to implement it. Without this type of approach, it is possible that local leaders will continue to forfeit opportunities to learn from RCA by occasionally or even frequently assigning a low priority to investigations or blocking participation in the process by trained staff.
Most respondents with RCA experience report multiple exposures to incident investigations. Similar to previous research [9–11, 16–19, 21, 22], a clear majority also report that their recommendations to make care safer are at least partly or fully implemented in their organisations. Determining how effective these potential improvements actually were was not a study aim, but it should nonetheless be a subject of further research given the limited evidence.
It is possible, perhaps even likely, that RCA investigators will be subject to some form of ‘criticism’ or ‘conflict’ from ‘powerful’ individuals or groups, or those with vested interests, within a healthcare organisation. It should also be remembered that, in essence, RCA involves healthcare professionals ‘not just scrutinizing each other but scrutinizing each other’s errors’. Many respondents encountered a range of organizational barriers to conducting RCA investigations similar to those reported previously. A few significant differences were apparent, however, with respondents in our study more likely to adopt a post-training RCA leadership role and also report less difficulty with some of the organizational barriers outlined. One interpretation is that these differences indirectly hint at a slightly more positive organizational safety culture being reported in our study, which is possible given the large-scale national initiatives to improve patient safety in the Scottish health service over recent years. However, a more likely explanation is that the Australian study is more than five years older, and associated cultural factors such as the prevailing attitudes and behaviours towards RCA investigations and patient safety in general may simply have improved over time to match those in our study findings – not that the results reported here offer some type of benchmark.
Similar to other studies, our RCA-experienced respondents were generally positive about the training they received and the cost-benefits of this investigation technique in terms of the ‘advancement of safety in healthcare’ and being a ‘good use of staff time and resources’ [16, 18, 20]. Respondents displayed mixed views - supported by seemingly logical arguments on both sides - on whether non-clinical colleagues and patients should have a role in incident investigations. However, many risk managers are non-clinical and this does not seem to have been a barrier to them leading on RCA training or advising on, or participating in, related investigations. The involvement of patients and relatives in these investigations is strongly encouraged in national policy [12–15], but the reality is that this appears to be a rarity perhaps because of the many sensitivities and difficulties outlined by our respondents, even if many were positive about the prospect.
Most respondents had a sufficient ‘understanding/confidence’ in conducting RCA and indicated that their work practices and reporting of errors had changed since being trained. A follow-up training session and the use of confidential peer feedback on RCA reports were viewed as potentially beneficial educational interventions, suggesting that many respondents may have a level of insight into the need for further learning around the often complex and problematic issues involved in applying the technique. The inconsistent quality of RCA attempts and the need for additional post-training support have been noted previously [17, 19]. The study by Wallace et al. (2009) reports high levels of satisfaction with RCA training, but low levels of correct responses when study participants were subsequently tested on their acquired knowledge of RCA using pre-designed vignettes.
Overall the evidence demonstrates that the standard of incident analysis and report writing is frequently variable [7, 16, 17, 19]. Given that the written report is a key proxy for the quality of the investigation undertaken, some type of educational feedback intervention is likely to be necessary. Offering developmental support and mentorship (either during training, after training, or both) arguably makes sense in closing this educational gap, particularly in guiding less experienced staff to confront and deal with some of the aforementioned barriers to RCA, and to write comprehensive, unambiguous reports that offer realistic recommendations for improvement. Taken together, the combined evidence from this and other studies cited may point to an organisational learning need for continuous development and feedback for RCA-trained staff, at least in the short term [7, 16, 18, 20].
The study findings are not generalisable beyond the RCA training practices in this single health authority. But given the degree of congruence with Braithwaite et al. and specific findings in previous research [9–11, 16, 17, 19, 21, 22], it is possible that similar issues would be uncovered in other regions and countries with comparable training arrangements, particularly with regard to the significant proportion of trained staff who do not gain any post-training investigation experience. A key consideration, therefore, will be the cost-benefits involved in taking healthcare staff out of frontline clinical duties to provide them with RCA training and then failing to utilize or support them in the post-training phase.
One potential option is to select staff more carefully for more intensive RCA training, while training fewer staff overall, so that organisations develop a strong core group with the requisite experience, expertise and leadership skills, augmented by the provision of continuous developmental support. Potentially this offers a number of advantages over current arrangements. A better trained and dedicated RCA staff group which is afforded greater opportunities to gain experience may retain and strengthen their analytical knowledge and skills and also benefit from shared peer-to-peer learning – leading to more meaningful and effective incident investigations. In developing this ‘expert community of practice’, these individuals (or as a group) may also become better equipped to highlight and challenge existing institutional barriers to engaging in and learning from incident investigation, and start to make progress in developing a more positive safety culture. However, this will require organisations – and, perhaps more specifically, local healthcare leaders – to give greater priority to investigations and provide some element of protected time for these staff to continue to develop related experience and expertise when necessary. In some cases this may require a paradigm shift in local middle management and executive level attitudes and behaviours towards improving patient safety that goes beyond purely rhetorical endorsement of this concept as the single most important healthcare priority. A recent high-profile media exposure of inconsistencies in serious patient safety incident investigation practices across NHSiS may have some impact in this regard.
Strengths and limitations
Our survey generated a moderate response rate, although respondent numbers were sufficiently large for useful statistical inferences and for adding to our knowledge and understanding in this topic area. A number of limitations are associated with this type of descriptive cross-sectional survey. It is likely that a proportion of non-respondents will have changed posts, and therefore email addresses, since RCA training and so could not be tracked using the online survey system. There may have been response bias, as we were unable to compare and explore the characteristics of responders and non-responders, as well as recall bias, given the time lag between training and completing the questionnaire experienced by some. Also, self-report data may not be fully reliable as there is no means of independent verification. Caution should therefore be exercised when extrapolating these findings for more general purposes.
Conclusions
The study quantified some important problems within a single NHS board’s RCA training programme which will be of wider interest in Scotland and internationally. This adds to our knowledge and understanding of the need to improve the effectiveness of related training and frontline practices in healthcare settings. There is an assumption that organizations can train staff in RCA and learn from associated outcomes as if it is a linear, “rational, robust and rigorous process” [8, 22]. However, healthcare authorities may wish to look more critically at the system and cultural complexities which impact RCA investigations; the professional groupings and numbers of staff whom they select to attend training; and how these programmes are delivered and supported educationally in the longer term to maximize cost-benefits, organizational learning and safer patient care. A deeper understanding of the socio-cultural issues at play is also necessary, but this will require a policy commitment to resource more in-depth social research and evaluation, particularly if developing, testing and implementing new training paradigms.
References
Bagian JP, Gosbee J, Lee CZ, Williams I, McKnight SD, Mannos DM: The veterans affairs root cause analysis system in action. Jt Comm J Qual Improv. 2002, 28: 531-545.
Amo M: Root cause analysis: a tool for understanding why accidents occur. Balance. 1998, 2: 12.
Walshe K, Boaden R, Rogers S, Taylor-Adams S, Woloshynowych M: Techniques used in the investigation analysis of critical incidents in healthcare. Patient safety: research into practice. Edited by: Walshe K, Boaden R. 2005, Maidenhead: Open University Press, 130-43.
Woloshynowych M, Rogers S, Taylor-Adams S, Vincent C: The investigation and analysis of critical incidents and adverse events in healthcare. Health Technol Assess. 2005, 9: 1-158.
Wald H, Shojani KG: Root cause analysis. Making health care safer: a critical analysis of patient safety practices. Edited by: Shojani KG, Duncan BW, McDonald KM, Wachter RW. 2001, Rockville, MD: Agency for Healthcare Research & Quality
Vincent CA: Analysis of clinical incidents: a window on the system not a search for root causes. Qual Saf Health Care. 2004, 13: 242-243. 10.1136/qshc.2004.010454.
Bowie P, Pope L, Lough M: A review of the current evidence base for significant event analysis. J Eval Clin Pract. 2008, 14 (4): 520-536. 10.1111/j.1365-2753.2007.00908.x.
Nicolini D, Waring J, Mengis J: The challenges of undertaking root cause analysis in health care: a qualitative study. J Health Serv Res Policy. 2011, 16: 34-41. 10.1258/jhsrp.2010.010092.
Wu A, Lipshutz A, Pronovost P: Effectiveness and efficiency of root cause analysis in medicine. JAMA. 2008, 299: 685-10.1001/jama.299.6.685.
Taitz J, Genn K, Brooks V, et al: System-wide learning from root cause analysis: a report from the New South Wales root cause analysis review committee. Qual Saf Health Care. 2010, 19: e63-10.1136/qshc.2008.032144.
Percarpio KB, Watts V, Weeks WB: The effectiveness of root cause analysis: what does the literature tell us?. Jt Comm J Qual Patient Saf. 2008, 34 (7): 391-398.
House of Commons Committee of Public Accounts: A safer place for patients: learning to improve patient safety. Fifty-first report of session 2005–06. 2006, London: The Stationery Office
Department of Health: An organisation with a memory: report of an expert group on learning from adverse events in the NHS. 2000, London: HMSO
Department of Health: Doing less harm: improving the safety and quality of care through reporting, analysing and learning from adverse incidents involving NHS patients– Key requirements for healthcare providers. 2001, London: HMSO
Scottish Patient Safety Alliance: http://www.patientsafetyalliance.scot.nhs.uk/programme [Accessed 4th September 2012]
Wallace LM, Spurgeon P, Adams S, Earl L, Bayley J: Survey evaluation of the national patient safety agency’s root cause analysis training programme in England and Wales: knowledge, beliefs and reported practices. Qual Saf Health Care. 2009, 18: 288-291. 10.1136/qshc.2008.027896.
Wallace LM: From root causes to safer systems: international comparisons of nationally sponsored healthcare staff training programmes. Qual Saf Health Care. 2006, 15: 388.
Braithwaite J, Westbrook MT, Mallock NA, Travaglia JF, Iedema RA: Experiences of health professionals who conducted root cause analyses after undergoing a safety improvement programme. Qual Saf Health Care. 2006, 15: 393-10.1136/qshc.2005.017525.
Simons L, Lathlean J, Squire C: Shifting the focus: sequential methods of analysis with qualitative data. Qual Health Res. 2008, 18: 120-132. 10.1177/1049732307310264.
Middleton A, Walker C, Chester R: Implementing root cause analysis in an area health service: views of the participants. Aust Health Rev. 2005, 29 (4): 422-428. 10.1071/AH050422.
Edmondson AC: Learning from failure in health care: frequent opportunities, pervasive barriers. Qual Saf Health Care. 2004, 13 (Suppl II): ii3-ii9.
Iedema RA, Jones C, Long D, Braithwaite J, Travaglia J, Westbrook M: Turning the medical gaze in upon itself: root cause analysis and the investigation of clinical error. Soc Sci Med. 2006, 62: 1605-1615. 10.1016/j.socscimed.2005.08.049.
British Broadcasting Corporation (BBC) News Scotland: How safe is your hospital?. http://www.bbc.co.uk/news/uk-scotland-20411901 [Accessed 4th December 2012]
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/13/50/prepub
Acknowledgements
We would like to offer sincere thanks to all healthcare professionals who responded to this survey, NES PSMG for funding this work, and our partner NHS Board for collaborating in this study.
The study was funded by the Patient Safety Multi-Professional Steering Group, NHS Education for Scotland, Scotland, UK.
The authors declare there are no competing financial or non-financial interests.
PB conceived the study idea, acquired funding, co-led the study design, data collection, analysis and interpretation, and drafted the initial manuscript. JS co-led the study design and data collection, and contributed to the content and critical review of the manuscript. CdW led on the statistical analysis and interpretation, and contributed to the content and critical review of the manuscript. All authors read and approved the final manuscript.
Paul Bowie, Joe Skinner and Carl de Wet contributed equally to this work.