
A scoping review of registry-captured indicators for evaluating quality of critical care in ICU

Abstract

Background

Excess morbidity and mortality following critical illness are increasingly attributed to potentially avoidable complications occurring as a result of complex ICU management (Berenholtz et al., J Crit Care 17:1-2, 2002; De Vos et al., J Crit Care 22:267-74, 2007; Zimmerman J Crit Care 1:12-5, 2002). Routine measurement of quality indicators (QIs) through an Electronic Health Record (EHR) or registries is increasingly used to benchmark care and evaluate improvement interventions. However, existing indicators of quality for intensive care are derived almost exclusively from relatively narrow subsets of ICU patients from high-income healthcare systems. The aim of this scoping review is to systematically review the literature on QIs for evaluating critical care, identify the QIs in use, map their definitions and evidence base, and describe variation in their measurement together with the reported advantages of and challenges to implementation.

Method

We searched MEDLINE, EMBASE, CINAHL, and the Cochrane libraries from the earliest available date through to January 2019. To increase the sensitivity of the search, grey literature and reference lists were reviewed. The minimum inclusion criterion was a description of one or more QIs designed to evaluate care for patients in the ICU, captured through a registry platform or an EHR adapted for quality of care surveillance.

Results

The search identified 4780 citations. Review of abstracts led to retrieval of 276 full-text articles, of which 123 articles were accepted. Fifty-one unique QIs in ICU were classified using the three components of health care quality proposed by the High Quality Health Systems (HQSS) framework. Adverse events including hospital-acquired infections (13.7%), hospital processes (54.9%), and outcomes (31.4%) were the most common QIs identified. Patient-reported outcome QIs accounted for less than 6%. Barriers to the implementation of QIs were described in 35.7% of articles and divided into operational barriers (51%) and acceptability barriers (49%).

Conclusions

Despite the complexity and risk associated with ICU care, only a small number of operational indicators are in use. Future selection of QIs would benefit from a stakeholder-driven approach, whereby the values of patients and communities and the priorities for actionable improvement as perceived by healthcare providers are prioritized, with a greater focus on measuring discriminable processes of care.

Background

Critical illness, including (but not limited to) care for the sickest surgical, trauma, and communicable disease patients, causes an enormous health and economic burden globally. Patients with critical illness are at high risk of poor outcomes and often require intensive care unit (ICU) admission [1]. Specialties synonymous with critical care, such as traumatic brain injury, infectious diseases, and perioperative care, have all benefited from high-quality clinical trials informing treatment and outcomes, and have all included critically ill populations [2,3,4,5,6,7]. However, large differences in daily ICU practice and patient outcomes remain, with excess morbidity and mortality following critical illness increasingly attributed to potentially avoidable complications occurring as a result of complex ICU management [8, 9]. As a consequence, research has increasingly focused on strategies to improve the effectiveness of common interventions synonymous with ICU management, in an effort to reduce avoidable harm, reduce mortality, and promote quality of life following recovery [1, 10,11,12].

Routine quality measurement using appropriate indicators can guide care improvement, for example by identifying existing good practice and evaluating strategies aimed at targeting suboptimal care. The potential of quality indicators to improve care has already been demonstrated in other clinical areas, including maternal and child health and the management of sepsis and stroke patients [10, 13,14,15]. In parallel, clinically facing registries for critical care are expanding internationally. Registries are increasingly seen as a tool to enable the evaluation of existing care by systematically capturing quality of care indicators used to benchmark and compare performance [1, 8, 9]. However, existing indicators of quality for intensive care are derived almost exclusively from relatively narrow subsets of the ICU patient population, selected by experts working in high-income healthcare systems. Despite many indicators of quality being developed and advocated as measures of performance, the exact number of these indicators and their level of scientific evidence remain unclear. Furthermore, there is potential for wide variation in the measurement and reporting of indicators of quality internationally. Such heterogeneity of definition and measurement impedes the utility of QIs both for replicable evaluation of performance over time within an institution and for benchmarking of performance between units. Similarly, an absence of literature on the challenges of “real-world” implementation of indicators, perhaps a reflection of the variation in QI definitions in use, further hinders those seeking to evaluate and improve care from identifying meaningful measures of quality and using them to drive practice and policy change [8, 12, 15, 16].

To enable global registry networks, such as CRIT Care Asia [17], which aims to support communities of practice to measure existing critical care performance and achieve actionable improvement, greater understanding is needed of the indicators of quality currently being used internationally, their evidence base, and the barriers to measurement and reporting. This review aimed to scope the literature on indicators currently being used to evaluate quality of care in intensive care units, map the indicators' evidence base, and describe variation in both their definition and measurement. In addition, the reviewers summarized the challenges of implementing the indicators and the reported advantages of their measurement for ICU practice, as described in the literature.

Methods

Search strategy

Relevant articles were identified by searching the following databases: MEDLINE, EMBASE, CINAHL, Cochrane Database of Systematic Reviews, Cochrane Database of Abstracts of Reviews of Effects, and Cochrane Central Register of Controlled Trials, from the earliest available date through to January 31st, 2019. Searches were performed with no restriction on language of publication. Combinations of the following search terms were used: critical care, ICU, intensive care, quality indicator, quality assurance, quality control, benchmarking, performance improvement, quality measure, best practice, audit, registry, electronic database, and surveillance system. The Cochrane Library was searched using the search term critical care. To increase the sensitivity of the search strategy, we also searched the grey literature. This included identifying and searching the websites of relevant critical care societies with associated registry networks (ICNARC [18], SICSAG [19], ANZICS [20], EpiMed [21], ESICM [22], ICS [23], JSICM [24]). Appropriate wildcards were used in all searches to account for plural words and variations in spelling. Additional articles were identified by searching the bibliographies of the articles identified in the searches and by contacting experts in the field of ICU registries.
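As an illustration of how these terms might be combined, the sketch below pairs the setting, quality, and registry concepts using Boolean operators and truncation wildcards. It is an assumed, simplified structure for illustration only, not the exact strategy executed by the review team, and it would need adapting to each database's own syntax.

    ("critical care" OR "intensive care" OR ICU)
    AND ("quality indicator*" OR "quality assurance" OR "quality control" OR benchmark*
         OR "performance improvement" OR "quality measure*" OR "best practice" OR audit*)
    AND (registr* OR "electronic database*" OR "surveillance system*")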

Article selection

We selected all articles that identified or proposed one or more QIs used to evaluate ICU care through registries. For this study, a QI was defined as “a performance measure that compares actual care against an ideal criteria,” in order to help assess quality of care [15]; the minimum inclusion criterion was a description of one or more QIs designed to evaluate clinical performance in intensive care. This included measures associated with admission to the ICU and outcomes following discharge from the ICU within the same hospital admission. Covidence (an online review tool) was used to collate and curate the stages of the literature review [25].

Article review

Eligible articles were identified using a two-phase process based on the published Joanna Briggs Institute scoping review framework [26]. In the first phase, two reviewers (SR and CS) independently reviewed the titles and abstracts of retrieved publications and selected relevant articles for possible inclusion in the review. Disagreements between the two reviewers were discussed, and, if agreement could not be reached, the article was retained for further review by AB [11, 26].

In the second phase, the full texts of the remaining articles were independently reviewed by the same two reviewers using a checklist of eligibility criteria. Disagreements between the two assessors were discussed, and a third author was consulted if agreement could not be reached [26]. Reviewers were not masked to author or journal name [11]. Two reviewers independently reviewed all full-text articles that satisfied the minimum inclusion criteria and extracted data using a standardized format. Extracted information included (i) QI definition, (ii) variables and guidance for measuring the QI, (iii) modality and frequency of measurement, and (iv) level of evidence [10]. The level of evidence underpinning the definition and application of each indicator was graded using a published classification of evidence for practice guidelines [27]. Quality indicators were classified using the three components of health care quality proposed by the High Quality Health Systems (HQSS) framework [13]: foundation (which includes human resources and governance structures), processes (encompassing measures of safety and timeliness alongside patient and user experience), and quality impacts (which extend beyond mortality to quality of recovery, quality of life, and socioeconomic welfare). The indicators were also categorized as pre-, in-, or post-ICU. Reviewers further judged whether QIs were operational (yes vs. no) and appraised the literature for any barriers and enablers described for operationalizing or actioning the QIs. Disagreements in assessment and data extraction were resolved by reviewer consensus, and if agreement could not be reached, the article was independently reviewed by a third reviewer (AB). QIs were summarized as counts and proportions using the packages “highcharter” and “epiDisplay” in R statistical software, version 4.0.2 [28]. The protocol for this study follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, included as an additional file [see Additional File 1].
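For context on the descriptive analysis, the fragment below is a minimal base-R sketch of how unique QIs could be summarized as counts and proportions by HQSS domain. It is illustrative only: the domain counts mirror those reported in the Results (7 foundation, 28 process, 16 quality impact), while the indicator names and phase assignments are placeholders, and the published analysis used the “highcharter” and “epiDisplay” packages rather than this simplified base-R approach.

    # Hypothetical extraction table: one row per unique QI (names and phases are placeholders)
    qis <- data.frame(
      indicator   = paste("QI", 1:51),
      hqss_domain = c(rep("Foundation", 7), rep("Process", 28), rep("Quality impact", 16)),
      phase       = sample(c("Pre-ICU", "In-ICU", "Post-ICU"), 51, replace = TRUE)
    )

    # Counts and percentages by HQSS domain, mirroring the "n (%)" style used in the review
    counts <- table(qis$hqss_domain)
    pct    <- round(100 * prop.table(counts), 1)
    print(counts)
    print(pct)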

Results

The primary search of the literature yielded 4780 citations. Review of abstracts led to the retrieval of 276 full-text articles for assessment, of which 123 articles were retained for review [see Additional file 3]. The most common reason for excluding articles after full-text review was the absence of an operational QI (Fig. 1).

Fig. 1 PRISMA flowchart summarizing study review and inclusion

The majority of articles synthesized were original research (98.3%, 121), and 88.6% (109) of included articles were cohort studies (Table 1).

Table 1 Characteristics of literature included in full review

The majority of the literature was reported from registries evaluating quality of critical care in high-income country healthcare systems, with 27.85% of research originating from Australia, Canada, Northern Europe, and the USA (Fig. 2).

Fig. 2 Geographical origin of literature by country [using the UN Geoscheme classification: http://millenniumindicators.un.org/unsd/methods/m49/m49regin.htm]

QIs, definition, measurement, and evidence base

From the 123 articles retained for full review, 253 indicators were identified, of which 51 were unique. These unique indicators were classified using the prespecified components of the HQSS framework [13] and then by phase of ICU encounter (pre-, in-, and post-ICU) (Table 2).

Table 2 Results table of unique quality indicators

Foundational indicators accounted for 13.7% (7) of the 51 unique QIs identified, process indicators for 54.9% (28), and quality impact indicators for 31.4% (16). Healthcare-associated events (including healthcare-associated infections) accounted for 37.1% (10) of process indicators and were present in 39.8% (49) of articles included in the full-text review. In contrast, patient-reported outcome measures accounted for less than 6% of quality impact indicators and were present in 1.6% (2) of articles reviewed.

The majority of QIs (58%) had a single definition and method of measurement. The mean number of different definitions for a single indicator was 1.61 (SD 0.72 to 2.50). Indicators with the most variation in definition were composite measures of mortality and of adverse events, including hospital-associated adverse events. Definitions varied based on the inclusion or exclusion of laboratory tests, radiological imaging, and constellations of clinical signs and symptoms, and varied depending on the country from which the underpinning evidence originated (Supplementary file 1). Of the 123 articles included in the full review, 96% (118) included a description of how the indicators were measured, and 88% (108) reported guidance on data collection (including frequency) and analysis.

Barriers to implementation of quality indicators

Barriers to implementation were described in 35.7% of articles included in the full-text review. The barriers identified broadly related to two aspects of implementation: “operational” (51%), which included issues with data collection and data quality, and “acceptability and actionability” (49%), which included users’ perceptions of the indicator, its validity, and the subsequent actionability of the information for care improvement (Table 3).

Table 3 Barriers to implementation of quality indicators

Operational

Reliance on manual data entry (as opposed to direct extraction from an electronic source, such as a patient monitor or EHR) and the associated burden of data capture were reported impediments to operationalisation of QIs in the ICU [36, 40, 44]. Absence of data (missingness), resulting in the exclusion of patient encounters from analysis, was also described, inhibiting both implementation and utilisation of QIs. Missingness was described both as a consequence of manual data entry and as a result of unavailability of information at source [35, 39, 42, 52], and several studies consequently had to exclude data from analysis [41, 43, 47, 48]. Challenges with data quality were described at the point of data capture and extraction [29, 30, 32, 35, 38,39,40, 42, 46, 51, 54, 64, 72]. Difficulty in accurately measuring processes of care associated with an indicator (e.g., time of antibiotic administration in the context of recognition of infection and prescription of antibiotics) hindered utilisation of the indicator in care evaluations [33, 46, 53, 57, 58, 66, 67, 70, 71, 73].

Acceptability and actionability

Concerns about the ramifications for individual clinicians and for ICU team performance were a barrier to implementing indicators of quality in the ICU. Indicators relating to adverse events, including hospital-associated (nosocomial) infections, and to measures of performance subject to significant public pressure (such as antimicrobial prescribing) were particularly associated with fear of blame [36]. These concerns impeded healthcare providers’ willingness to engage in the reporting of incidence and the associated quality of care benchmarking within their department, and their willingness to contribute data to interdepartmental and interhospital (regional and national) reporting [35,36,37, 56, 59, 68]. Such concerns were also identified as a contributing factor to missingness of data (described above) and as a driver for revision of definitions [29, 52, 65]. These barriers were most evident in literature originating from North America and China.

Reliability and reproducibility of the data were a further barrier to acceptability (and actionability) of the indicators. Information that was not captured electronically but instead recorded through direct observation by clinicians delivering care was considered at risk of responder bias [56, 60]. These concerns hindered researchers’ and clinicians’ willingness to accept findings and, in turn, impeded subsequent efforts to use the data to drive quality improvement initiatives in clinical settings [36, 74]. Challenges in the interpretability of indicators, which further limited acceptability, were also identified. Indicators relating to ICU resource utilisation (occupancy, turnover, and staff utilisation) were described as “difficult” to interpret given the dynamic nature of ICU service activity and patient acuity [46, 67]. Similarly, indicators pertaining to hospital-associated infections (and other adverse events) underwent frequent revision of their definitions, and therefore of their data capture methods, because of difficulties in interpreting the findings in the context of patient groups and care processes [29, 35, 52, 65]. In turn, these revisions to definitions and datasets further impeded the quality of data collection and the acceptability of the findings to the healthcare team [35, 65].

Impact of indicators on practice in the ICU

Of the literature reviewed, 49.6% of articles reported positive impacts on ICU practice following the reporting of quality indicators in the ICU. Measurement of process and quality impact indicators, such as infection control practice and the incidence of hospital-acquired infections, was cited as the catalyst for quality improvement interventions [30, 31, 35, 40, 61, 65, 82, 87, 99, 119, 134]. Interventions described included the development of guidelines, the establishment of an infection control team, and the use of screening tools for patients at risk of avoidable infections. However, there was limited explanation of how the indicator led to identification of modifiable care processes, or of whether the subsequently reported reduction in the incidence of adverse events resulted in improvement in patient outcomes.

A further positive impact associated with the measurement of indicators was an observed shift in organizational culture toward greater awareness of care quality. The process of establishing routine measurement of indicators of quality in the ICU was perceived to contribute to increased awareness of quality among healthcare team members and greater attention from staff to adherence to existing practice guidelines. This positive impact was most notable when clinical teams used the indicators as part of audit and feedback initiatives, and as a target to drive forward actionable change as part of a wider quality improvement cycle.

Discussion

Given the complexity, intensity, and risks of care in the ICU, this review of published literature revealed a limited number of indicators in operation internationally. Despite critical care being an increasingly central tenet of healthcare service provision internationally, both the origin of the literature describing these indicators and the evidence on which they are founded were concentrated in high-income country health systems, most notably North America, Canada, and China. Unusually among health systems globally, the USA and Canada have well-established EHRs. Such infrastructure is not representative of the majority of health systems, especially resource-constrained settings, where access to EHRs is uncommon. Absence of investment in such infrastructure is well described in low- and middle-income countries (LMICs) both as a barrier to operationalizing quality indicators in ICUs internationally and as a contributor to the lack of published literature evaluating the quality of critical care services [13, 15].

The majority of indicators identified described HQSS measures of care processes and quality impacts [13]. These indicators are composite, meaning that measurement of the indicator requires more than one type of data, and data to be captured at more than one time point [10]. Example composite indicators included device-associated bloodstream infection, antimicrobial resistance, and ventilator-associated pneumonia. Consequently, implementing such indicators can be problematic because of the burden of data capture they necessitate. The density and volume of information needed to accurately determine such indicators (which, in the example of hospital-acquired infections (HAIs), may include multiple timepoints in every 24 h) may be especially troublesome in resource-constrained settings, where data capture is likely to be by hand, drawing information from multiple sources. The wide variation and iteration in both definition and measurement of indicators described in the literature perhaps highlights how stakeholders have attempted to overcome the challenges of data capture and interpretation. Iterations to definitions include attempts to reduce the complexity of the measure, reduce the burden of data capture, or remove dependency on diagnostic markers in order to simplify the definitions used to determine incidence [29].
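As a worked illustration of the data burden such composite indicators carry, the short sketch below computes a device-associated infection rate in the conventional “events per 1000 device-days” form. The counts are hypothetical, and a denominator of this kind already presumes that device exposure has been recorded for every patient on every ICU day.

    # Hypothetical monthly surveillance counts (illustrative values only)
    clabsi_events     <- 4      # central line-associated bloodstream infections identified
    central_line_days <- 1350   # sum over patients of days with a central line in situ

    # Conventional expression of the indicator: infections per 1000 device-days
    clabsi_rate <- clabsi_events / central_line_days * 1000
    round(clabsi_rate, 2)       # approximately 2.96 per 1000 central-line days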

Whilst commonly used to benchmark care, such composite measures are widely accepted to be difficult to interpret, not least because their interpretation is complicated by the complexity of both disease and care processes within the ICU. Moreover, they are subject to underreporting and difficult to risk adjust, further complicating their utility for benchmarking. Tending to focus on the presence of an adverse event or an omission in care processes, they are used to infer that quality of care may be suboptimal. However, in focusing on the negative, these indicators and their measurement provide little insight for teams seeking to inform actionable improvement or reinforce good practice. This reinforces the perception that quality indicators are used as a weapon to criticise care delivery, a perception described as a barrier to operationalising QIs: stakeholders questioned the validity of the indicators and failed to engage in using them to guide practice improvement. Very few of the indicators focus on the presence or inclusion of actions that contribute to positive care outcomes, and for which definition, measurement, and interpretation may arguably be more acceptable to the clinical team. Notable exceptions were indicators focusing on compliance with antimicrobial guidelines and on administration of therapies to prevent adverse events associated with critical illness, including anticoagulation for venous thromboembolism (VT) prevention and gastric ulcer prophylaxis. It was these indicators that were most closely associated with positive change in the ICU; in particular, their use as part of a cycle of audit and feedback to drive improvement was perceived to have a positive impact on care [75]. Increasingly, research to select and develop indicators includes assessment of their ability to lead to actionable improvement, in addition to their prognostic validity and the feasibility and applicability of their capture. The repeated adjustment of the definition or measurement of composite indicators described in the literature may well be a further illustration of healthcare providers’ (and researchers’) growing discomfort with, and distrust of, such indicators being used to measure clinical performance [29, 35, 65].

Despite growing acknowledgment of the need to better understand the organizational, social, and economic impact of, and recovery following, critical illness, very few articles described the operationalisation of QIs reporting the quality of recovery following ICU care or patient-centered measures of experience, suggesting a need for further empirical research in these aspects of care.

Limitations

There are limitations to this review. Despite the search of multiple databases using comprehensive search strategies, with the assistance of experts in critical care registries, it is likely that our search missed broad categories of important QIs, in part because of the heterogeneity of language used to describe the indicators. In addition, it was difficult to extract accurate data from all publications: 2% of articles were unavailable in full text, some did not disclose the materials or methods used, and in 9% of articles the exact mechanism of capture or definition was not described. Finally, despite categorizing articles using predefined data abstraction tools and classification schemes, classification remains subjective. To minimize this, a third reviewer independently reviewed 10% of articles to verify consistency of categorization.

Conclusion

This scoping review has evaluated the growing body of literature on the implementation of QIs in critical care settings and found that, despite the complexity and risk associated with ICU care, only a small number of operational indicators are in use. These mostly focus on processes of care, especially healthcare-associated adverse events. These predominantly composite measures, requiring multiple data points captured at more than one time point, are associated with a high burden of data capture and are difficult to evaluate in the context of heterogeneous patient populations and diverse health systems.

Similarly, the majority of the literature (and of the evidence underpinning the definition of indicators) originates from high-income country health systems and is not necessarily representative of the case mix, care processes, or patient-centered priorities for recovery in low- and middle-income countries. Future selection of QIs would benefit from a stakeholder-driven approach, whereby the values of patients and communities and the priorities for actionable improvement as perceived by healthcare providers are prioritized. Such an approach may go some way to addressing not only the paucity of patient-centered outcome measures in operation but also the barriers of acceptability and actionability identified by this review, and, in doing so, reduce the fear and blame associated with QIs used to benchmark care and drive improvement.

Availability of data and materials

Not applicable.

Abbreviations

CDC-NHSN:

Center for Disease Control - National Healthcare Safety Network

CFU:

Colony-forming unit

CPIS:

Clinical Pulmonary Infection Score

EHR:

Electronic Health Record

HAIs:

Hospital acquired infections

HELICS:

Hospitals in Europe Link for Infection Control through Surveillance

HQSS:

High Quality Health Systems

ICED:

Intensive Care Experience Definition

ICNARC:

Intensive Care National Audit and Research Centre

ICU:

Intensive care unit

KONIS:

Korean Nosocomial Infections Surveillance System

LMICs:

Low- and middle-income countries

NICE:

National Institute for Clinical Excellence

PEEP:

Positive end expiratory pressure

QI:

Quality indicator

QoL:

Quality of life

SMR:

Standardized mortality ratio

VT:

Venous thromboembolism

WHO:

World Health Organisation

References

  1. Berenholtz SM, Dorman T, Ngo K, Pronovost PJ. Qualitative review of intensive care unit quality indicators. J Crit Care. 2002;17(1):1–2.


  2. Dewan Y, Komolafe EO, Mejía-Mantilla JH, Perel P, Roberts I, Shakur H. CRASH-3-tranexamic acid for the treatment of significant traumatic brain injury: study protocol for an international randomized, double-blind, placebo-controlled trial. Trials. 2012;13(1):1–4.


  3. Myles P, Bellomo R, Corcoran T, Forbes A, Wallace S, Peyton P, et al. Restrictive versus liberal fluid therapy in major abdominal surgery (RELIEF): rationale and design for a multicentre randomised trial. BMJ Open. 2017;7(3):e015358.


  4. POISE Study Group. Effects of extended-release metoprolol succinate in patients undergoing non-cardiac surgery (POISE trial): a randomised controlled trial. Lancet. 2008;371(9627):1839–47.


  5. Angus DC, Berry S, Lewis RJ, Al-Beidh F, Arabi Y, van Bentum-Puijk W, et al. The REMAP-CAP (randomized embedded multifactorial adaptive platform for community-acquired pneumonia) study. Rationale and design. Ann Am Thorac Soc. 2020;17(7):879–91.


  6. Angus DC, Derde L, Al-Beidh F, Annane D, Arabi Y, Beane A, et al. Effect of hydrocortisone on mortality and organ support in patients with severe COVID-19: the REMAP-CAP COVID-19 corticosteroid domain randomized clinical trial. Jama. 2020;324(13):1317–29.


  7. Aryal D, Beane A, Dondorp AM, Green C, Haniffa R, Hashmi M, et al. Operationalisation of the randomized embedded multifactorial adaptive platform for COVID-19 trials in a low and lower-middle income critical care learning health system. Wellcome Open Res. 2021;6:14.


  8. De Vos M, Graafmans W, Keesman E, Westert G, van der Voort PH. Quality measurement at intensive care units: which indicators should we use? J Crit Care. 2007;22(4):267–74.


  9. Zimmerman JE. Commentary: quality indicators: the continuing struggle to improve the quality of critical care. J Crit Care. 2002;1(17):12–5.


  10. McGlynn EA. Selecting common measures of quality and system performance. Med Care. 2003;1:I39–47.


  11. Berlin JA, Cirigliano MD. Does blinding of readers affect the results of meta-analyses? Lancet. 1997;350(9072):185–6.


  12. O’Brien K, Wilkins A, Zack E, Solomon P. Scoping the field: identifying key research priorities in HIV and rehabilitation. AIDS Behav. 2010;14(2):448–58.


  13. Kruk ME, Gage AD, Arsenault C, Jordan K, Leslie HH, Roder-DeWan S, et al. High-quality health systems in the sustainable development goals era: time for a revolution. Lancet Glob Health. 2018;6(11):e1196–252.


  14. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44(3):166–206.


  15. Agency for Healthcare Research and Quality. AHRQ Quality Indicators. http://www.qualityindicators.ahrq.gov/. Accessed 6 Jan 2019.

  16. European Society of Intensive Care Medicine. Guidelines and Consensus statements. https://www.esicm.org/resources/guidelines-consensus-statements/. Accessed 17 Feb 2021.

  17. Beane A, Dondorp AM, Taqi A, Ahsan AS, Vijayaraghavan BK, Permpikul C, et al. Establishing a critical care network in Asia to improve care for critically ill patients in low-and middle-income countries. Crit Care. 2020;24(1).

  18. Intensive Care National Audit & Research Centre. Focus on Quality. https://www.icnarc.org/. Accessed 17 Feb 2021.

  19. The Scottish Intensive Care Society Audit Group. Quality Indicators. https://www.sicsag.scot.nhs.uk/quality/indicators.html. Accessed 17 Feb 2021.

  20. Australian and New Zealand Intensive Care Society. Critical Care Resources (CCR) Registry. https://www.anzics.com.au/critical-care-resources-ccr-registry/. Accessed 17 Feb 2021.

  21. Epimed Solutions. About Epimed. https://www.epimedsolutions.com/en/team/. Accessed 17 Feb 2021.

  22. European Society of Intensive Care Medicine. Research. https://www.esicm.org/research/. Accessed 17 Feb 2021.

  23. Intensive Care Society. Intensive Care Society Frameworks. https://ics.ac.uk/ICS/Education/ICS/events-and-seminars.aspx?hkey=2b34aa75-0ff9-475f-b2ea-96b1ab957b51. Accessed 17 Feb 2021.

  24. The Japanese Society of Intensive Care Medicine. JIPAD. https://www.jsicm.org/en/jipad.html. Accessed 17 Feb 2021.

  25. Covidence systematic review software. Veritas Health Innovation, Melbourne, Australia. www.covidence.org.

  26. Moulton E, Wilson R, Silva AR, Kircher C, Petry S, Goldie C, et al. Measures of movement and mobility used in clinical practice and research: a scoping review. JBI Evid Synth. 2021;19(2):341–403.


  27. Burns PB, Rohrich RJ, Chung KC. The levels of evidence and their role in evidence-based medicine. Plast Reconstr Surg. 2011;128(1):305.


  28. R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2013. URL http://www.R-project.org/


  29. Christiansen CF, Møller MH, Nielsen H, Christensen S. The Danish intensive care database. Clin Epidemiol. 2016;8:525.


  30. Kahn JM, Brake H, Steinberg KP. Intensivist physician staffing and the process of care in academic medical centres. BMJ Qual Saf. 2007;16(5):329–33.


  31. Zampieri FG, Soares M, Borges LP, Salluh JI, Ranzani OT. The Epimed Monitor ICU Database®: a cloud-based national registry for adult intensive care unit patients in Brazil. Rev Bras Ter Intensiva. 2017;29(4):418–26.


  32. Claridge JA, Pang P, Leukhardt WH, Golob JF, Carter JW, Fadlalla AM. Critical analysis of empiric antibiotic utilization: establishing benchmarks. Surg Infect. 2010;11(2):125–31.


  33. Kwak YG, Lee SO, Kim HY, Kim YK, Park ES, Jin HY, et al. Risk factors for device-associated infection related to organisational characteristics of intensive care units: findings from the Korean Nosocomial Infections Surveillance System. J Hosp Infect. 2010;75(3):195–9.


  34. Vanhems P, Lepape A, Savey A, Jambou P, Fabry J. Nosocomial pulmonary infection by antimicrobial-resistant bacteria of patients hospitalized in intensive care units: risk factors and survival. J Hosp Infect. 2000;45(2):98–106.


  35. Li Y, Cao X, Ge H, Jiang Y, Zhou H, Zheng W. Targeted surveillance of nosocomial infection in intensive care units of 176 hospitals in Jiangsu province, China. J Hosp Infect. 2018;99(1):36–41.


  36. El-Kholy A, Saied T, Gaber M, Younan MA, Haleim MM, El-Sayed H, et al. Device-associated nosocomial infection rates in intensive care units at Cairo University hospitals: first step toward initiating surveillance programs in a resource-limited country. Am J Infect Control. 2012;40(6):e216–20.


  37. Lilly CM, Landry KE, Sood RN, Dunnington CH, Ellison RT III, Bagley PH, et al. Prevalence and test characteristics of national health safety network ventilator-associated events. Crit Care Med. 2014;42(9):2019–28.


  38. L'heriteau F, Olivier M, Maugat S, Joly C, Merrer J, Thaler F, et al. REACAT catheter study group. Impact of a five-year surveillance of central venous catheter infections in the REACAT intensive care unit network in France. J Hosp Infect. 2007;66(2):123–9.


  39. Januel JM, Harbarth S, Allard R, Voirin N, Lepape A, Allaouchiche B, et al. Estimating attributable mortality due to nosocomial infections acquired in intensive care units. Infect Control Hosp Epidemiol. 2010;31(4):388–94.


  40. Anesi GL, Gabler NB, Allorto NL, Cairns C, Weissman GE, Kohn R, et al. Intensive care unit capacity strain and outcomes of critical illness in a resource-limited setting: a 2-hospital study in South Africa. J Intensive Care Med. 2020;35(10):1104–11.


  41. Marik PE, Doyle H, Varon J. Is obesity protective during critical illness? An analysis of a national ICU database. Crit Care Shock. 2003;6(3):156–62.


  42. Beale R, Reinhart K, Brunkhorst FM, Dobb G, Levy M, Martin G, et al. Promoting Global Research Excellence in Severe Sepsis (PROGRESS): lessons from an international sepsis registry. Infection. 2009;37(3):222–32.


  43. Llompart-Pou JA, Chico-Fernández M, Sánchez-Casado M, Salaberria-Udabe R, Carbayo-Górriz C, Guerrero-López F, et al. Scoring severity in trauma: comparison of prehospital scoring systems in trauma ICU patients. Eur J Trauma Emerg Surg. 2017;43(3):351–7.


  44. Zajic P, Bauer P, Rhodes A, Moreno R, Fellinger T, Metnitz B, et al. Weekends affect mortality risk and chance of discharge in critically ill patients: a retrospective study in the Austrian registry for intensive care. Crit Care. 2017;21(1):1–8.


  45. Koetsier A, Peek N, de Jonge E, Dongelmans D, van Berkel G, de Keizer N. Reliability of in-hospital mortality as a quality indicator in clinical quality registries. A case study in an intensive care quality register. Methods Inf Med. 2013;52(5):432–40.


  46. Durairaj L, Torner JC, Chrischilles EA, Sarrazin MS, Yankey J, Rosenthal GE. Hospital volume-outcome relationships among medical admissions to ICUs. Chest. 2005;128(3):1682–9.


  47. Paul E, Bailey M, Van Lint A, Pilcher DV. Performance of APACHE III over time in Australia and New Zealand: a retrospective cohort study. Anaesth Intensive Care. 2012;40(6):980–94.


  48. Paul E, Bailey M, Kasza J, Pilcher D. The ANZROD model: better benchmarking of ICU outcomes and detection of outliers. Crit Care Resusc. 2016;18(1):25.


  49. Poole D, Rossi C, Anghileri A, Giardino M, Latronico N, Radrizzani D, et al. External validation of the Simplified Acute Physiology Score (SAPS) 3 in a cohort of 28,357 patients from 147 Italian intensive care units. Intensive Care Med. 2009;35(11):1916–24.


  50. Liu V, Turk BJ, Ragins AI, Kipnis P, Escobar GJ. An electronic Simplified Acute Physiology Score-based risk adjustment score for critical illness in an integrated healthcare system. Crit Care Med. 2013;41(1):41–8.


  51. Hung SC, Kung CT, Hung CW, Liu BM, Liu JW, Chew G, et al. Determining delayed admission to the intensive care unit for mechanically ventilated patients in the emergency department. Crit Care. 2014;18(4):1–9.


  52. Chrusch CA, Martin CM. Quality improvement in critical care: selection and development of quality indicators. Can Respir J. 2016;2016.

  53. Oliveira AC, Garcia PC, Nogueira LD. Nursing workload and occurrence of adverse events in intensive care: a systematic review. Rev Esc Enferm USP. 2016;50(4):683–94.


  54. Levy MM, Rapoport J, Lemeshow S, Chalfin DB, Phillips G, Danis M. Association between critical care physician management and patient mortality in the intensive care unit. Ann Intern Med. 2008;148(11):801–9.


  55. Attridge RT, Frei CR, Pugh MJ, Lawson KA, Ryan L, Anzueto A, et al. Health care–associated pneumonia in the intensive care unit: guideline-concordant antibiotics and outcomes. J Crit Care. 2016;36:265–71.


  56. Trejnowska E, Deptuła A, Tarczyńska-Słomian M, Knapik P, Jankowski M, Misiewska-Kaczur A, et al. Surveillance of antibiotic prescribing in intensive care units in Poland. Can J Infect Dis Med Microbiol. 2018;2018.

  57. Hill AD, Fowler RA, Burns KE, Rose L, Pinto RL, Scales DC. Long-term outcomes and health care utilization after prolonged mechanical ventilation. Ann Am Thorac Soc. 2017;14(3):355–62.


  58. Schwab F, Geffers C, Behnke M, Gastmeier P. ICU mortality following ICU-acquired primary bloodstream infections according to the type of pathogen: a prospective cohort study in 937 Germany ICUs (2006-2015). PLoS One. 2018;13(3):e0194210.


  59. Ding X, Zhang Y, Jiang Y, Yuan Y, Xu B. Targeted surveillance to hospital infection in intensive care unit. Chin J Nosocomiol. 2010;20(21):3295–7.


  60. Ding XP, Gu P, Yuan YM, Zhang YP, Xu BY, Jiang YN, et al. Prospective surveillance of pathogens causing nosocomial infection during 2004-2010. Chin J Nosocomiol. 2011;20.

  61. El-Saed A, Al-Jardani A, Althaqafi A, Alansari H, Alsalman J, Al Maskari Z, et al. Ventilator-associated pneumonia rates in critical care units in 3 Arabian gulf countries: a 6-year surveillance study. Am J Infect Control. 2016;44(7):794–8.


  62. Bird D, Zambuto A, O'Donnell C, Silva J, Korn C, Burke R, et al. Adherence to VAP bundle decreases incidence of ventilator-associated pneumonia in the intensive care unit. Crit Care Med. 2009;37(12):A142.


  63. Fontela PS, Platt RW, Rocher I, Frenette C, Moore D, Fortin É, et al. Surveillance Provinciale des infections Nosocomiales (SPIN) program: implementation of a mandatory surveillance program for central line-associated bloodstream infections. Am J Infect Control. 2011;39(4):329–35.


  64. Li L, Fortin E, Tremblay C, Ngenda-Muadi M, Quach C. Central-line–associated bloodstream infections in Quebec intensive care units: results from the provincial healthcare-associated infections surveillance program (SPIN). Infect Control Hosp Epidemiol. 2016;37(10):1186–94.


  65. Malacarne P, Langer M, Nascimben E, Moro ML, Giudici D, Lampati L, et al. Building a continuous multicenter infection surveillance system in the intensive care unit: findings from the initial data set of 9,493 patients from 71 Italian intensive care units. Crit Care Med. 2008;36(4):1105–13.


  66. Gastmeier P, Sohr D, Geffers C, Behnke M, Daschner F, Rüden H. Mortality risk factors with nosocomial Staphylococcus aureus infections in intensive care units: results from the German Nosocomial Infection Surveillance System (KISS). Infection. 2005;33(2):50–5.


  67. Fernández R. Occupancy of the Departments of Intensive Care Medicine in Catalonia (Spain): a prospective, analytical cohort study. Med Int. 2015;39(9):537–42.


  68. Worth LJ, Spelman T, Bull AL, Brett JA, Richards MJ. Central line-associated bloodstream infections in Australian intensive care units: time-trends in infection rates, etiology, and antimicrobial resistance using a comprehensive Victorian surveillance program, 2009-2013. Am J Infect Control. 2015;43(8):848–52.


  69. Moore L, Stelfox HT, Turgeon AF, Nathens A, Bourgeois G, Lapointe J, et al. Hospital length of stay after admission for traumatic injury in Canada: a multicenter cohort study. Ann Surg. 2014;260(1):179–87.


  70. Barbash IJ, Le TQ, Pike F, Barnato AE, Angus DC, Kahn JM. The effect of intensive care unit admission patterns on mortality-based critical care performance measures. Ann Am Thorac Soc. 2016;13(6):877–86.


  71. Glance LG, Osler TM, Dick A. Rating the quality of intensive care units: is it a function of the intensive care unit scoring system? Crit Care Med. 2002;30(9):1976–82.


  72. Koetsier A, de Keizer NF, de Jonge E, Cook DA, Peek N. Performance of risk-adjusted control charts to monitor in-hospital mortality of intensive care unit patients: a simulation study. Crit Care Med. 2012;40(6):1799–807.


  73. Laupland KB, Lee H, Gregson DB, Manns BJ. Cost of intensive care unit-acquired bloodstream infections. J Hosp Infect. 2006;63(2):124–32.


  74. Metnitz B, Metnitz PG, Bauer P, Valentin A. Patient volume affects outcome in critically ill patients. Wien Klin Wochenschr. 2009;121(1):34–40.


  75. Brown B, Gude WT, Blakeman T, van der Veer SN, Ivers N, Francis JJ, et al. Clinical performance feedback intervention theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci. 2019;14(1):1–25.


  76. Brown SE, Ratcliffe SJ, Halpern SD. Assessing the utility of ICU readmissions as a quality metric: an analysis of changes mediated by residency work-hour reforms. Chest. 2015;147(3):626–36.


  77. Morita K, Matsui H, Yamana H, Fushimi K, Imamura T, Yasunaga H. Association between advanced practice nursing and 30-day mortality in mechanically ventilated critically ill patients: a retrospective cohort study. J Crit Care. 2017;41:209–15.


  78. Ahern JW, Alston WK. Use of longitudinal surveillance data to assess the effectiveness of infection control in critical care. Infect Control Hosp Epidemiol. 2009;30(11):1109–12.


  79. Meyer E, Schwab F, Jonas D, Rueden H, Gastmeier P, Daschner FD. Surveillance of antimicrobial use and antimicrobial resistance in intensive care units (SARI): 1. Antimicrobial use in German intensive care units. Intensive Care Med. 2004;30(6):1089–96.


  80. Brown SE, Ratcliffe SJ, Halpern SD. An empirical comparison of key statistical attributes among potential ICU quality indicators. Crit Care Med. 2014;42(8):1821.


  81. Weissman GE, Gabler NB, Brown SE, Halpern SD. Intensive care unit capacity strain and adherence to prophylaxis guidelines. J Crit Care. 2015;30(6):1303–9.


  82. Miltiades AN, Gershengorn HB, Hua M, Kramer AA, Li G, Wunsch H. Cumulative probability and time to reintubation in United States intensive care units. Crit Care Med. 2017;45(5):835.


  83. Ibrahim AS, Akkari AR, Raza T, Hassan IF, Akbar A, Alatoum I. Epidemiological and clinical profiles of patients with acute respiratory distress syndrome admitted to medical intensive care in Qatar: a retrospective analysis of the data registry for the year 2015. Qatar Med J. 2019;2019(1):3.


  84. Durdu B, Kritsotakis EI, Lee AC, Torun P, Hakyemez IN, Gultepe B, et al. Temporal trends and patterns in antimicrobial-resistant gram-negative bacteria implicated in intensive care unit-acquired infections: a cohort-based surveillance study in Istanbul, Turkey. J Glob Antimicrob Resist. 2018;14:190–6.


  85. Kanamori H, Weber DJ, DiBiase LM, Sickbert-Bennett EE, Brooks R, Teal L, et al. Longitudinal trends in all healthcare-associated infections through comprehensive hospital-wide surveillance and infection control measures over the past 12 years: substantial burden of healthcare-associated infections outside of intensive care units and “other” types of infection. Infect Control Hosp Epidemiol. 2015;36(10):1139–47.


  86. Weber DJ, Sickbert-Bennett EE, Brown V, Rutala WA. Comparison of hospitalwide surveillance and targeted intensive care unit surveillance of healthcare-associated infections. Infect Control Hosp Epidemiol. 2007;28(12):1361–6.


  87. Zuschneid I, Schwab F, Geffers C, Rüden H, Gastmeier P. Reducing central venous catheter–associated primary bloodstream infections in intensive care units is possible: data from the German nosocomial infection surveillance system. Infect Control Hosp Epidemiol. 2003;24(7):501–5.


  88. Agodi A, Auxilia F, Barchitta M, Brusaferro S, D'Alessandro D, Montagna MT, et al. Building a benchmark through active surveillance of intensive care unit-acquired infections: the Italian network SPIN-UTI. J Hosp Infect. 2010;74(3):258–65.


  89. Lambert ML, Silversmit G, Savey A, Palomar M, Hiesmayr M, Agodi A, et al. Preventable proportion of severe infections acquired in intensive care units: case-mix adjusted estimations from patient-based surveillance data. Infect Control Hosp Epidemiol. 2014;35(5):494–501.


  90. Schröder C, Schwab F, Behnke M, Breier AC, Maechler F, Piening B, et al. Epidemiology of healthcare associated infections in Germany: nearly 20 years of surveillance. Int J Med Microbiol. 2015;305(7):799–806.


  91. Wojkowska-Mach J, Godman B, Glassman A, Kurdi A, Pilc A, Rozanska A, et al. Antibiotic consumption and antimicrobial resistance in Poland; findings and implications. Antimicrob Resist Infect Control. 2018;7(1):1–0.


  92. Iacchini S, Sabbatucci M, Gagliotti C, Rossolini GM, Moro ML, Iannazzo S, et al. Bloodstream infections due to carbapenemase-producing Enterobacteriaceae in Italy: results from nationwide surveillance, 2014 to 2017. Eurosurveillance. 2019;24(5):1800159.


  93. Alwan A, Aldhubhani AH, Izham MM, Pazilah I, Anaam MS, Karsany MS, et al. East Mediterr Health J. 2016;12.

  94. Bouadma L, Sonneville R, Garrouste-Orgeas M, Darmon M, Souweine B, Voiriot G, et al. Ventilator-associated events: prevalence, outcome, and relationship with ventilator-associated pneumonia. Crit Care Med. 2015;43(9):1798–806.


  95. Choi JY, Kwak YG, Yoo H, Lee SO, Kim HB, Han SH, et al. Trends in the incidence rate of device-associated infections in intensive care units after the establishment of the Korean Nosocomial Infections Surveillance System. J Hosp Infect. 2015;91(1):28–34.


  96. Choi JY, Kwak YG, Yoo H, Lee SO, Kim HB, Han SH, et al. Trends in the distribution and antimicrobial susceptibility of causative pathogens of device-associated infection in Korean intensive care units from 2006 to 2013: results from the Korean Nosocomial Infections Surveillance System (KONIS). J Hosp Infect. 2016;92(4):363–71.


  97. Katherason SG, Naing L, Jaalam K, Musa KI, Mohamad NA, Aiyar S, et al. Ventilator-associated nosocomial pneumonia in intensive care units in Malaysia. J Infect Dev Ctries. 2009;3(09):704–10.


  98. Hayashi Y, Morisawa K, Klompas M, Jones M, Bandeshe H, Boots R, et al. Toward improved surveillance: the impact of ventilator-associated complications on length of stay and antibiotic use in patients in intensive care units. Clin Infect Dis. 2013;56(4):471–7.


  99. Kim EJ, Choi YH, Kim HY, Kwak YG, Kim TH, Kim HB, et al. Trends of device utilization ratios in intensive care units during 10 years in South Korea: results from the Korean National Healthcare-Associated Infections Surveillance System. Open Forum Infect Dis. 2017;4(suppl_1):S629–30.


  100. van der Kooi TI, de Boer AS, Manniën J, Wille JC, Beaumont MT, Mooi BW, et al. Incidence and risk factors of device-associated infections and associated mortality at the intensive care in the Dutch surveillance system. Intensive Care Med. 2007;33(2):271–8.


  101. Rosenthal VD, Bijie H, Maki DG, Mehta Y, Apisarnthanarak A, Medeiros EA, et al. International Nosocomial Infection Control Consortium (INICC) report, data summary of 36 countries, for 2004-2009. Am J Infect Control. 2012;40(5):396–407.


  102. Zahar JR, Nguile-Makao M, Français A, Schwebel C, Garrouste-Orgeas M, Goldgran-Toledano D, et al. Predicting the risk of documented ventilator-associated pneumonia for benchmarking: construction and validation of a score. Crit Care Med. 2009;37(9):2545–51.


  103. Bénet T, Ecochard R, Voirin N, Machut A, Lepape A, Savey A, et al. Effect of standardized surveillance of intensive care unit–acquired infections on ventilator-associated pneumonia incidence. Infect Control Hosp Epidemiol. 2014;35(10):1290–3.


  104. Gastmeier P, Behnke M, Schwab F, Geffers C. Benchmarking of urinary tract infection rates: experiences from the intensive care unit component of the German national nosocomial infections surveillance system. J Hosp Infect. 2011;78(1):41–4.


  105. Laupland KB, Bagshaw SM, Gregson DB, Kirkpatrick AW, Ross T, Church DL. Intensive care unit-acquired urinary tract infections in a regional critical care system. Crit Care. 2005;9(2):1–6.


  106. Balkhy H, Al Shehri AM, Dagunton NL, Dagunton N. A multifaceted approach in reducing central line associated bloodstream infections (CLABSI) in pediatric icus at a tertiary hospital. Antimicrob Resist Infect Control. 2015;4(1):1.


  107. Bion J, Richardson A, Hibbert P, Beer J, Abrusci T, McCutcheon M, et al. ‘Matching Michigan’: a 2-year stepped interventional programme to minimise central venous catheter-blood stream infections in intensive care units in England. BMJ Qual Saf. 2013;22(2):110–23.


  108. Spelman T, Pilcher DV, Cheng AC, Bull AL, Richards MJ, Worth LJ. Central line-associated bloodstream infections in Australian ICUs: evaluating modifiable and non-modifiable risks in Victorian healthcare facilities. Epidemiol Infect. 2017;145(14):3047–55.


  109. Conrick-Martin I, Foley M, Roche FM, Fraher MH, Burns KM, Morrison P, et al. Catheter-related infection in Irish intensive care units diagnosed with HELICS criteria: a multi-centre surveillance study. J Hosp Infect. 2013;83(3):238–43.


  110. Laupland KB, Zygun DA, Doig CJ, Bagshaw SM, Svenson LW, Fick GH. One-year mortality of bloodstream infection-associated sepsis and septic shock among patients presenting to a regional critical care system. Intensive Care Med. 2005;31(2):213–9.


  111. Bonnet V, Dupont H, Glorion S, Aupée M, Kipnis E, Gérard JL, et al. Influence of bacterial resistance on mortality in intensive care units: a registry study from 2000 to 2013 (IICU study). J Hosp Infect. 2019;102(3):317–24.


  112. Kwak YG, Choi JY, Yoo HM, Lee SO, Kim HB, Han SH, et al. Validation of the Korean National Healthcare-associated Infections Surveillance System (KONIS): an intensive care unit module report. J Hosp Infect. 2017;96(4):377–84.


  113. Masia MD, Barchitta M, Liperi G, Cantù AP, Alliata E, Auxilia F, et al. Validation of intensive care unit-acquired infection surveillance in the Italian SPIN-UTI network. J Hosp Infect. 2010;76(2):139–42.


  114. Mertens K, Morales I, Catry B. Infections acquired in intensive care units: results of national surveillance in Belgium, 1997–2010. J Hosp Infect. 2013;84(2):120–5.


  115. Zhu S, Cai L, Ma C, Zeng H, Guo H, Mao X, et al. The clinical impact of ventilator-associated events: a prospective multi-center surveillance study. Infect Control Hosp Epidemiol. 2015;36(12):1388–95.


  116. Callejo-Torre F, Bouza JM, Astigarraga PO, Del Corral MJ, Martínez MP, Alvarez-Lerma F. Risk factors for methicillin-resistant Staphylococcus aureus colonisation or infection in intensive care units and their reliability for predicting MRSA on ICU admission. Europe. 2016;5:1–9.


  117. Huang SS, Rifas-Shiman SL, Warren DK, Fraser VJ, Climo MW, Wong ES, et al. Improving methicillin-resistant Staphylococcus aureus surveillance and reporting in intensive care units. J Infect Dis. 2007;195(3):330–8.


  118. Kramer TS, Schröder C, Behnke M, Aghdassi SJ, Geffers C, Gastmeier P, et al. Decrease of methicillin resistance in Staphylococcus aureus in nosocomial infections in Germany–a prospective analysis over 10 years. J Infect. 2019;78(3):215–9.

  119. Brown SE, Ratcliffe SJ, Halpern SD. An empirical derivation of the optimal time interval for defining ICU readmissions. Med Care. 2013;51(8):706.

  120. Verburg IW, de Jonge E, Peek N, de Keizer NF. The association between outcome-based quality indicators for intensive care units. PLoS One. 2018;13(6):e0198522.

  121. Hua M, Gong M, Brady J, Wunsch H. Hospital readmissions at 30 days for survivors of critical illness. Am J Respir Crit Care Med. 2014;189:1.

  122. Damian MS, Ben-Shlomo Y, Howard R, Bellotti T, Harrison D, Griggs K, et al. The effect of secular trends and specialist neurocritical care on mortality for patients with intracerebral haemorrhage, myasthenia gravis and Guillain–Barré syndrome admitted to critical care. Intensive Care Med. 2013;39(8):1405–12.

  123. Engerström L, Kramer AA, Nolin T, Sjöberg F, Karlström G, Fredrikson M, et al. Comparing time-fixed mortality prediction models and their effect on ICU performance metrics using the simplified acute physiology score 3. Crit Care Med. 2016;44(11):e1038–44.

  124. Kübler A, Adamik B, Durek G, Mayzner-Zawadzka E, Gaszyński W, Karpel E, et al. Results of the severe sepsis registry in intensive care units in Poland from 2003–2009. Anaesthesiol Intensive Ther. 2015;47(1):7–13.

  125. Lone NI, Gillies MA, Haddow C, Dobbie R, Rowan KM, Wild SH, et al. Five-year mortality and hospital costs associated with surviving intensive care. Am J Respir Crit Care Med. 2016;194(2):198–208.

  126. Quach S, Hennessy DA, Faris P, Fong A, Quan H, Doig C. A comparison between the APACHE II and Charlson index score for predicting hospital mortality in critically ill patients. BMC Health Serv Res. 2009;9(1):1–8.

  127. Turner PL, Ilano AG, Zhu Y, Johnson SB, Hanna N. ACS-NSQIP criteria are associated with APACHE severity and outcomes in critically ill surgical patients. J Am Coll Surg. 2011;212(3):287–94.

  128. Yin W, Li Y, Zeng X, Qin Y, Wang D, Zou T, et al. The utilization of critical care ultrasound to assess hemodynamics and lung pathology on ICU admission and the potential for predicting outcome. PLoS One. 2017;12(8):e0182881.

  129. Lilly CM, Zuckerman IH, Badawi O, Riker RR. Benchmark data from more than 240,000 adults that reflect the current practice of critical care in the United States. Chest. 2011;140(5):1232–42.

  130. Nathanson BH, Higgins TL, Teres D, Copes WS, Kramer A, Stark M. A revised method to assess intensive care unit clinical performance and resource utilization. Crit Care Med. 2007;35(8):1853–62.

  131. Niskanen M, Reinikainen M, Pettilä V. Case-mix-adjusted length of stay and mortality in 23 Finnish ICUs. Intensive Care Med. 2009;35(6):1060–7.

  132. Paul E, Bailey M, Kasza J, Pilcher DV. Assessing contemporary intensive care unit outcome: development and validation of the Australian and New Zealand risk of death admission model. Anaesth Intensive Care. 2017;45(3):326–43.

  133. Reiter A, Mauritz W, Jordan B, Lang T, Pölzl A, Pelinka L, et al. Improving risk adjustment in critically ill trauma patients: the TRISS-SAPS score. J Trauma Acute Care Surg. 2004;57(2):375–80.

  134. Lagu T, Lindenauer PK, Rothberg MB, Nathanson BH, Pekow PS, Steingrub JS, et al. Development and validation of a model that uses enhanced administrative data to predict mortality in patients with sepsis. Crit Care Med. 2011;39(11):2425–30.

  135. Lilly CM, Swami S, Liu X, Riker RR, Badawi O. Five-year trends of critical care practice and outcomes. Chest. 2017;152(4):723–35.

  136. Render ML, Kim HM, Welsh DE, Timmons S, Johnston J, Hui S, et al. Automated intensive care unit risk adjustment: results from a National Veterans Affairs study. Crit Care Med. 2003;31(6):1638–46.

  137. Umegaki T, Sekimoto M, Hayashida K, Imanaka Y. An outcome prediction model for adult intensive care. Crit Care Resusc. 2010;12(2):96.

  138. da Costa JB, Taba S, Scherer JR, Oliveira LL, Luzzi KC, Gund DP, et al. Psychological disorders in post-ICU survivors and impairment in quality of life. Psychol Neurosci. 2019;12(3):391.

  139. Zimmerman JE, Kramer AA, McNair DS, Malila FM. Acute Physiology and Chronic Health Evaluation (APACHE) IV: hospital mortality assessment for today’s critically ill patients. Crit Care Med. 2006;34(5):1297–310.

  140. Moore L, Stelfox HT, Evans D, Hameed SM, Yanchar NL, Simons R, et al. Hospital and intensive care unit length of stay for injury admissions: a pan-Canadian cohort study. Ann Surg. 2018;267(1):177–82.

  141. Venier AG, Gruson D, Lavigne T, Jarno P, L’hériteau F, Coignard B, et al. Identifying new risk factors for Pseudomonas aeruginosa pneumonia in intensive care units: experience of the French national surveillance, REA-RAISIN. J Hosp Infect. 2011;79(1):44–8.

  142. Vestergaard AH, Christiansen CF, Nielsen H, Christensen S, Johnsen SP. Geographical variation in use of intensive care: a nationwide study. Intensive Care Med. 2015;41(11):1895–902.

  143. Pradelli L, Povero M, Muscaritoli M, Eandi M. Updated cost-effectiveness analysis of supplemental glutamine for parenteral nutrition of intensive-care patients. Eur J Clin Nutr. 2015;69(5):546–51.

  144. Pieris L, Sigera PC, De Silva AP, Munasinghe S, Rashan A, Athapattu PL, et al. Experiences of ICU survivors in a low middle income country-a multicenter study. BMC Anesthesiol. 2018;18(1):1–8.

  145. Zampieri FG, Lisboa TC, Correa TD, Bozza FA, Ferez M, Fernandes HS, et al. Role of organisational factors on the ‘weekend effect’ in critically ill patients in Brazil: a retrospective cohort analysis. BMJ Open. 2018;8(1):e018541.

Acknowledgements

Our thanks to the Linking of global intensive care (LOGIC) group and CRIT Care Asia for their support.

Declarations

Funding

This review was partly funded by the Wellcome-Oxford CRIT Care Asia award (215522) and by NICST, a UK charity (1171106).

Author information

Contributions

AB, JS, IJ, AMD, and RH conceived and developed the protocol for this scoping review. SR and CS conducted the systematic literature review with support from IJ and AB. IJ and SR undertook the analysis. IJ and SR, overseen by AB, drafted the manuscript. All authors read and approved the final manuscript and fulfill the adapted McNutt et al. criteria for authorship.

Corresponding author

Correspondence to Abi Beane.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

AB and RH are funded by Wellcome-Oxford CRIT Care Asia.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

PRISMA-ScR Checklist.

Additional file 2.

Supplementary File 1: Definitions of unique quality indicators and evidence grading [29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50, 52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69, 72,73,74, 76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145].

Additional file 3.

Supplementary File 2: List of articles included in the review.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Jawad, I., Rashan, S., Sigera, C. et al. A scoping review of registry captured indicators for evaluating quality of critical care in ICU. J Intensive Care 9, 48 (2021). https://doi.org/10.1186/s40560-021-00556-6

Keywords