Performance monitoring is one of several quality-enhancing activities designed to improve quality and accountability in health care. In recent years, there has been increasing interest in such measurement and reporting systems worldwide. Particular emphasis seems to have been placed on the design and implementation of systems that monitor hospital performance in a valid and reliable way.
The aim of this project has been to gather evidence on hospital performance monitoring systems by evaluating and comparing six selected initiatives with regard to policy and methodological approach. The monitoring systems included in the review were developed by the international organisations OECD and WHO, as well as by governmental agencies in Canada, the United States, Denmark and Sweden.
Information on each system was identified by searches in PubMed and on the Internet during the period April 2006 through April 2007. We performed an assessment of each system based on predefined criteria, covering:
- Conceptual framework
- Objectives and target groups
- Evaluation criteria
- Selection procedures
- Publication format
- Empirical testing
- System for updating and revision

Even though the monitoring systems chosen represent only a small number of those we might potentially have included in our review, they illustrate several ways of approaching health policy and methodological issues. They also represent a wealth of background experience that may be valuable in a Norwegian context. The authorities and organisations responsible for the systems have put extensive resources into their development, in terms of both financial investment and scope of expertise.
We recommend that the further development of the Norwegian national monitoring system:
- is based on the internationally recognized procedures described in this report
- builds particularly on systems that give explicit information about their evidence base and selection procedures; in addition, theoretical and empirical tests of reliability and validity in a Norwegian context should be performed
- ensures legitimacy and acceptance for the system by involving professional and other relevant user groups in the processes
- designs a conceptual framework that forms an overarching strategy and articulates guiding principles for both value-based and professional priorities
- develops a long-term strategic plan, taking into account how expertise can be built up and how the needs and requirements for research and development in this area can be met
Quality indicators may be defined as indirect measures of quality within an area and are one way of measuring and monitoring the delivery and quality of health care services. Various audiences may wish to use them to document the quality of care: to make comparisons, to make judgements and determine priorities, and to support quality improvement and accountability in health care. The challenges connected with developing quality measurement and reporting systems are multiple and complex. Nevertheless, an increasing number of countries and organisations are implementing and applying such systems.
To contribute to the understanding of conceptual approaches and scientific requirements for the selection and reporting of quality indicators, the Norwegian Knowledge Centre for the Health Services in 2006 initiated the project: "Information bank for quality indicators. Selecting and employing performance measures for the hospital sector." Our aim is to contribute to transferring international knowledge in this area to the health authorities and managers in Norway responsible for developing and revising national and local performance measurement systems.
This report examines and compares six selected initiatives designed for monitoring health system performance. All systems have a national or international perspective, and all have health authorities or hospital leaders at various levels as target groups. Two systems were developed by the international organisations OECD and WHO, whereas the other four were developed by governmental agencies in Canada, the United States, Denmark and Sweden. The systems were selected on the basis that they might document some relevant international trends and illustrate a diversity of approaches concerning policy and methodological issues.
Information on each system was identified by searches in PubMed and on the Internet during the period April 2006 through April 2007. The project primarily includes monitoring systems for the hospital sector whose objectives are quality improvement or benchmarking at an international, national, or subnational level. It primarily takes a clinical view of health care in relation to health, with process and outcome measures as the main focus. In principle, indicators measuring health status, efficiency and productivity have therefore been excluded.
We performed an assessment of each system based on the following aspects:
- conceptual framework
- objectives and target groups
- evaluation criteria
- selection procedures
- publication format
- empirical testing
- system for updating and revision of indicators.
We also evaluated whether transparency of the systems was ensured, as indicated by the explicitness and availability of information, with regard both to the evidence base presented and to how the various steps of the selection procedures were documented.
We found considerable variation in the availability of documentation of the measurement systems. This finding may seem to contrast with several of the systems' statements emphasising a theoretical and empirical foundation for their indicator sets. During the period of collecting information for this project, however, several systems tended towards improved transparency and availability of relevant information.
The actual status of the systems varies considerably. Whereas the national systems in Canada, the United States and Denmark have been implemented, with reports published for some years, the systems developed in Sweden, as well as by the OECD and WHO, represent relatively novel models that have not yet been fully implemented in their final versions.
Health authorities are responsible for the national systems in all four countries, ensuring professional and institutional participation in the development processes. The OECD and WHO, on the other hand, are largely dependent on voluntary contributions from the participating countries, making these systems more vulnerable.
The systems were developed in quite different contexts and illustrate several ways of approaching health policy issues. This in turn influences the focus and main purpose of the measurement systems. The primary aim of the systems developed by WHO and Denmark is to support quality improvement efforts in hospitals. The objective of the OECD, Canadian, U.S. and Swedish systems, on the other hand, is primarily to serve management purposes and to support accountability in health care. The secondary goals of the systems often eliminate some of these differences: all have learning and quality improvement among their goals, and the majority also aim to enhance accountability.
A marked feature of most systems is the extensive work invested in designing conceptual frameworks, which typically articulate guiding principles for value-based and professionally based decisions and define strategies for the tasks to be accomplished. In many of the systems the level of ambition seems quite high, with regard both to methodological approach and to financial and human resources for data extraction, quality assurance and database infrastructure. So far, a more pragmatic approach has prevailed in the Swedish system in terms of designing a coherent framework.
Most systems have included effectiveness, safety, accessibility and patient-centeredness among their prioritized dimensions of healthcare performance, whereas equity, efficiency and management/organization are less common. The Canadian system represents the broadest approach, with the entire set of the above dimensions included. WHO also takes a broad approach, while the Danish indicator project appears as the narrowest and most clinically oriented.
All systems have selected what may be characterized as major diagnoses with regard to the clinical areas to be monitored. Cancer and cardiovascular disease are included in all systems. Diabetes, orthopaedic conditions, gynaecology and obstetrics, maternity and childhood care, and pulmonary as well as infectious diseases are also frequent. Patient safety is included as a distinct dimension in the Canadian, U.S. and OECD systems. The WHO approach is exceptional in including patient, employee and environmental aspects in its safety concept. Neither the Danish nor the Swedish system has designated safety indicators in a specific category. Indicators measuring patient-centeredness are not always part of the indicator sets, even though most systems have included this quality dimension in their frameworks. Those systems that have developed such indicators cover a great variety of topics, the principal ones being communication, behaviour and access to health personnel or care.
Main evaluation criteria for the selection of indicators are given in all systems; relevance, scientific soundness and feasibility are among the central ones. A prerequisite in the U.S., Swedish and OECD systems is that indicators be chosen from a list of indicators already employed in existing systems, often because of time constraints and limited resources. The WHO system also builds on international experience with specific indicators; however, new measures are developed in areas where a need is identified. The majority of the systems use existing data sources as their sole basis for indicator retrieval; these may be hospital administrative data, quality registries and patient surveys. The Danish system is unique in basing its datasets on mandatory reporting, which also includes the information needed to perform case-mix analyses. For most systems the procedures for selecting indicators seem quite uniform, involving expert panels in consensus processes with a more or less systematic approach.
All systems except WHO's make national reports publicly available, including comparative data at various levels. The internal reports prepared by WHO are primarily an instrument for national and international benchmarking among "peer hospitals". The resources put into the format and presentation of the systems' quality reports vary. Most reports present some normative interpretations, such as trend analyses or comparisons against a national or regional average.
This review shows the significance of having a conceptual framework with a clearly articulated vision of how a measurement and reporting system for the hospital sector should be organized, implemented and sustained. So far, the Norwegian national indicator system lacks a comprehensive and coherent framework; work to get this in place should therefore be given high priority. To secure acceptance and legitimacy, it is important that representatives from professional and other relevant user groups take part in this work.
We recommend that the further development of quality indicators be based on internationally recognized procedures and norms, as described in this report. Among the key elements is a systematic approach that takes into account both professional and value-based considerations. In our opinion, such activities are best handled through formalized consensus processes with carefully selected panel members.
Further development of the Norwegian quality indicator systems, whether national or regional, should build on international research and experience, particularly on what may be considered systems of excellence. Inherent in these systems is transparency regarding their evidence base and their procedures for selecting indicator sets. The choice of a model system also depends on the purpose and target groups of a Norwegian system, i.e. the type of control and the type of action expected from the measurements. Before transferring individual indicators to a Norwegian context, theoretical or empirical testing is recommended to secure their reliability and validity.
This review shows that there is a great need for research in this area. We propose that this need be met through the development of a strategic plan that takes into account how a multiprofessional environment is best created and how qualifications can be further raised to support the Norwegian performance measurement systems.