Blueprints for informed policy decisions: A review of laws and policies requiring routine evaluation
Health technology assessment
Key message
Countries are committed to improving the health and welfare of their populations. Yet, we found only five examples of laws and policies requiring routine evaluation of public programmes. This suggests that the majority of countries and international organisations may not be fulfilling their political and ethical obligations to use well designed evaluations of policies and programmes routinely to inform decisions about how best to use available resources to achieve societal goals. It is possible, however, that existing laws and policies do not adequately reflect the degree to which such actions are already being undertaken.
A number of important lessons can be drawn from the experiences discussed in this review, including:
- The enactment of laws and policies supporting routine use of evaluation can:
  - Capitalise on broad reforms (e.g. those focusing on accountability and transparency)
  - Build on existing laws and policies and existing evaluation experience
  - Be championed by a wide range of people, including auditors general, budget bureaus, multilateral organisations and donors, legislative branches of government, and heads of state
- Linking evaluation and monitoring objectives to other government initiatives and institutions can create synergies in budgetary processes, accountability and transparency, resulting in an integrated rather than an overly-regulated system
- Implementation of a monitoring and evaluation system goes hand in hand with administrative reforms. Such changes enable those responsible for monitoring and evaluation to respond to the information needs of decision-makers and to link monitoring and evaluation to decision-making
- It is important to focus clearly on assessing the performance of programmes in order to identify the core issues that need to be addressed in evaluations (e.g. effectiveness, efficiency and equity), and to make clear what type of evidence is wanted
- It is advisable to have an entity with a clear mission to carry out independent, unbiased evaluations to a high standard. The entity should be insulated from the influence of political organisations or interest groups
- Monitoring and evaluation systems need to be supported by reliable and objective information which is continuously improved in order to give the system credibility
- A combination of flexibility and mandatory requirements is important
- Informing the public and engaging a wider spectrum of stakeholders in the design and interpretation of evaluation results will increase the probability that evaluation systems address questions that are important to intended beneficiaries, that the results of evaluations are used appropriately and, ultimately, that democratic principles are supported
- An appropriate legal framework and a well-designed and financed evaluation system can have important benefits
- There appears to be little risk of undesirable effects, although concerns have been raised about potential downsides, such as poor enforcement and bureaucratic implementation
- Laws and policies requiring routine evaluation should themselves be routinely evaluated
Summary
The problem
Substantial sums of money are invested each year in public programmes and policies, ranging from attempts to improve health, social welfare, education, and justice, to programmes related to agriculture, work, and technology. Little is known, however, about the effects of most attempts to improve lives in this way and whether public programmes are able to fulfil their primary objectives, such as enhancing health and welfare. What little is known is often not used to inform decisions.
Because public resources are limited, it is important to use them effectively, efficiently and equitably. This is essential in low- and middle-income countries faced with severe resource constraints and competing priorities. It is equally essential in high-income countries, where resources are also limited, needs remain unmet, and the potential for waste is greater.
When making decisions about public programmes, good intentions and plausible theories alone are insufficient. Research evidence, values, political considerations, and judgements are all necessary for well informed decisions. However, decisions are often made without systematically or transparently accessing and appraising relevant research evidence and without an adequate evaluation of both the intended and unintended effects of programmes. We need to make better use of what we already know and to better evaluate the effects of what we do.
The problem of public programmes being affected by poorly informed decisions varies in scale from country to country, across international and non-governmental organisations, from sector to sector within countries, and across programmes within a sector. The causes of the problem, such as the availability of human and financial resources, also vary. Decisions about public programmes are sometimes well-informed by research evidence, and programmes are sometimes rigorously evaluated, but without explicit processes or criteria for deciding when to undertake an impact evaluation. However, across national settings and different sectors, relevant evaluations are frequently not used to inform decisions, and the need to evaluate the effects of programmes is frequently not considered. A formal requirement to consider relevant research evidence and the need for evaluation routinely, systematically and transparently might help to ensure better use of research evidence and public resources, better planning of evaluations, and better outcomes.
Policy options
Many initiatives exist that aim to improve the use of relevant evaluations to inform decisions about public programmes and decisions about when to evaluate the effects of such programmes. These include attempts to:
- Prioritise research and align it with the needs of countries
- Build the capacity to undertake evaluations
- Increase funds for evaluation
- Commission research to meet the needs of policymakers for better information
- Improve the quality of research syntheses and impact evaluations
- Make research evidence more accessible to policymakers (e.g. through the use of summaries of systematic reviews, clearing houses, and policy briefs)
- Build policymaker interest in evaluations and their capacity to use them
- Improve public understanding of research evidence and its role in informing decisions about public programmes
Yet, relatively little attention has been paid to requirements for routine evaluation. We were able to identify few examples of such requirements (Box 1). All of these appear to improve the use and conduct of evaluations and none appear to have important undesirable effects. However, given the small number of cases and the limitations associated with how these have been evaluated, it is not possible to draw firm conclusions. Details of how we identified and reviewed these five cases are described in our full report.
Box 1. Laws and policies requiring routine evaluation: the five cases

Canadian Policy on Evaluation

The Treasury Board of Canada Secretariat is the central agency responsible for providing leadership for evaluation across the Canadian federal government, and gives advice and guidance in the conduct, use and advancement of evaluation practices. Deputy heads of department are responsible for establishing a robust, neutral evaluation function in their departments and for ensuring that their department adheres to the Policy on Evaluation and its supporting directive and standard.

Chilean Budget Bureau’s Evaluation System

The Ministry of Finance must formulate one or more decrees specifying which programmes or projects will be evaluated each year. The Evaluation Programme forms part of the Management Control System and is located in the National Budget Bureau (DIPRES) at the Ministry of Finance.

Colombian Monitoring and Evaluation System

The National Planning Department was given responsibility for creating the National System for Monitoring and Evaluation (SINERGIA) and for reporting annually to the National Council for Economic and Social Policy (a policy committee headed by Colombia’s President) on the evaluation findings. A National Planning Department resolution assigned responsibility for self-evaluation to all agencies in the executive branch of the government. The Directorate for Evaluation of Public Policies, a unit established within the National Planning Department, is the technical secretariat of SINERGIA.

Mexican Laws for Social Development and Financial Responsibility, and General Guidelines for the Evaluation of Federal Programmes

The National Council for the Evaluation of Social Development Policies (CONEVAL) has the power – based on the General Law for Social Development – to regulate and coordinate the evaluation of social development policies and programmes and to assess periodically the compliance of programmes with their social objectives. The Secretariat of Finance and Public Credit and the Secretariat of Public Service together provide a system of performance evaluation – based on the Federal Budget and Financial Responsibility Law – to evaluate the efficiency, economy, effectiveness and social impact of public expenditure. The Secretariat of Public Service evaluates the performance and results of the relevant institutions. All federal secretariats and agencies are required to adhere to the evaluation guidelines and must use the prescribed monitoring and evaluation instruments.

USA Evaluation of Educational Programmes

There is no overarching body responsible for programme evaluation. A case-by-case assessment is made for each programme to determine the specific manner in which it will be evaluated. For several years, two offices in the Department of Education have been responsible for programme and policy evaluation. The Policy and Program Studies Service, based in the Office of Planning, Evaluation, and Policy Development, advises the Secretary on policy development and review, strategic planning, performance measurement, and evaluation. The Institute of Education Sciences (IES) is the research arm of the Department. The IES is charged with producing rigorous evidence on which to ground education practice and policy, with programme evaluation being undertaken primarily by the National Center for Education Evaluation and Regional Assistance.
The five cases identified illustrate a variety of options for designing and implementing requirements for routine evaluation (Table 1).
| Considerations | Options |
| --- | --- |
| Enactment of laws and policies | The enactment of requirements for routine evaluation can be precipitated by a variety of events (such as the election of a new government), can have a variety of motivations (such as improving expenditure decisions or transparency), can be championed by a range of advocates (such as an auditor general or a president), and can build on earlier laws and policies and on experience with evaluation. |
| Scope of laws and policies | Requirements can apply across sectors or within a sector. They can also apply to the use of research evidence to inform decisions about programmes, decisions about when and how to undertake evaluations, or both. However, the five examples that we identified and reviewed only focused on decisions about when and how to undertake evaluations. |
| Responsibility for enforcing laws and policies | The primary responsibility for enforcing laws and policies can be vested in a treasury department (linked to budgetary processes), in a planning department (linked to planning processes), in an independent organisation, spread across departments and agencies, or a combination of these. |
| How laws and policies are enforced | Strategies for enforcing the laws and policies include: having identifiable people or organisations responsible and accountable for evaluation, monitoring compliance, and taking corrective actions; and the real or perceived power to withdraw funding when there is a lack of adherence. There may also be mechanisms to ensure compliance that form part of the general structure of government (e.g. clear, understood and accepted responsibilities and accountability) or other legislation or policies (e.g. incentives for civil servants to undertake evaluation). |
| Decisions about which programmes to evaluate | Approaches to deciding which programmes need evaluation include requiring evaluation (not necessarily impact evaluation) for all programmes while allowing flexibility in deciding on the approaches to be used, or providing a structured process for deciding which programmes to evaluate. Structured processes can engage a variety of stakeholders and use different criteria and processes adapted to specific contexts. |
| Who undertakes evaluations | Evaluations can be commissioned, can be undertaken in-house, or both. |
| Specification of methods used in evaluations | The methods used in specific evaluations can be determined by the people responsible for undertaking the evaluation, by a central entity responsible for evaluations, by the department responsible for the programme being evaluated, or by a combination of these approaches. |
| Funding for evaluations | Funding mechanisms can include the allocation of core funding to entities responsible for evaluation, earmarked funds for evaluation linked to programmes, external funding, and requirements for departments to pay for evaluations from their own budget. |
| Enforcement of recommendations derived from evaluations | Ways to ensure that the evaluation results are used include: assuring the relevance and legitimacy of evaluations; designing evaluations to generate not only impact assessment but hypotheses about ways to improve programmes; framing conclusions in a way that will not alienate those responsible for the programmes; forums within the legislative and executive branches and within civil society; the joint drafting of institutional commitments by the organisation responsible for the evaluation and the organisation responsible for the programme; assigning responsibility to the senior civil servant in each department; a follow-up report on the aspects of public programmes that can be improved; an evaluation report on social development policy that establishes recommendations addressed to different decision makers; a performance evaluation system that provides information for budgetary decision-making; and the monitoring of compliance with commitments. |
| Transparency and independence | Requirements for transparency vary in relation to different types of decisions, including: which programmes are evaluated, who will undertake evaluations, what methods are used in evaluations, how the results of evaluations are reported and disseminated, and how evaluations are used. Similarly, requirements for independence can vary in relation to who pays for evaluations, decisions about which programmes are evaluated, who undertakes evaluations, decisions about the terms of reference for evaluations, decisions about the methods that are used in evaluations, reporting and interpreting the results of evaluations, peer review of evaluation reports, and decisions about how the results of evaluations are used. |
| Evaluation of laws and policies | We did not find any evaluations that compared outcomes of any kind in settings with and without requirements for evaluation. Assessments of existing requirements have been undertaken by external groups in Chile, Colombia and Mexico and, to some extent, internally in all five countries. While these assessments have largely been positive, a number of challenges have been identified, including concerns about human and financial capacity (and consequently only a small proportion of programmes being evaluated). The absence of a clear link between evaluation and planning and budgeting processes, including decisions about modifying or discontinuing programmes, has also been identified as a concern. |
The enactment of requirements for routine evaluation has been prompted by various factors. In four of the cases we identified (Chile, Colombia, Mexico, and the United States of America), laws and policies were initiated by new governments as part of a broader set of reforms focusing on or motivated by a need to improve the effectiveness of state policies and programmes, expenditure decisions, and public management. Additional concerns included the perceived need to improve systems of evaluation because of concerns about corruption, and to counter a perceived lack of objectivity, technical rigour, transparency and accountability. Establishing a body outside government, which would be devoted to evaluating programmes and focused on results-based management, was therefore perceived as necessary. Requirements for routine evaluation were championed or supported by a range of different stakeholders in these cases, including auditors general, budget bureaux, heads of state, parliaments or legislative branches of government, individual Members of Congress, as well as by multilaterals and donors. The enactments built upon earlier laws and policies, a culture of evaluation, and past evaluation experience.
The advantages and disadvantages of an intersectoral versus a sectoral scope may be affected by the size of a particular country and sector. In the United States of America (USA), for example, more resources are used for evaluation within the education sector alone (US$70 million annually) than across all sectors in Chile, Colombia and Mexico (ranging from US$2.5 to US$8 million annually per country). Trade-offs may need to be made between the increased potential for independence and efficiency afforded by being outside a sector and the increased potential for ownership and communication afforded by being inside a sector. Canada has attempted to capture the advantages of both an intersectoral and a sectoral approach by applying the Treasury Board’s Policy on Evaluation to all government spending without precluding individual departments from having their own specific policies. Health Canada, for example, is therefore allowed to use its own policy to make evaluations more specific to the health sector. In addition, every department is required to have an evaluation function; a central entity for evaluations and the use of guidelines help to ensure consistent quality standards.
The USA was the only example, of the five cases we reviewed, in which recommendations are not included in evaluation reports. While reports may include recommendations for further research, they do not include recommendations for policy decisions. The rationale for this approach is that when recommendations are made, this introduces subjective values and political standpoints. While informants from the other countries acknowledged the importance of this concern, they believed that there was still a need for recommendations within their own national context.
We identified the following key strengths in the five examples of requirements for routine evaluation: the extensive use of information in budget- and decision-making, the ability to monitor progress towards political goals, the active participation of key stakeholders in monitoring and evaluation activities, independent evaluation and appropriate levels of financial support, a strong monitoring and evaluation system, and improvements in research capacity and quality. Key weaknesses in one or more of the cases were:
- A failure to adequately clarify roles and responsibilities
- A lack of evaluation, oversight and accountability functions
- A lack of comprehensive coverage and inappropriate discretion in deciding which programmes to evaluate
- Restrictions on how contracts for evaluations are awarded
- Problems with the availability and quality of data
- The absence of clear links between evaluation and planning and budgetary processes
- Low utilisation of evaluation results and the non-binding nature of recommendations
- A failure to build capacity and disseminate results to subnational authorities
- A lack of evidence-based programmes despite official requirements to have them
Implementation considerations
Challenges to implementing requirements for routine evaluation include:
- A lack of skilled people to manage the processes or undertake evaluations
- Inadequate financing
- A lack of routinely collected data or the means to collect reliable data for evaluations
- A lack of awareness of the benefits of evaluation
- Too much discretion being used during evaluations
- Poorly defined programmes, which make it unclear what an evaluation should focus on
- Complex legal frameworks caused by multiple pieces of legislation and policies
- Procurement legislation that makes it difficult to commission evaluations
Strategies to address these problems include:
- Linking evaluation and monitoring objectives to other government initiatives and institutions in order to create synergies in budgetary processes, accountability and transparency, and thereby creating integrated systems that are not overly regulated
- Administrative reforms which enable those responsible for monitoring and evaluation to respond to the information needs of decision-makers and to link monitoring and evaluation to decision-making
- A clear focus on assessing programme performance and identifying the core issues that should be addressed in evaluations (e.g. effectiveness, efficiency and equity) and a clear specification of what type of evidence is needed
- Having an entity with a clear mission to carry out independent, unbiased evaluation to a high standard. The entity should be insulated from the influence of political organisations or interest groups
- Supporting monitoring and evaluation through the use of reliable and objective information which is continuously improved, thereby giving credibility to evaluations
- Applying both mandatory and flexible requirements
- Informing the public and engaging a wide spectrum of stakeholders in the design and interpretation of evaluation results to ensure that evaluation systems address questions that are important to intended beneficiaries, that the results of evaluations are used appropriately and, ultimately, support democratic principles
Next steps
A first step should be the assessment of the size of the problem (poorly informed decisions about public programmes) and its causes within the specific context. In most instances, there are no formal demands for routinely considering relevant research evidence systematically and transparently, or for using evaluations when making decisions about public programmes. However, a consideration of such requirements is warranted. The design and implementation of requirements for routine evaluation can be informed by the experience summarised in this report and by related experience, including findings from institutionalising evaluations, health impact assessments, environmental impact assessments, health technology assessments, and regulatory impact assessments. Arguments against such requirements should also be considered. If there is inadequate evaluation capacity, poor implementation, or an under-developed culture of evaluation there is a risk that evaluations may simply become bureaucratic requirements that need to be ‘checked off’. Laws and policies requiring routine evaluation should themselves be routinely evaluated.