
Evaluation Approach


Why Evaluate Our Work?

Montana Healthcare Foundation (MHCF) makes strategic investments to improve the health and wellbeing of all Montanans. We envision contributing to a measurably healthier state through improved access to quality, affordable health services; research and analysis; improved upstream influences on health and illness; and informed public policy. These are challenging goals, and MHCF has limited resources with which to achieve them. The U.S. now spends nearly $10,000 per capita on healthcare every year (this would translate to roughly $10 billion a year in Montana). Compared to such large numbers, the dollars that a foundation like MHCF can contribute are a drop in the bucket.
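The Montana figure follows from simple arithmetic (a rough estimate, assuming Montana's population of just over one million):

$10,000 per person per year × ~1 million Montanans ≈ $10 billion per year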

To achieve our goals, MHCF must learn what works best to catalyze sustainable improvements in health and healthcare in the unique communities that make up our state. Doing so will require persistence, innovation and a continuous, disciplined effort to adapt and improve our programming through learning from our successes and failures. Evaluation is best seen in this light: it is a tool to help us hold ourselves accountable for making progress on our objectives, and a means to allow us to continually improve the effectiveness and reach of our limited resources.


Principles and Framework

Evaluation is not a single method or tool: it employs a wide range of qualitative and quantitative analytic methods to understand the impact or outcomes associated with a program, and the process and internal dynamics that lead to success or failure.

To develop MHCF’s approach to evaluation, we held a series of discussions with the staff and trustees; we reviewed examples of evaluation from other foundations; we read guidance developed by several large foundations and supporting organizations; and we discussed evaluation with colleagues at the Empire Health Foundation and Robert Wood Johnson Foundation. Stakeholders offered questions and insights that helped guide our investigation:

  • Evaluation is an important way to help us look at the big picture: overall, what does our work add up to?
  • We are working on complex, long-standing problems: what is the right timeframe to decide if we are making progress?
  • We hope to improve both health and healthcare: which metrics should we use?
  • Setting quantifiable goals can be an important way to motivate action.
  • Not everything that is important can be quantified—the ability to quantify an outcome should not be the sole driver of our programming.
  • How do we measure and talk about our important qualitative impacts?
  • How do we measure and report our contribution to systems-level changes?
  • When a grant does not go well, why not? What could we do differently?
  • We should be careful about how much we spend on evaluation, and we should not use evaluation as a tool to simply “toot our own horn.”

In the materials we reviewed, the basic definition and description of evaluation is consistent, but the actual practice varies widely. Evaluations may be large-scale, multimillion-dollar investigations that rely on extensive surveys or detailed statistical analysis. At the other end of the spectrum, small-scale internal evaluations are commonly carried out by foundation staff or individual grantees as an integral part of implementing a project.

Based on this review, and guided by MHCF’s purpose and goals and by the insights stakeholders offered, we developed this initial framework for evaluating our work.

Evaluation Philosophy

MHCF will use evaluation as a tool to strengthen our programming and hold ourselves and our grantees accountable for results. We do not view evaluation as a tool to aggrandize our achievements. Like all MHCF investments, our investments in evaluation should help us achieve the Foundation’s goals. Decisions to invest time and resources in evaluation are governed by the same strategic lens we apply to choosing grants and designing our direct programming: we invest in evaluation when it is the best tool to allow us to accomplish specific goals related to improving health and wellbeing in Montana.

There is no single, best approach to conducting an evaluation: instead, the scale and design of the evaluation should fit the purpose and goals of conducting it. Dollars invested in evaluation are not available for programming: in all cases, MHCF will seek to use the most practical, streamlined approach that will shed light on the question at hand, and provide adequate information to allow us to improve our programming and achieve our purpose.

Definitions and Concepts

There are many definitions, methods, and types of evaluation in common use. Several key concepts are important to MHCF’s evaluation framework:

  1. “Process evaluation” tracks the activities involved in implementing a program to identify factors that are necessary for success—for example, tracking meetings held to formalize a new partnership between stakeholders, or tracking revenues to ensure sustainability.
  2. “Outcome evaluation” assesses the results of the program. It may include, for example, measured changes in health outcomes, changes in healthcare quality metrics, changes in revenue, or qualitative outcomes such as new partnerships or policy changes. Evaluation experts sometimes also refer to “impact evaluation,” which tends to focus on longer-term, systems-level changes. For the purposes of this framework, we will use the term “outcome evaluation” to refer to both. Outcome evaluation can be used at the level of a specific grantee, or to understand the impact of an initiative or direct programmatic investment.
  3. “Formative evaluation” accompanies program implementation, and its results are used to adapt the program as it is implemented. In healthcare, the concepts of “continuous quality improvement” and “plan, do, study, act” draw on the basic principle of formative evaluation: rather than planning a program and then implementing it blindly, the program can be improved through continual reassessment as it is implemented. Formative evaluation overlaps with the concepts of process and outcome evaluation: as a grantee implements a new program, a formative approach can involve periodic reassessment of the implementation process and the early outcomes, and adjustments in the program design as needed. As such, our framework will generally focus on process and outcome evaluation, with the understanding that a formative approach can be helpful in both cases.
  4. A “proxy measure” is a metric that can be used in lieu of a specific health outcome that may be harder to measure accurately. For example, a small clinic that begins to provide integrated behavioral health (IBH) may not see enough patients to measure a statistically significant improvement in depression or other specific illnesses. The IBH model has been found in many studies, though, to improve outcomes for depression and chronic illnesses like diabetes. In this case, since IBH is known to improve specific health outcomes, measuring the clinic’s adherence to key elements of IBH could serve as a proxy for the desired health outcomes.

Goals for Evaluation

  1. Support successful grants: By understanding what features of a program’s design work best, MHCF can help current and future grantees succeed.
    1. Individual grantees can use a formative approach to evaluation during their projects to help adapt to unexpected needs and challenges as the project unfolds.
    2. MHCF can use lessons from evaluating grantees working in a certain area to help future grantees design and implement more effective programs.
  2. Replicate and scale successes: MHCF grantees are exploring many innovative ideas that hold promise as solutions that could be replicated and scaled to address some of the state’s most challenging problems.
    1. Process evaluation can produce information that helps others replicate key aspects of the program—for example, program design, data sharing agreements and financing models.
    2. Outcome evaluation can help build the case for the value of a program and motivate action on the part of other communities, or policy changes at the state level that support widespread implementation.
  3. Improve MHCF programs: Evaluation is a tool to help us hold ourselves accountable for effective programming, and a means to allow us to learn what works best and continually improve our programming. For a major investment, such as our IBH initiative, we may use a formative approach to refine and improve successive rounds of grant funding and technical assistance. For smaller investments, such as a conference convening, we might use participant evaluations to understand how to design a better conference.
  4. Motivate stakeholder action: Evaluation results can provide the grounds for action on the part of other stakeholders. For example, evaluation of IBH implementation in other states has demonstrated improved health outcomes and reduced utilization of emergency rooms and other higher-cost services. On this basis, Colorado recently announced a multi-payer collaborative to support broad implementation of IBH. Similarly, outcome evaluation of MHCF’s IBH cohort may be a powerful way to support actions by Medicaid and private payers to facilitate broader implementation of IBH.

Setting measurable goals that are tracked as a program is implemented can help focus stakeholders on a collaborative goal. Montana’s Graduation Matters is an example of using a single, quantifiable metric—high school graduation rates—to motivate collaborative action.

Approach to Evaluation

Overall, most MHCF evaluation efforts will be conducted by our staff and grantees. When an evaluation will be complex and has significant potential to advance a specific MHCF objective, we may invest in contracts with outside evaluation experts.

There are several levels at which MHCF will use evaluation:

Individual Grantees

Grantee self-evaluation: Each MHCF grantee is responsible for defining specific desired outcomes, and conducting a limited self-evaluation. We encourage grantees to match their evaluation plans to the scale and complexity of their projects: we do not expect, for example, that a $20,000 grant will yield a detailed, sophisticated evaluation. On the other hand, we hope that by encouraging each grantee to think about self-evaluation as part of the project, we will, over time, learn a great deal about the process and outcomes of our grant-funded work.

Using these evaluation and outcome plans, MHCF staff work closely with each grantee to monitor progress throughout the grant, and provide constructive input when challenges arise.

External evaluation of individual grantees: MHCF may choose to conduct or commission outside evaluations of individual grant-funded projects. We use this option when a grant is piloting a particularly important innovation that has the potential to be replicated in other parts of the state.

For example, the Park County Connect program partners a community health center, critical access hospital, mental health center, and county health department to address the needs of people who use emergency department services frequently. Although hospital-based programs to reduce high utilization of emergency room (ER) and hospital services have become more common, the partnership with a small county health department is unique and offers considerable potential for other rural communities facing workforce shortages. Consequently, MHCF hired the Center for Community Health and Evaluation to provide technical assistance supporting a robust evaluation of ER utilization, other health-related outcomes, and cost savings.

Initiatives

MHCF sometimes supports a cohort of grantees working on similar projects with a common set of goals. In these cases, external evaluation may play an important role in demonstrating the effectiveness of the program, as well as for helping MHCF strengthen the program.

Our Integrated Behavioral Health (IBH) Initiative, for example, seeks to advance more widespread use of IBH by supporting diverse grantee organizations to work toward specific health outcome improvement objectives by implementing a clearly defined set of practice elements. In the IBH Initiative, process evaluation by the grantees and the National Council for Behavioral Health will allow MHCF to refine and strengthen our approach to supporting successful IBH implementation. Outcome evaluation that demonstrates health benefits (for example, improved depression scores or reduced ER utilization) and related cost savings will help inspire other healthcare providers to implement this model, and may also provide support for policy-level changes that would make it easier for Montana practices to use IBH.

The Big Picture: Assessing Progress and Improving Our Effectiveness

Each MHCF focus area encompasses a broad and complex set of challenges, and within each area MHCF employs a range of tactics, including both direct programming and grants. Measurable progress on issues such as behavioral health and American Indian health disparities will, in many cases, require steady effort over many years. Often, there will not be a single metric that tells us whether we are making progress. To allow us to continuously improve the reach and effectiveness of our programming, MHCF will use several approaches to evaluation:

  1. Synthesize lessons and themes from groups of grantees working on a common topic: MHCF staff will periodically review grantee self-evaluations and synthesize common themes within each focus area. This will let us identify the most promising strategies and proactively address challenges that commonly arise for grantees working in a similar program area.
  2. Evaluate MHCF’s direct programming: MHCF staff are leading or collaborating on many statewide initiatives, for example, the American Indian Health Leaders group, advancing the use of SBIRT (Screening, Brief Intervention, Referral to Treatment), forming a Behavioral Health Association, and the Manatt/DPHHS collaboration on the Medicaid treatment system for substance use disorders. For each of these efforts, MHCF will establish goals and metrics that allow us to evaluate and improve our effectiveness.
  3. “Standard” foundation metrics: Many of the foundations we reviewed focus on statistics such as the number of grants and the number of dollars awarded. These statistics are not an adequate proxy for our health improvement objectives, but they do provide overall context for understanding our impact. They can also help us measure progress on our goal of maintaining a diverse grantee portfolio, with a range of organizations and geographies represented.

Our first strategic plan defined broad goals for our initial years of programming, one of which was to “generate data that support focused initiatives in the future, and better-informed planning over the longer term.” MHCF will use evaluation as a tool to continually refine and strengthen our programming. Within each focus area, we will use results of evaluations of grantees, initiatives and direct programming to define more specific objectives and metrics that can be measured to track progress.




Evaluation Sources Reviewed:

  1. Center for Nonprofit Management: What Is the Difference Between Process, Outcome and Impact Evaluations?
  2. NYS Health Foundation: Tools and Guidelines for Planning Effective Project Evaluations
  3. Irvine Foundation: Evaluation Advances Our Mission in Four Ways
  4. RWJF: Research, Evaluation and Learning
  5. Hilton Foundation: Evaluation of the Conrad N. Hilton Foundation Chronic Homelessness Initiative: 2015
  6. Grantmakers for Effective Organizations (GEO): Learn for Improvement