Jeremy Offenstein, Ph.D.

Senior Analyst

Process Evaluation

We have expertise in performing process evaluation of numerous types of energy efficiency and demand response programs operating in different markets. Below, ADM senior analyst Jeremy Offenstein, Ph.D., answers some questions relating to energy program process evaluation.

There is often an arbitrary line drawn between process evaluation and impact evaluation. This division is seen where process and impact evaluation reporting are kept separate or where third-party evaluators are contracted separately for impact and process work. Despite this practice, there are benefits to an integrated evaluation team that assesses the whole program, inclusive of both program impacts and processes.

One benefit of completing an integrated process and impact evaluation is the ability to realize synergies in data collection. Participant feedback on program processes may be solicited through the same surveys used to collect data on impact parameters such as in-service rates. Interview and survey data characterizing decision-making processes can also inform the selection of appropriate gross and net baselines. For example, interviews with trade allies may point to common practices that should be considered when developing baseline conditions for estimating savings impacts.
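As a rough illustration of this data-collection synergy, the sketch below (plain Python, with entirely hypothetical measure names, savings values, and rates) shows how in-service rates gathered through the impact portion of a participant survey might be used to adjust claimed gross savings, while the same survey instrument also carries process questions.

```python
# Hypothetical sketch: one participant survey feeds both impact and process analysis.
# Measure names, savings values, and rates below are illustrative, not program data.

claimed_savings_kwh = {"LED lighting": 120_000, "Smart thermostat": 45_000}

# In-service rates estimated from the impact portion of the survey
# (share of rebated measures found installed and operating).
in_service_rate = {"LED lighting": 0.93, "Smart thermostat": 0.81}

# Verified gross savings = claimed savings x in-service rate (per measure).
verified_gross = {
    m: claimed_savings_kwh[m] * in_service_rate[m] for m in claimed_savings_kwh
}

for measure, kwh in verified_gross.items():
    print(f"{measure}: {kwh:,.0f} kWh verified gross savings")
```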

Additionally, concurrently assessing program processes and impacts affords the opportunity to better understand how the program delivers its impacts and how program implementation and design may affect the realized gross and net savings. For example, comparing survey responses to questions about program marketing and outreach can yield insights that account for differences in reported program influence across customers. We have found cases where specific sources of program awareness are associated with lower levels of program influence, and this information can be used to improve program net impacts over time.
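The sketch below is a minimal, hypothetical example of that kind of comparison: it groups self-reported influence scores by awareness source. The survey records, source names, and scores are invented for illustration only.

```python
# Hypothetical sketch: compare self-reported program influence by awareness source.
# Records and scores are invented for illustration.
from collections import defaultdict
from statistics import mean

survey_responses = [
    {"awareness_source": "contractor", "influence_score": 8},
    {"awareness_source": "contractor", "influence_score": 9},
    {"awareness_source": "utility mailer", "influence_score": 6},
    {"awareness_source": "utility mailer", "influence_score": 5},
    {"awareness_source": "retailer signage", "influence_score": 4},
]

scores_by_source = defaultdict(list)
for r in survey_responses:
    scores_by_source[r["awareness_source"]].append(r["influence_score"])

# A lower average influence for a given awareness channel may flag free-ridership
# risk and a marketing channel worth revisiting.
for source, scores in sorted(scores_by_source.items()):
    print(f"{source}: mean influence {mean(scores):.1f} (n={len(scores)})")
```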

Assessing the effectiveness and efficiency of program processes is less clear-cut than assessing impact performance, for which there is typically a defined goal against which achieved performance can be measured. The framework we use when approaching process evaluation includes an assessment of implementation fidelity, a comparison of the program design and implementation to what needs to happen to effectively deliver the program, and an assessment of the customer experience (e.g., issues such as difficulty scheduling appointments or satisfaction with various facets of the program).

The assessment of implementation fidelity involves checking to see whether the program is operating as intended. For example, if the program design includes delivering home energy audits as an initial step to participation, we might ask participants whether the audit was performed, whether the findings were reviewed with them, and whether the recommendations were understood and useful. Identifying gaps between how a program is implemented and its intended implementation can point toward improvements that enable it to deliver higher savings, more satisfied customers, or greater educational and other non-energy benefits. That said, not every identified gap represents a problem. Some aspects of the program design and implementation procedures may not be needed. For that reason, when gaps are found, the evaluator should ask what the likely negative impact of the gap is and what the likely benefits of addressing it would be. If the program is delivering all needed results, then the gap may not be an issue.

Nevertheless, assessing implementation fidelity can only get you so far in assessing program processes. The program may omit important processes or design features that, if included, would enable it to function better. Identifying omitted program aspects is facilitated by an understanding of the program's goals and theory, as well as by our experience evaluating similar programs across the country, which has informed an understanding of what well-designed and well-run programs include.

Participant and other stakeholder feedback on their experience of the program also provides key information for assessing program performance. These stakeholders can offer insights into what is not working with the program, which markets are underserved, and other relevant performance criteria.

When planning the evaluation effort, an assessment of each program’s relative importance to the current or future portfolio should be made to effectively allocate evaluation resources. The importance of a given program is usually a function of the magnitude of savings goals and budgets, and for new or pilot programs, how they may scale up in the future.

Another consideration is where the program is in its lifecycle. On the one hand, well-established programs that have consistently delivered cost-effective savings are unlikely to benefit as much from a process evaluation as newer programs or those with operational problems that are not well understood by the program administrator or implementer.

On the other hand, new programs and pilot programs would likely benefit the most from a more in-depth process evaluation effort. You could think of these programs as still being in a second stage of the design phase, because there are typically multiple operational procedures still being worked out and the market response is not yet known. All else being equal, for new and pilot programs we would typically allocate greater effort to the process evaluation.

Program complexity is another factor to consider when designing a process evaluation. Complexity increases with the number of parties involved, the number of services offered, and the number of objectives to be achieved. A program can also be considered more complex as the number of sequenced administrative steps needed to deliver it increases. More complex programs typically require greater process evaluation effort because there are more "linkages" between individuals and processes and more opportunities for delivery issues to materialize.
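One way to make this concrete is a rough scoring heuristic over the dimensions named above. The sketch below is an illustrative assumption only (the additive score, weights, and effort thresholds are not an established method), intended simply to show how complexity might be translated into an evaluation-effort decision.

```python
# Illustrative heuristic only: a rough complexity score combining the dimensions
# named above (parties, services, objectives, sequenced delivery steps).
# The additive score and thresholds are assumptions, not an established scoring method.

def complexity_score(n_parties: int, n_services: int, n_objectives: int, n_steps: int) -> int:
    return n_parties + n_services + n_objectives + n_steps

def suggested_effort(score: int) -> str:
    if score >= 20:
        return "high process evaluation effort"
    if score >= 10:
        return "moderate process evaluation effort"
    return "light process evaluation effort"

# Example: a program with many trade allies and several sequenced QA steps.
score = complexity_score(n_parties=6, n_services=4, n_objectives=3, n_steps=8)
print(score, "->", suggested_effort(score))
```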

Overall, the planning process is one where client engagement and feedback are critical to the overall success of the evaluation effort. This input can help inform a well-designed evaluation that provides critical insights into program functioning for future improvement.
