What I Learned About Evaluation + Design

A couple of weeks ago, I attended the 30th annual American Evaluation Association conference in Atlanta, GA. With over 3,500 in-person attendees and 900 sessions, there was a lot to take in! The theme of this year's conference was Evaluation + Design, focusing on program design, evaluation design, and information design. I was particularly excited about this theme, as I have become increasingly interested in the relationship between evaluation and program design.

What role does evaluation play in design?

Within this complex world, adaptive program design and developmental evaluation are needed to make an impact. When evaluation is incorporated into the program design phase, it can help program designers understand how components and activities are linked to the desired change and, potentially, to impact. Evaluation can also help clarify program strategies.

Both design and evaluation are trying to tackle complexity and wicked problems. One way evaluation does this is through systems thinking; designers do it by using divergent thinking to converge on a solution.

Do not confuse design with methods.

This is true for both program design and evaluation design. In evaluation, surveys are not design; interviews are not design. In programs, groups are not design; a training is not design. These are the methods used to support a design. Without the underlying design, these program and evaluation methods will eventually fail.

What evaluation and design strategies can be shared across sectors or are already overlapping?

Need-finding: Need-finding is a strategy designers use at the beginning of their process to understand the lives and needs of potential users. As Heather Fleming said, “need finding has to be solution agnostic.” This process seems similar to a needs assessment in evaluation, but oftentimes a needs assessment is conducted with a specific solution already in mind. Maybe evaluation can take a page out of designers’ book and start approaching needs assessments with a “solution agnostic” mindset.

Logic models: Logic models are used in evaluation to show a program’s theory of change, connecting program activities to outcomes. Designers often struggle to connect their activities to the expected behavior changes or impact. Maybe logic models could help lay out this series of events and develop benchmarks along the way to document the steps toward impact.

Systems thinking: Designers seek to understand the daily lives of users. Evaluators use systems thinking to understand the forces at play in the lives of individuals, in programs, in organizations, and in policies. Both strategies look for the patterns, gaps, and opportunities in the world.

Empathy and a people-centered approach: For both evaluation and design, you have to approach your work with empathy. Both fields are ultimately about people: the people products are designed for, the people served by a program, the people affected by a policy. To design a product or service, or to evaluate a program or policy, you have to have empathy for the people affected by them. They have to be at the center of your work.

The struggle between design and impact.

I was struck by how hard it seems to be to connect design to impact. This seems like an obvious place where design and evaluation can partner. Heather Fleming described four ways that design has been struggling to prove its impact:

  1. Designers typically do not have a say in what type of product is designed or what problem is being “solved.”
  2. Designers are enablers; they help other organizations make an impact.
  3. Units sold do not equal social impact, yet they are often the only thing tracked.
  4. It is difficult to discern what can be attributed to design, rather than other efforts.

How can evaluation help mitigate and address some of the struggles design faces in proving its impact?

Thanks to the following presentations and speakers for inspiring this post:

  • What Can Evaluators Learn from Design (and vice versa)? – Heather Fleming of Catapult Design
  • How Can Evaluation Contribute To and Influence Program Design? – Michael Bamberger, Thomaz Chianca, & Alexey Kuzmin
  • Systems-Oriented Design Thinking – Jan Noga (Pathfinder Evaluation and Consulting) & Mary McEathron (Rainbow Research, Inc.)
