Title:

Program Impact Attribution

URL: http://www.toolsofchange.com
Summary:

This document looks at some options for assessing what portion of any measured behavior change resulted from your program and what portion resulted from other influences. These options can also be used to assess the effects of your program on a wide range of related variables such as resources used, pollutants released, accident rates and health status.

Highlights:


 

Experimental Designs, also called Randomized Control Designs (RCDs) and Randomized Controlled Trials (RCTs)

Are you able to randomly assign some groups of people to receive your program now, and others to serve as a control group (who may get your program later)? For example, could you introduce your program first with certain cities, neighborhoods, buildings, floors, departments or tenants? If so, you may be able to use what is called an "experimental design" without adding much effort or cost to your work. Let's say, for example, that you have identified up to 12 different groups that can be randomly assigned in this way. You would divide them randomly into two groups: half would receive your program (the intervention group) and half would not (the control group). Because the selection process is random, the two groups are considered statistically equivalent, so any differences that you measure between the two groups over time are assumed to be a result of your program. One easy way to assign the groups randomly is to roll a die (if you have 6 or fewer groups) or a pair of dice (if you have between 7 and 12 groups). If the number rolled corresponds with a group, that group is selected for your intervention. Repeat until you have selected all of the intervention groups you need.
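The dice-rolling procedure above can also be done programmatically. The snippet below is a minimal sketch using Python's standard random module; the building names, group count and seed are hypothetical:

```python
import random

def assign_groups(groups, n_intervention, seed=None):
    """Randomly split a list of groups into intervention and control.

    A programmatic stand-in for rolling dice; passing a seed makes
    the assignment reproducible (and auditable) later on.
    """
    rng = random.Random(seed)
    shuffled = groups[:]              # copy so the input list is untouched
    rng.shuffle(shuffled)
    intervention = shuffled[:n_intervention]
    control = shuffled[n_intervention:]
    return intervention, control

# Hypothetical example: 12 buildings, half get the program first.
buildings = [f"Building {i}" for i in range(1, 13)]
intervention, control = assign_groups(buildings, 6, seed=42)
```

Because every ordering is equally likely after the shuffle, this produces the same statistically equivalent split as drawing group numbers with dice.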

 

Randomized Encouragement Designs (REDs)

Randomized Encouragement Designs (REDs) are becoming increasingly popular for situations where those who are assigned or offered an intervention will not comply with or accept their assignment, and in other situations where it is not possible to randomly assign people into control and intervention groups. REDs involve selecting a subset of eligible people or households, dividing them into treatment and control groups, and then actively encouraging (hence the name of the design) households in the treated group to undertake the intervention. This approach helps account for free riders (people who would have changed their behavior even without your intervention). Note that, as compared to RCTs in which all households comply with their treatment assignment, the number of households required to obtain a given level of statistical power in REDs increases by a factor of 1/c², where c denotes the share of households that will participate in the program when encouraged.

 

Quasi-Experimental Designs

Are you able to get comparison data from a carefully matched group? In a quasi-experimental design, the comparison group is chosen to resemble your target audience as closely as possible rather than being randomly assigned. As with experimental designs, any differences noted between the two groups over time can be assumed to be due to your program. The more significant the differences between your target audience and the comparison group, the less reliable the attribution is.
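Matching a comparison group can be sketched in code. The following is a minimal one-dimensional nearest-neighbour illustration; the neighbourhood names and income figures are hypothetical, and real matching typically uses several covariates or propensity scores rather than a single number:

```python
def match_comparison(targets, candidates, key):
    """Pair each target unit with the closest unmatched candidate
    on a single covariate (e.g. average household income).

    A greedy one-dimensional nearest-neighbour sketch, not a full
    matching procedure."""
    remaining = list(candidates)
    pairs = []
    for t in targets:
        best = min(remaining, key=lambda c: abs(key(c) - key(t)))
        remaining.remove(best)        # each candidate is used at most once
        pairs.append((t, best))
    return pairs

# Hypothetical example: match program neighbourhoods to comparison
# neighbourhoods by average income (in $1000s).
targets = [("A", 52), ("B", 71)]
pool = [("X", 50), ("Y", 69), ("Z", 90)]
pairs = match_comparison(targets, pool, key=lambda g: g[1])
# A pairs with X (52 vs 50); B pairs with Y (71 vs 69)
```

The closer the matched values, the more plausible it is that later differences between the groups reflect the program rather than pre-existing differences.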

 

Staggered Baseline Designs

Staggered Baseline designs are a further approach to attribution. One of the key benefits of these time-series approaches is that they are simple to apply and to explain to supervisors and other stakeholders. They can also be helpful when you can't randomly assign a control group or find a comparison group. To use a Staggered Baseline design, you must be able to divide your target audience into two or more groups that receive your campaign at different times, and your time frame must allow for ongoing measurement with all of the groups. Data are collected from all of the groups throughout, and because only one group gets the program at a given time, you should see changes occurring in only one group at a time, corresponding to when you are running your campaign with each group.
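A staggered rollout schedule can be laid out programmatically. The snippet below is a simple sketch in which a 1 marks a period when a group's campaign is running and a 0 marks a baseline (measurement-only) period; the group names are hypothetical:

```python
def staggered_schedule(groups, periods):
    """Build a staggered-baseline rollout table: each group starts
    the campaign one period later than the previous group, and every
    group is measured in every period."""
    schedule = {}
    for i, g in enumerate(groups):
        # 0 = baseline measurement only, 1 = campaign running
        schedule[g] = [1 if p > i else 0 for p in range(periods)]
    return schedule

# Hypothetical example: three groups over four measurement periods.
for group, row in staggered_schedule(["North", "Central", "South"], 4).items():
    print(group, row)
# North [0, 1, 1, 1]
# Central [0, 0, 1, 1]
# South [0, 0, 0, 1]
```

Reading down any column, at most one group switches from 0 to 1, which is exactly the pattern of one-group-at-a-time change the design relies on.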

 

Dose Response

Will you be tracking awareness levels or other measures of exposure to your campaigns? If so, you can test for a correlation between your exposure data and impact data. While this won't demonstrate cause and effect, a strong correlation between exposure and response indicates a clear relationship between your work and the outcomes you are measuring.
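The exposure-impact correlation test can be sketched as follows. The Pearson correlation coefficient is computed from scratch here, and the awareness and adoption figures are hypothetical survey data invented for illustration:

```python
from math import sqrt

def pearson_r(exposure, impact):
    """Pearson correlation between exposure and impact measures.
    Values near +1 suggest a strong positive relationship, though
    not cause and effect."""
    n = len(exposure)
    mx = sum(exposure) / n
    my = sum(impact) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(exposure, impact))
    sx = sqrt(sum((x - mx) ** 2 for x in exposure))
    sy = sqrt(sum((y - my) ** 2 for y in impact))
    return cov / (sx * sy)

# Hypothetical data: % aware of the campaign vs. % adopting the
# behavior, measured in five neighborhoods.
awareness = [10, 25, 40, 55, 70]
adoption = [4, 9, 15, 20, 27]
r = pearson_r(awareness, adoption)  # close to 1.0
```

A value of r near +1 (as in this made-up data) would support a dose-response story; values near 0 would suggest the outcomes track something other than campaign exposure.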

The following are some additional measures of both reach and depth of campaign exposure.

  1. Seeing campaign messages in emails
  2. Seeing campaign messages on posters
  3. Interpersonal discussion about the campaign
  4. Seeing or hearing about particular elements of the campaign, and
  5. Participation in particular elements of the campaign

 

Reference

Randomized Encouragement Design: US DOE and Berkeley Labs, 2010. U.S. Department of Energy Smart Grid Investment Grant - Technical Advisory Group Guidance Document #7. https://www.smartgrid.gov/sites/default/files/pdfs/cbs_guidance_doc_7_randomized_experimental_approaches.pdf

Topics: Clean Air
Location:  
Resource Type: training and toolkits
Publisher: Tools of Change
Date Last Updated: 2014-11-26 17:20:58
