Breaking Down the Data on Evaluation

Ellie Buteau

The Center for Effective Philanthropy (CEP) and the Center for Evaluation Innovation (CEI) have been partnering for the past year to create a comprehensive benchmarking dataset on evaluation practices at large foundations — specifically, U.S. and Canadian foundations giving at least $10 million per year in grants. Our report, Benchmarking Foundation Evaluation Practices, which is just out today, contains information about an array of important topics related to evaluation, including staffing for evaluation, dollars invested in evaluation activities, the use of information resulting from evaluations, and the challenges foundations face with evaluation work.

For the past several years, participants in the Evaluation Roundtable, a group founded in the late 1990s by Patti Patrizi and now run by CEI, have gathered to discuss evaluation practices and challenges in foundation work. The Roundtable network is now at its largest size yet, with more than 60 foundations in the U.S. and Canada participating.

A primary goal of CEP’s partnership with CEI was to expand the database about foundation evaluation practices. We wanted to go beyond information gathered from foundations that participate in the Roundtable and include other foundations that also engage in evaluation work — even if they don’t have dedicated evaluation staff or formal evaluation departments or processes.

The data presented in the report represent 127 foundations. At almost 60 percent of these foundations, the staff member who responded to our survey held an evaluation-specific title; at 35 percent, the respondent was a program staff member; and at the remaining six percent, the respondent held a title that fell into neither category.

Some data points in the study seemed familiar in light of past research CEP has conducted. For example, the two issues most frequently cited as receiving too little investment from foundations are disseminating evaluation findings externally and improving grantee capacity for data collection or evaluation. Both of these findings are very much in line with what we found in other recent research initiatives (see Assessing to Achieve High Performance: What Nonprofits Are Doing and How Foundations Can Help and Sharing What Matters: Foundation Transparency).

But other data points were more surprising. For example, in a 2011 study, CEP learned that 65 percent of foundation CEOs said that having evaluations result in meaningful insights for the foundation was a challenge. Today, in 2016, 76 percent of respondents to our survey reported that having evaluations result in meaningful insights for the foundation is at least somewhat challenging. While these two data points come from different respondents within foundations, it seems notable that little has changed on this front in recent years.

Another finding that surprised me — and, given some comments we received during our survey design phase, I imagine it will surprise others as well — is that almost 20 percent of foundations have provided funding for a randomized controlled trial (RCT) of their grantees’ work in the past three years. While 63 percent of these foundations found the RCTs quite or extremely useful in providing evidence for the field about what does and does not work, only 25 percent found them quite or extremely useful in refining the foundation’s strategies or initiatives.

One area of data collection for this study that CEP and CEI knew would be very challenging was dollars invested in evaluation. In the past, CEI and Patti Patrizi have tried a variety of approaches to determining and collecting this figure, so we were able to learn from their experience. Even so, our joint efforts to gather this information in this study still came up short.

Our study indicates that the median spending on evaluation for the previous fiscal year was $200,000. Even though we put no parameters around what respondents should include in, or exclude from, this dollar figure, only 35 percent said they are quite or extremely confident in the dollar estimate that they provided. What qualifies as spending on evaluation — and how much is, in fact, spent on evaluation — are clearly issues about which foundations could use more internal clarity.

For any foundation staff member with questions about the decisions their foundation has made or will make when it comes to evaluation work, this report provides a wealth of information. Our hope is that foundations will use our research as a resource when reflecting on their own practices, and will learn from what their colleagues are doing in order to determine the evaluation structures and processes that best align with their own foundation’s mission.

Ellie Buteau is vice president, research, at CEP. Follow her on Twitter at @EButeau_CEP.