Influencing Philanthropic Practice Responsibly

Phil Buchanan

CEP’s focus is on influencing foundations to be more effective. We seek to do that through rigorous, data-based research. But sometimes, looking around at what is published and discussed, we ask ourselves, what if no one cares about the research and data?

What we really mean is, what if people don’t really differentiate — or aren’t given enough information to differentiate — between research and opinion, between facts and theory, between data and assertion, between evidence and an inspiring story, between that which is known to be true and that which is simply asserted to be true (or wished to be true)?

I’m not talking about anything as nefarious as “fake news” or “alternative facts,” but rather a more subtle erosion of the distinction between what we really know and what is asserted to be knowledge.

Overstated Claims in Business Literature

Here’s an example. Harvard Business Review published a cover story last April titled “Culture Is Not the Culprit.” The title on the print cover was even more strongly worded: “You Can’t Fix Culture: Focus on Your Business and the Rest Will Follow.” Those are interesting and provocative titles, and the article was written by respected Harvard Business School (HBS) Professor Jay Lorsch and his research assistant. So I thought, Wow, I’ve got something to learn here, because I always thought culture was important!

Was I wrong? I turned to the article eagerly.

“When organizations get into big trouble, fixing the culture is usually the prescription,” Lorsch and his co-author write. “But the corporate leaders we have interviewed — current and former CEOs who have successfully led major transformations — say that culture isn’t something you ‘fix.’ Rather, in their experience, cultural change is what you get after you’ve put new processes or structures in place to tackle tough business challenges.”

OK, interesting. But, wait, how many interviews did they conduct to reach that conclusion?

The answer? Four. As in one, two, three, four. As in the average size of an American family in the 1960s. As in the number of Beatles. Not 44, not 400. Four.

All the interviews were with men who run major companies. So basically HBR — at the very least in the headlines it chose and I’d argue also to a certain extent in the article — was representing the subjective opinions of four powerful guys who run companies as conclusions that should influence the practices of its readers. (I also discussed this article in a blog post I wrote for Social Velocity last spring.)

This kind of thing happens all the time. I blogged about another example, also from an HBR cover story, in 2014, in which two HBS professors drew conclusions about perspectives on work-life balance, and differing views between men and women, based on a survey of 82 executives in an HBS class that included just 24 women. Twenty-four! Yet the statistics were arrayed in the pages of HBR as if meaningful, generalizable comparisons could be drawn between the populations, and as if these populations were representative of something more than the enrollees in a particular class.

It Happens in Philanthropy, Too

Just as the line between objective reporting of news on the one hand and analysis and opinion on the other has become blurred in the mainstream media, the line between research and opinion or theory (or half-assed guesses) has become similarly blurred among those seeking to influence leaders. It happens in the philanthropy world, too.

Sadly, many of the influential articles and reports about the practice of philanthropy — ones that assert that this approach or that approach is more effective than other approaches — are rooted in no actual research whatsoever. Others suffer from significant weaknesses: flimsy or bad methodology, little or no transparency about methodology, small or unrepresentative samples whose limitations are not forthrightly disclosed, lack of acknowledgment of the effect of different contexts, and exaggerated claims that aren’t supported by the data.

I am not arguing that every piece of research can or should be comprehensive, or that there is such a thing as methodological perfection. What I am arguing is that the basis for claims should be transparent and clear. And that we should be more careful in the first place about making sweeping claims.

Although there will never be a formula for philanthropic impact, articles claiming to have found one inevitably get a lot of attention. Because it’s hard to break through to an audience, the temptation is to go with the boldest, biggest possible claim. (This is hardly a problem particular to philanthropy research, I know.)

The audience for philanthropy practice knowledge is feeling deluged, after all. A field study conducted by Harder & Co. and Edge Research for the William and Flora Hewlett Foundation examined “how foundations find knowledge and how it informs their philanthropic practice.” The Harder/Edge report states, “Interviewees frequently noted that they feel overwhelmed by the volume of practice knowledge available, and the survey results suggest average loyalty to individual knowledge producers is low.”

In fact, respondents “preferred sources and methods for gathering practice knowledge that are informal and often serendipitous,” the field study finds, relying especially on “their peers and colleagues, as opposed to particular organizations or publications, both as their most trusted knowledge sources and as their preferred means to gather knowledge.”

Perhaps they are turning to their peers because they trust them most? After all, just under half of survey respondents agreed that the knowledge produced for funders “is vetted/it works!” That’s pretty brutal.

What I take away from the Harder/Edge study (in addition to some other somewhat more affirming stuff that I won’t go into here) is that foundation staff don’t have a ton of faith in much of the knowledge work out there. But they also don’t have the time to figure out what’s credible and what isn’t.

Responsibilities of Philanthropy “Knowledge” Organizations

So what should those of us who seek to use knowledge to influence philanthropic practice for the better do? Here are three ideas.

  • We have a responsibility to be responsible. Just because we can gain attention or influence practice with work that isn’t sufficiently backed up by evidence doesn’t mean we should. We should be especially careful when we fervently believe something is true, because believing isn’t the same as knowing — and “confirmation bias” can creep into our work. We should stay away from sweeping claims and be forthright about the limitations of our work. And we should make clear the nature of our relationships — who funds us, who our clients are, and so on.
  • We have a responsibility to be open and clear about exactly what informs our findings. There is simply no excuse for not providing readers with clarity about what is behind our work. More often than not, it makes sense to explicitly lay out the limitations of our approaches. The Harder/Edge field study models this well.
  • We have a responsibility to challenge each other — publicly and privately. We can’t expect our audience to dive into the methodology section of every report we put out, but we “knowledge” organizations can and should challenge and debate each other. We should all welcome pushback and argument about what we produce (especially when we’re making big claims), because none of us are smart enough to get everything right on our own. The overarching concern should be building knowledge so that practice is better — not promoting or blindly defending our own organizations. Debate and disagreement are healthy.

No one will force us to do these things. Many of the organizations seeking to influence or inform foundations, including CEP, do a lot of self-publishing. This means we’re policing ourselves rather than relying, say, on peer-reviewed journals. Such journals exist, of course, to help ensure that claims are sufficiently backed up. But, while important, they are unlikely to be the go-to resource for practitioners — in business management or in philanthropy.

So we need to hold ourselves accountable if we want to be seen as credible. One way we do that at CEP is by sharing drafts with a wide assortment of advisors and asking them to beat up our work before we publish it.

Writing up research findings can feel like a tough balancing act. We have certainly heard calls from some, when seeking feedback on draft reports, to boil down or simplify our findings or make clearer what the “to-dos” are for readers. Sometimes there is only so far you can go in these directions while remaining faithful to the research and data.

Ultimately, we should err on the side of caution in what we say. It is better to understate our findings than overstate them. Otherwise we risk reducing the credibility of all of us who seek to produce knowledge for foundation staff.

After all, seeking to influence the practice of others isn’t something we should do lightly.

Phil Buchanan is president of CEP. Follow him on Twitter at @philxbuchanan.
