Tuesday, May 28, 2013

Participatory Action Research (PAR) vs. Applied Social Science as We Know It

There are a number of authors like Bent Flyvbjerg who have done a masterful job spelling out all the reasons why social scientists should stop aspiring to be natural scientists.  See his books Real Social Science and Making Social Science Matter for a full explanation.  The statistical tools available in the social sciences can only help us understand, not prove, why things happen.  Natural science, because of the elegance of the scientific method, can generate causal generalizations ("proofs") that apply reliably across time and place.  Social scientists only know something about very specific situations. Generalizing across people, organizations, communities, and countries is a lost cause. The differences outweigh the commonalities, and efforts to "control" everything but the one thing we want to study are either unethical or impossible. This is not to say that social science isn't important.  On the contrary, the most serious problems we face are social and political, not physical.  "We have met the enemy and he is us." So, it is important to understand how to push social science in a useful direction and produce helpful prescriptive advice for those interested in social change.

There are some social scientists, mostly economists, who aspire to the mathematical rigor of the natural sciences.  They undertake randomized controlled experiments. They give half a community something and not the other half, and see if they can detect statistical differences that prove the poverty-relieving effectiveness of a specific policy or program.  First, you've got to believe that the two halves of the community are initially the same (and stay the same) during the course of the experiment. Second, you've got to believe that everything else stays constant during the course of the experiment. Third, you've got to believe that the experimental effort and not something else caused whatever statistically significant outcome might be found. Mere correlation, though, isn't a basis for reshaping public policy or interfering with people's lives. So, whatever appears to be statistically significant still doesn't explain why good or bad things happen. Put aside whatever concerns you might have about withholding something good or imposing something bad on half a community so that social scientists can test their ideas or a new practice. My view is that social scientists should stop pretending that correlation equals causation. They should admit that the complexity and uncertainty involved make science-like generalizations about people, communities and institutions extremely unreliable.

Instead, social scientists should work harder to make connections to client-communities, agencies and government entities that want help figuring out what they should do about a specific problem (or how they should alter their policies and practices).  And, they should team up with these groups to figure out how to make sense of past and current practices or events.  Alliances of this sort, in which the client calls the shots, may seem less than ideal for scholars who want to make a name for themselves by calling into question the conventional wisdom about some aspect of everyday life or current public policy. Social science scholars who want to be able to frame their own research questions, use methods of analysis that peer-reviewers will approve, remain entirely detached from the places or groups they are studying, and draw whatever conclusions make sense to them are in a special category.  They want to continue following the "let's-pretend-we're-scientists" approach to applied social research.  Instead, I would suggest a very different approach called Participatory Action Research (PAR).

PAR as described by Action Research and Action Science specialists was developed, in part, by my MIT colleague, the late Donald Schon. Davydd Greenwood and Morten Levin's Introduction to Action Research: Social Research for Social Change does a nice job of summarizing the current state of the art.  PAR can produce meaningful results even though it tends to focus on individual cases rather than statistical analyses, large samples or controlled experiments.  PAR puts a premium on local knowledge (what people in the actual situation know from their first-hand experience), not just expert knowledge. And, PAR measures its success by the way in which client-communities feel about the "results" of the research and its usefulness, rather than the way in which peer-reviewers in the social science community feel about its methodological rigor (in traditional terms) or the replicability of the findings. All PAR knowledge is "situated."  That means it is place or case specific.  The goal isn't to come up with provable generalizations.  Quite the contrary, the objective of PAR is to generate what Aristotle would have called "practical wisdom" or usable knowledge, believable to those who have to take action if social change is going to occur.

I will be teaching PAR seminars for the first time at MIT next year. I'm sure there will be an extended discussion of whether PhD candidates will be allowed to substitute these classes for more traditional research methods classes. From my standpoint, there are three reasons why PAR methods should be accepted as a viable alternative to the usual applied social research (read: statistical) methods that doctoral students in applied social sciences are normally expected to master.  First, unless graduate students learn how to interact with a client-community from the beginning of a research effort, they will never learn how to communicate "with" rather than send messages "to" agencies, groups, organizations and institutions seeking to promote social change. And, I would argue, the only ethically defensible role for social scientists is as partners to those who want to promote social change. Second, unless they learn how to make sense of what is happening in a specific case or context (rather than in a randomly drawn sample of places or situations), they will always be limited to analyzing superficial correlations when what they are really interested in is causation. One has to "go deep" to have any hope of diagnosing what is actually going on. Third, unless they learn how to build relationships with the users of actionable knowledge, they will always be offering pronouncements to the "cognoscenti," not collaborating with the people who have the authority to make change.

There are all kinds of dilemmas that surround participatory action research.  Who represents the community, the agency or the client?  What if power is maldistributed inside the client-community? How does a "friendly outsider" (i.e., a PAR researcher) convince a client-community that he or she can be trusted? Who makes the final decision about which data will be gathered and how findings will be analyzed or interpreted?  How should case-specific information be integrated with findings from other cases or even more general findings produced by traditional applied social scientists?  What role should PAR researchers have in formulating prescriptions for action?  Is some kind of collaborative adaptive management possible, in which the PAR researcher stays involved with a client-community as it seeks to monitor results and make ongoing adjustments?

The resistance to PAR is strongest among social scientists who yearn to be part of the natural science fraternity.  The traditional types are more concerned about being respected by other academics than they are about "doing social change."   They fear that advocates of PAR play right into the hands of natural science skeptics who think putting "social" in front of "scientist" is equivalent to putting "witch" in front of "doctor."  PAR practitioners, for their part, worry that traditional social scientists are oblivious to the harm they do when they generalize about social and political phenomena and fail to appreciate the case-specific implications of their findings.

We need to start a different conversation.   PAR teachers and practitioners should focus on explaining to their potential client-communities what they do, and why they do it (and why it would be best to work with PAR researchers, not traditional social scientists).  They should codify the ethical norms that guide PAR in practice so they can be held accountable.  They should think hard about the best ways of integrating what PAR teaches about case-specific situations with the kinds of generalizations that traditional social scientists produce. It may be that graduate students interested in PAR will also have to master traditional social science research methods if they want to be taken seriously in the university. That's twice as much work, but it may be necessary.