Over on tumblr I got asked what participatory research is, and I kind of wrote a novel? SO HERE:
So, back in the day, plant breeders and agronomists (who are people who study crop management, basically) did experiments on research stations, and emerged with new varieties and new fertilizer recommendations, and they would decree “Everyone should plant variety ICSV 9812 and apply 100 kg of DAP at planting plus 100 kg of Urea at the flag leaf stage” or whatever.
Starting in, oh, the 80s probably, some people (social scientists and NGO types mostly) were like “you know nobody actually does those things, right?” and the scientists went “hmmmmm, I wonder why?” and “how can we fix this?” because scientists like problem solving.*
*Except some scientists, who went “lol we don’t care we have tenure and have you SEEN the roads out there?”
So people started installing experiments on farmers’ fields, and realized that those fields were kinda different from their research stations, and things that worked really well on-station didn’t work all that well in the field. And the good scientists took the opportunity to, y’know, talk to farmers, who sometimes told them things like “yeah but your sorghum matures too early, so the birds eat it.”
And then people started going a little farther, and talking to farmers FIRST, so they could do experiments based on the things farmers were actually interested in! And breeders came up with “Participatory Varietal Selection” in which farmers actually participate in the breeding process, so that you end up with varieties that have the traits farmers want. Which might be “tastes good” or “stores well” or “produces a lot of leaves for animals” and not what breeders usually select for, which is usually grain yield.
When it comes to agronomy, it gets more complicated. Letting farmers pick what they want to try keeps the research relevant to them, but it doesn’t necessarily get you generalizable scientific results.
One big reason for that is that trials in farmers’ fields tend to have suuuuuper high variability. And here is where I pull out the statistics-talk, but I’ll try not to be scary about it. (I put some definitions at the bottom; defined words are starred when they first appear.)
And because it’s easiest to talk about an example, I’ll use one from my research: farmers were curious if they could use less fertilizer if they applied compost, so we did some experiments with different levels of fertilizer with and without compost. I’ll talk here about 2 of those treatments*: fertilized maize with and without compost. We did the same trial on 40 farmers’ fields, with each farmer counting as one replicate*.
In science generally, the key to being sure about your results is that you want the differences between treatments to be bigger than the differences between individual replicates of the same treatment. There are several ways to calculate “different enough,” but one easy visual is to use the standard error. Google it if you want, but it’s basically “how different are the replicates of the same treatment from each other?”
So here are some partial results:
The bars are the mean yields for the two treatments, and the lines (error bars) represent one standard error above and below the mean. The blue bar is “yield without compost,” the green is “yield with compost.”
So if you look at the averages, you’d conclude “hooray! compost makes maize yield more!” (though maybe not that excitedly, because that’s what you expected). But if you said that in a meeting you’d get major sideye from other scientists, because do you see how much the error bars overlap? That overlap means “we can’t tell FOR SURE that with-compost was better, because there’s too much variation among the replicates.”
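To make that concrete, here’s a quick Python sketch with made-up yield numbers (NOT the actual trial data) showing how the means and standard errors behind those error bars get computed:

```python
import statistics

# Hypothetical plot yields in kg/ha -- invented for illustration,
# not the real 40-farmer trial data.
without_compost = [1800, 2400, 1500, 2900, 2100, 1700, 2600, 2000]
with_compost    = [2100, 2700, 1600, 3200, 2300, 1800, 2900, 2200]

def mean_and_se(yields):
    """Mean and standard error: sample std dev divided by sqrt(n)."""
    n = len(yields)
    mean = statistics.mean(yields)
    se = statistics.stdev(yields) / n ** 0.5
    return mean, se

for label, data in [("without compost", without_compost),
                    ("with compost", with_compost)]:
    mean, se = mean_and_se(data)
    print(f"{label}: mean = {mean:.0f} kg/ha, "
          f"error bar from {mean - se:.0f} to {mean + se:.0f}")
```

With these invented numbers the with-compost mean is higher, but the two error bars overlap — which is exactly the “we can’t tell for sure” situation described above.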
Which is super annoying. Now, you can try to fix that by measuring all the things you think might influence your results that aren’t the thing you’re trying to test: maybe the soils are different, or it rained more in one place than the other, or whatever. But in order to do that properly you have to have a lot of replicates, which means convincing a lot of farmers that you want to put this trial in their field, and a lot of work keeping an eye on things so you know they’re doing the trial the same way.
This is why some scientists dislike participatory trials, because it’s hard to get clear results. Others say “hey, all that variability is super important! We should figure out why things work well in some places but not others!” and they spend a lot of time and effort doing that. (Message me if you want a paper on that from a friend of mine). Others say “you know, we actually don’t care about statistically proving that maize does better if you apply compost, because actually we already know that should be true? What’s really important is learning together with farmers, figuring out what they need and how they work, and helping them understand WHY things happen” and etc.
That last one was me with this trial. I don’t care about the results that much. I care about talking to farmers ABOUT the results: why did Fadio get great yields but Baseriba get terrible ones? And also, “OK, but what does this mean in terms of household economics?”
So when we had the results we met with the farmers who’d done the trials, in their villages, and figured out (together) the budgets for the different results.
For me this is super useful, because now I know everything people put on maize and how much it costs! For farmers it’s interesting because they’re not used to doing this kind of calculation. Best comment I got was:
“So, usually the guy who has the highest yield brags to his neighbor a little, because he’s doing really well. But his neighbor might actually be earning more money, if he got a little lower yield but spent a lot less money!”
It’s great when people make my points for me. It’s great to see people learning. And I should point out that most of the people in these meetings are illiterate, and the rest are barely literate. But that has absolutely nothing to do with their intelligence or their ability to understand this stuff. They might not follow all the math (they almost certainly don’t), but they do understand the point.
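The arithmetic behind that comment is simple enough to sketch. All the numbers here — the maize price, the yields, the input costs — are invented for illustration; they’re not the real village budget figures:

```python
# Invented numbers for illustration -- not real village budget data.
price_per_kg = 0.20  # assumed maize price, $/kg

farmers = {
    # name: (yield in kg, money spent on inputs in $)
    "high-yield bragger": (3000, 450),
    "thrifty neighbor":   (2600, 200),
}

for name, (yield_kg, costs) in farmers.items():
    profit = yield_kg * price_per_kg - costs
    print(f"{name}: yield {yield_kg} kg, profit ${profit:.0f}")

# With these made-up numbers, the lower-yield farmer
# ends up with more money in his pocket.
```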
OKAY WELL THAT IS NOW SUPER LONG OH MY GOD NOW FOR THE POINT:
Participatory research treats farmers as part of the research process, not just as “the people who apply the things we learn by doing research.” Doing that well means caring about a lot more than just what you can publish in fancy journals. It usually means caring about farmers’ whole lives, because you’re working with them and treating them as partners. (So I now have on my to-do list “learn whom, and how, to effectively ask about getting working health centers of some kind,” because women asked me to.) It can also be an effective way of teaching farmers things, because most people learn by doing. And it’s a great way to introduce a new variety: we did another experiment with cowpea (black-eyed pea), and now farmers want to know where they can buy seed for it. So we’re a) distributing seed and b) trying to partner with people who know about seed systems, so we can get some farmers producing seeds for their neighbors, because we can’t just keep distributing seed; that’s not our job.
Basically: it becomes about a lot more than “does maize yield more with compost?” It becomes about “how can I help farmers improve their lives?”
Treatment: probably just what you think it is: what you are testing. You usually change only one thing at a time between treatments. In my example: fertilized maize without compost is one treatment, and fertilized maize with compost is another.
Replicate: repetition. You repeat the same treatments a bunch of times so you know your result isn’t because someone fucked up. Not because people don’t fuck up when you repeat more, but because if you have enough repetitions you can assume the fuckups are random. So in my example, maybe one person accidentally puts on too much fertilizer, but someone else doesn’t weed properly. If you only had the one, it’d make your result too high; if you only had the other, it’d make your result too low; but put them together and they basically cancel out.
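Here’s a little Python sketch of that canceling-out idea: each simulated replicate is a “true” yield plus a random fuckup, and the average drifts toward the true value as you add more replicates. All the numbers are made up:

```python
import random

random.seed(42)

TRUE_YIELD = 2000  # the "real" yield for this treatment, kg/ha (made up)

def one_replicate():
    """One farmer's plot: the true yield plus a random fuckup
    (too much fertilizer, sloppy weeding, hungry birds...)."""
    return TRUE_YIELD + random.gauss(0, 400)

# The more replicates you average, the closer you land to the truth.
for n in (2, 10, 40, 1000):
    mean = sum(one_replicate() for _ in range(n)) / n
    print(f"{n:4d} replicates: mean = {mean:.0f} (true value {TRUE_YIELD})")
```

The 40-replicate case is roughly the scale of the trial above: with that many farmers, individual fuckups mostly wash out of the average, even though any single field can be way off.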