Speeding Up Progress: Integrating Social Science into Political Strategy

Choosing the best possible tactics and strategies is essential for victory. Here are some ways to employ campaign tactics so that it's easier to measure how well they're working.


My first job after college was with Environment America, an advocacy group. I quit because I wasn’t persuaded by their theory of change: they didn’t convince me that there were good enough reasons—either theoretical, or previously demonstrated in some other place or time—why the chosen strategy for my campaign would achieve the goals in mind. This is a problem we must do our absolute best to mitigate, especially since we, the Left, are often fighting an uphill battle against extremely organized, powerful (often business) interests that have their playbook for political success figured out.

When we fight for political progress, we employ tactics and strategies. But we don’t always have the resources or know-how to figure out if they actually succeed. We can understand what we do, and we can observe degrees of victory or loss, but we don’t often know if what we did caused victory—or which of our tactics caused certain political outcomes that we intended.

This article can’t magically create more time or people to closely study how well our theories of change are working, but it does lay out some basic tips for groups—big and small—to employ tactics in such a way to make them more easily evaluable. These tips come straight from social science, used by researchers to study exactly what we should care about: what exactly our tactics and strategies are causing.

There are tons of new (and variations of old) strategies being tried out in the world (e.g., Sunrise’s protest in Pelosi’s office in November 2018, The Climate Mobilization’s effort to get cities to declare “climate emergencies”), and many are certainly not being evaluated to the degree they could be. In an ideal world, we would understand the effectiveness of every political action taken, so that we know what to try in other places, times, and for other issues. How do we get people to show up to our meetings? What tactics garner the most media attention? What effects are direct actions actually having? How do we get decision-makers to perceive our power? What size of coalition will we need to win?

The more quickly and thoroughly we can answer these questions, the sooner the climate Left, and our allies, can win. To be sure, individuals and groups engaging in politics have great ideas about what tactics and strategies work, and where, when, and how they'll work. But it's always possible that we're misperceiving the effects of our actions. That's where evaluating, using social-scientific methods, comes in.

These are the tools used by researchers. In the hands of grassroots groups (big and small), they can (A) help groups design political actions so that they can be studied more easily, (B) then evaluate the effectiveness of their own tactics and strategies (with hopefully minimal effort), and even (C) help groups clarify what exactly their intended goals and sub-goals are.



Experiment with tactics yourself

The key to figuring out whether some tactic works better than others is to randomize the use of such a tactic between a treatment and control group. Some easy examples include communications to potential group members. What happens when you change the persuasive pitch in an email to potential members who signed a petition, assigning different pitches to randomly selected sub-groups on your list? How about when you try out different versions of door-knocking or phone-banking scripts? Or even the wording on physical flyers or social media info about events, seeing if you get different levels of turnout with different messages?

Researchers talk about randomized controlled trials as the “gold standard” of scientific tests. Scientific testing of our tactics is the goal here, and the key to it is randomizing who, when, where, etc. is on the receiving end of the tactic. If you have a few tactics you’re thinking about, randomly assign them to different groups you’re trying to attract, or to locations, or times of day, and then take note of their effectiveness.
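As a concrete sketch, here's what the simplest version of this might look like: randomly splitting an email list between two pitch versions. Everything here is hypothetical, the contact addresses, the pitch labels, and the `assign_pitches` helper are illustrative, not a real tool.

```python
import random

def assign_pitches(contacts, pitches, seed=0):
    """Randomly assign each contact one of several pitch versions.

    Shuffling first, then dealing pitches round-robin, keeps the
    groups equal in size, so later comparisons of sign-up rates
    aren't skewed by one group being much bigger than another.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = contacts[:]
    rng.shuffle(shuffled)
    return {contact: pitches[i % len(pitches)]
            for i, contact in enumerate(shuffled)}

# Example: split a (made-up) petition-signer list across two email pitches.
contacts = [f"signer_{n}@example.org" for n in range(100)]
groups = assign_pitches(contacts, ["pitch_A", "pitch_B"])
```

After the emails go out, you'd compare whatever outcome you care about (opens, RSVPs, sign-ups) between the two groups; the random split is what lets you attribute any difference to the pitch itself.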

In general, we should be interested in two categories of things: (1) if one particular tactic has an effect (compared to the absence of that tactic) and (2) the relative effectiveness of alternative tactics and strategies.

As an example of the first kind, maybe you’re a network of climate justice groups getting ready to choose which precincts, neighborhoods, or districts to work in for an upcoming election. If you have the resources to work in five of these locations, there’s power in making a list of the ten best options and then randomly choosing the five in which to employ your tactics. You can then pick all the outcomes you care about (for example, media hits, public opinion poll measures, political candidate attention on your issue, and eventually policy passage) and very simply compare the quality of outcomes in the five locations where you worked versus the five where you didn’t. In this way, you can see how well your tactics and strategies worked.
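A minimal sketch of that design, with a simple difference-in-means comparison at the end. The precinct names, the outcome numbers, and both helper functions are assumptions for illustration:

```python
import random

def randomize_locations(candidates, k, seed=42):
    """From a ranked shortlist, randomly pick k locations to work in
    (the 'treatment' group); the rest become the comparison group."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    treatment = rng.sample(candidates, k)
    control = [c for c in candidates if c not in treatment]
    return treatment, control

def mean_difference(outcomes, treatment, control):
    """Difference in average outcomes (e.g., media hits per location)
    between where you worked and where you didn't."""
    t = sum(outcomes[x] for x in treatment) / len(treatment)
    c = sum(outcomes[x] for x in control) / len(control)
    return t - c

# Example: ten shortlisted precincts, resources to work in five.
shortlist = [f"precinct_{n}" for n in range(1, 11)]
worked, skipped = randomize_locations(shortlist, 5)
```

After the campaign, you'd fill an `outcomes` dictionary with whatever you measured per precinct and call `mean_difference(outcomes, worked, skipped)` to see how the places you worked compare to the places you didn't.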

In order to get at the second kind, in a similar manner, you might have one goal (outcome) in mind that’s measurable, and then randomly assign—to locations, people, etc.—the possible tactics.

As an example, maybe you’re a coalition of neighborhoods in a city trying to change the minds of city councillors on an issue, and your tactics in mind include a letter-to-the-editor campaign, direct actions, or a bunch of calls to their respective offices. If you’re not sure which is likely to work better, you could randomly assign these various tactics to different wards in your city, and see which ones seem more efficacious in swaying the representatives. Similar to the first example, you could pick the outcomes you think are indicators of success (e.g., councillor statements on the issue or votes on bills in your favor), and see which tactics produce higher values of these indicators.
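Comparing alternative tactics like this can be sketched in a few lines: group each ward's outcome score by the tactic it was randomly assigned, then average. The ward names, tactic labels, and outcome counts below are all hypothetical:

```python
from collections import defaultdict

def average_by_tactic(assignments, outcomes):
    """Average the outcome scores of wards, grouped by which tactic
    each ward was randomly assigned, to compare apparent effectiveness."""
    scores = defaultdict(list)
    for ward, tactic in assignments.items():
        scores[tactic].append(outcomes[ward])
    return {tactic: sum(v) / len(v) for tactic, v in scores.items()}

# Hypothetical example: four wards, two tactics, and outcomes measured
# as favorable councillor statements after the campaign.
assignments = {"ward_1": "letters", "ward_2": "letters",
               "ward_3": "calls", "ward_4": "calls"}
outcomes = {"ward_1": 2, "ward_2": 4, "ward_3": 1, "ward_4": 3}
averages = average_by_tactic(assignments, outcomes)
```

With only a handful of wards per tactic, treat the averages as suggestive rather than conclusive; the random assignment is what makes the comparison fair, and more wards per tactic make it more trustworthy.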

The point of all this is that you, the group choosing how to dedicate resources (likely people's time, possibly money) to reach your goals, have serious power to employ tactics and strategies designed so that their effectiveness is easy to understand. The more you can do this, the more immediately you can figure out what works. And in the likely scenario that you don't feel you can dedicate the energy to study what you're doing (since it's so urgent that you need all your people to just go do stuff), employing tactics this way makes it far more possible for researchers to aid you, often at no cost to you, in studying campaign results during or after the campaign.

(An organization called research 4 impact exists to connect political projects with researchers, so that tactics and strategies can be studied and relationships between academic and political communities can be furthered.)


The cutoff-point-on-a-list “experiment” (researchers call this a “regression discontinuity”)

Beyond the time and energy it might take to design tactics so their impact can be assessed, this approach has a major downside: maybe you actually have a really strong feeling about the tactics that will work, or the locations where they'll work, and you don't have the leeway to randomly assign anything. Maybe the opportunity cost of doing so is too big.

There’s a sneaky “natural experiment” that can sometimes work in cases like this. If you have a list—maybe it’s locations to target, maybe it’s coalition partners to approach, maybe it’s political candidates to target—from which you must choose a certain number of them, you can draw a line between those chosen and those barely not chosen. For example, maybe you have a list of precincts to organize, and you choose ten of them. Those “barely not chosen” would be the 11th, 12th, and 13th precincts.

If the units (in this example, precincts) on either side of the drawn line are similar enough in kind, you can (similarly to above) treat the few above the line as “randomly” assigned to receive the tactic or strategy in mind. It’s like saying those that barely made the cut are the “treatment” group, and those just below the line are the “control” group. If you pick the outcomes you care about, this natural experiment allows you to see whether they’re noticeably different between these two groups after you’ve employed your tactics.
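That cutoff comparison can be sketched as follows, assuming a ranked list of precincts, a hypothetical outcome score per precinct, and a small "bandwidth" of units on each side of the line (all names and numbers here are made up):

```python
def cutoff_comparison(ranked_units, outcomes, cutoff, bandwidth=3):
    """Compare average outcomes for units that barely made the cut
    versus units that barely missed it.

    ranked_units: list ordered best-first; the first `cutoff` units
    received the tactic. Only units within `bandwidth` of the line
    are compared, since those are the ones most similar in kind.
    """
    barely_chosen = ranked_units[max(0, cutoff - bandwidth):cutoff]
    barely_missed = ranked_units[cutoff:cutoff + bandwidth]
    t = sum(outcomes[u] for u in barely_chosen) / len(barely_chosen)
    c = sum(outcomes[u] for u in barely_missed) / len(barely_missed)
    return t - c
```

For the article's example (ten precincts chosen from a ranked list of thirteen), `cutoff=10` compares the 8th-10th precincts against the 11th-13th. The comparison is only credible if those six precincts really are similar; the farther from the line you reach, the weaker the "as-if random" assumption gets.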

It’s not surprising that many political strategies go understudied. After all, it can be super hard to rigorously study the effects of what we’re doing. We have limited resources, and sometimes we just have to act, basing our choices on the wisdom of elders and on logic passed down and around organizing communities. But the faster we know what works best, the sooner we win. So next time you’re planning one or multiple tactics, consider whether the techniques listed in this article are suitable to try out.


The results of your experimenting may not only help you—they may help campaigns all over the world. And if you do try anything like this, be sure to let us all know.


Sam Zacher is a PhD student in political science at Yale University studying how interest groups can improve their strategies for success on issues like climate change. He tweets @samzacher.


Guess what didn’t fund this article…advertisements! Big corporations! Billionaires! What did fund this article? Just donations from our readers and the odd grant. You could be one of those people funding new essays on the most important issue of our time. And if you already are, thank you!