What should pollsters do in the absence of a horse race?
A new episode of Cross Tabs, featuring Ariel Edwards-Levy
We're back from a short break with a new episode of Cross Tabs, featuring Ariel Edwards-Levy, CNN’s editor of polling and election analytics. Ariel has spent over a decade helping readers and viewers make sense of political opinion research—and working to use surveys to ask better questions.
In this conversation, we dig into what polling can (and can’t) tell us about how people think, what framing and partisanship do to data, and why polling in non-election years is often the most revealing.
We talked about a lot of things, but one of the best parts of the conversation was about the difficulty of responding to a question about an issue (or a candidate) at a high level of abstraction — think “if” questions, that is, “if a candidate supported overturning Roe v. Wade, would you support him?” or “if candidate X ran for president, would you vote for her?” These things haven’t happened yet; they lack any kind of grounding or context. How are you to know what you would think about that, especially when all you’ve been exposed to is the notion of it happening, and not any kind of … let’s say prototype of it happening?
There is a quote from Steve Jobs that is often butchered into something like, “you can’t ask people what they’ll want in the future.” But the full context of what he said included acknowledging that while this is hard, a lot of the time you can ask people to react to something they haven’t seen before — you just have to show it to them.
It’s easy to think this means people won’t know what they want until it’s right there, fully formed, in the world. And I think within the confines of typical issue polling, this is basically right.
As Ariel put it in our conversation,
I think it's a really good reason to be very cautious about forward-looking questions that ask people how they might hypothetically feel about something if it happens. Because I think that is so hard for people to answer, and often those responses get really tempered by reality. So there's always this sort of obsession where we know something could happen and people want to sort of predict and forecast and want to ask these questions like, okay, so if somebody did this, would this make you more likely to support them or less likely to support them? And I kind of hate questions like that because I think it's frequently not completely tethered to what's going to happen. And I think it's hard to use those as predictive values when I think people themselves don't necessarily know how they're going to feel about things.
On how badly I want to use my tools on their projects
I’d like to offer an alternative approach, something we do in the universe of marketing research. We make adlobs — ad-like objects. They look, to the untrained eye, like an ad. There’s an image, a headline, a block of copy, a call to action. We show these adlobs to people and ask them what they make of them. We might ask questions like:
What’s the main message coming across to you from this?
Is that message important to you? Is it relevant to your life?
Who is this coming from? Is it credible coming from them?
Is there anything confusing or unexpected?
What questions does it bring up for you?
What are you most likely to do as a result of seeing this?
(Granted, these questions are in the format of a qualitative interview, so you’d have to modify them for quantitative analysis, but look, I’m trying to get a newsletter out here, okay?)
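But suppose you did want to take that battery quantitative. Here’s a minimal sketch, assuming you’d convert most of those probes into closed-ended items; the IDs, wording, scales, and answer options below are my own inventions, not a standard instrument:

```python
# Hypothetical translation of the qualitative adlob probes into a
# closed-ended survey battery. Item IDs, wording, and scales are
# illustrative assumptions, not a standard instrument.

LIKERT_5 = ["Strongly disagree", "Disagree",
            "Neither agree nor disagree", "Agree", "Strongly agree"]

ADLOB_BATTERY = [
    {"id": "main_message", "type": "open_end",
     "text": "In your own words, what is the main message of this ad?"},
    {"id": "relevance", "type": "likert", "scale": LIKERT_5,
     "text": "The message in this ad is relevant to my life."},
    {"id": "credibility", "type": "likert", "scale": LIKERT_5,
     "text": "This message is believable coming from this company."},
    {"id": "clarity", "type": "likert", "scale": LIKERT_5,
     "text": "Something in this ad was confusing or unexpected."},
    {"id": "intent", "type": "single_choice",
     "text": "What are you most likely to do after seeing this?",
     "options": ["Nothing", "Look it up", "Mention it to someone", "Buy it"]},
]
```

Note that the main-message question stays an open-end; you still want the playback in people’s own words.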
We sometimes use tools that let people interact with virtual environments and the choices they might face in those spaces. For example, we might create a virtual store environment and allow respondents to pick items off a shelf and put them in a virtual cart. We might give them an allowance and a scenario: they only have $100, and they have to get something for dinner tonight for the family. What goes in the cart?
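For a sense of the data a task like that yields, here’s a toy sketch. The items, prices, and the rule treating the allowance as a hard cap are assumptions for illustration; real shopper-simulation tools capture far richer interaction data:

```python
# Toy model of a virtual-shopping task: respondents pick items off a
# shelf, and picks are accepted until the allowance runs out.
# Items and prices are hypothetical.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    price: float

def fill_cart(shelf_picks: list[Item], allowance: float = 100.0) -> list[Item]:
    """Accept picks in the order they were made until the budget is spent."""
    cart, total = [], 0.0
    for item in shelf_picks:
        if total + item.price <= allowance:
            cart.append(item)
            total += item.price
    return cart

picks = [Item("rotisserie chicken", 8.99),
         Item("salad kit", 4.49),
         Item("garlic bread", 3.29)]
print([item.name for item in fill_cart(picks)])
```

The thing you analyze is the cart itself, not what people say they would buy.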
Other times, we want to know if an ad stands out. We might show them a three-minute reel of content — a mix of the ad we’re testing, a competitor’s ad, an ad for something else entirely, a short clip of a news item, and some other snippet of entertainment. We ask people what they remember and what stood out. We ask questions about the brand or product we’re studying to see if they ‘learned’ anything about it from that exposure, what opinions they formed based on what little they saw, and so on. (Importantly, we don’t tell them ahead of time what they’re watching or what to watch out for.)
When we’re feeling especially low-fi, we simply write a short paragraph describing the company or product, what we think the key benefits or “reasons to believe” are, and try to articulate a clear value proposition. We can serve this to people in a survey instrument using a highlighter tool. They can highlight key words or phrases that they really like, or really don’t like. And then we can ask a battery of diagnostic questions about the statement, as well as a simple open-end asking them to reflect on what they were shown.
Here’s how that works in most survey tools (in this instance, Qualtrics), in case you’ve never tried it. At the end you get a heat map and statistics on which words and phrases got the most likes and dislikes, and you can compare that to the other diagnostics and the open-end responses to understand more about why people liked or disliked something.
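The mechanics behind that heat map are simple enough to sketch. This assumes a made-up response format (Qualtrics exports its own, different structure): each respondent supplies word ranges tagged as likes or dislikes, and you tally them per word:

```python
# Tally per-word like/dislike counts from highlighter responses.
# The response format here is an invented simplification.

from collections import Counter

statement = ("Fresh groceries delivered in under an hour "
             "at everyday prices").split()

# Each response: list of (first_word_index, last_word_index, tone) tuples.
responses = [
    [(0, 1, "like"), (7, 9, "dislike")],
    [(3, 6, "like")],
]

likes, dislikes = Counter(), Counter()
for highlights in responses:
    for start, end, tone in highlights:
        counter = likes if tone == "like" else dislikes
        counter.update(range(start, end + 1))

for i, word in enumerate(statement):
    print(f"{word:12s} +{likes[i]} / -{dislikes[i]}")
```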
The point of all these approaches — whether done in person in a qualitative environment, or done via a survey — is that they attempt to contextualize and ground an idea in a format that feels familiar and concrete to the person interacting with it.
You’d be surprised how quickly people will react to an adlob as if it were a real ad; most people don’t know enough about advertising to discern a real ad from an adlob.
The objections I hear to these approaches chiefly have to do with the perceived expense and, in the context of a news organization’s polling team, the appropriateness of this kind of exercise.
And of course, I’m sure there are organizations using some of these methods — it’s just not the sort of thing that gets shared publicly, and I think that’s a real shame. Part of the purpose of polling, especially within a journalistic enterprise, is to use it as a form of reporting at scale and over time. But there’s a big difference between asking people to fill out a questionnaire online and spending time with people in their communities. One is a kind of practice; the other is the game1. How do we get practice to feel more like the game?
My point is, I think there’s room for doing a lot more experiments, especially when they make the abstract concrete, and make the hypothetical immediate.
And off years, when there are no horse-race predictions to be graded on later, are the perfect time to do them.
Everybody has a plan until they get punched in the face
I’m basically obsessed with pollsters who try to get to the nuance of people’s opinions. I love comparing what people said about a policy before it happened, and what they said about it after it happened. I love comparing what people say about something they take for granted, and what they say when it’s taken away. I love pulling apart an issue into its practical implications and figuring out where the breaking points are.
Because I honestly have no idea what it means for people to say they think this candidate or the other is “stronger” on an issue, or that you “trust them more” on a policy. To begin with, I don’t know what “stronger” means — are they more strongly opposed to it, more strongly in favor of it, will they be able to handle it with steely resolve, or resist changing it, or ably adapt when it changes? Trust them to do what? Keep it, manage it, get rid of it, make fun of it, pretend it doesn’t exist…? What do people mean?
But also, what is their mental model of that issue? Ariel and I discussed, as an example, questions about top issues, and what people have in mind about the issue when they choose it. Is immigration your top issue because you personally experienced the system and wish it was faster and easier to navigate? Or is it your top issue because you believe that immigrants are making your community less safe? Or is it something else?2 I think a lot about something Joe Kahn said in an interview with Semafor’s Ben Smith last spring,
“It’s our job to cover the full range of issues that people have. At the moment, democracy is one of them. But it’s not the top one — immigration happens to be the top [of polls], and the economy and inflation is the second.”
Fine, Joe, fine. But what really are these issues? What do people have in mind when they choose immigration, inflation or the economy? With the exception of inflation specifically, what people want from immigration or the economy covers a lot of disparate opinions and preferences.
My biggest problem with a lot of polling is that many politicians, pundits and pollsters use surveys, as David Ogilvy once said, as a drunk uses a lamp post — for support rather than illumination.
I prefer illumination. And thankfully, so does Ariel.
Hope you enjoy the episode. And don’t forget to check out Ariel’s latest piece, CNN Poll: A record share of Americans want the government to get more done. Few trust either party to do it.
I’m sorry it’s required by law. I don’t make the rules.
This work is particularly interesting. From the abstract: “The CSAS (crowdsourced adaptive surveys) method converts open-ended text provided by participants into survey items and applies a multi-armed bandit algorithm to determine which questions should be prioritized in the survey. The method’s adaptive nature allows new survey questions to be explored and imposes minimal costs in survey length. Applications in the domains of misinformation, issue salience, and local politics showcase CSAS’s ability to identify topics that might otherwise escape the notice of survey researchers.”
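To give a flavor of the bandit piece, here’s a toy Thompson-sampling sketch. It is not the paper’s implementation, and the engagement “reward” is an assumption made purely for illustration; the idea is just that candidate questions compete for limited survey slots, and the ones that earn rewards get asked more often:

```python
# Toy Thompson-sampling bandit over candidate survey questions.
# The "reward" (did the question resonate?) is a stand-in signal,
# not how the CSAS paper defines it.

import random

questions = ["Q_housing", "Q_transit", "Q_schools", "Q_misinfo"]
wins = {q: 1 for q in questions}    # Beta(1, 1) priors
losses = {q: 1 for q in questions}

def pick_question() -> str:
    """Sample each arm's Beta posterior and ask the highest draw."""
    return max(questions, key=lambda q: random.betavariate(wins[q], losses[q]))

def record(q: str, rewarded: bool) -> None:
    (wins if rewarded else losses)[q] += 1

# Simulate respondents; pretend Q_transit resonates most.
true_rate = {"Q_housing": 0.3, "Q_transit": 0.6,
             "Q_schools": 0.2, "Q_misinfo": 0.4}
for _ in range(500):
    q = pick_question()
    record(q, random.random() < true_rate[q])

print(max(questions, key=lambda q: wins[q] / (wins[q] + losses[q])))
```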