NJ Polling Center Moves Away from Candidate-Preference Polling

In an election year, forecasting feels like the biggest game in town. Every poll brings fresh findings, a different sample, a new story. In this flood of data, though, actionable results become harder to identify, fueling mistrust or apathy among the general public. But good polling is happening everywhere, and not just in election contexts. We just need a better understanding of how it serves us.

Alyssa Maurice, Research Associate at the William J. Hughes Center for Public Policy at Stockton University, talked with CivicStory about the role of polling in a healthy democracy, the ways that polling is changing, and how we consume polls in the media. 

This is our second conversation with Alyssa. Our first, covering the basics of polling, was shared on our blog in October 2023.


CivicStory: About a year ago, the Hughes Center shifted away from “candidate-preference” polling to “issues-oriented” polling. What considerations led to that shift? 

Alyssa Maurice: We were concerned about the critical coverage of these election polls—the candidate-preference polls that try to forecast an election. Mainly, we worried they were diminishing the credibility of other high-quality polls focused on issues rather than candidates. Because they're two fundamentally different things, right? With election polling, you're trying to study a population that doesn't exist yet. We can't see the future; we don't know who's going to show up to vote. And we have to try to forecast that.

Whereas, with issue polling, you know the population you're trying to study. You're reaching out to all residents, trying to see where they stand on a particular issue. So some of the problems that we've seen arise with election polling don't impact issue polls.

Unfortunately, a lot of people don't know that distinction; they just see general coverage of polling that says, “They're completely wrong, they're not credible.” We were just concerned that heightened scrutiny on election-focused, candidate-preference polls would diminish the other quality work being put out.

Issues polling is a democratic tool meant to tell policymakers where people stand on various issues. We felt that would better serve the public.

Have you observed a different response from the public since making the change? 

It's interesting—around election season, the response rates pick up, which can be an issue for accuracy. Researchers have pointed out that when it comes to those candidate-preference polls, those politically charged polls, you get a lot of strong partisans who want to participate. You end up with extremes on both sides.

But when you're asking about the issues more broadly, there's not as much partisan polarization. The response rates aren't quite as good; people aren't as fired up to take them.

Regarding partisanship: we talk a lot about horserace-style election coverage. Major news outlets usually struggle to foster healthy participation in their own polls, but then rely on the results to drive reporting—especially their election coverage. Sometimes the polling sample is not disclosed, or it's buried deep in the story. What have you observed about how media institutions conduct and publish polls, as compared to the work being done by polling centers like yours?

I think it's a bit of a problem in the industry. In some ways, it could be a good thing that polling has become democratized—a lot of people have the resources and tools to do it now. But that doesn't mean everyone is doing it to the same standard. Some media outlets have treated all polls as equal, and that's problematic. 

More and more, it's important for consumers to be mindful and to dig into some of the things that you mentioned. Like, what was the sample size? How were respondents selected? How were the questions worded? You should be able to see the exact questions they asked; who they reached out to; the mode of interview—did they call them or text? Any credible pollster should disclose all of that when publishing.

It's important to be critical consumers of the polls we read about. Take the 2022 election cycle. I saw coverage relying on not-so-credible polls that anticipated this red wave. And then when it didn't happen, there was this larger narrative that polling was completely wrong. But really, polling performed very well that year. People were just not given the information to help them distinguish between a quality survey and one that's not.

In light of that, has the Hughes Center team had to preemptively address mistrust when publishing—or even change the way you publish your findings?

It's an interesting question. Basically, we don't know the population that will actually turn out. We make estimates of who's going to show up and then try to communicate what led us there—probabilistic scoring based on someone's intent to vote, their enthusiasm, and their past voting history.

But ultimately, we don't know. Communicating that uncertainty is something we would have had to think through if we had continued to do those types of polls. Because there's sort of an art to it. You can produce multiple models with the same data. It's not exactly like issues-based polling, which is a little more cut-and-dried. So that's an option for pollsters: communicating the level of uncertainty and showing possible outcomes using different models.

Why do you think some pollsters across national news media struggle with disclosing or explaining that level of detail? 

I imagine the goal is to make things really easy for readers to understand. And getting into the nitty-gritty of polling is maybe not as interesting to your average reader—or it might take away from the narrative of the story. I'm not in the media, so I'm just speculating. But ultimately I do think it's not great for the polling industry.

Or for citizens. Especially because national news media is still where most of the eyeballs go, even in a changing media landscape.

I just think the one thing that gets lost in polling coverage is that pollsters are not doing anything nefarious. They're trying to get it right. In recent years, a lot of discussion has focused on the flaws in polling, which is understandable. I really do understand being critical of the issues we've seen pop up in recent election cycles. Polling is still in a transition period that dates back to 2016.

But the goal of polling, really, is to be a democratic tool. It's meant to serve the public and give them a voice. That should never be forgotten.

So how should that be framed for public consumption? Not just by professional pollsters, but by media members as well. Is there an education that needs to happen before introducing a more civic-minded framework to polling?

When it comes to introducing a civic framework, we really try to stay mindful of question-wording. Like I said, we want to treat this as a tool to serve people. So when we're crafting our questions, we always try to be as clear and specific as possible. We don't want to leave things up to interpretation. We always use neutral language. We never want to lead a respondent to a particular answer. 

Last question, since we’re speaking so soon after the Iowa caucuses. Thinking civically, can we take anything away from the low turnout? Less than 15% of registered Republicans in the state participated.

I think low turnout is something we might see this cycle because of the possibility of a rematch. Of course, that could change, but when the race is framed that way, it can keep enthusiasm low.


Interview conducted and condensed by Patrick Scafidi
