You Ask, We Answer: How The Times/Siena Poll Is Conducted

The New York Times/Siena College Poll is conducted by phone using live interviewers at call centers based in Florida, New York, South Carolina, Texas and Virginia. Respondents are randomly selected from a national list of registered voters, and we call voters both on landlines and cellphones. In recent Times/Siena polls, more than 90 percent of voters were reached by cellphone.

One of the most common questions we get is how many people answer calls from pollsters these days. Often, it takes many attempts to reach some individuals. In the end, somewhere around 2 percent of the people our callers try to reach will respond. We try to keep our calls short — less than 15 minutes — because the longer the interview, the fewer people stay on the phone.

For battleground polls, we called voters who live in six of the states considered to be key in the upcoming presidential race: Arizona, Georgia, Michigan, Nevada, Pennsylvania and Wisconsin. Since presidential elections are decided based on the electoral college, not the popular vote, we focus much of our polling on the states that are likeliest to decide the outcome of the race.

Phone polls used to be considered the gold standard in survey research. Now, they’re one of many acceptable ways to reach voters, along with methods like online panels and text messages. The advantages of telephone surveys have dwindled over time, as declining response rates increased the costs and probably undermined the representativeness of phone polls. At some point, telephone polling might cease to be viable altogether.

But telephone surveys remain a good way to conduct a political survey. They’re still the only way to quickly reach a random selection of voters, as there’s no national list of email addresses, and postal mail takes a long time. Other options — like recruiting panelists by mail to take a survey in advance — come with their own challenges, like the risk that only the most politically interested voters will stick around for a poll in the future.

In recent elections, telephone polls — including The Times/Siena Poll — have continued to fare well, in part because voter registration files offer an excellent way to ensure a proper balance between Democrats and Republicans. And perhaps surprisingly, a Times/Siena poll in Wisconsin had similar findings to a mail survey we commissioned that paid voters up to $25 to take a poll and obtained a response rate of nearly 30 percent.

Our best tool for ensuring a representative sample is the voter file — the list of registered voters that we use to conduct our survey.

This is a lot more than a list of phone numbers. It’s a data set containing a wealth of information on 200 million Americans, including their demographic information, whether they voted in recent elections, where they live and their party registration. We use this information at every stage of the survey to try to ensure we have the right number of Democrats and Republicans, young people and old people, or even the right number of people with expensive homes.

On the front end, we try to make sure that we complete interviews with a representative sample of Americans. We call more people who seem unlikely to respond, like those who don’t vote in every election. We make sure that we complete the right number of interviews by race, party and region, so that every Times/Siena poll reaches, for instance, the correct share of white Democrats from the Western United States, or the correct share of Hispanic Republicans in Maricopa County, Ariz.
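
As a rough sketch of what that kind of targeted dialing can look like in code (the column names, response rates and sample size below are invented for illustration and are not the Times/Siena design):

```python
import pandas as pd

# Hypothetical slice of a voter file; the fields and values are invented
# for illustration only.
voter_file = pd.DataFrame({
    "party":        ["DEM", "REP", "IND", "DEM", "REP", "DEM"],
    "region":       ["West", "South", "Midwest", "Northeast", "West", "South"],
    "vote_history": [4, 0, 1, 3, 2, 0],  # elections voted in, of the last 4
})

# Assume infrequent voters are far less likely to answer the phone, so we
# dial them at a higher rate to end up with a balanced set of completes.
assumed_response_rate = voter_file["vote_history"].map(
    lambda v: 0.01 if v <= 1 else 0.03
)

# Select people to call with probability inversely proportional to the
# assumed response rate.
call_list = voter_file.sample(n=3, weights=1.0 / assumed_response_rate,
                              random_state=0)
print(call_list)
```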

Once the survey is complete, we compare our respondents to the voter file, and use a process known as weighting to ensure that the sample reflects the broader voting population. In practice, this usually means we give more weight to respondents from groups who are relatively unlikely to take a survey, like those who didn’t graduate from college.
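
A stripped-down sketch of that idea, weighting on a single variable (education) with made-up numbers rather than the many variables adjusted in a real Times/Siena weighting scheme:

```python
import pandas as pd

# Hypothetical respondents; the education split and vote choices are invented.
respondents = pd.DataFrame({
    "education": ["college", "college", "college", "no_college", "no_college"],
    "candidate": ["A", "B", "A", "B", "B"],
})

# Suppose the voter file says 40 percent of voters are college graduates,
# but they make up 60 percent of our respondents.
population_share = {"college": 0.40, "no_college": 0.60}
sample_share = respondents["education"].value_counts(normalize=True)

# Give each respondent a weight that corrects the imbalance.
respondents["weight"] = respondents["education"].map(
    lambda e: population_share[e] / sample_share[e]
)

# Weighted share supporting candidate A (vs. the unweighted 40 percent).
weighted_a = ((respondents["candidate"] == "A") * respondents["weight"]).sum() \
    / respondents["weight"].sum()
print(respondents)
print(round(weighted_a, 2))
```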

You can see more information about the characteristics of the voters we reached and how much each group was adjusted in the weighting step at the bottom of our poll cross-tabs, under “Composition of the Sample.”

In 2022, we did an experiment to try to measure the effect nonresponse has on our phone polls. In our experiment, we sent a mail survey to voters in Wisconsin and offered to pay them up to $25 to respond. Nearly 30 percent of households took us up on the offer, a significant improvement over the 2 percent or so who typically respond by phone.

What we found was that, overall, the people who answered the mail survey were not all that dissimilar from the people we regularly reach on the phone, on matters including whom they said they would vote for. However, there were differences: The respondents we reached by mail were less likely to follow what’s going on in government and politics; more likely to have “No Trespassing” signs; and more likely to identify as politically moderate, among other things.

But the truth is that there’s no way to be absolutely sure that the people who respond to surveys are like demographically similar voters who don’t respond. It’s always possible that there’s some hidden variable, some extra dimension of nonresponse that we haven’t considered.

The core concept underlying survey research is the idea of sampling: You don’t need to talk to everyone in order to get a good idea of the whole population; you just need a sample.

You may not know it, but sampling is something that you probably use in your everyday life. You don’t need to eat a whole pot of soup to know what the soup tastes like; you only need a spoonful.

Of course, sampling only works if the subset you taste is representative of the whole. Pollsters usually attempt to obtain a representative sample through random sampling, where everyone has an equal chance of selection.

If you had a truly random sample of Americans, then in theory merely a few hundred people would be enough to measure public opinion with reasonable accuracy — much as you would probably realize that a coin flip is a 50-50 proposition after a few hundred tries.
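
A quick simulation makes the point concrete. This is not how a poll is fielded, just an illustration of how quickly random samples converge on a known answer (here, a population that is exactly 50-50):

```python
import random

random.seed(0)

# Draw random samples from a population in which exactly half hold some view,
# and see how close each sample's estimate comes to the true 50 percent.
for n in (100, 400, 1000, 10000):
    hits = sum(random.random() < 0.5 for _ in range(n))
    print(f"n={n:>6}: estimated share = {hits / n:.1%}")
```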

One counterintuitive aspect of sampling is that a survey of 10,000 people is not 10 times better than a survey of 1,000. In fact, the larger poll can be less accurate if the people surveyed are not representative, for instance because it was conducted with a less rigorous approach.

A survey of 1,000 voters has a margin of sampling error of around three to four percentage points. In practice, that means if the poll shows that 57 percent of voters approve of something, the real figure could be closer to 54 percent or 60 percent.

If we doubled the number of people we polled — or better yet, tripled it — the margin of error would go down only slightly, to maybe plus or minus two percentage points. Which is to say, the overall accuracy of the survey would not improve very much.

However, if the number of respondents decreases too much, the margin of error can increase drastically. That’s important to understand when looking at results among demographic subgroups that are smaller in size.
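
For readers who want the arithmetic behind those figures: the textbook 95 percent margin of sampling error for a share near 50 percent is about 1.96 times the square root of 0.25 divided by the sample size. A minimal sketch of that calculation (pure sampling error only; it ignores the design effects and weighting that push published margins a bit higher):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of sampling error, in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Diminishing returns as the sample grows, and a sharp penalty for a
# small demographic subgroup (here, 150 respondents).
for n in (1000, 2000, 3000, 150):
    print(f"n={n:>5}: +/- {margin_of_error(n):.1f} points")
```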

When we poll states, we often survey more than one at the same time. That allows us to understand the dynamics of the election in each state while also gathering enough respondents across all the states to analyze demographic groups, such as young voters and Latino voters.

Polls of registered voters include everyone who is registered to vote nationally or in a given state. But even in the highest-turnout elections, not everyone who is registered to vote casts a ballot.

Polls of likely voters attempt to assess how likely each respondent is to vote, and then use that assessment to focus the results on the people expected to cast ballots.

There are many valid ways to determine a likely voter, but there is one truism: Research shows that people overstate how likely they are to vote. Voting is what pollsters call a “socially desirable behavior,” meaning it is something people want to tell you they have done. Over the years, pollsters have turned to statistical modeling to help account for this overreporting.

In The Times/Siena Poll, we know each voter’s history of turning out to vote from the voter file, and past voting behavior is a very strong predictor of future voting behavior. We also ask respondents how likely they are to vote. We use that information to assess how likely someone is to vote this fall.
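
The actual Times/Siena turnout model is more sophisticated than anything that fits in a few lines, but a toy sketch conveys the general idea of blending the two signals (the 70/30 split and the scoring scale below are invented for illustration):

```python
def likely_voter_score(elections_voted_of_last_4, stated_likelihood):
    """Toy 0-to-1 likelihood-of-voting score.

    elections_voted_of_last_4: turnout history from the voter file (0 to 4).
    stated_likelihood: the respondent's own answer, rescaled to 0-1.
    The 70/30 blend is an invented illustration, not the real model.
    """
    history = elections_voted_of_last_4 / 4
    return 0.7 * history + 0.3 * stated_likelihood

# A habitual voter who says they will definitely vote scores near 1.0;
# a registrant with no recent turnout who is unsure scores far lower.
print(likely_voter_score(4, 1.0))   # 1.0
print(likely_voter_score(0, 0.5))   # 0.15
```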

Far in advance of an election, pollsters tend to emphasize results among registered voters. It is also often valuable to look at registered voters when your interest extends beyond the choices voters will make in a particular election.

But as elections draw near, most pollsters, including The Times/Siena Poll, emphasize results among likely voters, allowing them to understand the decisions that are being made by the voters who are most likely to decide the election.

In the 2022 midterm elections, Times/Siena poll results were, on average, within two points of the actual result across the races we surveyed in the final weeks of the campaign. That helped make The Times/Siena Poll the most accurate political pollster in the country, according to the website FiveThirtyEight.

At the same time, all polls face real-world limitations. For starters, polling is a blunt instrument, and as the margin of error suggests, numbers could be a few points higher or a few points lower than what we report. In tight elections, a difference of two percentage points can feel huge. But on most issues, that much of a difference isn’t as consequential.

Historically, national polling error in a given election is around two to four percentage points. In 2020, on average, polls missed the final result by 4.5 percentage points, and in some states the final polls were off by more than that. In 2016, national polls were about two percentage points off from the final popular vote.

The national popular vote and the results in the Electoral College do not always line up. But national polls can still provide important insight into the issues and attitudes that are shaping the election, and they reflect the opinions of the entire country, not just voters in battleground states.

National polls often help us understand the themes that are driving an election and how voters are thinking about the candidates. In the 2024 election cycle, national polls helped us understand how deep Democratic discontent with Joe Biden was, and they are useful tools to investigate issues of broad import to the country, such as abortion, immigration and the economy.

When we are getting ready to field a poll, we think about what is happening in the world that might be changing people’s attitudes, and what might happen soon that could shift public opinion.

Sometimes we conduct a poll to measure the impact of a specific event, like a presidential debate. And sometimes we conduct a poll because we want to check in on how Americans are thinking about a particular issue, like the economy.

Once we have topics nailed down for a poll, we spend a tremendous amount of time debating and crafting the questions. Our goal with any single question is for every respondent, across the political spectrum, to feel that their viewpoint is accurately represented among the response options. We want to make sure everyone sees the question as fair.

We also want to make sure that the question is understood by everyone to mean the same thing. And finally, it is important that we are measuring real views that people have, not putting ideas into their heads or pushing them in a particular direction. Crafting accurate survey questions is an art that we take very seriously.

You can see the exact questions that were asked in each poll, in the order they were asked, by looking at our full results.

The Times/Siena Poll is produced by Camille Baker, Nate Cohn, William P. Davis, Ruth Igielnik, Christine Zhang and the team at the Siena College Research Institute.
