Here’s What I Learned When I Was a Professional Political Pollster
Imagine being a physics and astronomy major interested in politics and government at a school where the most popular majors are political science and economics. It’s tough. You can’t engage in the discussions; you can only listen. You know nothing, unless the conversation turns toward nuclear energy policy (which it almost never does) or space exploration (which it never does).
That was me heading into the 1980 presidential primary season. I was nothing more than a naïve cheerleader. I wanted to be more, but what? In an ocean of future neo-cons, think tank thinkers, and government policy makers, I was merely a small deserted isle that didn’t even merit a place on the map. I tried and tried to think of a way I could add value, to discover something in one of the classes I took that would generate at least interest, if not respect, among my more politically knowledgeable classmates. About the only unique differentiator I offered was that I had lived in Jack Kemp’s congressional district, but that was just a novelty of coincidence.
Then it struck me. While all these talking heads spent their class time debating the merits of competing political philosophies, I consumed tons of pencil lead scribbling complex equations into blue-lined books. I had math. And with it, I had statistics.
I also had room for one elective that particular term, and – lo and behold! – it was a poli-sci class. I meekly went up to the professor and asked if he would be the advisor to a new student organization I was creating. I called it Student Polling Services. I may have known more about Pluto than Plato, but I just as certainly knew more about statistics, the lifeblood of poll taking – and taking polls was the heart of any political campaign.
The professor of my political science class was F. Christopher Arterton. Not only did we share a name, but we also shared an interest in political polls. At the time, he was a consultant to Newsweek magazine on polling and campaign coverage. He leapt at the prospect of acting as my new group’s faculty advisor.
It turned out he was so enthusiastic about mentoring me that it was like taking another class. I may not have gotten credit for it, but I got something better. I got paid. Somehow – and in all honesty I don’t remember how – I convinced the local presidential campaigns to hire me as their pollster (all of them, Republicans and Democrats). Once word got out, students eagerly volunteered to help administer the polls (both by phone and in person). The campaign of a Connecticut State Assemblyman even hired me.
Not only did I help pay the next semester’s tuition, I also learned a lot about political polling and market research. (I also added to my baseball card collection: when the local Carter campaign discovered it was short on cash, I kiddingly said they could pay me in baseball cards. They did – in vintage 1950s baseball cards, including a Roberto Clemente rookie card, which very soon ended up being worth more than what I had charged all the campaigns.)
I also learned about newspaper editors and reporters. My polling soon attracted the attention of the local newspaper’s publisher and, during the fall of the 1980 campaign, I was conducting weekly polls for the paper, with a reporter assigned to interview me about the results. One time a grad student wrote a nasty letter to the editor about one of my conclusions. I was instructed to come to the editor’s office and explain myself. Worried, I quickly scrambled to Professor Arterton’s office first and told him of my dilemma. He looked at my math and immediately said I was correct and therefore had nothing to worry about. I asked him if I could drop his name and tell the editor and the accusing grad student (who was also going to be at the meeting) that he had vouched for the numbers. Professor Arterton told me not to use his name – not because he wanted to remain anonymous, but because I was right and didn’t need to rely on someone else. “Besides,” he said, “this is a graduate student we’re talking about, and they’re notorious for backing down when challenged.”
That’s exactly what happened. In the meeting at the editor’s office, I calmly and confidently explained the numbers. I expected a challenge, but the grad student simply said, “Oh. Right. I didn’t think of that.” After he left, the editor said, “I figured you were right, but it’s good to let our readers know we’re listening to them.”
But my favorite anecdote from this experience occurred on Election Day. The day before, I had given the reporter my final poll as well as my final interview. As a scientist, I had learned the importance of using hedging language when discussing results. The reporter apparently wasn’t aware of this and removed it. On the morning of the election, I read the paper and saw that I was quoted as “predicting” Reagan might do better than the polling indicated. Most people thought Carter was going to win. I was livid. Scientists don’t “predict”; they merely report the numbers.
Unfortunately, I couldn’t do anything about it. I had signed up to be a paid poll watcher that day, and I was literally at the polls from 6am until 9pm. Once the polls closed, I was looking forward to watching the long night of returns with my roommates. As I walked back to my room, various people saw me and congratulated me. I had no clue what they were talking about. When I arrived at my room, all my friends were there. They called me a genius. I figured it was some kind of joke. They explained that word had spread across campus very fast when the paper came out. At first everyone thought I was just some amateur who knew nothing about politics. Then, when Walter Cronkite announced Carter had conceded, they all wanted to know how I knew.
I told them it was no big deal. When you’re a scientist, you’re expected to know these things.
* * *
I thought of my 1980 experience this past week when all the talk was about polls being inaccurate or rigged. I can tell you it’s really easy to rig a poll, and it’s really easy to misinterpret results. Back in 1980, the rigging was done mainly through the questions. That’s why any published political poll also publishes the exact wording of every question and the order in which the questions were asked (since it’s not just wording, but also placement). It’s now very difficult to rig a poll through the questions.
Today, thanks to “big data,” poll rigging is easier to do and a little harder to detect (because you won’t know for sure until after the results are in). The Clinton campaign, through its emails, showed that it understood this. The way you rig a poll is through the people you ask. Here’s an almost too obvious way to do it on a national poll:
In reality, you need only about 700 “randomly selected” people to get a reasonable margin of error (you need about 1,000 for +/- 3%). When Professor Arterton told me I was doing nothing wrong, I remember asking him, “So you’re saying I AM using a random sample.” He said, “No sample is truly random.” He went on to explain why, and how to avoid tricking yourself into thinking you had a “random” sample, before he said my technique was indeed random for my purposes (in fact, he said my margin of error, or MOE, was probably significantly smaller, given the way I was doing it).
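For the curious, here’s a quick back-of-the-envelope sketch (in Python) of where numbers like 700 and 1,000 come from. It uses the textbook margin-of-error formula for a proportion at 95% confidence and assumes a simple random sample with a worst-case 50/50 split – a simplification, not how any particular pollster actually works:

    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a simple random sample of size n."""
        return z * sqrt(p * (1 - p) / n)

    for n in (700, 1000):
        print(f"n = {n}: +/- {margin_of_error(n):.1%}")

    # n = 700: +/- 3.7%
    # n = 1000: +/- 3.1%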
Back to today’s world. Remember, no sample is truly random, and you won’t know how good yours is until the poll is tested on Election Day. So, say you want to interview 1,000 people nationally to get an MOE of +/- 3%. You can randomly select people from just a handful of states (say, the northeast and the west coast) in a textbook “random” fashion, and guess who will win the poll: the Democrat. (Likewise, you can produce a Republican winner by selecting only southern states.) It would be just as bad as picking only Democrats or only Republicans as respondents. In fact, party overweighting was occurring, and major media pollsters were called out on it, but they ignored those complaints; some major media polls assumed Democrats were 50% more enthusiastic this year than in 2012, and we now know that wasn’t the case.
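To see how a “textbook random” sample drawn from the wrong places skews the result, here’s a toy simulation. The regional percentages are invented purely for illustration (they’re not from any actual poll), and the poll() helper is hypothetical; the mechanics are the point: sample only the coasts and the Democrat “wins” the poll, sample only the South and the Republican does.

    import random

    # Hypothetical share of voters favoring the Democrat in each region.
    # These numbers are made up purely for illustration.
    LEAN = {
        "northeast": 0.60,
        "west_coast": 0.58,
        "south": 0.42,
        "midwest": 0.48,
    }

    def poll(regions, n=1000, seed=1):
        """Simulate n respondents drawn uniformly from the given regions only.
        (Every region is weighted equally here; a real national sample would
        weight by population.)"""
        rng = random.Random(seed)
        dem = sum(rng.random() < LEAN[rng.choice(regions)] for _ in range(n))
        return dem / n

    print(poll(list(LEAN)))                   # roughly 0.52 - a close race
    print(poll(["northeast", "west_coast"]))  # roughly 0.59 - the Democrat "wins"
    print(poll(["south"]))                    # roughly 0.42 - the Republican "wins"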
Now, this is too obvious a way to rig a national poll, so no one is foolish enough to do it. Instead, as the Clinton campaign noted in its emails, the approach was to encourage major media outlets to oversample demographic cohorts more favorable to Clinton. These would include city residents, those with advanced college degrees, and other non-race-based, non-gender-based identity groups. Thanks to the leaked emails, we know for certain this was being done.
But that’s not the real reason why this year’s polling will become a case study in how not to conduct a poll – no matter who would have won.
The election of 2016 is bound to join 1936 in the lexicon of “bad polling techniques.” The election of 1936 coincided with the rapid adoption of a new technology – telephony (i.e., the telephone). Pollsters across the nation thought it was the Holy Grail for generating random samples. They were right… eventually (I used it extensively in 1980, as it had become the norm by the late 1950s). The trouble was, unlike in the 1960s, 70s, 80s, 90s, and much of the 00s, the telephone was not yet universal in 1936. Sure, the upper class all had them, but poorer people and those in the rural countryside largely did not. So, while the most famous poll of the day indicated Landon would win, Roosevelt breezed to reelection. Why? No one was polling Roosevelt’s key demographic, because they didn’t have phones. (BTW, the same phenomenon may have occurred when “Dewey Defeats Truman” in 1948.)
Fast forward to 2016. It’s not simply that people don’t want to answer political surveys, although that is true. According to the Wall Street Journal, 20 years ago about a third of people were willing to answer surveys; today it’s only 9%. (How many of you don’t answer your phone when an unfamiliar number appears?) In addition, there are fewer and fewer landlines, and private cell phone numbers aren’t easily obtainable (especially for the purposes of randomization). The truth is, short of boots on the ground, we have no truly reliable way to poll a large population right now.
There are political polls that aren’t rigged or misinterpreted: the ones taken by the campaigns themselves. Why? Because the campaigns need to know precisely what’s going on. While campaigns don’t publish their polls, you have a pretty good chance of correctly guessing what they say by watching what the candidates do and where they go (and maybe where they advertise).
Incidentally, when all is said and done, I think candidates will question the value of traditional political media advertising. It certainly didn’t work for Bush in the primaries, and it appears to have been less effective for Hillary than theory would suggest (given the disparity in advertising, she really should have been leading by 50%).