Science and Misinformation: Should Flawed Polls Be Censored?

The American Psychological Association states: “Misinformation is false or inaccurate information—getting the facts wrong. Disinformation is false information which is deliberately intended to mislead—intentionally misstating the facts” and adds, “By providing valuable insight into how and why we are likely to believe misinformation and disinformation, psychological science can inform how we protect ourselves against its ill effects.” [1] But defining and analyzing misinformation and disinformation can be challenging. Because scientific inquiry continually evolves and incorporates new findings, it is not uncommon for today’s “facts” to be revised as more data accumulate.

I teach a graduate course in statistics at Vanderbilt University. One of the most interesting exercises the students complete every two years involves reviewing political polling results immediately prior to an election. Presidential elections are especially useful because ample state and national polling data are available for post hoc (after the event) analyses of the actual election results, which can then be used to evaluate polling accuracy and inaccuracy. Stated directly, polls and elections offer an ideal opportunity to compare “facts” established before an election to “facts” established after it, illustrating the limits (and perils) of “fact finding” processes.

Polling methodology follows a common approach in scientific experiments: scientists gather a representative sample of a target population (in the case of polling, US voters) in order to estimate the actual population value: the election results. In my own work developing and testing speech and language assessments and treatments for autistic children, children with Down syndrome, deaf children with cochlear implants, and late-talking children, I gather a sample of these children and administer assessments or provide treatment (while including an appropriate control/comparison condition), in the hopes of learning not only about the particular children in the study but also about the broader populations of autistic children, children with Down syndrome, children using cochlear implants, and so on.

Because scientists (and pollsters) know that a random sample will vary relative to the “true” population value, statistics are used to generate a range of estimates for the population parameter. You may have noticed that all polls include a “margin of error.” This statistic yields a range of scores around the poll result and defines a “confidence interval” that tells polling scientists (and poll readers) the likely range of election outcomes. In other words, polling scientists know that the value observed in the sample is unlikely to precisely match the actual election outcome; instead, the poll predicts a range of outcomes. Because of this, even when a poll does not correctly predict the eventual winner, that does not necessarily indicate the poll was “misinformation,” “disinformation,” or in any way nefarious.
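To make the arithmetic concrete, here is a minimal sketch of where a poll’s margin of error comes from, assuming simple random sampling and the conventional 95% confidence level. The sample size and proportion are illustrative assumptions, not figures from any poll discussed here.

```python
import math

n = 800        # respondents (illustrative assumption)
p_hat = 0.48   # sample share supporting candidate A (illustrative assumption)
z = 1.96       # multiplier for a ~95% confidence level

# Standard error of a sample proportion under simple random sampling
se = math.sqrt(p_hat * (1 - p_hat) / n)
moe = z * se   # the "margin of error" reported alongside the poll

print(f"estimate: {p_hat:.0%} +/- {moe:.1%}")
print(f"95% confidence interval: ({p_hat - moe:.1%}, {p_hat + moe:.1%})")
```

With 800 respondents the margin of error works out to roughly 3.5 percentage points, which is why so many published polls report margins near the 3% to 3.4% figures discussed below.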

As an example, a recent pre-election Washington Post poll of the 2024 US presidential race in Pennsylvania predicted that the Democratic nominee would receive 1% more votes on Election Day than the Republican nominee.[2] The reported variation in the poll, what statisticians call “error,” was +/- 3%, so this poll could be considered “correct” for any actual outcome from a 4% advantage for the Democratic candidate to a 2% advantage for the Republican candidate. Because the election outcome was a 2% advantage for the Republican candidate in Pennsylvania,[3] this poll was “accurate.” But the voters supporting the Democratic nominee are no doubt unhappy about the result: although the poll was accurate within the limits of precision in scientific polling, their preferred candidate lost the election.
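The paragraph above reduces to a simple containment check: the result is consistent with the poll whenever it falls within the poll’s estimate plus or minus the margin of error. A small sketch using the Pennsylvania figures, with positive margins favoring the Democratic candidate:

```python
poll_margin = 1.0     # poll: Democratic nominee +1
moe = 3.0             # reported margin of error
actual_margin = -2.0  # result: Republican nominee +2

# The poll is "accurate" if the result lies inside the implied range
low, high = poll_margin - moe, poll_margin + moe
within = low <= actual_margin <= high
print(f"consistent range: {low:+.0f} to {high:+.0f}; "
      f"actual: {actual_margin:+.0f}; within: {within}")  # -> within: True
```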

The other side of the coin is that supporters of the Republican candidate may look at this poll, with its estimate of a 1% win for the Democratic candidate, and incorrectly conclude that the polling group and the Washington Post tried to mislead voters to influence the outcome of the election in favor of the Democratic candidate using “misinformation” or, worse, “disinformation.”

Unfortunately, when pollsters get it wrong, this could also inadvertently promote a broader mistrust of science and the scientific method, especially when the public sees political bias as potentially contaminating research results and interpretation. To be sure, bias in science is a longstanding challenge; researchers must actively conduct studies that minimize or control for potential bias. For example, I am part of a team conducting a clinical trial of two different treatments for children using cochlear implants. In this study, I am “blinded” as to which treatment a child is receiving so we can be sure that my testing is not inadvertently influenced by any conscious or unconscious belief I may have as to which treatment I hypothesize is better (or worse). Pollsters, like any other scientists, need to be rigorous to ensure that bias or any other methodological flaw does not taint their research.

A careful review of polls of the 2024 election indicates that although many reported a small edge for the Democratic candidate, who ultimately lost, the results fell within the “margin of error” and so should not be viewed as misinformation, disinformation, or unscientific.

But what about some other polls that were strikingly inaccurate, falling far outside any reasonable margin of error? Should these pollsters be branded as purveyors of “misinformation” or “disinformation” and censored in future elections?

An Outlier Iowa Poll

As an example, a few days before the election, a poll conducted in Iowa was published by the Des Moines Register with the headline “Iowa Poll: Kamala Harris leapfrogs Donald Trump to take lead near Election Day. Here’s how.”[4] The story reported a 3% advantage for the Democratic candidate. Because the poll was conducted by a credible polling company with a decades-long track record of correctly predicting election outcomes, the result was widely reported in the media as portending a positive outcome for the Democratic candidate in Iowa; some outlets suggested this was likely to be true not only in Iowa but elsewhere in the Midwest,[5] even though the poll sampled only Iowans, and the Des Moines Register did not in any way suggest “generalizability” to any other state. The poll had a “margin of error” of 3.4%, so any outcome from a 0.4% Republican vote advantage in Iowa to a 6.4% vote advantage for the Democratic candidate would have been consistent with the poll.

But the actual election result in Iowa[6] was a 13.6% advantage for the Republican candidate. Therefore, the poll was “off” by 16.6%, nearly five times the margin of error. Obviously, this poll was wildly inaccurate, and “normal” science would suggest this was very unlikely to be a random outcome. My own rough computation indicated that the probability of this happening by chance alone was approximately 0.0000010623, roughly one in a million.
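The article does not show the computation, but one plausible reconstruction (an assumption on my part, not the author’s stated method) treats the reported 3.4-point margin of error as roughly one standard deviation of the poll’s error and asks how likely a 16.6-point miss would be under a normal model. That reading lands on the same order of magnitude as the figure quoted above.

```python
import math

poll_margin = 3.0      # Iowa Poll: Democratic candidate +3
actual_margin = -13.6  # result: Republican candidate +13.6
moe = 3.4              # reported margin of error, in points

miss = poll_margin - actual_margin  # the 16.6-point polling miss
z = miss / moe                      # ~4.9, treating the MoE as ~1 SD (assumption)

# Two-tailed normal probability of a miss at least this extreme:
# 2 * (1 - Phi(z)) == erfc(z / sqrt(2))
p = math.erfc(z / math.sqrt(2))
print(f"miss = {miss:.1f} points, z = {z:.2f}, p = {p:.2e}")
# -> p on the order of 1e-6, the same magnitude as the figure in the text
```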

This strongly suggests that there were fundamental problems with the polling methodology. A likely error is that the pollsters failed to gather a representative sample, which happens often. All scientists and pollsters are familiar with the pitfalls of “biased” sampling, so this poll is potentially an example of an experienced polling company failing to be sufficiently skeptical of its own “outlier” result. Most other polls were more in line with the actual election outcome, so this organization can be rightly critiqued for not validating its sampling, and its results, before publishing them; a toy illustration of how a biased sample can arise appears below.
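One hypothetical mechanism is differential non-response: if supporters of one candidate are even modestly less likely to answer the pollster, the sample stops being representative. The electorate split and response rates below are invented for illustration; they are not estimates from the actual Iowa poll.

```python
import random

random.seed(1)

true_share_R = 0.568  # two-candidate electorate with a +13.6 Republican margin
RESPONSE_RATE = {"R": 0.07, "D": 0.10}  # assumption: R voters answer less often

# Keep contacting random voters until 800 have responded
sample = []
while len(sample) < 800:
    voter = "R" if random.random() < true_share_R else "D"
    if random.random() < RESPONSE_RATE[voter]:
        sample.append(voter)

share_R = sample.count("R") / len(sample)
margin = 2 * share_R - 1  # positive favors R
# Despite a true margin of R +13.6, the observed margin is typically
# several points in the Democratic candidate's favor.
print(f"observed margin: {margin:+.1%} (true margin: +13.6% R)")
```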

Because this poll ultimately was proven to be remarkably inaccurate, was it “misinformation” or, worse, “disinformation”? Should the Des Moines Register story reporting this poll have been censored or “taken down” from social media?

It is not at all uncommon for an individual poll, or an individual scientific study for that matter, to report a finding that later proves to be incorrect. However, there was no way of knowing prior to the election whether this poll was an outlier or the truth. It is inarguable that this pre-election “fact” was disproven by a post-election “fact.” But what if the Des Moines Register poll had proved to be correct on Election Day? How can any individual, news agency, or media outlet “know” which polls will be within the margin of error and which will not? I would argue that the Register, and this pollster, should not have been restricted, nor should they be restricted in future elections. Candidly, I hypothesize that future polls published by the Des Moines Register and conducted by this pollster will be viewed with more skepticism because of this one.

The point here is that in polling, and in science, it is vitally important to maintain open platforms, without censorship. Pollsters and scientists should be free to publish their findings while being transparent about their methodology. If and when subsequent events and data show that flawed methodology or another intervening factor led to inaccurate results and conclusions, pollsters and scientists should also be willing to accept the perils of being proven wrong. I would argue that any process designed to label “facts,” whether prospectively or retrospectively, as “misinformation” or “disinformation” should itself be subjected to forensic analysis, and to a great deal of skepticism.


