The Times podcast: Our Masters of Disasters know it’s windy. But their predictions go against the facts.
I’ve been trying to get the Times’s David Brooks to talk about his columns from last weekend.
Brooks, of course, had his team of opinion writers, the so-called "masters of disinformation," predict the results of Tuesday's midterm elections as if they were a referendum on the president. They predicted a victory for Republicans. Then Brooks himself, using the Times's data, the "poll of polls," went back and recomputed the results and found that Republicans had actually won only a majority of the white vote.
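For readers unfamiliar with the term, a "poll of polls" is just an aggregate of several polls' estimates of the same quantity. Here is a minimal sketch in Python; the figures are invented for illustration and are not the Times's actual data, and a simple mean stands in for whatever weighting the Times's methodology may use.

```python
# A minimal "poll of polls": average several polls' estimates of the
# same quantity. The numbers below are hypothetical, for illustration
# only; this is not the Times's data or its actual methodology.

def poll_of_polls(results: list[float]) -> float:
    """Return the simple mean of several poll percentages."""
    return sum(results) / len(results)

# Three hypothetical polls of one party's vote share, in percent.
hypothetical_polls = [48.0, 51.0, 50.0]
print(round(poll_of_polls(hypothetical_polls), 1))
```

Averaging smooths out the sampling noise of any single poll, which is why aggregates are usually steadier than individual surveys.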
Brooks’s latest columns look like they were written by his “masters of disinformation,” as well.
Brooks writes that a pollster's question is not a poll, and that treating it as one is a statistical blunder. Pollsters are "confused by the term 'margin of error,' " Brooks writes, "in which the error is to be expected."
He takes a page from the masters of disinformation and makes a similar mistake. Brooks writes that "the margin of error is the difference between what is expected and what is observed." That is the same as saying the margin of error is the gap between what you expected and what actually happened. But a margin of error measures the expected sampling variation around a single estimate; it says nothing about the gap between a forecast and an outcome.
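To make the distinction concrete, here is the textbook margin of error for a sample proportion, sketched in Python. The poll figures are hypothetical, not from any survey Brooks cites, and the formula assumes simple random sampling at 95% confidence.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion p from n respondents,
    assuming simple random sampling (z = 1.96 gives ~95% confidence).
    This quantifies sampling uncertainty, not forecast error."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll: 52% support among 1,000 respondents.
moe = margin_of_error(0.52, 1000)
print(f"{moe * 100:.1f} points")  # roughly 3.1 points either way
```

The number it produces describes how much the estimate would bounce around if you reran the same poll many times; a poll that "missed" the final result by ten points did not have a ten-point margin of error.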
The difference between what you expected and what actually happened is, instead, the difference between what the pollster (or anyone with a mathematical background, like your wife) said they were asking you and what actually came out.
Brooks's "masters of disinformation" wrote something similar, also calling it a statistical blunder. "We all do this," Brooks wrote in the column, "and it's especially frustrating to an election-cycle analyst because a pollster's question isn't what the pollster is looking at." (And that is why it's a statistical blunder, too: only what's in the question is the data that gets processed.)
And Brooks wrote that “a pollster’s