Pew, That Stinks: Liberals Use Rotten Charts to Claim 'You Can't Sanely Argue' Pro-Obama Bias
This is hot on the liberal Twittersphere: “The Charts That Should Accompany All Discussions of Media Bias” by James Fallows, a former U.S. News & World Report editor (and Jimmy Carter speechwriter). Fallows is now a weekend contributor to National Public Radio.
Once again, they drag out charts based on a Pew “study” of the media: “They are the ones presented this morning by John Sides, drawing on Pew analyses of positive, negative, and neutral press coverage of all Republican candidates and of President Obama through this past year.” Fallows insists he has proven “you can't sanely argue that the press is in the tank for Obama.”
Chart 1 shows “Romney endured slightly-to-somewhat more negative-than-positive coverage in much of 2011, during the intense primary debates and negative ads, but has had much more positive-than-negative coverage through this year.”
Chart 2 shows “President Obama has always had more negative-than-positive coverage through the past year.”
Combining them into Chart 3 shows “At no time in the past year has coverage of President Obama been as positive as that of Governor Romney. Indeed, at no time in the past year has it been on-balance positive at all.” Here's that one:
Fallows concluded: “You can argue that negative coverage of the administration is justified. You can argue that incumbents are -- and should be -- held to a tougher standard, since they have a record to defend. But you can't sanely argue that the press is in the tank for Obama, notwithstanding recent ‘false equivalence’ attempts to do so.” [He's citing the recent Politico story that outraged liberals.]
You don’t really need the charts, because the data is garbage. MRC’s Rich Noyes demolished these studies last year on several counts:
Way Too Much Media: In what appears to be an effort to be comprehensive, the Pew researchers stretched the concept of “media” so wide that it’s really a study of nearly everything on the Internet. Influential and top-rated media outlets (like ABC, CBS and the New York Times) are buried in a sea of “coverage and commentary on more than 11,500 news outlets, based on their RSS feeds” and analyzed by a computer software program.
For a study to include 11,500 news outlets (English-language only, the report says), the researchers have cast their net so widely that their study necessarily includes a huge number of insignificant or derivative news outlets — hundreds of iterations of the same AP story on the Web sites of local TV stations, for example. Such a study design makes it impossible to discover how the candidates were covered by the relatively small number of news media outlets that reach hundreds of thousands or millions of people each day.
[Pew also separately looked at “hundreds of thousands” of blogs, which again means that the few dozen top-ranked influential blogs are buried in a mass of data that includes vast numbers of low-traffic and irrelevant sites.]
Horse race: The key to understanding Pew’s numbers is that they incorporated “horse race” assessments into their measure of good and bad press. As their methodological explanation confirmed: “A story that is entirely about a poll showing Mitt Romney ahead of the Republican field — and that his lead is growing, would be a good example to put in the ‘positive’ category.” Careful researchers would avoid blurring such “horse race” statements into an overall measure of good press/bad press.
So Romney was winning primaries, Obama had no primary to win, and that counts as Romney getting better press? But this is the most important reason to put these charts through the shredder:
Letting the Computer Do Most of the Work: Determining the tone of news coverage is based on a technique called “content analysis,” where researchers develop categories and rules to measure the content of news stories. A particular content analysis scheme is deemed “reliable” (i.e., reproducible) if other researchers can take the same set of rules and get similar results.
In theory, this would seem ideal work for a computer, which has no political prejudices and cannot be numbed by going over countless stories on the same topic, day after day. But in practice, I have discovered, the key is to have analysts who understand the context of the stories they are reading or watching. Campaign news changes from day to day, new issues arise, and new buzzwords can become a kind of journalists’ shorthand, referring to some episode or incident that has a shared definition among political insiders.
Pew reports that its human researchers worked up models for the computer algorithm, feeding it examples of “positive” and “negative” stories until the computer matched the human researchers “in 97% of the cases analyzed.” But with such a vast number of stories, human researchers could have cross-checked only a tiny fraction of the coverage. Nearly all of the “anti-Obama” or “pro-Perry” stories were never reviewed by an actual researcher to check the context and meaning of the keywords the computer was trained to spot.
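To see why a keyword-trained tone classifier can stumble on context, consider a minimal illustrative sketch in Python. The training phrases, keywords, and scoring scheme below are invented for demonstration; this is not Pew’s actual software, only a toy version of the general approach the critique describes.

```python
# Toy keyword-based "tone" classifier (illustrative only; not Pew's algorithm).
# Words seen in "positive" training examples score +1, words seen in
# "negative" examples score -1; a story's tone is the sum of its word scores.

def train(examples):
    """Build a word-score table from (text, label) training pairs."""
    scores = {}
    for text, label in examples:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0) + (1 if label == "positive" else -1)
    return scores

def classify(scores, story):
    """Sum the scores of a story's words; unknown words count as zero."""
    total = sum(scores.get(w, 0) for w in story.lower().split())
    return "positive" if total > 0 else "negative" if total < 0 else "neutral"

# Invented training examples of the "horse race" kind described above.
training = [
    ("romney surges ahead in polls", "positive"),
    ("romney lead is growing", "positive"),
    ("obama approval falls in new poll", "negative"),
    ("obama campaign stumbles", "negative"),
]
model = train(training)

# A human reads this story as negative for Romney, but the classifier,
# keying on "romney" and "lead" from the positive examples, scores it positive.
story = "critics say romney lead is an illusion as rivals surge"
print(classify(model, story))  # -> positive
```

The sketch shows the structural problem: once the computer is trained to associate certain words with good press, it applies those associations without regard to context, which is exactly the gap that only a human reviewer reading each story could close.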