While much has been made on the Internet of "Fox viewers" being misinformed, the full data show something else: people who watched any news source (Fox, CNN, MSNBC) "2-3 Times Weekly" were remarkably similar in accuracy. Only the "Daily Viewers" of Fox and MSNBC deviated seriously from the median answers. And, studying the crosstabs, those "Daily Viewers" categories appear to represent a small number of respondents.
Overall, only 616 voters were surveyed, with a margin of error of +/- 3.9%. The researchers offer no data on how many respondents fell into each category. What if only 10 people claimed daily viewership? The data don't say; WPO supplies only the percentages within each self-selected category.
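To see why the unpublished subgroup sizes matter, here is a minimal sketch of the standard margin-of-error formula for a proportion. The full-sample figure reproduces the reported +/- 3.9%; the subgroup sizes below are hypothetical, since WPO does not publish them.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Full sample: 616 respondents -> roughly the reported +/- 3.9 points
print(f"n=616: +/- {margin_of_error(616):.1%}")

# Hypothetical subgroup sizes (not published in the report):
for n in (100, 50, 10):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
```

A "Daily Viewer" cell of 50 people carries a margin of error near 14 points, and a cell of 10 people near 31 points, which would swamp most of the differences the headlines rest on.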
For example, take the question reading:
1. Most economists who have studied it estimate that the stimulus legislation saved or created a few jobs or caused job losses.
The shares of "2-3 Times Weekly" viewers who got the answer wrong -- Fox (88%), CNN (85%), and MSNBC (87%) -- are not far apart. In fact, the differences fall within the margin of error.
The "Daily" viewers do differ substantially, but only between Fox (91%) and MSNBC (64%); the daily audience of Public Broadcasting (87% incorrect) is as misinformed as Fox's. I can't believe that. This makes MSNBC viewers the outliers, since 90% of network viewers and 86% of daily newspaper readers also answered "incorrectly" in the survey.
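The pattern above can be checked with a standard two-proportion z-test. The subgroup sizes of 100 are an assumption for illustration (again, WPO does not publish them); under that assumption, Fox vs. PBS daily viewers are statistically indistinguishable, while Fox vs. MSNBC is a real gap.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z statistic using the pooled standard error."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical subgroup sizes of 100 each (assumed, not published)
print(two_prop_z(0.91, 100, 0.87, 100))  # Fox vs PBS daily: |z| < 1.96
print(two_prop_z(0.91, 100, 0.64, 100))  # Fox vs MSNBC daily: |z| > 1.96
```

At the conventional 95% threshold (|z| > 1.96), only the Fox-MSNBC gap would register as significant under these assumed cell sizes.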
Obviously, this study has problems that need to be examined. But "Fox Viewers Misinformed" makes a better headline than "PBS Viewers Misinformed." At least within the margin of error, the daily Fox viewer isn't "worse" -- just as ignorant as most Americans.
What should really concern us: not one group, not one, was accurate more than 50% of the time.
Rhetorically, the "least informed" label feeds the biases of various audiences. And because the questions were not balanced -- most were ones that, I believe, dedicated "conservative voters" would also view through a specific bias -- we don't know how self-identified "liberal voters" might have done on other questions. For example, would MSNBC viewers know that Pres. Obama was the leading recipient of money from Wall St.? That Goldman Sachs gives nearly equal amounts to both parties, but that Democrats usually edge out GOP candidates? Would MSNBC viewers know that, generally, Democratic candidates outspent GOP candidates, and that SEIU gave as much money to candidates as American Crossroads (the "Karl Rove" PAC) did?
In other words, the questions matter, too. These questions were, overall, favorable to a particular point of view: the researchers seemed intent on proving ignorance on specific issues, and those issues "favored" a particular survey outcome.
A better approach would have been 20-30 questions drawn from a mix of political perspectives.