
Quarterly Journal of Political Science, 2015, 10: 519–578

Partisan Bias in Factual Beliefs about Politics

John G. Bullock,(1) Alan S. Gerber,(2) Seth J. Hill(3) and Gregory A. Huber(4)*

(1) Department of Government, University of Texas at Austin, 158 West 21st Street, Stop A1800, Austin, TX 78712, USA; [email protected]
(2) Department of Political Science, Institution for Social and Policy Studies, Yale University, 77 Prospect Street, PO Box 208209, New Haven, CT 06520-8209, USA; [email protected]
(3) Department of Political Science, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0521, USA; [email protected]
(4) Department of Political Science, Institution for Social and Policy Studies, Yale University, 77 Prospect Street, PO Box 208209, New Haven, CT 06520-8209; [email protected]

ABSTRACT

Partisanship seems to affect factual beliefs about politics. For example, Republicans are more likely than Democrats to say that the deficit rose during the Clinton administration; Democrats are more likely to say that inflation rose under Reagan. What remains unclear is whether such patterns reflect differing beliefs among partisans or instead reflect a desire to praise one party or criticize another. To shed light on this question, we present a model of survey response

* An earlier version of this paper was circulated as NBER Working Paper 19080 and was presented at Harvard, MIT, NYU, Princeton, Stanford, UCSD, UT-Austin, Yale, and the Annual Meetings of the American Political Science and Midwest Political Science Associations.

Online Appendix available from: http://dx.doi.org/10.1561/100.00014074_app
Supplementary Material available from: http://dx.doi.org/10.1561/100.00014074_supp
MS submitted 14 June 2014; final version received 11 May 2015
ISSN 1554-0626; DOI 10.1561/100.00014074
© 2015 J. G. Bullock, A. S. Gerber, S. J. Hill and G. A. Huber

in the presence of partisan cheerleading and payments for correct and "don't know" responses. We design two experiments based on the model's implications. The experiments show that small payments for correct and "don't know" answers sharply diminish the gap between Democrats and Republicans in responses to "partisan" factual questions. Our conclusion is that the apparent gulf in factual beliefs between members of different parties may be more illusory than real. The experiments also bolster and extend a major finding about political knowledge in America: we show (as others have) that Americans know little about politics, but we also show that they often recognize their own lack of knowledge.

A persistent pattern in American public opinion is the presence of large differences between Democrats and Republicans in statements of factual beliefs. Partisan divisions are expected for questions about political tastes, but they extend even to evaluations of economic trends during a president's tenure (Bartels, 2002, pp. 133–138). What do these differences mean? One view is that Democrats and Republicans see "separate realities" (Kull et al., 2004), with differences arising because of partisanship's effect as a "perceptual screen" in information acquisition and processing (e.g., Campbell et al., 1960; Gerber et al., 2010, esp. ch. 8). By this account, scholars and commentators are correct to take survey respondents' statements at face value (e.g., Bartels, 2002; Jerit and Barabas, 2012; Shapiro and Bloch-Elkon, 2008), because those statements reveal respondents' beliefs. Partisan differences in responses to questions about important facts therefore raise concerns about polarization in the mass electorate. Such differences also threaten defenses of democracy that are based on retrospective voting (Fiorina, 1981): voters may be unable to hold elected officials accountable for

* We thank seminar participants at those institutions, as well as Kevin Arceneaux, Matias Bargsted, Michael Cobb, David Doherty, Conor Dowling, Mo Fiorina, Matt Levendusky, Neil Malhotra, Markus Prior, Bob Shapiro, Gaurav Sood, and the editors and reviewers, for useful comments and advice.

their performance in office if even their views of economic performance are colored by their partisanship (see also Healy and Malhotra, 2009).

An alternative view is that survey responses are not entirely sincere. Instead, they may reflect the expressive value of making statements that portray one's party in a favorable light (Brennan and Lomasky, 1997; Hamlin and Jennings, 2011; see also Schuessler, 2000). Partisan divergence in surveys may therefore measure the joy of partisan "cheerleading" rather than sincere differences in beliefs about the truth. Furthermore, divergence in expressed survey responses may occur under two different conditions: when partisans are aware that their responses are inaccurate, or when they understand that they simply don't know the truth. In either of these cases, partisan differences in factual assessments would be of less concern than is suggested by prior work, because survey responses would not reveal actual beliefs about factual matters. Despite this possibility, almost no research has attempted to determine the extent to which partisan divergence in responses to factual questions reflects sincere differences in beliefs.

This paper reports results from two novel experiments designed to distinguish sincere from expressive partisan differences in responses to factual survey questions. We motivate our experiments with a model in which respondents value both partisan responding and incentives for correct and "don't know" responses. The model shows that incentives can reduce partisan divergence when expressive responding would otherwise mask shared (i.e., bipartisan) beliefs about factual matters. In both experiments, all subjects were asked factual questions, but some were given financial incentives to answer correctly.
We find that even small incentives reduce partisan divergence substantially — on average, by about 55% and 60% across the questions for which partisan gaps appear when subjects are not incentivized. Our model also reveals that incentives for correct responses may not deter cheerleading among those who recognize that they don’t know the correct response. Even when paid to answer correctly, those who are unsure expect to earn less for offering the response that they think is most likely to be correct (relative to those who are sure of the correct response), and so they are more likely to continue offering an expressive partisan response. In our second experiment, we therefore offer to pay some participants both for correct responses and a smaller amount for admitting that they do not know the correct response. We

find that large proportions of respondents choose "don't know" under these conditions. Furthermore, partisan gaps are even smaller in this condition — about 80% smaller than for unincentivized responses. This finding shows that partisan divergence in responses to these questions is driven by expressive behavior and by respondents understanding that they do not actually know the correct answers. To the best of our knowledge, this is the first analysis which demonstrates that people are aware of their own ignorance of political facts.[1]

These results speak to questions about the meaning of public opinion and the mechanisms through which partisanship affects important outcomes. Most importantly, they call into question the claim that partisan divergence in expressed beliefs about factual matters is cause for concern about voters' abilities to judge incumbent performance. To the extent that factual beliefs are determined by partisanship, paying partisans to answer correctly should not affect their responses to factual questions. But it does. We find that even modest payments substantially reduce the observed gaps between Democrats and Republicans, which suggests that Democrats and Republicans do not hold starkly different beliefs about many important facts. It also suggests that, when using survey data to understand why people make the political choices that they do, analysts should be cautious in interpreting correlations between factual assessments and those choices. Survey responses to factual questions may not accurately reflect beliefs, and the correlation between vote choice and factual assessments (of candidates or political conditions) observed in surveys may be in part artifactual.[2] Thus, even if partisanship is a crucial influence on votes and other political outcomes (Gerber et al., 2010), it may operate more through its effects on tastes than through its effects on perceptions of reality.
These results also affect our interpretation of partisan polarization in the mass electorate. Republicans and Democrats do hold different factual beliefs, but their differences are likely not as large as naïve analysis of survey data suggests. Just as people enjoy rooting for their

[1] In this regard, the most closely related work is Bishop et al. (1984) and Luskin and Bullock (2011).
[2] Our results confirm concerns in the literature on economic voting (e.g., Ansolabehere et al., 2013) that survey reports of economic conditions may be contaminated by expressive partisan responding.

favorite sports teams and arguing that their teams' players are superior, even when they are not, surveys give citizens an opportunity to cheer for their partisan teams (Green et al., 2002). Deep down, however, many individuals understand the true merits of different teams and players — or, at minimum, they understand that they don't know enough to support their expressive responding as correct. And while our experimental approach cannot be used to discern whether partisan divergence in attitudes is sincere, an implication of our work is that if respondents misstate their factual beliefs in surveys because of their partisan leanings, they may misstate their attitudes in surveys for the same reason. We return to this point in the discussion section.

Our work is also of significance for survey methodology. In particular, how should one interpret experiments which show that partisan cues increase partisan divisions in survey response? Such results are commonly taken to show that partisanship affects attitudes (e.g., Cohen, 2003). Our results raise the possibility, however, that partisan cues merely remind participants about the expressive utility that they gain from offering partisan-friendly survey responses. One implication is that studies in which partisan cues bring about partisan variation in survey response may not be showing that partisanship alters actual attitudes or beliefs. A key task for researchers is thus to understand when survey responses reflect real attitudes and when they reflect more expressive tendencies.

On the whole, then, understanding whether polarization in survey responses is real or artificial speaks to core concerns of subfields throughout political science, for scholars who rely on survey methods to study attitudes and behavior and for those who are interested in polarization of mass attitudes.
Finally, it speaks to political psychologists because it bears directly on the connection between partisan identity and perceptions of political reality.

1 Theory and Prior Evidence

Prior research documents partisan differences in expressed factual beliefs (e.g., Gaines et al., 2007; Jacobson, 2006; Jerit and Barabas, 2012), and some of it focuses on differences in evaluations of retrospective economic

conditions (e.g., Bartels, 2002, pp. 133–38; Conover et al., 1986, 1987).[3] Many of these differences arise because members of one party issue economic assessments that deviate starkly from objective conditions. For example, despite the large improvement in unemployment and inflation during Reagan's presidency, Bartels (2002) shows that, in 1988, Democrats were especially likely to report that unemployment and inflation had increased since 1980. This pattern was reversed in 2000, when Republicans were more likely to offer negative retrospective evaluations at the conclusion of the presidency of Democrat Clinton.[4]

How should we interpret these partisan gaps? Bartels presents one common view when he argues that partisans likely believe their divergent assessments: "Absent some complicated just-so story involving stark differences in the meaning of 'unemployment' and inflation. . . these large differences can only be interpreted as evidence of partisan biases in perceptions" (Bartels, 2002, pp. 136–137). An alternative view is that differences in survey responses are the result of a combination of motivations. Individuals may offer responses that are consistent with their partisanship not solely because they believe those responses, but also because doing so gives them the opportunity to support their "team" (e.g., Gerber et al., 2010; Green et al., 2002).

Many social scientists have wrestled with the problem of insincere survey responses (e.g., Berinsky, 2005). But they typically focus on responses to sensitive topics like race rather than on problems that may be caused by "expressive benefits" in survey response.[5] And the methods used to overcome problems associated with responses to sensitive topics — for example, "list experiments" (Kuklinski et al., 1997) — may not apply to the problem of eliciting sincere responses when people derive expressive benefits from answering insincerely.
[3] A related but distinct literature concerns partisan differences in responses to nonfactual questions (see Berinsky, 2015).
[4] Additional work examines conditions that can exacerbate apparent partisan gaps. Asking political questions prior to economic ones increases the correlation between partisanship and subjective economic evaluations (Lau et al., 1990; Palmer and Duch, 2001; Sears and Lau, 1983; Wilcox and Wlezien, 1993), and partisan gaps are larger when elections are more salient (Lavine et al., 2012, Chapter 5; see also Stroud, 2008). As we note earlier, what is unclear is how to interpret these patterns. Do circumstances that make partisanship more salient call relevant information to mind, or do they simply increase the expressive value of partisan responses?
[5] An exception to this characterization is the literature on economic voting discussed earlier.

Instead, scholars have long used incentives to elicit honest or rational responses. In a review of relevant experiments, Morton and Williams (2010, pp. 358–361) argue that incentives often reduce the size and frequency of decision-making errors. But almost all of the studies that they review are apolitical and do not involve tests of factual knowledge. Prior and Lupia (2008) do study the effect of financial incentives on responses to factual questions about politics, and they find that the effects are real but weak.[6] However, they do not examine the effects of incentives on partisan patterns in responding.

To date, only Prior (2007) and Prior et al. (2015) have examined the effects of incentives on partisan response patterns to factual questions about politics. Prior (2007) asked subjects 14 questions about politics; some were assigned at random to receive $1 for each correct answer. The results were mixed, but they suggest that $1 incentives can reduce party differences in responses to such questions.[7] Prior et al. (2015) present two experiments in which they urge people to answer correctly or provide relatively large financial incentives ($1 or $2 for each correct response). Both treatments reduce errors in answers to questions about the performance of the U.S. economy during the George W. Bush administration. Across the two experiments, financial incentives appear to reduce the rate of error by about 40%; simply urging people to answer correctly may be still more effective.

An important unanswered question from that work, however, is how respondents who do not know the correct answers behave in the presence and absence of incentives for correct responses. It may be, for example, that partisan responses are insincere, but that respondents continue to offer them when given incentives because they do not know which other answer might be correct.
If respondents could express their lack of knowledge about the truth, would partisan gaps be even smaller?

To address these questions, we present a model of survey response which incorporates the possibility that individuals (a) receive utility

[6] All subjects in the Prior and Lupia (2008) study were asked 14 factual questions about politics. Subjects in a control condition averaged 4.5 correct answers, while those who were paid $1 for each correct answer averaged 5.0 correct answers (Prior and Lupia, 2008, p. 175).
[7] In Prior (2007), incentives reduced partisan gaps in responses to four items. Results on a fifth item were mixed. Results were null for two other items. There was no partisan gap in the control group for three further items, and results for the remaining four items were not reported.

from offering partisan-tinged responses and (b) differ in their underlying knowledge of the truth. We use this model to understand the effect of incentives on a respondent's tendency to answer questions in a manner that reflects either her partisan affinity or her beliefs about the truth. We also show that our model can be used to understand the extent to which partisan differences arise because people are uncertain about the truth.

2 A Theory of Expressive Survey Response

To explore the role that insincere "cheerleading" plays in the partisan polarization of survey responses, and to motivate our experimental design, we present in the Appendix a formal model of responses to factual questions in the presence and absence of financial incentives. As in our experiments, incentives take two forms: respondents may be paid for offering the correct answer or for admitting that they don't know the correct answer. We present here a summary of results from the model.

The first results show that incentives for correct responses reduce partisan divergence under three conditions: (1) participants would give inaccurate, partisan-tinged responses in the absence of incentives; (2) the value of the incentive is greater than the value of partisan cheerleading; and (3) the same strong beliefs about the correct answer are held by members of both parties.[8] The intuition for this result is straightforward: giving a response that one does not believe but that portrays one's party in a favorable light is more costly when it entails giving up the chance to earn a reward for answering correctly. Therefore, under the three conditions listed earlier, a researcher can reduce partisan divergence and elicit responses more informative of people's true beliefs by offering incentives to answer correctly.
[8] If members of different parties have different underlying beliefs about the truth, there is no strong reason to expect that responses in the presence of incentives will be less divergent than in the absence of those incentives. Additionally, it may be that only members of one party change their responses in the presence of incentives, in which case divergence will be reduced only if members of that party move in the direction of the other party's responses.
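The payoff comparison that drives these results can be sketched in a few lines of Python. This is an illustrative simplification, not the authors' formal model: a respondent simply picks whichever response maximizes her expected payoff, where `cheer` stands for the expressive value of the party-friendly answer, and all payoff numbers are hypothetical.

```python
# Illustrative sketch only, not the authors' formal model: a respondent picks
# the response that maximizes expected payoff. `cheer` is the expressive value
# of the party-friendly answer; all payoff numbers below are hypothetical.

def best_response(beliefs, partisan_answer, cheer, pay_correct=0.0, pay_dk=None):
    """beliefs maps each answer to its subjective probability of being correct.
    Returns the utility-maximizing response ('dk' = "don't know", if offered)."""
    utility = {}
    for answer, p in beliefs.items():
        bonus = cheer if answer == partisan_answer else 0.0
        utility[answer] = p * pay_correct + bonus
    if pay_dk is not None:
        utility["dk"] = pay_dk  # a "don't know" payment is earned with certainty
    return max(utility, key=utility.get)

# A partisan who privately believes "increased" is correct (p = 0.9):
print(best_response({"increased": 0.9, "decreased": 0.1}, "decreased",
                    cheer=0.25))                   # -> 'decreased' (free cheerleading)
print(best_response({"increased": 0.9, "decreased": 0.1}, "decreased",
                    cheer=0.25, pay_correct=1.0))  # -> 'increased' (incentive wins)

# A fully uncertain partisan: pay-for-correct alone cannot move her off the
# expressive answer, but a smaller certain payment for "don't know" can.
unsure = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}  # partisan answer is "a"
print(best_response(unsure, "a", cheer=0.1, pay_correct=1.0))              # -> 'a'
print(best_response(unsure, "a", cheer=0.1, pay_correct=1.0, pay_dk=0.5))  # -> 'dk'
```

The design choice to make the "don't know" payment certain while the correct-answer payment is only probabilistic is what allows a modest payment to dominate once beliefs are diffuse.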

The third condition — if incentives for correct responses are to reduce partisan divergence, members of different parties must share the same belief about the truth — requires elaboration. This condition is an implication of our model, not an assumption that underpins it. There are surely cases in which members of different parties hold different beliefs about the truth. In these cases, paying them to answer truthfully will not cause their survey responses to converge. On the other hand, to the extent that payments for correct answers do cause partisans' survey responses to converge, we can infer that partisans' beliefs about the correct answers are more similar than they seem to be under normal survey conditions.

There is an alternative interpretation of partisan convergence when people are paid for correct answers, one that does not imply that they "know" the correct answers with much confidence. Instead, it suggests that partisan differences arise because of "congenial inference": when trying to answer a question under ordinary conditions, partisans are especially likely to call to mind those considerations that put their own party in a favorable light, and they infer the correct answer to the question at hand from this congenial set of considerations (e.g., Zaller, 1992, Chapter 5). But payment for correct answers heightens the desire to provide a correct answer. In turn, respondents who are paid for correct answers undertake a more even-handed (and perhaps more effortful) search of their memory for relevant considerations. They make more accurate inferences on the basis of this different set of considerations — even though they were not at all sure of the correct answer before the question was asked.
In this paper, we are agnostic about which mechanism better explains the effects of payment for correct answers, but both mechanisms reveal that conventional survey responses do not fully characterize individuals’ beliefs about political facts. In addition to identifying the conditions under which incentives promote partisan convergence, our model highlights a little-appreciated explanation for divergent factual responses: even when partisans are paid for correct responses, their answers may diverge because they are unsure of the correct response and therefore default to an expressive response. To see how uncertainty can increase partisan divergence, note that the expected value of an uncertain respondent’s best guess is discounted by her uncertainty. If she is sufficiently uncertain, the expected value of her best guess may be smaller than the expected

value of partisan cheerleading. At the extreme, if there are two answers to a question and she is completely uncertain about which response is correct, in expectation she earns the incentive for a correct response half the time for offering either response, and she therefore has no reason to deviate from her preferred partisan response. This will be true even if the incentives are very large.

In light of this ambiguity, we extend the model by incorporating incentives for admitting one's lack of knowledge. When respondents are paid for both correct and "don't know" answers, our analysis shows that the proportion of respondents choosing "don't know" is increasing in the proportion who (1) place low value on partisan cheerleading relative to the incentive for choosing "don't know," and (2) are so unsure of the correct answer that they are better off choosing "don't know" than any other option. This is so because one can earn the incentive for a "don't know" response with certainty (by choosing "don't know"), whereas the incentive for a correct response is earned only if the respondent chooses the response that is correct. Overall, incentives for "don't know" responses allow us to understand the proportion of partisan divergence that arises because respondents default to expressive responding when they are unsure of the correct answer.

Our model implies that an experiment in which subjects receive incentives for correct and "don't know" responses to factual questions can identify the presence of partisan cheerleading. We now describe two experiments that meet these conditions.

3 Experiment 1: Effects of Incentives for Correct Responses on Partisan Divergence

Our first experiment was fielded on the Cooperative Congressional Election Study in October 2008. CCES subjects are part of a nationally representative opt-in sample. In our experiment, 626 participants were randomly assigned to the control group (N = 312) or the treatment group (N = 314).
We restrict our analysis to the 419 participants who identified as either Democrats or Republicans.[9]

[9] In our analysis, Democrats are those who responded "Democrat" to the first question in the standard two-question measure of party identification. Republicans are those who responded "Republican." We discuss the behavior of partisan "leaners" later, and we present question wording for both experiments, along with further

We told control-group subjects that they would be asked questions about politics, that they would have 20 seconds to answer each question, and that their scores would not be shared with anyone. Treated subjects received the same instructions and were told that answering correctly would increase their chance of winning a prize:

    For each question that you answer correctly, your name will be entered in a drawing for a $200 Amazon.com gift certificate. For example, if you answer 10 questions correctly you will be entered 10 times. The average chance of winning is about 1 in 100, but if you answer many questions correctly, your chance of winning will be much higher.

After receiving their instructions, all subjects were asked the 12 factual questions shown in Table 1.[10] The first 10 items had closed (i.e., multiple-choice) response options and were similar to questions for which other research has found partisan differences. No "don't know" option was offered. Each question referred to a potentially salient partisan issue. The last two "placebo" questions were open-ended and required participants to enter numerical responses. We fielded the placebo questions, which were about obscure historical facts, to ascertain whether participants were using their allotted 20 seconds to look up answers using outside references. Using these questions, we find little evidence that participants "cheated": rates of correct responding were below 3% and statistically indistinguishable between the control and payment conditions.

This experiment allows us to understand whether some partisan divergence in responses to factual questions arises because of the expressive benefit of providing partisan responses. Specifically, we can learn about the role of expressive benefits by comparing partisan divergence in the treatment and control conditions.
If divergence is lower in the treatment group, it suggests that, for some respondents, our incentives are of greater value than partisan cheerleading. Given the modest size of the incentives offered, we view the estimates that we obtain

[9, continued] information about the construction of the sample, in the Online Appendix.
[10] We note that in the presence of ambiguity about which response is correct, incentives should have weaker effects. For our purposes, what matters is not which answer is correct, but simply that partisans of different stripes have common beliefs about which answer is most likely to be correct.

Table 1: Experiment 1: question wording and baseline partisan differences in scale scores.

1. Iraq casualties, 2008 versus 2007. "Was the number of U.S. soldiers killed in Iraq in the first half of 2008 lower, about the same, or higher than the number who were killed in the second half of 2007?" Options: Lower (0), About the same (0.5), Higher (1). Dem mean 0.416, Rep mean 0.177, difference 0.239, one-tailed p = 0.000, N = 207.

2. Bush inflation change. "Compared to January 2001, when President Bush first took office, has the level of inflation in the country increased, stayed the same, or decreased?" Options: Increased (1), Stayed about the same (0.5), Decreased (0). Dem mean 0.894, Rep mean 0.694, difference 0.201, one-tailed p = 0.000, N = 212.

3. Estimated Bush approval. "About what percentage of Americans approve of the way that George W. Bush is handling his job as President?" Options: 20% (1), 30% (0.75), 40% (0.5), 50% (0.25), 60% (0). Dem mean 0.766, Rep mean 0.598, difference 0.168, one-tailed p = 0.000, N = 216.

4. Bush unemployment change. "Compared to January 2001, when President Bush first took office, has the level of unemployment in the country increased, stayed the same, or decreased?" Options: Increased (1), Stayed about the same (0.5), Decreased (0). Dem mean 0.909, Rep mean 0.817, difference 0.092, one-tailed p = 0.002, N = 208.

5. Iraq total casualties. "About how many U.S. soldiers have been killed in Iraq since the invasion in March 2003?" Options: 4,000 (0), 8,000 (0.25), 12,000 (0.5), 16,000 (0.75), 20,000 (1). Dem mean 0.200, Rep mean 0.114, difference 0.087, one-tailed p = 0.035, N = 211.

6. Obama age. "How old is Barack Obama?" Options: 37 (0), 42 (0.33), 47 (0.66), 52 (1). Dem mean 0.794, Rep mean 0.724, difference 0.070, one-tailed p = 0.055, N = 215.

7. Estimated Bush approval among Republicans. "About what percentage of Republicans approve of the way that George W. Bush is handling his job as President?" Options: 40% (1), 50% (0.75), 60% (0.5), 70% (0.25), 80% (0). Dem mean 0.558, Rep mean 0.508, difference 0.050, one-tailed p = 0.013, N = 213.

8. McCain age. "How old is John McCain?" Options: 62 (0), 67 (0.33), 72 (0.66), 77 (1). Dem mean 0.681, Rep mean 0.637, difference 0.044, one-tailed p = 0.039, N = 210.

9. Afghanistan casualties, 2008 versus 2007. "Was the number of U.S. soldiers killed in Afghanistan in the first half of 2008 lower, about the same, or higher than the number who were killed in the second half of 2007?" Options: Lower (0), About the same (0.5), Higher (1). Dem mean 0.608, Rep mean 0.598, difference 0.010, one-tailed p = 0.430, N = 212.

10. Bush deficit change. "Compared to January 2001, when President Bush first took office, has the federal budget deficit in the country increased, stayed the same, or decreased?" Options: Increased (1), Stayed about the same (0.5), Decreased (0). Dem mean 0.938, Rep mean 0.944, difference -0.006, one-tailed p = 0.589, N = 208.

11. Placebo: Bangladeshi independence. "In what year did Bangladesh become independent of Pakistan?" Open-ended, in years; correct answer is 1971.

12. Placebo: price of gold in 1980. "What was the price of gold, in dollars per ounce, on January 18, 1980?" Open-ended, in dollars; correct answer is between $800 and $900.

Note: Questions are ordered by size of partisan gap in control-group responses, with placebo questions at the bottom. All means are control-group scale scores, coded from 0 to 1; 1 is the most Democratic response. Differences are Democratic minus Republican means. Placebo questions were open-ended and were recoded to range from 0 to 1. Source: 2008 CCES.

from treatment–control comparisons as lower bounds on the extent of expressive partisan responding in this experiment.

To measure partisan divergence, we create scale scores by coding responses to each question to range linearly from 0 to 1. These scores are the dependent variables in our analyses. The most Republican response to each question (either the largest or smallest response) is coded as 0; the most Democratic response is coded as 1. For example, when we ask about the change in unemployment under President Bush, the response "decreased" is coded as 0 because it portrays a Republican president most positively, "stayed about the same" is coded as 0.5, and "increased" is coded as 1 because it portrays the president most negatively. If partisans are answering in a manner consistent with their partisanship, Democrats should offer "larger" responses than Republicans.

Table 1 shows the average partisan difference in scale score, by question, for those in the control group. The questions in Table 1 are ordered by the size of these control-group partisan gaps. For 9 of the 10 questions, the gaps are consistent with our expectations about patterns of partisan responding.[11] Eight of the differences are significant at p < 0.10 (one-tailed). The gaps for these eight items vary substantially in size, with the largest gaps appearing for questions about casualties in Iraq and Bush's economic performance. Because our theory of expressive responding is about the effects of incentives on partisan differences, we focus on these eight items, that is, the items to which partisanship makes a difference under ordinary survey conditions. (In the Online Appendix, we analyze our data while including responses to all questions, including those for which we do not find partisan gaps.) What effect do incentives for correct responses have on observed partisan divergence?
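Before turning to that question, note that the scale-score coding is easy to make concrete. The sketch below mirrors the unemployment example from the text; the function and question names are ours, not the authors' code.

```python
# A minimal sketch of the 0-to-1 scale coding, mirroring the unemployment
# example in the text; the helper names are illustrative, not the authors' code.
SCALE = {
    "bush_unemployment": {"decreased": 0.0, "stayed about the same": 0.5,
                          "increased": 1.0},
}

def scale_score(question, response):
    """0 = most Republican-friendly response, 1 = most Democratic-friendly."""
    return SCALE[question][response.lower()]

def partisan_gap(dem_responses, rep_responses, question):
    """Average Democratic scale score minus average Republican scale score."""
    dem = [scale_score(question, r) for r in dem_responses]
    rep = [scale_score(question, r) for r in rep_responses]
    return sum(dem) / len(dem) - sum(rep) / len(rep)

gap = partisan_gap(["increased", "increased", "stayed about the same"],
                   ["increased", "decreased"], "bush_unemployment")
print(round(gap, 3))  # -> 0.333  (0.833 - 0.500)
```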
To measure the effects, we estimate a model in which we predict scale score R_ij for individual i and question j:

R_ij = b_0 + b_1 Democrat_i + b_2 PayCorrect_i + b_3 (Democrat_i × PayCorrect_i) + Question_j + e_ij,

where Democrat equals 1 for Democratic participants and 0 for Republicans, PayCorrect equals 1 for those assigned to the incentive

11 The exception is the question about the change in the deficit under George W. Bush. For both Democrats and Republicans, 92% of respondents correctly reported that the deficit had increased.

condition, and Question_j is a vector of question-specific fixed effects. The coefficient b_1 is therefore the average party difference in scale scores in the control condition, while b_1 + b_3 is the average party difference in the incentive condition. Prior research suggests b_1 > 0, while our theoretical model predicts that b_3 will be negative if partisans offer partisan-tinged responses in the absence of incentives, share common and sufficiently strong beliefs about the truth, and give less weight to partisan responding than to the expected value of the incentive.

OLS estimates, with standard errors clustered at the respondent level, appear in Table 2. Pooling across the eight questions for which we observe statistically significant partisan gaps in the control condition, column (1) provides estimates of the average effect of incentives on responses. The 0.118 (p < 0.001) coefficient for Democrat (b_1) is the average gap between Democrats and Republicans in the control condition. The −0.065 (p < 0.001) coefficient for Democrat × PayCorrect (b_3) means that this gap is reduced to 0.053 (0.118 − 0.065), or by 55%, when incentives are offered. In column (2), we add demographic controls; the results are nearly unchanged.12

In Table A.1 of the Appendix, we repeat the analysis for each question individually. The estimate for b_3 is negative in all eight cases. While most of these individual-question estimates are not statistically significant — perhaps because the impact of sampling variability is heightened when we examine individual questions — the estimates are large, accounting for between 13% and 100% of the partisan gap between Democrats and Republicans. These estimates are especially noteworthy for the questions about the most salient issues in the 2008 campaign: the Iraq War and Bush's performance on unemployment. On these matters, incentives reduced partisan gaps by between 33% and 73%.
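The pooled arithmetic can be checked directly from the reported coefficients. This is a quick illustrative computation, not part of the authors' replication code:

```python
# Implied partisan gaps from the pooled OLS estimates reported in the text.
b1 = 0.118   # Democrat coefficient: control-condition party gap
b3 = -0.065  # Democrat x PayCorrect interaction

control_gap = b1
incentive_gap = b1 + b3          # gap when incentives are offered
reduction = -b3 / b1             # share of the control gap eliminated

print(round(incentive_gap, 3))   # 0.053
print(round(100 * reduction))    # 55 (percent)
```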
Importantly, these questions about war and unemployment were not salient only in 2008: they speak to the issues that political scientists often use when they link objective conditions to election outcomes (e.g., Hibbs, 2000).

12 We have also repeated our analysis excluding the Bush approval item, which is the item for which we find our largest estimate of b_3. In this case, we continue to find a negative and statistically significant coefficient for b_3 in the pooled analysis (−0.06, p < 0.01). Our analysis excludes cases in which participants didn't provide a response, which occurs 3% of the time in both treatment and control conditions. Replacing nonresponses to each question with party averages for each question produces substantively similar results.

Table 2: Experiment 1: effect of payment for correct responses on partisan differences in scale scores.

                                                    (1)          (2)          (3)
Democrat (b_1)                                  0.118***     0.105***     0.082***
                                                 [0.015]      [0.016]      [0.022]
Democrat × Political interest                                              0.059**
                                                                           [0.030]
Payment for correct response × Democrat (b_3)  −0.065***    −0.059***      −0.057
                                                 [0.022]      [0.022]      [0.037]
Payment for correct response × Democrat ×                                  −0.023
  Political interest                                                       [0.046]
Payment for correct response                     0.038**       0.031*       0.045
                                                 [0.016]      [0.016]      [0.029]
Payment for correct response ×                                             −0.005
  Political interest                                                       [0.035]
Knowledge (0–1)                                                0.013
                                                             [0.015]
White                                                          0.017
                                                             [0.024]
Hispanic                                                       0.040
                                                             [0.028]
Other race                                                    0.051*
                                                             [0.030]
Female                                                         0.016
                                                             [0.012]
Age (in years)                                                 0.001
                                                             [0.002]
Age²/100                                                      −0.001
                                                             [0.002]
Region: Northeast                                           0.043***
                                                             [0.017]
Region: Midwest                                             0.042***
                                                             [0.016]
Region: South                                                  0.014
                                                             [0.014]
Income (1 = <$10,000; 14 = >$150,000;                        0.005**
  15 = RF/Missing)                                           [0.002]
Income missing                                               −0.046*
                                                             [0.024]
Education (1 = no high school;                                 0.000
  6 = graduate degree)                                       [0.006]
Education: No high school                                      0.006
                                                             [0.024]
Education: Some college                                        0.019
                                                             [0.014]
Education: 2-year college                                      0.032
                                                             [0.026]
Education: 4-year college                                     −0.003
                                                             [0.019]
Married or in a domestic partnership                          −0.007
                                                             [0.013]
Religious attendance (1–6)                                    −0.002
                                                             [0.004]
Political interest (0,1)                                                   −0.034
                                                                           [0.021]
Constant                                        0.239***     0.160***     0.261***
                                                 [0.021]      [0.059]      [0.024]
Observations                                        3321         3305         3299
R²                                                 0.398        0.407        0.400

Note: The dependent variable is the mean scale score for the eight questions on which we observed control-group partisan gaps of p < 0.10. It ranges from 0 to 1. The analysis includes only Democrats and Republicans from the control and pay-for-correct-response conditions. Cell entries are OLS coefficients with robust standard errors, clustered by respondent. Question fixed effects not reported. *Significant at 10%; **significant at 5%; ***significant at 1% (two-tailed tests). Source: 2008 CCES.

These results show that even modest incentives can substantially reduce partisan divergence in factual assessments. For example, in this experiment, participants are told that answering correctly will improve their chances of earning a $200 gift certificate, and that the baseline chance of winning is around 1 out of 100. If they estimate that answering all questions correctly would double their chances of winning this prize, the expected value of answering any given question correctly is approximately 17 cents.13 In turn, the finding that incentives reduced partisan gaps by more than 50% means that more than half of the party gap may be generated by participants for whom partisan responding to any given question is worth less than 17 cents.

Of course, the effects of incentives are unlikely to be equal across all of the people in our data set. We focus on two characteristics across which variation might be expected: political interest and strength of partisanship. So far as interest is concerned, partisans who are most interested in politics may be most likely to engage in partisan cheerleading under ordinary survey conditions. In this case, they may be more affected than low-interest respondents by incentives for correct responses. Another possibility, however, is that highly interested partisans are most likely to sincerely hold different factual beliefs about politics (e.g., Abramowitz and Saunders, 2008; Taber and Lodge, 2000). If they do, they may be less affected by incentives. The estimates presented in column (3) of Table 2 show that both accounts are informative. In the control group, partisan gaps are larger among high-interest respondents, that is, those who report being "very much interested" in politics and current events. The average partisan gap is 0.14 for high-interest respondents and 0.08 for all others (whom we label "low-interest respondents").
The treatment reduces partisan gaps more for high- than for low-interest respondents — but only to an insignificant extent (−0.08 versus −0.06),

13 Suppose that respondents believe that (a) they will answer 6 of our 12 questions correctly if they simply respond in a partisan manner, and (b) answering 6 questions correctly will give them a 1-in-100 chance of winning $200. If they also believe that answering all 12 questions correctly will double their chances to 2 in 100, then the expected value of answering all 12 questions correctly, relative to the "baseline" of answering 6 correctly, is [($200 × 2/100) − ($200 × 1/100)]/12 questions = $0.167 per question. These calculations are speculative, because we did not verify how subjects interpreted the instructions. In our second experiment, the calculations are more straightforward, because subjects were given specific rewards on a question-by-question basis rather than entries in a lottery.

and high-interest respondents in the treatment group remain more polarized than their low-interest counterparts. (The treatment-group partisan gaps are 0.06 for high-interest respondents and 0.03 for low-interest respondents.) Thus, highly interested people are initially more polarized, and their slightly greater responsiveness to incentives is not enough to overcome their initially greater polarization. Political interest is associated with polarization, but it does not significantly moderate the effects of incentives.14

The analyses that we report earlier exclude partisan "leaners," who may identify with a party less strongly than other partisans. In the Online Appendix, we present parallel analyses that include leaners. The results are similar: partisan leaners appear to behave like those who identify more strongly with the major American political parties.

Treatment-effect heterogeneity aside, the main finding of Experiment 1 is that small incentives for correct answers reduce partisan gaps in responses to factual questions by about 55%. Of course, Experiment 1 cannot tell us why 45% of the partisan gap remains. Following our model, the people responsible for this gap may sincerely disagree about which response is correct. Or they may agree about the correct response but value partisan cheerleading more than giving a correct answer. Or they may be so uncertain about which response is correct that incentives for correct responses cannot offset the expressive value of partisan responding. To evaluate these explanations, we turn to our second experiment.

4 Experiment 2: Effects of Incentives for Correct and "Don't Know" Responses on Partisan Divergence

We fielded our second experiment in 2012 using subjects recruited from Amazon.com's Mechanical Turk marketplace (Berinsky et al., 2012).
Subjects were required to pass a two-question attention screener and were then randomly assigned to a control group ( N = 156) or to one 14 Sixty-five percent of our CCES subjects report being “very much interested” in politics and current events. By contrast, the corresponding percentage among partisans in the 2008 ANES is 38%. That said, the overrepresentation of the interested in the 2008 CCES does not seem to affect the results. See the Online Appendix for a discussion of this point.

of three treatment groups,15 two of which we examine here. In the first treatment group, participants were paid for each correct response (N = 534). In the second treatment group, participants were paid for each correct response and each "don't know" response (N = 660). Later, we restrict our analysis to the 795 individuals in these three groups who identified as either Democrats or Republicans.16

There are two major differences between this experiment and Experiment 1. First, and of greatest importance theoretically, we introduce a new condition here, in which we offer subjects a "don't know" response option and incentives for both correct and "don't know" responses. Therefore, unlike Experiment 1, Experiment 2 permits us to assess the extent to which partisan divergence that persists in the face of incentives for correct responses reflects self-aware ignorance, rather than partisan cheerleading or sincere differences in beliefs. Second, in both treatment conditions, we pay subjects for each correct response (instead of entering them into a lottery, as in Experiment 1), and we vary the amount offered for correct responses across participants. In the treatment that includes payment for "don't know" responses, we also vary the amount offered for that response across participants. These randomizations allow us to assess the degree to which partisan divergence is affected by the size of incentives.17

As before, we gave subjects 20 seconds to answer each question to limit opportunities for consultation of outside information sources. In all conditions, participants were initially asked five questions that were selected at random from a larger list that we describe later. All questions had a closed response format without a "don't know" option. Subjects

15 In the third treatment, we paid participants a flat fee to answer questions post-treatment, just as we did in the control group.
However, in this condition, we also allowed respondents to offer “don’t know” answers. 14.8% of responses in this condition were “don’t know.” 16 We fielded a one-item replication of this experiment on the 2012 CCES. The item was an economic retrospection item similar to those that have been used in the past to document partisan divergence (e.g., Bartels, 2002). The results were similar. See the Online Appendix for a discussion. 17 As we discuss in the online appendix, one additional difference is that we used a graphical input device — a “slider” — to gather responses for this experiment. The advantage of this input device is that it allows subjects to provide responses continuously across the entire range of possible responses instead of requiring them to select one response from a small set of predefined options.

then received instructions that indicated how they would be paid for answers to the subsequent questions. They were then asked seven more questions: two new questions followed by the same five questions that they had previously been asked. (See the Online Appendix for details.) This design feature addresses one potential objection to our analysis of Experiment 1, which is that we use the control group in that experiment both to identify questions for which party gaps arise and as a baseline against which to evaluate the treatment group. In this experiment, by contrast, we use pre-treatment responses from all subjects to identify items for which partisan divergence arises, and we then compare post-assignment responses across treatment and control conditions.18

In the control condition, participants were paid a flat $0.50 bonus to answer those seven post-treatment questions. In the pay-for-correct (PC) condition, participants were informed that they would be paid for each correct response. The amount offered for each correct response was randomly assigned to be $0.10 (at probability p = 0.25), $0.25 (p = 0.25), $0.50 (p = 0.25), $0.75 (p = 0.15), or $1.00 (p = 0.10). (These amounts varied only across subjects, not within subjects across questions.) Finally, in the pay-for-correct-and-"don't know" (PCDK) condition, participants were again informed that they would be paid for each correct response, and the amount offered for each correct response was assigned as in the prior treatment. Participants in this condition were also given "don't know" response options, and if they selected "don't know," they were randomly assigned to receive a fraction of the amount offered for a correct response: 20% of the payment for a correct response (p = 1/3), 25% (p = 1/3), or 33% (p = 1/3).
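The across-subject randomization described above might be implemented as in the following sketch. This is an assumed illustration, not the authors' code; variable names and the use of NumPy are ours.

```python
import numpy as np

# Sketch of the across-subject randomization described in the text
# (assumed implementation, not the authors' code). Correct-answer payments
# and, in the PCDK condition, the "don't know" fraction are drawn once
# per subject and held fixed across that subject's questions.

rng = np.random.default_rng(0)

pay_levels = [0.10, 0.25, 0.50, 0.75, 1.00]   # dollars per correct answer
pay_probs = [0.25, 0.25, 0.25, 0.15, 0.10]    # assignment probabilities
dk_fractions = [0.20, 0.25, 1 / 3]            # each drawn with probability 1/3

n_subjects = 660  # size of the PCDK condition
pay_correct = rng.choice(pay_levels, size=n_subjects, p=pay_probs)
dk_fraction = rng.choice(dk_fractions, size=n_subjects)
pay_dk = pay_correct * dk_fraction  # payment for a "don't know" response

# A "don't know" answer never pays more than a third of the largest
# correct-answer payment ($1.00).
assert pay_dk.max() <= 1 / 3
```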
We list the 12 questions that we fielded in this experiment in Table 3, which also shows the correct response and the range of the response options that we offered. The correct responses varied across the entire range of potential answers: they were not concentrated at either end of the scale or in the middle. The effects of incentives therefore cannot be attributed to a tendency among treated subjects to offer middle-of-the-scale responses. The direction of partisan responding also

18 In the Online Appendix, we also show that if we leverage this pre–post design by conducting a within-person analysis, we find results similar to those that we obtain when we focus only on post-assignment comparisons across conditions.

Table 3: Experiment 2: question wording and baseline partisan differences in scale scores.

Obama unemployment. "From January 2009, when President Obama first took office, to February 2012, how had the unemployment rate in the country changed?" Response line: −2 (unemployment decreased) to 4% (unemployment increased). Correct response: increased by 0.5%. Mean pre-treatment response: Democrats 0.552, Republicans 0.378; difference 0.174; p = 0.000 (one-tailed); N = 389.

Bush II unemployment. "From January 2001, when President Bush first took office, to January 2009, when President Bush left office, how had the unemployment rate in the country changed?" Response line: −2 (unemployment decreased) to 4% (unemployment increased). Correct response: increased by 3.6%. Democrats 0.715, Republicans 0.583; difference 0.132; p = 0.000; N = 383.

Defense spending. "For every dollar the federal government spent in fiscal year 2011, about how much went to the Department of Defense (US Military)?" Response line: 3–27 cents. Correct response: 19.4 cents. Democrats 0.731, Republicans 0.631; difference 0.101; p = 0.000; N = 355.

Obama vote in 2008. "In the 2008 Presidential Election, Barack Obama defeated his Republican challenger John McCain. In the nation as a whole, of all the votes cast for Obama and McCain, what percentage went to Obama?" Response line: 50–62%. Correct response: 53.70%. Democrats 0.544, Republicans 0.444; difference 0.100; p = 0.001; N = 366.

Medicaid spending. "Medicaid is a jointly funded, Federal-State health insurance program for low-income and needy people. For every dollar the federal government spent in fiscal year 2011, about how much went to Medicaid?" Response line: 3–27 cents. Correct response: 7.5 cents. Democrats 0.430, Republicans 0.344; difference 0.085; p = 0.013; N = 343.

Iraq deaths: black percent. "Approximately 12–13% of the US population is Black. What percentage of US Soldiers killed in Iraq since the invasion in 2003 are Black?" Response line: 9–21%. Correct response: 9.90%. Democrats 0.577, Republicans 0.502; difference 0.075; p = 0.006; N = 373.

Global warming. "According to NASA, by how much did the annual average global temperature, in degrees Fahrenheit, differ in 2010 from the average annual global temperature between 1951 and 1980?" Response line: −1 (temperatures cooler) to 2 (temperatures warmer). Correct response: increased by 1.1 degrees. Democrats 0.391, Republicans 0.324; difference 0.068; p = 0.027; N = 349.

TARP: percent paid back. "The Treasury Department initiated TARP (the first bailout) during the financial crisis of 2008. TARP involved loans to banks, insurance companies, and auto companies. Of the $414 billion spent, what percentage had been repaid, as of March 15, 2012?" Response line: (less repaid) to 100 (more repaid). Correct response: 69.56%. Democrats 0.685, Republicans 0.640; difference 0.045; p = 0.013; N = 382.

Iraq deaths. "About how many U.S. soldiers were killed in Iraq between the invasion in 2003 and the withdrawal of troops in December 2011?" Response line: 1000–7000. Correct response: 4,486. Democrats 0.549, Republicans 0.504; difference 0.044; p = 0.095; N = 360.

Debt service. "The Treasury Department finances U.S. Government debt by selling bonds and other financial products. For every dollar the federal government spent in fiscal year 2011, about how much went to pay interest on those Treasury securities?" Response line: 3–27 cents. Correct response: 6.2 cents. Democrats 0.501, Republicans 0.458; difference 0.043; p = 0.072; N = 382.

Foreign-born population. "According to the Census Bureau, in 2010 what percentage of the total population of the United States was born outside of the United States (foreign-born)?" Response line: 1–100%. Correct response: 12.92%. Democrats 0.785, Republicans 0.772; difference 0.013; p = 0.239; N = 388.

Placebo: Mantle home runs 1961. "In 1961, Roger Maris broke Babe Ruth's record for most home runs hit in a major league baseball season. He hit 61 home runs that year. How many home runs did his Yankees teammate Mickey Mantle hit that year?" Response line: 36–60. Correct response: 54. Democrats 0.339, Republicans 0.319; difference: N/A.

Note: Questions are ordered by size of partisan gap in pre-treatment responses, with the placebo question at the bottom. All responses are scaled from 0 to 1; 1 is the most Democratic response. Means are pre-treatment responses; differences are Democrats − Republicans, with one-tailed p-values. Source: Mechanical Turk, March–April 2012.

varied: sometimes, responses at the higher end of the scale favored the Democratic Party; sometimes, they favored the Republican Party. As before, we fielded a placebo question to assess whether participants were consulting outside references, and we found little evidence of this behavior.19 (See the Online Appendix.)

As with Experiment 1, we recoded all responses to range from 0 to 1, with 0 corresponding to the response that portrayed Republicans most favorably and 1 corresponding to the response that portrayed Democrats most favorably.20 Table 3 reports, for each non-placebo question, the observed pre-treatment difference in mean scale scores between Democratic and Republican participants. (Recall that each participant was asked five pre-treatment questions.) We find statistically significant (p < 0.10, one-tailed) partisan gaps for 10 of the 11 questions, with the largest gaps for questions about unemployment under Bush and Obama, and the smallest gaps for a question about the proportion of the population that is foreign-born. Our subsequent analysis is restricted to these 10 questions, that is, the questions to which partisanship makes a difference under ordinary survey conditions.21 (Including all items produces similar results; see the Appendix.)

19 In this experiment, subjects were explicitly asked, after they had completed the entire experiment, whether they had consulted any outside resources for an answer. (We told them that their pay would be unaffected by their answers to this question.) In the control condition, 1% of respondents reported consulting an outside reference, compared to 4% who reported doing so when paid $1.00 for a correct response. In the Online Appendix, we show that excluding all responses from any respondent who reported looking up the answer to any question produces highly similar results.

20 We coded one end of the (continuous) input range at 0 and the other end at 1.
Empirically, subjects use the entire scale range for all 10 questions. Our scaling implies that identical movements on the scale response range (e.g., 1 additional point of unemployment) are equivalent across the entire scale range. 21 In pooled models, we assume movements across the scale range are on average the same across questions. As the units and endpoints of each question are different, this is a simplification for ease of presentation. While this is not a necessary assumption for our data analysis, we do not have strong ex ante theoretical reasons for presuming a different functional relationship for each question. We present a question-by-question analysis, which does not use this approximation, in Table A.2 of the Appendix.
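Under the coding rule described above (1 = most Democratic response), recoding a continuous slider response might look like the sketch below. This is our illustration, not the authors' code; the direction reversal for the Obama-unemployment example follows the coding logic laid out for Experiment 1 (saying unemployment rose under a Democratic president is the Republican-direction response).

```python
# Sketch of the 0-1 recoding for continuous slider responses (illustrative).
# `lo` and `hi` are the endpoints of the response line; `dem_high` says
# whether the high end of the line is the most Democratic response.

def rescale(response: float, lo: float, hi: float, dem_high: bool) -> float:
    score = (response - lo) / (hi - lo)
    return score if dem_high else 1.0 - score

# Obama-unemployment item: the response line runs from -2 to 4 points of
# unemployment change, and "increased" is the Republican direction, so the
# scale is reversed.
print(rescale(0.5, lo=-2, hi=4, dem_high=False))  # scale score of the correct answer
```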

4.1 The Effect of Incentives for Correct and "Don't Know" Responses

We begin by reporting the effect of the treatments on the frequency of selecting "don't know." Our model suggests that the rate at which participants select "don't know" when offered a payment for doing so indicates the degree to which they understand that they don't know the correct responses. In particular, if participants are sufficiently uncertain about the correct response and preferences for expressive partisan responding are not too large, then choosing "don't know" when paid to do so will yield greater expected utility than either expressive or sincere responses.

Pooling across the 10 questions for which we found pre-treatment partisan gaps, we find that 48% of responses in the PCDK condition are "don't know." That is, nearly half of participants forgo a response that would allow them to support their party or give them a chance to earn the larger payment that we offered for a correct response. Recall that for "don't know" responses, participants were randomly assigned to receive 20%, 25%, or 33% of the payment that they received for correct responses. Across these conditions, "don't know" responses were given 46%, 47%, and 50% of the time, respectively. These percentages are ordered as the theoretical model predicts, but only the difference between the 20% and 33% conditions approaches statistical significance (p < 0.07, one-tailed).22

This pattern — frequent "don't know" responses when subjects are paid to give that response, even when they are also offered more for correct responses — implies that many participants are so uncertain about the correct answers that they expect to earn more by selecting "don't know." In this experiment, uniformly distributed blind guesses will be correct about 17% of the time.
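The guessing-versus-"don't know" tradeoff implied here can be sketched as a simple expected-payoff comparison. This is an illustration of the intuition, not the authors' formal model; the function and parameter names are ours.

```python
# Expected payoff from guessing vs. answering "don't know," for a subject
# who believes their guess is correct with probability p_correct.
# Illustrative sketch of the logic in the text, not the formal model.

def best_response(pay_correct: float, dk_fraction: float, p_correct: float) -> str:
    guess_ev = p_correct * pay_correct   # expected value of guessing
    dk_ev = dk_fraction * pay_correct    # sure payment for "don't know"
    return "don't know" if dk_ev > guess_ev else "guess"

# A blind guess is right about 17% of the time in this experiment, so even
# the smallest DK payment (20% of the correct-answer amount) beats guessing:
print(best_response(pay_correct=0.50, dk_fraction=0.20, p_correct=0.17))  # don't know
print(best_response(pay_correct=0.50, dk_fraction=0.20, p_correct=0.30))  # guess
```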
Subjects who are completely unsure of the correct answers can therefore receive, in expectation, 17%

22 One concern is that respondents may choose "don't know" simply because it allows them to avoid thinking about the question altogether. In footnote 15, we show that when offered a "don't know" option without payment, only 15% of responses were "don't know," a much lower rate than in this condition. Of note, as our model shows, choosing "don't know" when also offered a payment for a correct response is optimal only if the respondent is uncertain enough about the correct answer that it makes sense to give up the chance to guess and potentially earn a much larger amount.

of the payment that we offer for correct answers just by guessing blindly. Yet, when we paid subjects just 20% of the correct-answer payment for "don't know" responses, 46% chose to say "don't know" rather than to guess. We therefore infer that many respondents are highly unsure of which response is correct and give low weight to partisan responding.

As in the previous section, we study the effect of the treatments on party polarization by examining whether post-treatment partisan gaps differ between the control and treatment conditions. Our analysis initially takes the following form:

R_ij = b_0 + b_1 Democrat_i + b_2 PayCorrect_i + b_3 PayCorrectDK_i + b_4 (PayCorrect_i × Democrat_i) + b_5 (PayCorrectDK_i × Democrat_i) + Question_j + e_ij,

where Democrat = 1 for Democratic participants and 0 for Republicans, PayCorrect = 1 for those assigned to the PC condition, PayCorrectDK = 1 for those assigned to the PCDK condition, and Question is a vector of question-specific fixed effects.23 In this specification, b_1 is the amount of partisan divergence in the control condition, while b_1 + b_4 is the gap in the PC condition, and b_1 + b_5 is the gap in the PCDK condition.24 Our model predicts that b_1 > 0, b_4 < 0, and b_5 < 0. That is, both treatments will reduce partisan divergence relative to the control condition. Additionally, our theoretical model suggests that some

23 We have multiple observations from the same respondent, which is why we cluster our standard errors by respondent. To test whether this clustering is sufficient to account for the correlated nature of multiple responses by the same respondent, we have also collapsed the data (to one observation per respondent) and estimated an otherwise identical specification. The results are highly similar, and we present them in the Online Appendix.
24 To incorporate “don’t know” responses into our analysis of partisan divergence, we must decide where to place those responses on the 0–1 scale that we use to analyze other responses. Because participants who admit that they don’t know thereby forgo the opportunity to express support for their party, we treat these responses as being non-polarized. That is, we assign both Democrats and Republicans who choose “don’t know” to the same position on the 0–1 scale. Specifically, we assign “don’t know” responses for a given question to the average pre-treatment response that participants offered to that question. In practice, the specific value makes little difference to our analyses; the important point is that Democrats and Republicans are assigned to the same position on the scale if they say “don’t know.” If everyone chose “don’t know,” we would therefore find no differences between the parties.

partisans who will not respond to incentives for correct responses will nonetheless respond to incentives for "don't know" responses. For this reason, we also predict b_5 < b_4 (a larger reduction of partisan differences in the PCDK condition than in the PC condition).

The first column of Table 4 reports OLS estimates of the equation. (Parallel analysis for each individual question appears in Table A.2 of the Appendix.) The estimate of b_1 is 0.145 (p < 0.01), which means that, on average, control-group Democrats and Republicans differ by about 15% of the range of the scale. The estimate of b_4 is −0.087 (p < 0.01), so the total partisan gap in the PC condition is 0.058 (0.145 − 0.087). In other words, only 40% of the previously observed party gap remains when participants are paid small amounts for correct responses. Despite the differences between Experiments 1 and 2 in subject pools, questions, and other respects, this effect is similar to the effect that we find in Experiment 1. And like Experiment 1, this experiment shows that analyses of ordinary survey responses are likely to overstate the true extent of partisan polarization.25

This experiment also allows us to estimate the effect of incentives for "don't know" responses on polarization. The estimate of b_5 is −0.117 (p < 0.01), so the total partisan gap in the PCDK condition is 0.028 (0.145 − 0.117), or 80% smaller than the control-condition gap and about 50% smaller than the PC-condition gap. (These differences are significant at p < 0.01 and p < 0.05, respectively.) In practical terms, whereas the control-group difference between Democrats and Republicans was about 15% of the range of the scale, it shrinks to 3% of the range when we offer incentives for both correct and "don't know" responses.

In column (2), we estimate a Tobit specification because our response scales were bounded and unable to accommodate extreme responses.
The estimates are similar to those shown in column (1). Indications of statistical significance do not change. In column (3), we leverage the variation in incentive size to assess more fully the effect of differences in correct and “don’t know” payments 25 As in Experiment 1, question-by-question analysis yields less precise estimates and reveals heterogeneity across topics. Incentives have their largest effects on responses to questions about unemployment under Obama and the racial composition of Iraq War casualties. They also have large effects on basic retrospective assessments, reducing average partisan divergence by 41% and 72% in responses to questions about unemployment under Bush and Obama, respectively. (See Table A.2.)

Table 4: Experiment 2: effect of payment for correct responses on partisan differences in scale scores.

                                                      (1) OLS      (2) Tobit    (3) OLS
Democrat (b_1)                                        0.145***     0.152***     0.145***
                                                      [0.028]      [0.029]      [0.028]
Payment for correct response × Democrat (b_4)         −0.087***    −0.091***
                                                      [0.030]      [0.032]
Payment for correct response and DK
  × Democrat (b_5)                                    −0.117***    −0.123***
                                                      [0.029]      [0.030]
Payment for correct response                          0.018        0.018
                                                      [0.025]      [0.026]
Payment for correct response and DK                   0.049**      0.052**
                                                      [0.024]      [0.025]
Amount correct = $0.10 × Dem.                                                   −0.082**
                                                                                [0.033]
Amount correct = $0.25 × Dem.                                                   −0.092***
                                                                                [0.033]
Amount correct = $0.50 × Dem.                                                   −0.096***
                                                                                [0.033]
Amount correct = $0.75 × Dem.                                                   −0.061*
                                                                                [0.036]
Amount correct = $1.00 × Dem.                                                   −0.116***
                                                                                [0.036]
(Proportional payment for DK = 0.20) × Democrat                                 −0.031*
                                                                                [0.018]
(Proportional payment for DK = 0.25) × Democrat                                 −0.016
                                                                                [0.020]
(Proportional payment for DK = 0.33) × Democrat                                 −0.041**
                                                                                [0.020]
Amount correct = $0.10                                                          0.010
                                                                                [0.027]
Amount correct = $0.25                                                          0.028
                                                                                [0.027]
Amount correct = $0.50                                                          0.020
                                                                                [0.027]
Amount correct = $0.75                                                          0.005
                                                                                [0.029]
Amount correct = $1.00                                                          0.042
                                                                                [0.029]
Proportional payment for DK = 0.20                                              0.023*
                                                                                [0.013]
Proportional payment for DK = 0.25                                              0.030*
                                                                                [0.017]
Proportional payment for DK = 0.33                                              0.034**
                                                                                [0.016]
Constant                                              0.614***     0.617***     0.614***
                                                      [0.026]      [0.026]      [0.026]
Observations                                          4608         4608         4608
R²                                                    0.179        N/A          0.181
F-test, 'Pay Correct × Dem.' > 'Pay Correct
  and DK × Dem.' (p-value)                            0.020        0.020        N/A

Note: The dependent variable is the mean scale score for the 10 questions on which we observed pre-treatment partisan gaps (p < 0.10). It ranges from 0 to 1. The analysis includes only Democrats and Republicans. Cell entries are coefficients with robust standard errors, clustered by respondent. Question fixed effects are not reported. *Significant at 10%; **significant at 5%; ***significant at 1% (two-tailed tests). Source: Mechanical Turk, March–April 2012.

on observed divergence. Our specification includes indicators for each level of payment, each interacted with partisanship. The specification is highly flexible because it does not make assumptions about the functional form that relates incentive size to responses (e.g., a linear interaction between incentive size and responses).

Under this specification, the estimated coefficient for Democrat, 0.145 (p < 0.05), is the average difference between Democrats and Republicans in the control condition. As expected, all five interactions between the amount paid for a correct response and Democrat are negative and statistically significant at p < 0.10, which means that party gaps are smaller when participants are offered incentives for correct responses. With one exception, larger payments are associated with smaller partisan gaps. For example, we estimate that partisan gaps are 56% smaller in the $0.10 payment condition than in the control group and 80% smaller in the $1.00 payment condition. The difference between the two coefficients (Amount correct = $0.10 × Democrat and Amount correct = $1.00 × Democrat) is marginally significant (p < 0.10, one-tailed test).

The third column of Table 4 also reports the effects of variation in the amount paid for "don't know" responses. All of the interactions between the fractional payment amounts and partisanship are in the expected negative direction, meaning that payments for "don't know" responses further reduce partisan gaps. For payments that are 20% or 33% as large as the payments for correct responses, the estimates are statistically significant at p < 0.10 (two-tailed), and the pooled estimate of the effect of "don't know" payments is significant at p < 0.05. To interpret these coefficients, one can fix the payment for a correct response at $0.10, in which case the estimated partisan gap is 0.063 (0.145 − 0.082, p < 0.01). Adding the "don't know" payment is estimated to reduce this party gap by between 0.02 (a 25% reduction for a "don't know" payment of $0.025) and 0.04 (a 65% reduction for a payment of $0.033).

The ordering of the effects for the proportional payments is non-monotonic.
The largest reduction in partisan divergence is associated with the 33% payment for "don't know" responses, the next-largest reduction is associated with the 20% payment, and the smallest reduction is associated with the 25% payment. None of these estimates are statistically distinguishable from one another, perhaps reflecting the relatively small sample sizes in each condition. At the same time, the estimates imply that the combination of a $1.00 payment for a correct response and a $0.33 payment for a "don't know" response will eliminate the entire gap between Democrats and Republicans in responses to partisan factual questions.^26

^26 This calculation is 0.145 − 0.116 − 0.041, which is actually slightly smaller than 0.
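The percentage reductions cited in this passage follow from the column (3) interactions; a quick check in Python (coefficients as reported; small discrepancies with the text's percentages reflect rounding of the published coefficients):

```python
# Reduction in the control-condition gap (0.145) implied by each
# "Amount correct x Democrat" interaction in Table 4, column (3).
b_dem = 0.145
interactions = {"$0.10": -0.082, "$0.25": -0.092, "$0.50": -0.096,
                "$0.75": -0.061, "$1.00": -0.116}
for amount, coef in interactions.items():
    # e.g., $1.00 payment: 0.116/0.145 = 0.8 of the gap eliminated
    print(amount, round(-coef / b_dem, 2))

# Footnote 26: with a $1.00 correct payment and the 33% "don't know"
# payment, the implied remaining gap is slightly below zero.
print(round(b_dem - 0.116 - 0.041, 3))  # -0.012
```
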

Taken as a whole, these results have two implications. First, as in Experiment 1, modest incentives for correct responses substantially reduce partisan gaps, which is consistent with these gaps being due partly to expressive responding rather than to sincere differences in beliefs. Second, at least half of the partisan divergence that remains in the presence of incentives for correct responses alone appears to arise because people know that they do not know the correct response but continue to engage in expressive responding. On average, payments for correct responses in this experiment reduce partisan gaps by 60%. Adding "don't know" payments reduces partisan gaps by an additional 20 percentage points, leaving only 20% of the original gap. This result implies that fully half of the remaining gap arose because participants were unaware of the correct response and understood their lack of knowledge. Indeed, the relatively high rate of "don't know" responses (about 48%) reveals that a surprising number of respondents were aware that they lacked clear knowledge of partisanship-relevant facts.

5 Expressive Survey Response and the Relationship Between Facts and Votes

Our experiments speak most directly to the role that partisan cheerleading plays in responses to factual questions about politics. But they also speak to the relationship between factual assessments and the political choices that people make. In particular, they suggest that efforts to understand the relationship between facts and votes with survey responses are likely to be biased in the absence of efforts to account for partisan cheerleading. To make this concern clear, we use Experiment 1 to assess the correlation between factual assessments and candidate preference in 2008.
By comparing the correlations in the control and treatment conditions, we can understand whether the use of survey measures of economic perceptions to predict vote choice — a common practice in the literature on retrospective economic voting (e.g., Duch and Stevenson, 2006) — leads to biased conclusions when those measures are affected by partisan cheerleading. With the data from Experiment 1, we estimate

PresVote_i = b_0 + b_1 FactualAssessments_i + b_2 PayCorrect_i + b_3 (FactualAssessments_i × PayCorrect_i) + e_i,

where PresVote = 1 indicates an intended vote for Obama and PresVote = 0 indicates an intended vote for McCain. (We exclude from the analyses those who aren't registered, prefer other candidates, or report that they won't vote.) FactualAssessments is the mean of the eight items that we included in our earlier analysis of the experiment, with each item coded so that 1 is the most Democratic response and 0 is the most Republican response. PayCorrect is an indicator for assignment to the pay-for-correct-response condition. Existing research suggests that b_1 > 0: statements of factual beliefs that favor the Democratic Party are associated with voting for the Democratic candidate. But if those statements are affected by cheerleading under ordinary survey conditions, then the association should be weaker in the treatment condition, implying b_3 < 0.

We present OLS estimates with clustered standard errors in Table 5.^27 Per these estimates, a one-standard-deviation increase (0.124) in the factual assessments scale is associated with a 22-percentage-point increase in the probability of voting for Obama (p < 0.01). Among those assigned to the treatment group, however, the negative estimate for b_3 means that this effect is reduced. For those subjects, the same shift in the assessments scale increases the probability of voting for Obama by 13 percentage points, a decrease of more than 40% (p < 0.05) in the association between those assessments and vote choice. This finding suggests that the observed correlation between normal (unincentivized) survey reports of factual assessments and voting is exaggerated by partisan cheerleading. We are not suggesting that partisanship does not shape vote choice. However, the clear implication of our experiments is that standard survey measures of factual beliefs are affected by expressive responding.
It is therefore difficult to use those measures to test the claim that partisanship works by shaping factual beliefs. When incentives are used to measure factual assessments more accurately, the apparent role of factual assessments in vote choice is reduced.

^27 In this sample, the mean FactualAssessments score is 0.59 and 50% of respondents prefer Obama. Probit results are substantively similar.
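The marginal effects discussed above can be recovered from the Table 5 coefficients. A minimal sketch (b_1 = 1.770, b_3 = −0.741, and the 0.124 standard deviation are the values reported in the text):

```python
# Implied effect of a one-standard-deviation shift in the factual
# assessments scale on the probability of voting for Obama (Table 5).
b1, b3 = 1.770, -0.741  # reported OLS coefficients
sd = 0.124              # standard deviation of the assessments scale

effect_control = b1 * sd          # unincentivized respondents
effect_treated = (b1 + b3) * sd   # pay-for-correct-response condition

print(round(effect_control, 2))   # 0.22 -> 22 percentage points
print(round(effect_treated, 2))   # 0.13 -> 13 percentage points
print(round(-b3 / b1, 2))         # 0.42 -> a decrease of more than 40%
```
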

Table 5: Experiment 1: association of factual assessments with vote choice.

                                                      Vote for Democratic
                                                      presidential candidate
Average factual assessments scale score
  (0 = most Republican, 1 = most Democratic) (b_1)    1.770***
                                                      [0.222]
Payment for correct response = 1 (b_2)                0.418*
                                                      [0.224]
Payment for correct response × Average
  factual assessments scale score (b_3)               −0.741**
                                                      [0.367]
Constant                                              −0.548***
                                                      [0.135]
Observations                                          373
R²                                                    0.130

Note: The dependent variable is coded 1 for subjects who expressed an intention to vote for the Democratic candidate (Barack Obama), 0 for those who expressed an intention to vote for the Republican candidate (John McCain). The analysis includes only those Democrats and Republicans who expressed an intention to vote for one of the major-party candidates. "Payment for correct response" is coded 0 or 1. "Average factual assessments scale score" is computed by averaging across the eight non-placebo questions for which we found partisan gaps in the control condition. *Significant at 10%; **significant at 5%; ***significant at 1%. Source: 2008 CCES.

6 Discussion and Conclusion

Differences between Democrats and Republicans in statements about factual matters are a hallmark of American politics. How should those differences be interpreted? One view is that they reveal perceptual biases. That is, Democrats and Republicans answer questions differently because they perceive "separate realities" (e.g., Kull et al., 2004). Another possibility, highlighted in this paper, is that differences in survey responses arise because surveys offer partisans low-cost opportunities to express their partisan affinities.

To explore the distinction between beliefs and expressive statements made in surveys, we have presented a model of survey response that accounts for the possibility of expressive partisan responding.
Our model shows that incentives for correct responses can be used to distinguish sincere from insincere partisan responding. It also shows that, when respondents know that they do not know the correct answers, incentives — no matter how large — may fail to reduce partisan responding. However, by providing incentives for both correct and "don't know" responses, one can estimate the proportion of partisan responding that arises either because of partisan cheerleading or because of uncertainty about the correct answers.

Guided by the model, we designed and fielded two novel experiments. In the first experiment, some participants were paid for correct answers to factual questions. The payments reduced observed partisan gaps by about 55%. In the second experiment, we also paid some participants for "don't know" responses. Payments for correct responses reduced partisan gaps by 60%. Payments for both correct and "don't know" responses reduced them by an additional 20%, yielding gaps that were 80% smaller than those that we observed in the absence of payments. Taken together, these results from experiments with small incentives provide lower-bound estimates of the extent to which partisan divergence arises because of expressive partisan returns and self-aware ignorance of the truth.

Why do we observe partisan responding in the first place? We have suggested that it follows from a conscious desire to offer a partisanship-consistent message. But it may also arise unconsciously. Survey respondents may not think seriously about correct answers under ordinary survey conditions, but incentives may reduce partisan gaps by causing respondents to think more carefully about correct answers (e.g., Kahan et al., 2015; Kuklinski et al., 2001, pp. 419–420). In either case, the takeaway is the same: conventional survey measures overstate partisan differences.^28

The article most closely related to ours is Prior et al. (2015), which also appears in this issue. One basic difference is that Prior et al. focus on the accuracy of answers to factual questions about politics, while we focus on partisan differences in responses to those questions. That is, Prior et al. examine the extent to which payments or unpaid appeals

^28 We also designed Experiment 1 to test whether merely enhancing accuracy motivations would reduce partisan gaps. Specifically, we fielded an additional treatment, not discussed earlier, in which some respondents were told that their answers would be scored. This condition is similar to the "accuracy appeal" condition in Study 2 of Prior et al. (2015). But unlike those authors, we did not find that this treatment made much difference to partisans' responses, perhaps due to imprecision in our estimates.

for accurate responses reduce respondents' factual errors in surveys. By contrast, we examine the extent to which payments reduce differences in responses between Democrats and Republicans, and we do not focus on whether respondents answer correctly. Despite this difference, the basic results are complementary: ordinary surveys seem to exaggerate both the differences between partisans and the extent to which they are misinformed.

Two other differences between this article and Prior et al. (2015) merit attention, and we hope that they will guide future research. First, while we asked partisans about a range of issues in our two studies, Prior et al. focused on economic issues. Both articles show that ordinary surveys have been overstating partisan bias on a set of economic issues. However, our work shows that this pattern also extends to important issues beyond the economy, including evaluations of foreign affairs. Second, we present a model of survey response which allows for the possibility that respondents know that they do not know the correct answers to factual questions. This model shows that, if respondents have this sort of knowledge, increasing the incentive to be accurate alone will not reduce partisan bias in survey responses. However, the experimental manipulation that we undertake, in which individuals are paid for "don't know" responses, permits us to gauge how many respondents recognize their own lack of knowledge about basic political matters. We find that a surprisingly large proportion of subjects appear willing to admit their ignorance by choosing "don't know" for a small financial incentive, despite the fact that this means forgoing the chance to express one's partisan feelings or to earn a larger reward by choosing the correct response. Furthermore, paying respondents to admit their own ignorance further reduces partisan divergence beyond what is achieved by only encouraging accuracy.
We see this finding and its implications as particularly deserving of further study.

6.1 Implications of Our Findings

The main implication of our findings is that partisan differences in responses to factual questions may not imply partisan differences in beliefs. Instead, some portion of partisan polarization in survey responses about facts — perhaps a very large portion — is affective and insincere. Our

results thus call into question the common assumption that what people say in surveys reflects their beliefs. Of course, this assumption has often been called into question for sensitive topics. But our results suggest that a broader range of survey responses should be subject to scrutiny.

In light of this concern, efforts to assess the dynamics of public opinion should grapple with the possibility that over-time changes in partisans' expressed attitudes do not reflect changes in real beliefs. Instead, changes in survey responses may reflect changes in the social returns to cheerleading (see Iyengar et al., 2012) or in the degree to which different responses are understood to convey support for one's party. For example, elections may make more salient the need to support one's party, explaining why party polarization is more pronounced during campaigns (Iyengar et al., 2012), just as "sorting" (Levendusky, 2009) may arise because holding particular policy positions may come to be associated with public support of one's party.

Our results may also help to resolve the tension between partisans' divergent assessments of objective conditions in surveys and the power of those conditions to explain aggregate election outcomes (e.g., Bartels and Zaller, 2001; Hibbs, 2000). We show not only that partisans do not fully believe their own survey responses, but also that they appear to be aware of their own ignorance. This self-awareness may make it easier to inform them of the facts and in turn change their votes. It may also help to explain why even some simple informational interventions appear to have relatively large effects on voting (e.g., Ferraz and Finan, 2008; Kendall et al., 2015). And if these interventions have large effects on vote choice, then partisan patterns in voting may reflect, in large part, a self-aware lack of information rather than some persistent unwillingness to tie electoral sanctions to performance.

While our experiments are confined to factual questions, our argument applies to a wider range of questions. Our model suggests that, in the absence of a motivation to answer "partisan" questions accurately, partisan divergence should be large. These factors are also likely to apply to nonfactual matters. In particular, when survey reports of attitudes have expressive value, they may be inaccurate measures of true attitudes. And survey reports of vote intention may also be systematically biased by expressive responding.

We have focused on factual statements because our experimental design requires objectively verifiable responses. But other approaches

that do not rest on payments for objectively correct answers may also create pressures to be objective. For example, forward-looking judgments can be tied to incentives that are paid based on the realization of future events. And creative studies in psychology show that enhancing accuracy motivations can reduce partisan divergence even for questions that lack objectively correct answers (Campbell and Kay, 2014; Waytz et al., 2014). In other words, partisans may be exaggerating not only their statements of factual belief but also their attitudinal statements.

Another area for subsequent research is the potential heterogeneity of treatment effects. We asked questions about many different policy areas, and we found variation across questions, in both the degree of partisan divergence that exists in the absence of incentives and the degree to which incentives reduce that divergence. Further exploration of this variation will be useful. For example, are certain policy topics perceived as more important, leading partisans to feel that they must stay "on message" when answering questions about those topics? In Experiment 2, we find the largest baseline partisan gaps for questions about economic performance, which is a key issue in almost all presidential campaigns. (See Table A.2.) Did partisans feel that straying from their team's message on these questions would be particularly damning? (Interestingly, despite the large initial gaps, incentives for correct and "don't know" responses reduced partisan divergence for these items by about as much as they did for other items.) Similarly, which sorts of people are most likely to engage in expressive responding, and how do those people respond to incentives for correct responses? In our discussion of Experiment 1, we find that strength of partisanship does not seem to moderate expressive responding.
Political interest does moderate expressive responding — as expected, more interested partisans are more polarized — but neither strength of partisanship nor political interest changes the effects of incentives. That said, these results are tentative, and a comprehensive examination of heterogeneity across subjects awaits future research. Additionally, the imprecision of our estimates about the effects of increasing incentive size, for example, means that it would be valuable to conduct additional experiments with larger samples. The apparent effect of increasing incentive size also implies that it would be desirable to ascertain whether even larger incentives can further reduce apparent bias.

Our main contributions are a model of expressive survey response and two experiments that distinguish cheerleading behavior from sincere partisan divergence. We find that small financial inducements for correct responses can substantially reduce partisan divergence, and that these reductions are even larger when inducements are also provided for "don't know" answers. In light of these results, survey responses that indicate partisan polarization with respect to factual matters should not be taken at face value. Analysts of public opinion should consider the possibility that the appearance of polarization in American politics is, to some extent, an artifact of survey measurement rather than evidence of real and deeply held differences in assessments of facts.

Appendix: A Model of Expressive Survey Response

We begin with a model in which respondents derive utility from their survey responses in three ways: by offering answers that cast their party in a favorable light, by expressing their sincere beliefs, and by earning financial rewards. For now, we set aside the possibility that people can choose to say "don't know." For simplicity, we focus on the case in which there are two survey responses, r_1 and r_2. Individuals, indexed by the subscript i, are either Democrats (T = D) or Republicans (T = R). Individuals differ in their taste for partisan cheerleading and their beliefs about the truth.

Turning first to expressive benefits, individual i's taste for partisan cheerleading is denoted by the parameter c_i, for cheerleading, which ranges from 0 (no taste for it) to any positive number. Beliefs about the truth are described by the function p_i(r_j), which is the probability that i believes response r_j, j = 1 or 2, is correct.
In this example, we assume that response r_1 portrays Democrats most favorably, that response r_2 portrays Republicans most favorably, and that these assumptions are shared by respondents from both parties. Specifically, the expressive function e(T, r_j) maps an individual's partisanship T to the personal benefit of offering response r_j, and is defined as e(T = D, r_1) = e(T = R, r_2) = 1 and e(T = D, r_2) = e(T = R, r_1) = 0. That is, Democrats and Republicans receive an expressive partisan utility boost from offering the response that portrays their party in a favorable light, and they receive no partisan utility from offering the response that is inconsistent with their partisan leanings.

The utility associated with providing a sincere response is measured by the "honesty" function h(r_j). For simplicity, we assume h_i(r_j) = p_i(r_j); that is, the honesty value of offering response r_j is the probability that the respondent believes it is true. Finally, some respondents may also receive an incentive, I > 0, which is the additional reward for a correct response. We assume utility is linear in I.

These assumptions allow us to describe a respondent's expected utility for offering response r_j as the sum of three terms. We omit the individual subscript i for clarity:

EU(r_j | ·) = h(r_j) + I × p(r_j) + c × e(T, r_j).   (A.1)

The first term is simply the honesty value of response r_j. The second term is the additional value of providing response r_j in the presence of incentive I (realized with the probability that the response is correct). The third term is the partisan value of offering response r_j, weighted by the respondent's value of expressive partisan responding, c. Using the assumption that h() is equivalent to p(), we rewrite (A.1) as:

EU(r_j | ·) = (1 + I) × p(r_j) + c × e(T, r_j),   (A.2)

which is the form of the expected utility we focus on here. A respondent will offer the response r_j from (r_1, r_2) that maximizes (A.2).

To make the exposition as clear as possible, we suppose that the respondent is a Democrat (T = D). The analysis for the Republican partisan mirrors that for the Democratic partisan and is omitted. Recall that r_1 is the partisan Democratic response, and so e(D, r_1) = 1 and e(D, r_2) = 0.

First, consider how our model predicts that partisans will respond to a survey in the absence of incentives for correct responses. In this case, equation (A.2) reduces to

EU(r_j | ·) = p(r_j) + c × e(T, r_j).   (A.3)

Using (A.3), the utility from reporting response r_1 is p(r_1) + c, and the utility from reporting r_2 is p(r_2) = 1 − p(r_1). Therefore, the Democrat will report r_1 whenever c ≥ c* = 1 − 2p(r_1).

As c is weakly positive, whenever p(r_1) > 0.5 (i.e., the Democrat believes response r_1 is at least as likely to be correct as r_2), the Democrat

will offer the partisan response r_1 even in the absence of expressive returns (i.e., even if c = 0). By contrast, as p(r_1) grows small (i.e., as the Democrat becomes increasingly likely to believe the pro-Republican response is correct), larger values of c are required to cause her to offer r_1. To produce a response of r_1, the partisan expressive return must be larger to offset the greater cost of providing an answer that is likely to be untrue.

This relationship is displayed graphically in Figure A.1(a), which shows that for each value of p(r_1) there is a value of expressive partisan responding such that, for those Democrats with c at least this large, r_1 will be their survey response. Democrats offering r_1 are therefore composed of two groups. The first group consists of those who believe that r_1 is more likely to be correct than r_2; this group is represented by the right-hand side of the panel, for which p(r_1) > 0.5. The second group consists of those who believe that r_2 is more likely to be correct, but for whom that belief is offset by a larger return from offering an expressive partisan response. This group is represented by the upper segment of the left-hand side of the panel, which is labeled "insincere choice of r_1."

To link expressive returns to polarization of partisan responses, consider Panels (b) and (c). Panel (b) shows the response pattern for Republicans, which is a mirror image of Panel (a). And Panel (c) displays both partisan response patterns at once. It shows that in the presence of expressive returns, Democrats and Republicans who share common beliefs about the truth (are at the same position on the horizontal axis) can nonetheless offer polarized survey responses if their value of expressive partisan responding is large enough.
When beliefs about the truth are shared, polarization is most prevalent when beliefs are most uncertain, that is, when p(r_1) = p(r_2) = 0.5. Polarization will also arise, even in the absence of returns to expressive partisan responding (i.e., when c = 0), if Democrats and Republicans hold different beliefs about the truth.

We next consider what happens when incentives are offered for correct responses, that is, when I > 0. From Equation (A.2), for a given value of I, there is a unique c*′ = (1 + I)(1 − 2p(r_1)) such that all Democrats with an expressive responding parameter greater than c*′ will offer r_1. As before, incentives have no effect on the responses of Democrats who believe that response r_1 is correct (i.e., p(r_1) > 0.5).
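The decision rule and the two thresholds can be checked numerically. A minimal sketch of the Democrat's choice problem under (A.2); the function names are ours, not the paper's:

```python
# Sketch of the expected-utility model: EU(r_j) = (1 + I) * p(r_j) + c * e(T, r_j),
# where, for a Democrat, e(D, r1) = 1 and e(D, r2) = 0.

def eu(response, p_r1, c, I=0.0):
    """Expected utility of a Democrat offering r1 or r2."""
    p = p_r1 if response == "r1" else 1.0 - p_r1
    expressive = c if response == "r1" else 0.0
    return (1.0 + I) * p + expressive

def democrat_choice(p_r1, c, I=0.0):
    """Response that maximizes expected utility (ties go to r1)."""
    return "r1" if eu("r1", p_r1, c, I) >= eu("r2", p_r1, c, I) else "r2"

# Without incentives, the threshold is c* = 1 - 2*p(r1).
p_r1 = 0.3
c_star = 1 - 2 * p_r1                        # 0.4
assert democrat_choice(p_r1, c_star + 0.01) == "r1"
assert democrat_choice(p_r1, c_star - 0.01) == "r2"

# With incentives, the threshold rises to c*' = (1 + I)(1 - 2*p(r1)).
I = 1.0
c_star_prime = (1 + I) * (1 - 2 * p_r1)      # 0.8
assert democrat_choice(p_r1, 0.6, I) == "r2"  # below c*': incentive induces r2
assert democrat_choice(p_r1, 0.9, I) == "r1"  # above c*': still cheerleads

# At p(r1) = 0.5, incentives are irrelevant: any c > 0 yields r1.
assert democrat_choice(0.5, 0.05, I=100.0) == "r1"
```
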

[Figure A.1 here: Panel (a) Democrats' Survey Responses; Panel (b) Republicans' Survey Responses; Panel (c) Observed Polarization. Horizontal axis: p(r_1), belief that the Democratic-expressive response r_1 is correct; vertical axis: c, value of expressive partisan responding.]

Figure A.1: Patterns of survey response in the absence of incentives by value of expressive partisan responding and beliefs about correct responses.
Note: Panel (a) displays Democrats' survey responses in the absence of incentives for different levels of returns to expressive partisan responding and beliefs about whether response r_1 is correct. Panel (b) displays responses for the same parameters for Republicans. Finally, the grey area in Panel (c) is the range of parameters for which Democrats and Republicans offer different survey responses despite common beliefs about which response is correct.

But for Democrats who believe response r_2 is more likely to be correct, a larger return to cheerleading is now required to offset the earnings that are likely to be lost by offering response r_1. Formally, c*′ = c* + (I × (1 − 2p(r_1))). This relationship is shown in Panel (a) of Figure A.2. (For simplicity, we assume throughout Figure A.2 that I = 1.)

[Figure A.2 here: Panel (a) Democrats' Survey Responses; Panel (b) Republicans' Survey Responses; Panel (c) Observed Polarization. Horizontal axis: p(r_1), belief that the Democratic-expressive response r_1 is correct; vertical axis: c, value of expressive partisan responding.]

Figure A.2: Patterns of survey response given incentives for correct responses (I = 1) by value of expressive partisan responding and beliefs about correct responses.
Note: Panel (a) displays Democrats' survey responses given incentives I = 1 for correct responses for different levels of returns to expressive partisan responding and beliefs about whether response r_1 is correct. Panel (b) displays responses for the same parameters for Republicans. Finally, the grey area in Panel (c) is the range of parameters for which Democrats and Republicans offer different survey responses despite common beliefs about which response is correct.

Comparison of Panel (a) in Figure A.1 and Panel (a) in Figure A.2 draws out a basic but important result: incentives for correct responses reduce expressive partisan responding by causing some of those who know that response r_1 is less likely to be true to offer response r_2 instead.

In Panel (a) of Figure A.2, these respondents are represented by the region that is labeled "induced choice of r2."

Figure A.2 draws out a second important result: when a Democrat believes that r2 is more likely to be correct, the additional value of expressive returns (c) that is required to make her offer response r1 increases in her belief that r2 is correct. Formally, c*' − c* is increasing in p(r2). To see this result graphically, note that the vertical gap between the dashed and solid lines increases as one approaches the left side of the x-axis. This gap increases because the difference between c*' and c* is a function of p(r1). In other words, for those who are more uncertain (p(r1) is closer to 0.5), incentives have smaller effects. The intuition for this result is that a person who chooses the answer she thinks is most likely to be correct earns the incentive for a correct response only if that answer is in fact correct, which she expects to occur with the probability that she believes that response is correct. If a person believes r1 is correct with probability 0.75, she earns the incentive I with probability 0.75 if she chooses r1 and 0.25 if she chooses r2. At the extreme, an individual who believes that r1 and r2 are equally likely to be true — that is, she knows that she does not know the truth — continues to offer r1 regardless of incentives for correct responses: she won't (in expectation) do better by giving up the certain benefit of a partisan response, because she earns the incentive I, in expectation, half the time for either response.

To illustrate the effect of incentives on polarization, Panel (b) of Figure A.2 shows the effect of incentives for Republican partisans, and Panel (c) displays both partisan response patterns at once. Comparison of Panel (c) in Figure A.1 to Panel (c) in Figure A.2 shows that increasing incentives decreases polarization.
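The intuition about expected incentive earnings can be made concrete. In this sketch (our code, using the logic above), the expected incentive advantage of the sincere answer over the partisan one is I × [(1 − p(r1)) − p(r1)], which shrinks to zero at complete uncertainty:

```python
# Expected incentive advantage of the sincere response (r2) over the partisan
# response (r1), for a Democrat who believes p(r1) < 0.5. Notation is ours.

def incentive_edge(p_r1, incentive):
    """Expected incentive gained by answering sincerely rather than expressively."""
    return incentive * ((1.0 - p_r1) - p_r1)

# Strong belief that r1 is wrong: the incentive has real bite.
assert incentive_edge(0.25, 1.0) == 0.5
# Complete uncertainty: even a huge incentive yields no expected advantage,
# so any sure cheerleading benefit c > 0 dominates.
assert incentive_edge(0.5, 100.0) == 0.0
```

This is why, in the model, accuracy incentives move the confident but misreporting partisans, not the genuinely uncertain ones.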
In particular, incentives reduce the frequency with which Democrats and Republicans who share common beliefs about the truth offer different survey responses, apart from the case in which p(r1) = p(r2) = 0.5.

This exposition leads us to two conclusions. First, incentives for correct answers reduce partisan divergence in the presence of shared beliefs about the truth. Second, partisan divergence may persist in the face of incentives. It is clear that if partisan groups have different sincere beliefs about which response is most likely to be true, paying respondents for correct responses will not reduce polarization. But although it may seem intuitive that persistent partisan divergence in the presence of incentives for correct responses implies underlying differences in beliefs about the truth, our analysis suggests that partisan divergence may nonetheless persist for two other reasons. First, the taste for expressive partisan cheerleading (c) may be large. Second, even if that taste is small, individuals may be uncertain about the truth; in that case, they will offer partisan responses even in the face of large incentives for correct responding.

So far, we have considered respondents who must provide either a partisan-consistent or a partisan-inconsistent response. But giving respondents the option to decline to provide a response may reduce observed polarization. To explore this possibility, we consider a model with an additional response option: "don't know."

A.1 Incorporating "Don't Know" Responses

To incorporate a "don't know" response option, we must specify the utility that a respondent receives from selecting "don't know." For simplicity, we assume that a "don't know" response (r_dk) yields some fixed positive psychological benefit V_dk > 0 plus whatever financial incentive is offered for giving that response (I_dk). (The results here are robust to allowing negative values of V_dk.) Specified this way, U(r_dk) = V_dk + I_dk. One can think of V_dk as the honesty value of choosing "don't know" relative to an incorrect response. As before, the individual is offered an incentive I for providing a correct response.

When will a respondent choose "don't know"? Note that the value of "don't know" is unaffected by c or p(·), so a respondent chooses "don't know" when the values of c and p(·) make both r1 and r2 less attractive than "don't know." Critically, one can earn the incentive I_dk with certainty by choosing "don't know," unlike the incentive for a correct response, which is realized only if the chosen response is revealed after the fact to be correct, an event to which the respondent assigns probability p(r_j).
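One parameterization consistent with the thresholds reported in this appendix (and it is our reading, with our function names, assuming a respondent earns 1 + I when her substantive answer is correct) assigns the three options the following expected utilities:

```python
# Expected utilities of the three response options for a Democrat (our sketch).
# Assumption: a correct substantive answer is worth 1 + I (honesty value plus
# incentive); the partisan answer r1 additionally yields the cheerleading value c.

def u_r1(p_r1, c, I):
    """Partisan-consistent response: c for sure, plus 1 + I with probability p(r1)."""
    return c + (1.0 + I) * p_r1

def u_r2(p_r1, I):
    """Partisan-inconsistent response: 1 + I with probability 1 - p(r1)."""
    return (1.0 + I) * (1.0 - p_r1)

def u_dk(V_dk, I_dk):
    """'Don't know': psychological benefit V_dk plus payment I_dk, earned for sure."""
    return V_dk + I_dk

# At complete uncertainty (p(r1) = 0.5) with a weak taste for cheerleading,
# the sure "don't know" payoff beats both substantive answers:
assert u_dk(0.5, 0.75) > u_r1(0.5, c=0.2, I=1.0) > u_r2(0.5, I=1.0)
```

Because u_dk depends on neither c nor p(·), "don't know" wins exactly when uncertainty makes both substantive answers cheap to forgo.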
Ceteris paribus, therefore, increasing uncertainty (p(r_j) approaching 0.5) will make the "don't know" option more attractive. Recall from the previous analysis (illustrated in Panel (a) of Figure A.2) that a Democrat's selection of r1 or r2 depends on whether c is greater or less than c*' = (1 + I)(1 − 2p(r1)).

Consider first a Democrat who would otherwise choose the "Republican" response, r2. Her expected utility for choosing this response is

(1 + I) × (1 − p(r1)). This utility is greater than the utility associated with selecting "don't know" when p(r1) < p*(r1) = 1 − (V_dk + I_dk)/(1 + I). This p*(r1) is the lowest probability that the Democratic response (r1) is correct at which the Democrat will select "don't know" rather than the Republican response. When p(r1) is below this critical value, the Democrat prefers to report the Republican response. Note that this critical value p*(r1) is unaffected by the expressive value of partisan responding c, because the return to r2 is unaffected by c.

Figure A.3 illustrates this logic. For presentation, we assume that I = 1, I_dk = 0.75, and V_dk = 0.5.²⁹ The value of p*(r1) is thus 1 − (0.5 + 0.75)/(1 + 1) = 0.375. Graphically, this solution is represented in Panel (a) by the leftmost line that defines the "induced don't know" region. Substantively, the point is that when p(r1) exceeds the critical value p*(r1), all cases in which the Democrat would have offered the Republican response are replaced by "don't know" answers.

We next examine how a Democrat who otherwise would have chosen the "Democratic" response, r1, behaves in the presence of incentives for "don't know." We have already shown that if c = c*', the Democrat is indifferent between the Democratic and the Republican responses, and that if p(r1) = p*(r1), she is also indifferent between those responses and "don't know." However, as p(r1) rises above p*(r1), the expected return from choosing the "Democratic" response increases.
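The cutoff p*(r1) and the 0.375 value used for Figure A.3 can be verified directly. This is our sketch of the formula just derived; parameter names follow the text.

```python
# Belief cutoff below which a Democrat prefers the Republican response r2
# to "don't know": p*(r1) = 1 - (V_dk + I_dk) / (1 + I).

def p_star(V_dk, I_dk, I):
    """Lowest p(r1) at which 'don't know' is weakly preferred to r2."""
    return 1.0 - (V_dk + I_dk) / (1.0 + I)

# Under the Figure A.3 parameters (I = 1, I_dk = 0.75, V_dk = 0.5):
assert p_star(V_dk=0.5, I_dk=0.75, I=1.0) == 0.375  # matches the text

# Just below the cutoff, r2's expected payoff (1 + I)(1 - p(r1)) still beats
# the sure "don't know" payoff V_dk + I_dk:
assert (1.0 + 1.0) * (1.0 - 0.3) > 0.5 + 0.75
```

Note that c appears nowhere in p_star, which is the formal statement of the claim that the cutoff is unaffected by the expressive value of partisan responding.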
This means that as the Democratic response becomes more likely to be true, smaller returns to expressive responding are required to keep the Democratic response more attractive than "don't know." In Panel (a) of Figure A.3, this condition is illustrated by the downward-sloping line that defines the top of the region labeled "induced don't know." Formally, c*'' = (V_dk + I_dk) − (1 + I)p(r1) is the critical value such that when c > c*'' (and c > c*'), the Democrat chooses the Democratic response over "don't know." Parallel analysis for Republicans appears in Panel (b) of Figure A.3.

29 We choose a relatively high level of I_dk because Figure A.3 illustrates the logic of our model when there are only two survey responses (in addition to "don't know"). Given only two responses, even complete uncertainty means that one response is, in expectation, correct half of the time. In a model with more response options, the value of I_dk necessary to sustain "don't know" responses would be smaller. For example, one could also allow the value of I_dk to be negative.

Figure A.3: Patterns of survey response given incentives for correct (I = 1) and don't know (I_dk = 0.75) responses by value of expressive partisan responding and beliefs about correct responses. [Panels: (a) Democrats' survey responses; (b) Republicans' survey responses; (c) observed polarization. Horizontal axis: p(r1), belief that the Democratic-expressive response r1 is correct; vertical axis: c, value of expressive partisan responding.]

Note: Panel (a) displays Democrats' survey responses given these incentives for different levels of returns to expressive partisan responding and beliefs about whether response r1 is correct. Panel (b) displays responses for the same parameters for Republicans. Finally, the grey area in Panel (c) is the range of parameters for which Democrats and Republicans offer different survey responses despite common beliefs about which response is correct.

For both Democrats and Republicans, the subjects who offer "don't know" responses are drawn from those who are most uncertain about which answer is correct, that is, from subjects for whom p(r1) is close to 0.5. Our analysis above establishes that it is this uncertainty that makes incentives for correct answers least likely to affect survey responses. Accordingly, for these uncertain respondents, the "sure thing" of a "don't know" payment is a more effective inducement than the smaller probability of earning a potentially larger payment for a correct response.

Combining these analyses, as we do in Panel (c), and comparing that plot to Panel (c) of Figure A.2 allows us to assess the effect on observed polarization of offering incentives for both correct and "don't know" responses. Relative to simply offering incentives for correct responses, adding incentives for "don't know" responses decreases the frequency with which Democrats and Republicans who share common but weak beliefs about the correct response (p(r_j) is not close to 1 for any j) provide divergent (non-"don't know") survey responses.
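The three-way comparison above can be combined into a single decision rule. The sketch below is our code for the authors' logic, under the same parameterization assumed earlier (a correct answer is worth 1 + I); it reproduces the regions of Panel (a) of Figure A.3 for a Democrat.

```python
# Classify a Democrat's utility-maximizing response at any point (p(r1), c),
# under the Figure A.3 parameters: I = 1, V_dk + I_dk = 0.5 + 0.75 = 1.25.

def democrat_response(p_r1, c, I=1.0, dk_value=0.5 + 0.75):
    """Return 'r1', 'r2', or 'dk' for a Democrat with belief p(r1) and taste c."""
    utilities = {
        "r1": c + (1.0 + I) * p_r1,        # partisan-consistent answer
        "r2": (1.0 + I) * (1.0 - p_r1),    # partisan-inconsistent answer
        "dk": dk_value,                     # V_dk + I_dk, earned with certainty
    }
    return max(utilities, key=utilities.get)

assert democrat_response(p_r1=0.5, c=0.1) == "dk"   # induced "don't know"
assert democrat_response(p_r1=0.9, c=0.1) == "r1"   # sincere Democratic answer
assert democrat_response(p_r1=0.1, c=0.1) == "r2"   # sincere Republican answer
assert democrat_response(p_r1=0.1, c=1.9) == "r1"   # expressive cheerleading
```

Sweeping p_r1 and c over a grid with this function (and its mirror image for Republicans) recovers the region boundaries c*', c*'', and p*(r1) plotted in Figure A.3.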

Table A.1: Experiment 1: Effect of payment for correct responses on partisan divergence in scale scores by question.

[Each of the eight columns reports one OLS regression, one per question (e.g., change in Iraq casualties from 2007 to 2008, unemployment change, inflation change, Bush approval, Bush approval among Republicans, estimated total Iraq casualties, Obama's age, McCain's age). Rows: Democrat (1 = Yes, 0 = Republican); Payment for correct response × Democrat; Payment for correct response; Constant; Observations; R-squared; and the percentage of the partisan gap eliminated by payment for correct responses. Robust standard errors in brackets.]

Note: Source: 2008 CCES. Includes only Democrats and Republicans. Cases included are from the control and payment-for-correct-response conditions. OLS coefficients with robust standard errors. *Significant at 10%; **significant at 5%; ***significant at 1% (two-tailed tests).

Table A.2: Experiment 2: Effect of payment for correct and don't know responses on partisan divergence in scale scores by question.

[Each of the ten columns reports one OLS regression, one per question (e.g., unemployment under Obama, unemployment under Bush II, defense spending, Obama's 2008 vote share, percentage of Iraq deaths among blacks, Medicaid spending, the TARP amount, global warming, Iraq deaths, debt service). Rows: Democrat (1 = Yes, 0 = Republican); Payment correct × Democrat; Payment DK and correct × Democrat; Payment for correct response; Payment for DK and correct response; Constant; Observations; R-squared; F-test, Pay Correct × Dem. > Pay DK and Correct × Dem.; percentage of partisan gap eliminated by payment for correct response; and increase in percentage of partisan gap eliminated by payment for DK and correct response. Robust standard errors in brackets.]

Note: Source: 2012 Mechanical Turk study. Includes only Democrats and Republicans. Comparison of post-treatment responses in the control, pay-correct, and pay-correct-and-don't-know conditions. OLS coefficients with robust standard errors. *Significant at 10%; **significant at 5%; ***significant at 1% (two-tailed tests).

References

Abramowitz, A. I. and K. L. Saunders (2008), "Is Polarization a Myth?", The Journal of Politics, 70(April), 542–55.

Ansolabehere, S., M. Meredith, and E. Snowberg (2013), "Asking about Numbers: Why and How", Political Analysis, 21(January), 48–69.

Bartels, L. M. (2002), "Beyond the Running Tally: Partisan Bias in Political Perceptions", Political Behavior, 24(June), 117–50.

Bartels, L. M. and J. Zaller (2001), "Presidential Vote Models: A Recount", PS: Political Science and Politics, 34(March), 9–20.

Berinsky, A. J. (2005), Silent Voices: Public Opinion and Participation in America, Princeton, NJ: Princeton University Press.

Berinsky, A. J. (2015), "Rumors and Health Care Reform: Experiments in Political Misinformation", British Journal of Political Science.

Berinsky, A. J., G. S. Lenz, and G. A. Huber (2012), "Using Mechanical Turk As a Subject Recruitment Tool for Experimental Research", Political Analysis, 20(3).

Bishop, G. F., R. W. Oldendick, and A. Tuchfarber (1984), "What Must My Interest in Politics Be If I Just Told You 'I Don't Know'?", Public Opinion Quarterly, 48(2), 510–9.

Brennan, G. and L. Lomasky (1997), Democracy and Decision: The Pure Theory of Electoral Preference, New York: Cambridge University Press.

Campbell, A., P. E. Converse, W. E. Miller, and D. E. Stokes (1960), The American Voter, Chicago: University of Chicago Press.

Campbell, T. H. and A. C. Kay (2014), "Solution Aversion: On the Relation Between Ideology and Motivated Disbelief", Journal of Personality and Social Psychology, 107(November), 809–24.

Cohen, G. L. (2003), "Party Over Policy: The Dominating Impact of Group Influence on Political Beliefs", Journal of Personality and Social Psychology, 85(November), 808–22.

Conover, P. J., S. Feldman, and K. Knight (1986), "Judging Inflation and Unemployment: The Origins of Retrospective Evaluations", Journal of Politics, 48(3), 565–88.

Conover, P. J., S. Feldman, and K. Knight (1987), "The Personal and Political Underpinnings of Economic Forecasts", American Journal of Political Science, 31(August), 559–83.

Duch, R. M. and R. Stevenson (2006), "Assessing the Magnitude of the Economic Vote over Time and across Nations", Electoral Studies, 25, 528–47.

Ferraz, C. and F. Finan (2008), "Exposing Corrupt Politicians: The Effects of Brazil's Publicly Released Audits on Electoral Outcomes", The Quarterly Journal of Economics, 123(2), 703–45.

Fiorina, M. P. (1981), Retrospective Voting in American National Elections, New Haven, CT: Yale University Press.

Gaines, B. J., J. H. Kuklinski, P. J. Quirk, B. Peyton, and J. Verkuilen (2007), "Same Facts, Different Interpretations: Partisan Motivation and Opinion on Iraq", Journal of Politics, 69(November), 957–74.

Gerber, A. S., G. A. Huber, and E. Washington (2010), "Party Affiliation, Partisanship, and Political Beliefs: A Field Experiment", American Political Science Review, 104(November), 720–44.

Green, D., B. Palmquist, and E. Schickler (2002), Partisan Hearts and Minds: Political Parties and the Social Identities of Voters, New Haven, CT: Yale University Press.

Hamlin, A. and C. Jennings (2011), "Expressive Political Behaviour: Foundations, Scope and Implications", British Journal of Political Science, 41(3), 645–70.

Healy, A. and N. Malhotra (2009), "Myopic Voters and Natural Disaster Policy", American Political Science Review, 103(3), 387–406.

Hibbs, D. A. (2000), "Bread and Peace Voting in U.S. Presidential Elections", Public Choice, 104(1–2), 149–80.

Iyengar, S., G. Sood, and Y. Lelkes (2012), "Affect, Not Ideology: A Social Identity Perspective on Polarization", Public Opinion Quarterly, 76(Fall), 405–31.

Jacobson, G. C. (2006), A Divider, Not a Uniter: George W. Bush and the American People, Upper Saddle River, NJ: Pearson.

Jerit, J. and J. Barabas (2012), "Partisan Perceptual Bias and the Information Environment", Journal of Politics, 74, 672–84.

Kahan, D. M., H. Jenkins-Smith, T. Tarantola, C. L. Silva, and D. Braman (2015), "Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication", The Annals of the American Academy of Political and Social Science, 658(1), 192–222.

Kendall, C., T. Nannicini, and F. Trebbi (2015), "How Do Voters Respond to Information? Evidence from a Randomized Campaign", American Economic Review, 105(January), 322–53.

Kuklinski, J. H., M. D. Cobb, and M. Gilens (1997), "Racial Attitudes and the 'New South'", Journal of Politics, 59(May), 323–49.

Kuklinski, J. H., P. J. Quirk, J. Jerit, and R. F. Rich (2001), "The Political Environment and Citizen Competence", American Journal of Political Science, 45(April), 410–24.

Kull, S., C. Ramsay, S. Subias, and E. Lewis (2004), "The Separate Realities of Bush and Kerry Supporters", Program on International Policy Attitudes. Available at: http://www.pipa.org/OnlineReports/Iraq/IraqRealities_Oct04/IraqRealitiesOct04rpt.pdf (accessed on 20/04/2015).

Lau, R. R., D. O. Sears, and T. Jessor (1990), "Fact or Artifact Revisited: Survey Instrument Effects and Pocketbook Politics", Political Behavior, 12(3), 217–42.

Lavine, H. G., C. D. Johnston, and M. R. Steenbergen (2012), The Ambivalent Partisan: How Critical Loyalty Promotes Democracy, Oxford University Press.

Levendusky, M. (2009), The Partisan Sort: How Liberals Became Democrats and Conservatives Became Republicans, Chicago, IL: University of Chicago Press.

Luskin, R. C. and J. G. Bullock (2011), "'Don't Know' Means 'Don't Know': DK Responses and the Public's Level of Political Knowledge", Journal of Politics, 73(April), 547–57.

Morton, R. C. and K. C. Williams (2010), Experimental Political Science and the Study of Causality: From Nature to the Lab, New York: Cambridge University Press.

Palmer, H. D. and R. M. Duch (2001), "Do Surveys Provide Representative or Whimsical Assessments of the Economy?", Political Analysis, 9(1), 58–77.

Prior, M. (2007), "Is Partisan Bias in Perceptions of Objective Conditions Real? The Effect of an Accuracy Incentive on the Stated Beliefs of Partisans", Presented at the Annual Conference of the Midwest Political Science Association, Chicago.

Prior, M. and A. Lupia (2008), "Money, Time, and Political Knowledge: Distinguishing Quick Recall and Political Learning Skills", American Journal of Political Science, 52(January), 169–83.

Prior, M., G. Sood, and K. Khanna (2015), The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions, Manuscript: Princeton University, Princeton, NJ.

Schuessler, A. A. (2000), A Logic of Expressive Choice, Princeton, NJ: Princeton University Press.

Sears, D. O. and R. R. Lau (1983), "Inducing Apparently Self-Interested Political Preferences", American Journal of Political Science, 27(May), 223–52.

Shapiro, R. Y. and Y. Bloch-Elkon (2008), "Do the Facts Speak for Themselves? Partisan Disagreement as a Challenge to Democratic Competence", Critical Review, 20(1), 115–39.

Taber, C. and M. Lodge (2000), "Three Steps Toward a Theory of Motivated Political Reasoning", in Elements of Reason, ed. A. Lupia, M. D. McCubbins, and S. L. Popkin, New York: Cambridge University Press.

Waytz, A., L. L. Young, and J. Ginges (2014), "Motive Attribution Asymmetry for Love vs. Hate Drives Intractable Conflicts", Proceedings of the National Academy of Sciences of the United States of America, 111(44), 15687–92.

Wilcox, N. and C. Wlezien (1993), "The Contamination of Responses to Survey Items: Economic Perceptions and Political Judgments", Political Analysis, 5(1), 181–213.

Zaller, J. R. (1992), The Nature and Origins of Mass Opinion, New York: Cambridge University Press.
