The Role of Familiarity in Correcting Inaccurate Information
People frequently continue to use inaccurate information in their reasoning even after a credible retraction has been presented. This phenomenon is often referred to as the continued influence effect of misinformation. The repetition of the original misconception within a retraction could contribute to this effect, as it could inadvertently make the "myth" more familiar—and familiar information is more likely to be accepted as true. From a dual-process perspective, familiarity-based acceptance of myths is most likely to occur in the absence of strategic memory processes. Thus, we examined factors known to affect whether strategic memory processes can be utilized: age, detail, and time. Participants rated their belief in various statements of unclear veracity; facts were subsequently affirmed and myths were retracted. Participants then rerated their belief either immediately or after a delay. We compared groups of young and older participants, and we manipulated the amount of detail presented in the affirmative or corrective explanations, as well as the retention interval between encoding and a retrieval attempt. We found that (a) older adults over the age of 65 were worse at sustaining their postcorrection belief that myths were inaccurate, (b) a greater level of explanatory detail promoted more sustained belief change, and (c) fact affirmations promoted more sustained belief change than myth retractions over the course of 1 week (but not over 3 weeks). This supports the notion that familiarity is indeed a driver of continued influence effects.
<bold>By: Briony Swire</bold>
<bold>Ullrich K. H. Ecker</bold>
<bold>Stephan Lewandowsky</bold>
<bold>Acknowledgement: </bold>This research was facilitated by a Fulbright Postgraduate Scholarship from the Australian American Fulbright Commission and a University Postgraduate Award from the University of Western Australia to the first author, and a Discovery Grant from the Australian Research Council to the second and third authors. Stephan Lewandowsky was supported by a Wolfson Research Merit Fellowship from the Royal Society while this work was conducted. We thank Charles Hanich for research assistance. The lab web address is http://www.cogsciwa.com.
Every day we process an extraordinary amount of information, and it is often up to the individual to discern fact from fiction. A proportion of this information is inevitably inaccurate and deserves to be corrected after initial encoding. To maintain an accurate and up-to-date representation of the world, ideally we would disregard invalidated information. However, we are far from perfect at performing this task, as corrected misinformation often continues to influence memory and reasoning—this persistence is termed the continued influence effect of misinformation.
The Illusory Truth Effect
Repeated exposure to a statement increases the likelihood that it will subsequently be judged true; this phenomenon is known as the illusory truth effect (Hasher, Goldstein, & Toppino, 1977). Repetition makes a statement more familiar, and familiarity in turn serves as a cue for truth.
The illusory truth effect could be problematic when attempting to correct misinformation, as a correction often repeats the original claim. For example, truthfully stating that playing Mozart to your child will not boost its IQ necessarily restates the claimed link between Mozart and IQ, and this repetition may render the myth itself more familiar and hence more believable.
Strategic and Automatic Memory Processes
The potential familiarity-related difficulties that arise during the correction of misinformation may be explained from a dual-processing perspective. Dual-process theories of memory assume a dichotomy between automatic memory processes, which include familiarity, and strategic memory processes such as recollection and output monitoring (cf. Brown & Warburton, 2006; Diana, Yonelinas, & Ranganath, 2007; Rugg & Curran, 2007; Yonelinas, 2002; Yonelinas & Jacoby, 2012; Zimmer & Ecker, 2010). Familiarity is thought to be a fast, context-free automatic process that allows for the rapid recognition of previously encountered information. Recollection, by contrast, is a slower process thought to allow for the retrieval of contextual details, such as the information’s source, its spatiotemporal encoding context, or its veracity. In the case of corrected misinformation, it is often assumed that a “negation tag” is linked to the original statement, for example, “playing Mozart to your child will boost its IQ—false.”
Regardless of whether statements are correct or have been invalidated, existing memory representations will be activated in response to cues via automatic retrieval to the extent that the information is familiar (cf. Ayers & Reder, 1998). To avoid reliance on familiar but invalid information, strategic memory processes are required to act as a filter of automatically retrieved memory output. However, strategic memory processes take effort and often fail (e.g., Herron & Rugg, 2003), and people can rely upon invalid but automatically retrieved information in their judgments (Ecker et al., 2011; Koutstaal & Schacter, 1997; Reyna & Lloyd, 1997; Roediger, Watson, McDermott, & Gallo, 2001). A postcorrection misinformation effect is therefore likely to occur when misinformation has been automatically activated but strategic memory processes have failed (Ecker, Lewandowsky, & Tang, 2010). Familiarity can thus hinder the remediating effect of a correction when the repetition of misinformation in the course of its correction boosts an invalid item’s familiarity such that it outweighs the correction’s strategic-retrieval dependent corrective effect.
The Familiarity Backfire Effect
Some reports suggest that the familiarity boost associated with a correction can be so detrimental that it causes a familiarity backfire effect, whereby the correction ironically increases belief in the misinformation beyond precorrection levels.
Skurnik et al.’s (2007) finding that people who had viewed a facts-and-myths flyer about vaccination reported a less favorable attitude toward vaccines than those who did not view the flyer may reflect a familiarity backfire effect. However, given that Skurnik et al. only focused upon one contentious issue, there is an alternative explanation—namely a general increase in skepticism: merely reading about vaccination myths may have raised doubts about vaccine safety irrespective of any familiarity boost.
Regarding the misidentification of myths as facts in the Skurnik et al. (2007) study, results from the comparison between the myths-vs.-facts condition and the no-flyer control condition are not available, so it is unclear whether myth belief was actually increased above its precorrection baseline (a true backfire effect) or merely not sufficiently reduced.
Theoretically, if people are presented with explanations affirming facts or refuting myths, belief in facts may be sustained over time, whereas myth items appear to be “forgotten,” simply because automatic and strategic memory processes operate in concert for facts but stand in opposition for myths (cf. Brainerd & Reyna, 2008; Jacoby, 1991; Toth, 1996). For fact items, regardless of whether automatic processes or strategic memory processes are used, both would lead a participant to conclude that the item is true. By contrast, if a participant is unable to correctly recall the correction of a myth because of the forgetting that primarily affects strategic memory processes, the familiarity of the myth—boosted by its repetition during the correction—could lead to the myth being inaccurately accepted as true.
Factors Likely to Influence the Correction of Information: Detail, Time, and Age
It follows from the dual-process notion that the relative impact of familiarity on corrections could potentially be influenced by factors that are known to affect strategic memory processes, including (a) the amount of detail presented in the corrective explanation, (b) the retention interval between encoding and a retrieval attempt, and (c) the age of the participant.
Regarding the correction’s level of detail, providing sufficiently detailed explanations as to why a piece of misinformation is false—in other words, providing a detailed
Regarding the retention interval, failure of strategic processes is particularly likely when there is some delay between encoding and attempted retrieval, as strategic recollection of details diminishes with time, whereas familiarity stays relatively constant (Knowlton & Squire, 1995). Thus, false acceptance of myths based on their familiarity seems particularly likely at longer retention intervals.
Regarding age, older adults have less efficient strategic memory processes than young adults, whereas automatic processing such as familiarity-detection remains relatively age-invariant (e.g., Prull, Dawes, Martin, Rosenberg, & Light, 2006). In particular, older adults seem to become less efficient at binding item and context information (Naveh-Benjamin, 2000); therefore, the mnemonic link between a statement and its veracity could be weaker in older adults. This is in line with the finding that source memory—memory for where or how information was acquired—is particularly susceptible to the effects of ageing (e.g., Glisky, Rubin, & Davidson, 2001). Consistent with this notion, Skurnik, Yoon, Park, and Schwarz (2005) found that older adults were particularly likely to misremember myths as facts after repeated retractions (compared with single retractions) after a 3-day retention interval (but not after 30-min, and not in younger adults as per the Skurnik et al., 2007 study). However, it is difficult to draw firm conclusions from the Skurnik et al. (2005) study for several reasons: (a) premanipulation belief ratings were not measured nor was there a control group where corrections were not presented; (b) no cognitive screening task was given to older adults, potentially reducing the generalizability of findings; and (c) health claims were used that were arbitrarily labeled as valid or invalid without explanation, even though all claims were actually true—thus, some corrections were misleading, and distrust in the corrections may have contributed to the results, as it is well established that source credibility is an influential factor in the persuasiveness of messages (Eagly & Chaiken, 1993; Guillory & Geraci, 2013).
In summary, factors such as the correction’s level of explanatory detail, retention interval, and participant age are likely to play a role in determining the success of a correction, but their specific importance is unclear and findings have been inconsistent. By manipulating and comparing these factors, the present research aimed to clarify if and under what conditions familiarity is most problematic. Experiment 1 tested young adults; Experiment 2 tested older adults.
Experiment 1
This study presented an undergraduate population with both incorrect and correct claims (i.e., myths and facts), then corrected the false claims in a way that boosted their familiarity. To this end, participants were presented with a range of statements of unclear veracity that were subsequently labeled as true or false. People’s belief in the statements was measured both before the true/false explanation and in a postmanipulation test phase to yield a measure of belief change. To avoid the problems associated with posttest-only designs (Morris, 2008), we used a pretest-posttest design so that each individual could be used as their own control (Hunter & Schmidt, 2004).
Level of explanatory detail and study-test retention interval were manipulated to identify the parameters of corrections that promote successful discounting of misinformation. The experiment used a 2 × 2 × 3 within-between design, with within-subjects factors type of item (myth vs. fact) and type of explanation (the veracity of each statement was explained either briefly or in some detail), and the between-subjects factor retention interval (immediate, 30-min, or 1-week). In some studies, continued influence effects were found primarily in more indirect measures of belief that require participants to use the misinformation in reasoning (cf. Johnson & Seifert, 1994). Therefore, inference questions were also administered at test, serving as a more indirect measure of belief that could help avoid issues related to social desirability.
We hypothesized that (a) detailed explanations would lead to stronger belief change than brief explanations for both myths and facts, and (b) belief change would be more sustained over time after fact affirmation compared with myth retraction, as false familiarity-based acceptance of myths would seem particularly likely at longer retention intervals. We did not expect a backfire effect, as there are no clear demonstrations of a true familiarity backfire effect in the peer-reviewed literature. However, we hypothesized that one was theoretically most likely to occur with a brief retraction after a 1-week delay.
<h31 id="xlm-43-12-1948-d369e376">Method</h31><bold>Participants</bold>
A power analysis (conducted with G*Power 3; Faul, Erdfelder, Buchner, & Lang, 2009) suggested that 78 participants were required to detect a small-to-medium effect (effect size
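For readers who wish to reproduce this kind of calculation, an a priori power analysis can be sketched in Python with statsmodels. The effect size below is an assumed illustrative value (the paper's exact value is not reproduced here), and statsmodels' one-way ANOVA routine only approximates G*Power's repeated-measures procedure.

```python
# Sketch of an a priori power analysis (assumed illustrative values;
# the study used G*Power 3, whose repeated-measures routine differs).
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
# Cohen's f = 0.18 is an assumed "small-to-medium" effect size;
# alpha and power are conventional defaults, k_groups matches the
# three retention-interval groups.
n_total = analysis.solve_power(effect_size=0.18, alpha=0.05,
                               power=0.80, k_groups=3)
print(f"Total N required: {n_total:.0f}")
```

Note that repeated-measures designs typically require fewer participants than this between-groups approximation suggests, because within-subjects factors absorb individual-difference variance.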
<bold>Stimuli</bold>
There were 20 myths and 20 facts, each with a corresponding brief explanation, a detailed explanation, and two inference questions. An example myth or fact and the corresponding explanations and example inference questions are given in Table 1 (see supplement A for the complete list of items, explanations, and inference questions). Brief explanations simply stated whether the item was a myth or a fact with no further clarification. They explicitly repeated the initial statement twice (once in the original and once in a negated format if the item was a myth). Thus, participants encountered the initial statement three times altogether: once when it was initially rated and twice in the explanation. Detailed explanations also provided the myth/fact label but in addition included three or four sentences of further information; myth retractions did not provide a causal alternative to the myth but rather explained why the myth was wrong and/or where it originated from. Detailed explanations explicitly repeated the initial statement only once, but elements of the statement were repeated in the additional information.
<anchor name="tbl1"></anchor>
Inference questions were rated on an 11-point scale, with the specific scale-value range dependent on the item; for example, some items were rated on a 0–10 scale, others were rated on a 0–20% scale with 2% increments.
Two pilot studies were conducted to select stimuli from a list of 80 items (55 myths and 25 facts) that was initially compiled by selecting items from websites such as New Scientist and Scientific American, and from myth-busting programs such as QI. Each item was researched to the best of our ability, and where possible evidence from the peer-reviewed literature was sought out. The aim of the first pilot study was to select a pool of items that were common and midrange believable, to allow for either reduction or increase in belief following retractions or affirmations, respectively. The second pilot study was run to ensure that the inference questions were in fact indirect measures of belief (i.e., that they correlated with the associated explicit belief measures; e.g., to ensure that the inference question “What percentage of lies can FBI detectives catch just by looking at physical tells?” sufficiently measured an individual’s belief that liars can give themselves away through physical tells).
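The validation logic of the second pilot study can be sketched as follows: each inference question is retained only if its answers correlate with the explicit belief ratings for the same item. The data, retention threshold, and variable names below are hypothetical and serve only to illustrate the procedure.

```python
# Hypothetical illustration: validate an inference question by
# correlating its answers with explicit belief ratings for the item.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
belief = rng.uniform(0, 10, size=100)            # 0-10 explicit belief ratings
inference = 2 * belief + rng.normal(0, 3, 100)   # noisy related indirect measure

r, p = pearsonr(belief, inference)
# Retain the question only if it tracks belief
# (the threshold here is illustrative, not the study's criterion).
keep_item = (r > 0.3) and (p < 0.05)
```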
<bold>Pilot Study 1</bold>
The aim of the first pilot study was to select an item pool of myths that were common and at least midrange believable. Thirty-one undergraduate students from the University of Western Australia took part. Participants indicated for 55 myth and 20 fact items (a) if they had heard of the item before (i.e., familiarity) and (b) the extent to which they believed each item (i.e., believability). Familiarity was measured on a 5-point scale ranging from “
After Pilot Study 1, there were 37 myths remaining. The mean familiarity score of the myths was
<bold>Pilot Study 2</bold>
The second pilot study was run to ensure that the inference questions were in fact an indirect measure of belief in the initial claims. Participants were 100 individuals who volunteered via Crowdflower (
Participants were excluded if they reported their English skills to be only “fair” (0 on a 4-point scale ranging from “fair” to “native speaker”; 5 individuals), if they took less than 15 min to complete the task (23 individuals) or more than 85 min (3 individuals; mean completion time was
In a final step, the four remaining myths with the lowest belief ratings and correlations between inference questions and belief ratings were removed. The final stimulus set thus comprised 20 myths and 20 facts, each with two corresponding inference questions.
<bold>Procedure</bold>
Participants were seated individually in testing booths and the experiment was administered by Qualtrics survey software. Participants were presented the 40 items in randomized order, and they indicated on a 0–10 scale the extent to which they believed each item using a computer mouse. Directly after each item was rated, participants received either a brief or a detailed explanation, which were randomly counterbalanced. In the immediate test condition (i.e., no retention interval), the test phase began immediately after all items had been rated and retracted or affirmed. The test phase involved a block of 80 inference questions (two per item) in random order, followed by a block of 40 direct belief ratings in random order. Participants in the 30-min retention interval group completed an unrelated filler task before the test, and participants in the 1-week group completed the test phase a week later—this test was administered in an online format to keep participation rates high. The test phase was identical regardless of retention interval.
<h31 id="xlm-43-12-1948-d369e475">Results</h31><bold>Belief ratings</bold>
Both premanipulation facts and myths attracted midrange initial belief ratings, as expected,
After participants read the affirmations/corrections, belief in facts increased and belief in myths decreased, as shown in Figure 1. This belief change was initially sustained for both myths and facts, yet after a 1-week period belief in myths regressed. As postmanipulation belief levels remained below premanipulation belief levels, no true backfire effect was elicited.<anchor name="b-fn2"></anchor><sups>2</sups>
<anchor name="fig1"></anchor>
A 2 × 2 × 3 within-between ANOVA (with factors type of item, type of explanation, and retention interval) was performed on the postmanipulation belief ratings. For this and all further statistical analyses, belief ratings and inference scores for myths were reverse-coded. This was to simplify the analysis and allow the type of explanation (brief vs. detailed) to register as a main effect rather than an interaction. The figures and discussion of the data trends are presented in the original untransformed format to facilitate interpretation.
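The reverse-coding step amounts to mirroring myth ratings on the 0–10 scale so that, for both item types, higher values indicate responses consistent with the explanation. A minimal sketch (data and column names are assumed for illustration):

```python
# Minimal sketch of the reverse-coding step (values and column
# names are hypothetical; myths are mirrored on the 0-10 scale).
import pandas as pd

df = pd.DataFrame({
    "item_type": ["fact", "myth", "myth", "fact"],
    "post_belief": [9.0, 2.0, 6.5, 7.0],   # postmanipulation 0-10 ratings
})
# Keep fact ratings as-is; mirror myth ratings so that higher scores
# always mean belief consistent with the affirmation/retraction.
df["coded_belief"] = df["post_belief"].where(
    df["item_type"] == "fact", 10 - df["post_belief"])
```

With this coding, a strongly believed retraction and a strongly believed affirmation both yield high scores, so explanation detail can surface as a main effect rather than an item-type interaction.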
The analysis revealed three significant main effects. The main effect of type of item (myth vs. fact),
Next, we ran a 2 × 2 × 2 within-between ANOVA (factors type of item, type of explanation, and retention interval) restricted to the 30-min and 1-week retention intervals to clarify specifically whether the difference between fact and (reverse-coded) myth ratings was greater after a week than 30 min, or in other words, whether belief change was more stable over time for myths versus facts. The interaction between type of item and retention interval was significant,
Even at the level of individual items, the pattern was consistent: retracted myths were more likely to show regression toward their premanipulation belief levels, whereas belief in affirmed fact items was relatively sustained over time. Only one myth item showed a numerically larger belief rating a week after correction compared with premanipulation belief levels.<anchor name="b-fn3"></anchor><sups>3</sups>
<bold>Inference ratings</bold>
Even if participants were successfully discounting misinformation in the direct belief ratings, they could still be using misinformation in their reasoning. To address this question, we analyzed participants’ mean inference scores. All inference scores were significantly correlated at the
A 2 × 2 × 3 within-between ANOVA was performed on the inference scores (with factors type of item, type of explanation, and retention interval). The results mimicked the pattern obtained with the postmanipulation belief scores, as Figure 2 illustrates. There were main effects of type of item,
<anchor name="fig2"></anchor>
Analogous to the belief ratings analysis, a 2 × 2 × 2 within-between ANOVA (with factors type of item, type of explanation, and retention interval) was run testing specifically whether inference scores were less stable over time for myths versus facts in the 30-min to 1-week interval. The type of item by retention interval interaction was significant,
Returning to the omnibus 2 × 2 × 3 analysis, there was also a marginal interaction between type of explanation and retention interval,
Experiment 2
Experiment 1 showed that belief change was more sustained after fact affirmation compared with myth retraction. Experiment 2 was a conceptual replication of Experiment 1 but tested older adults. As we noted at the outset, it is possible that older adults are more strongly susceptible to the effects of familiarity, as older adults have less efficient strategic memory processes than young adults, whereas automatic processing is relatively age-invariant (Prull et al., 2006). Although it is difficult to pinpoint the exact age at which recollection begins to decline, a study by Bender, Naveh-Benjamin, and Raz (2010) suggested a marked decline around the age of 40, and many studies investigating age-related differences in familiarity and recollection have used an older adult population with a mean age in the 60s (e.g., Aizpurua, Garcia-Bajos, & Migueles, 2009; Bastin & Van der Linden, 2003) or 70s (Anderson et al., 2008; Fernandes & Manios, 2012; Prull et al., 2006).
<h31 id="xlm-43-12-1948-d369e734">Method</h31>Experiment 2 was identical to Experiment 1, with two changes: (a) it was conducted with an older adult population; (b) an additional 3-week retention interval condition was added to maximize the chances of eliciting the familiarity backfire effect, given the temporal stability of familiarity in contrast to the temporal volatility of recollection.
<bold>Participants</bold>
Participants were 124 older adults over the age of 50, who volunteered after reading an ethically approved information sheet. Participants were recruited by advertising through the University of Western Australia website, Western Australian radio, and flyers around Perth. Participants were paid A$15 for their participation. Participants were screened using the Montreal Cognitive Assessment (MoCA); 13 participants were excluded as they scored below the normal range of 26 to 30 (Nasreddine et al., 2005). An additional two participants did not complete the task. Our final sample thus included
<bold>Procedure</bold>
The procedure replicated Experiment 1, except that participants completed the MoCA before the study. One-week and 3-week surveys were completed in an online format to keep participation rates high; two participants in the delayed conditions opted to receive paper copies of the survey, which were mailed back to the researchers once completed.
<h31 id="xlm-43-12-1948-d369e755">Results</h31><bold>Belief ratings</bold>
A within-subjects ANOVA was performed on the premanipulation myth and fact belief ratings, which uncovered no significant differences between conditions,
After participants read the explanations, belief in facts increased and belief in myths declined, as can be seen in Figure 3. In striking similarity to Experiment 1, belief in facts was sustained over a 1-week period, whereas belief in myths regressed between 30 min and 1 week. Between Week 1 and Week 3, belief scores for both facts and myths regressed to a similar extent. As postmanipulation myth belief levels remained below premanipulation belief levels, no true backfire effect was elicited.<anchor name="b-fn4"></anchor><sups>4</sups>
<anchor name="fig3"></anchor>
For all further analyses, belief ratings and inference score ratings for myths were reverse-coded, as in Experiment 1. A 2 × 2 × 4 within-between ANOVA on belief ratings was run, with within-subjects factors type of item (myth vs. fact) and type of explanation (veracity explained either briefly or in some detail), and the between-subjects factor retention interval (immediate, 30-min, 1-week, or 3-weeks). The analysis revealed three significant main effects. The main effect of type of item (myth vs. fact),
In subsequent contrast analyses, we focused first on the 30-min and 1-week retention intervals (analogous to Experiment 1). An interaction contrast between type of item (myth vs. fact) and retention interval (30-min vs. 1-week) demonstrated that the belief difference between myths and facts was greater after 1 week than 30 min,
Focusing on retention intervals of 1 and 3 weeks, the analogous type of item by retention interval contrast was not significant,
As postmanipulation myth belief significantly correlated at the
<anchor name="fig4"></anchor>
For the sake of completeness, a 2 × 2 × 4 within-between ANOVA on fact beliefs was performed (with factors type of item, type of explanation, and retention interval). The analysis yielded significant main effects of type of explanation,
<bold>Inference ratings</bold>
Returning to the analysis of the full sample, inference scores are presented in Figure 5. A 2 × 2 × 4 within-between ANOVA (with factors type of item, type of explanation, and retention interval) on the inference scores revealed main effects of type of item,
<anchor name="fig5"></anchor>
There was also an interaction of type of explanation and retention interval
General Discussion
The present research aimed to determine the parameters of differential forgetting of myth and fact veracity over time, to clarify if and under what conditions familiarity may contribute to false acceptance of corrected myths as true. Dual-process accounts of continued influence effects of misinformation (e.g., Ecker et al., 2010) suggest that postcorrection reliance on misinformation can be based on automatic memory processes (i.e., myth familiarity) in the absence of strategic retrieval and control processes. Hence familiarity-based acceptance of corrected falsehoods could be a mechanism underlying continued influence effects of misinformation. To investigate this, we presented participants with both myths and facts, obtained a premanipulation belief rating, then corrected the former and affirmed the latter. We manipulated factors known to affect strategic memory processes, thus varying the relative impact of familiarity. Specifically, we manipulated the explanations’ level of detail and the retention interval, contrasted age groups, and measured how these factors affected people’s postexplanation beliefs and inferences.
While some studies have shown a continued influence effect after a brief retention interval (e.g., Ecker et al., 2011; Johnson & Seifert, 1994), our corrections (and affirmations) were found to be relatively effective in the short-term. This difference may arise because, unlike the typical continued-influence paradigm, we retracted simple statements rather than causal relationships regarding an event, which may be particularly resistant to correction. The short-term efficacy of the explanations was more apparent for direct belief ratings (e.g., see Figure 1), whereas our inference measure (e.g., see Figure 2) closely resembled the typical result pattern found in continued-influence studies, which often also use inference questions to assess misinformation effects.
<h31 id="xlm-43-12-1948-d369e1094">Differential Forgetting of Myths and Facts Over Time</h31>Across both experiments, we found a striking asymmetry in that belief change was more sustained after fact affirmation compared with myth retraction—retractions thus seemingly have an “expiration date.” This asymmetry could be partially explained by familiarity. In the case of an affirmed fact, it does not matter if an individual relies on the recollection of the affirmation or on the boosted familiarity of the factual statement—familiarity and recollection operate in unison and lead to the individual assuming the item to be true. However, in the case of a retracted myth, recollection of the retraction will support the statement’s correct rejection, whereas the myth’s boosted familiarity will foster its false acceptance as true, as familiarity and recollection stand in opposition (Jacoby, 1991).
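This opposition can be illustrated with a toy computation in which recollection decays over the retention interval while familiarity remains constant. All parameters and the functional form are hypothetical; the sketch is not a fitted model of the data, only a demonstration of why myth belief regresses while fact belief is sustained.

```python
# Toy dual-process illustration (all parameters hypothetical):
# judged truth reflects constant familiarity plus decaying recollection.
import math

def believed_true(days, item="myth", fam=0.7, rec0=0.9, decay=0.15):
    """Probability-like score that an item is judged true after `days`."""
    recollection = rec0 * math.exp(-decay * days)  # fades with time
    if item == "fact":
        # For an affirmed fact, familiarity and recollection both
        # point toward "true": the stronger cue wins either way.
        return max(fam, recollection)
    # For a retracted myth, recollection of the correction opposes the
    # myth's familiarity; familiarity dominates once recollection fades.
    return fam * (1 - recollection)

myth_now, myth_week = believed_true(0), believed_true(7)
fact_now, fact_week = believed_true(0, "fact"), believed_true(7, "fact")
```

Under these assumed parameters, myth belief creeps back up over the week while fact belief stays high, mirroring the asymmetry observed in both experiments.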
Our inference results mirrored the trend obtained with the belief ratings, demonstrating that familiarity effects can extend to inferential reasoning and potentially decision making. It is even possible that the act of responding to inference questions can contribute to increased familiarity of the misconception, in that the information is subjectively re-experienced during memory retrieval after exposure to the inference question, once again leading to a potentially increased perception of validity (Ozubko & Fugelsang, 2011).
<h31 id="xlm-43-12-1948-d369e1106">Age and Level of Detail</h31>Overall, the pattern of belief change over time—and in particular the asymmetry between facts and myths—was similar in young and older participants. Even young adults’ recollection fades over time, leading to an increased reliance upon familiarity in judging the veracity of information (Gilbert et al., 1990). However, “old” participants aged 65 and over were found to be comparatively worse than those aged 50–64 (“middle-aged” participants) at sustaining their postcorrection belief that myths are inaccurate. This supports the notion that older adults have less efficient strategic memory processes and thus less effective retrieval of the link between an item and contextual details (Naveh-Benjamin, 2000; Prull et al., 2006). As the mnemonic link between a statement and its veracity is weaker in older adults (Glisky et al., 2001), they seem particularly susceptible to the “re-believing” of myths. Although there was also a significant difference in fact belief between the “middle-aged” and “old” groups, this reflected the fact that the old participants were less likely to initially update their belief immediately after the affirmation. This differed from myth belief, where belief change immediately after a correction was substantial yet followed by relatively steep forgetting as time progressed.
Detailed refutations seemed to somewhat mitigate the negative impact of familiarity in both younger and middle-aged adults. This is supported by parts of the educational literature, which highlight the benefits of detailed refutations (Tippett, 2010). Refutations may encourage participants to detect inconsistencies between their own inaccurate beliefs and the corrective information, leading to a facilitation of belief change even over long delays (Bedford & J. Cook, 2013; Guzzetti, 2000; Kowalski & Taylor, 2009). The benefit of directly addressing misconceptions could additionally be explained by detailed explanations fostering skepticism regarding the initial misinformation or its source (cf. Lewandowsky, Stritzke, Freund, Oberauer, & Krueger, 2013; Lewandowsky, Stritzke, Oberauer, & Morales, 2005). However, as much of this research stems from the educational literature, it has mostly used undergraduates or school-age participants (Guzzetti et al., 1993). The current study found that for “old” adults over the age of 65, correcting myths using detailed refutations was as ineffective as brief retractions.
<h31 id="xlm-43-12-1948-d369e1146">The Familiarity Backfire Effect</h31>The present research provides evidence for familiarity causing an increase in postcorrection myth belief after a delay; this meshes well with previous studies that similarly reported that myths are often “misremembered” as facts over time (Peter & Koch, 2016; Skurnik et al., 2005; Skurnik et al., 2007). However, we found no evidence for the existence of a true familiarity-based backfire effect: at no point did postcorrection myth belief exceed premanipulation levels.
The lack of a familiarity backfire effect conforms to a range of theoretical proposals which suggest that repeating misinformation when correcting it could even be beneficial.
This implies that future research still faces a conundrum: while the present findings suggest that false acceptance of corrected myths as true is at least partially driven by familiarity, it seems that corrections that do not repeat the myth may be even less effective than corrections that do repeat the myth (e.g., Ecker, Hogan, & Lewandowsky, in press; Wilkes & Leatherbarrow, 1988). In other words, if a myth is not repeated when corrected, the associated lack of salience, conflict detection, and/or myth/correction coactivation may be even more detrimental to belief updating than the boost of the myth’s familiarity.
<h31 id="xlm-43-12-1948-d369e1189">Potential Limitations and Future Directions</h31>Obtaining belief measures before the experimental manipulation could be considered a limitation as it may have influenced how the corrective explanations were processed. However, in our opinion it is likely that a person’s belief is spontaneously cued when a statement of unclear veracity (e.g., a potentially dubious news headline) is encountered, or when a correction is presented by itself (e.g., if one is told that listening to Mozart does not increase IQ, it seems likely that one would consider whether or not one believes the original claim). Thus, asking for an explicit expression of belief before a correction will not necessarily have a strong impact on how the correction is processed. In our view, from a methodological perspective, the advantages of a pretest-posttest design outweigh the disadvantages. “Posttest-only with control” designs as used by Skurnik et al. (2005)—where one group received the correction and another group received no correction—can be considered quasi-experimental as the treatment and control groups cannot be adequately compared at baseline (Morris, 2008). This potentially reduces internal validity because the differences at posttest may be artificially inflated (T. D. Cook & Campbell, 1979; Morris & DeShon, 2002).
The artificial nature of the task could be seen as another limitation, as participants evaluated a long series of statements. However, people often process a large number of news headlines in a short period of time (e.g., when skimming a newspaper or scanning one’s social media feed), arguably assessing or at least monitoring the truth/belief status of each. Thus, we argue that people routinely deliberate belief before correction (i.e., in an experimental context, before a postcorrection belief rating), even with large numbers of statements.
We have interpreted our finding that myths are more likely than facts to be misremembered after a delay as an effect of familiarity when strategic memory is limited. The present research focused on factors that influence strategic memory processes; future research could test the proposed relationship between familiarity and misinformation effects more directly, for example by correcting statements that are familiar to some participants but not others. Previous research has found that misinformation effects are particularly strong if the misinformation is repeatedly presented before a correction (Ecker et al., 2011; see also Weaver, Garcia, Schwarz, & Miller, 2007), in line with the familiarity notion.
Moreover, future research could apply alternative testing procedures to further investigate the mechanisms underlying the effects reported here. For example, if myth acceptance is familiarity-driven, one might expect corrected myths to be accepted as true particularly in tasks requiring true/false categorization of statements (that may be more recognition-based) rather than in tasks that have a stronger recall component.
<h31 id="xlm-43-12-1948-d369e1217">Practical Applications</h31>The applied goal of this research was to provide empirically based advice on how to correct misconceptions. The present data suggest the following: First, corrections should include details as to why the misinformation is incorrect, as detailed refutations are more effective than brief retractions, particularly with younger participants. Thus, the misinformation should be explicitly retracted and paired with a comprehensive rebuttal.
Second, even the efficacy of detailed refutations of familiar misconceptions will lessen over time, and important corrections may need to be provided repeatedly, despite the potential risk of further boosting the myth’s familiarity (see also Ecker et al., 2011). While this recommendation seems somewhat ironic in the context of the boosted-familiarity notion, boosting the more volatile recollection of the correction to offset myth familiarity may be necessary to achieve enduring belief change.
Third, explicitly mentioning a familiar misconception within a retraction will not typically backfire in the true sense of the word (this qualifies earlier recommendations; e.g., J. Cook & Lewandowsky, 2011; Lewandowsky et al., 2012). Repeating the myth when retracting it may be crucial for belief updating because it increases the correction’s salience and fosters conflict detection and coactivation of myth and correction (Kendeou et al., 2014; Putnam et al., 2014; Stadtler et al., 2013). However, given the aforementioned trade-off between the harm from boosting myth-familiarity and the benefit from boosting recollection of the correction (e.g., the association of the myth and its “negation-tag”), theoretically there may be circumstances where the harm outweighs the benefit. Moreover, it may also be problematic to circulate corrections if individuals have not previously encountered the relevant misconception, as this may potentially make the misinformation familiar to new audiences (Schwarz et al., 2016). It follows that, after correcting a myth, the focus should be placed upon factual information as much as possible to avoid boosting myth familiarity more than necessary (cf. Ecker et al., 2010; Johnson & Seifert, 1994; Lewandowsky et al., 2012; Seifert, 2002).
Footnotes
<anchor name="fn1"></anchor><sups> 1 </sups> The term familiarity backfire effect has been used somewhat inconsistently. The term is sometimes used simply when myths are misremembered as facts, without a control condition or baseline comparison (cf. Peter & Koch, 2016). However, we argue it should only pertain to cases where a correction inadvertently increases belief in the misinformation beyond precorrection levels.
<sups> 2 </sups> Nyhan and Reifler (2015) found that corrective information regarding the flu vaccine reduced participants’ intent to vaccinate, but
<sups> 3 </sups> This exception was “cancer screening is greatly beneficial” in the brief explanation condition, which had a mean premanipulation belief rating of 5.04 that rose to 5.33 after 1 week.
<anchor name="fn4"></anchor><sups> 4 </sups> To address the assumption that backfire effects may only occur when correcting strong belief in the original misconception, the analysis was replicated using each participant’s 30% most strongly believed myths and 30% least believed facts. The trend was replicated, and no backfire effect was elicited.
<anchor name="fn5"></anchor><sups> 5 </sups> A 2 × 2 × 3 ANOVA also including the young adults from Experiment 1 (with factors type of explanation [brief vs. detailed], retention interval [30 min vs. 1 week], and age [young, middle-aged, and old]) on postexplanation myth scores likewise revealed a main effect of age,
References
<anchor name="c1"></anchor>Aizpurua, A., Garcia-Bajos, E., & Migueles, M. (2009). False memories for a robbery in young and older adults.
Anderson, N. D., Ebert, P. L., Jennings, J. M., Grady, C. L., Cabeza, R., & Graham, S. J. (2008). Recollection- and familiarity-based memory in healthy aging and amnestic mild cognitive impairment.
Ayers, M. S., & Reder, L. M. (1998). A theoretical review of the misinformation effect: Predictions from an activation-based memory model.
Bastin, C., & Van der Linden, M. (2003). The contribution of recollection and familiarity to recognition memory: A study of the effects of test format and aging.
Bedford, D., & Cook, J. (2013). Agnotology, scientific consensus, and the teaching and learning of climate change: A response to Legates, Soon, and Briggs.
Begg, I. M., Anas, A., & Farinacci, S. (1992). Dissociation of processes in belief: Source recollection, statement familiarity, and the illusion of truth.
Bender, A. R., Naveh-Benjamin, M., & Raz, N. (2010). Associative deficit in recognition memory in a lifespan sample of healthy adults.
Berinsky, A. J. (2015). Rumors and health care reform: Experiments in political misinformation.
Brainerd, C. J., & Reyna, V. F. (2008). Episodic over-distribution: A signature effect of familiarity without recollection.
Brown, M. W., & Warburton, E. C. (2006). Associations and dissociations in recognition memory systems. In H.Zimmer, A.Mecklinger, & U.Lindenberger (Eds.),
Campbell, D. (1997).
Connolly, K., Chrisafis, A., McPherson, P., Kirchgaessner, S., Haas, B., Phillips, D., . . .Safi, M. (2016). Fake news: An insidious trend that’s fast becoming a global problem.
Cook, J., Bedford, D., & Mandia, S. (2014). Raising climate literacy through addressing misinformation: Case studies in agnotology-based learning.
Cook, J., & Lewandowsky, S. (2011).
Cook, T. D., & Campbell, D. T. (1979).
Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-analytic review of the truth effect.
Diakidoy, I. A. N., Mouskounti, T., Fella, A., & Ioannides, C. (2016). Comprehension processes and outcomes with refutation and expository texts and their contribution to learning.
Diana, R. A., Yonelinas, A. P., & Ranganath, C. (2007). Imaging recollection and familiarity in the medial temporal lobe: A three-component model.
DiFonzo, N., Beckstead, J. W., Stupak, N., & Walders, K. (2016). Validity judgments of rumors heard multiple times: The shape of the truth effect.
Eagly, A. H., & Chaiken, S. (1993).
Ecker, U. K. H., Hogan, J. L., & Lewandowsky, S. (in press). Reminders and repetition of misinformation: Helping or hindering its retraction?
Ecker, U. K. H., Lewandowsky, S., Swire, B., & Chang, D. (2011). Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction.
Ecker, U. K. H., Lewandowsky, S., & Tang, D. T. W. (2010). Explicit warnings reduce but do not eliminate the continued influence of misinformation.
Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses.
Fernandes, M. A., & Manios, M. (2012). How does encoding context affect memory in younger and older adults?
Gemberling, T. M., & Cramer, R. J. (2014). Expert testimony on sensitive myth-ridden topics: Ethics and recommendations for psychological professionals.
Gilbert, D. T., Krull, D. S., & Malone, P. S. (1990). Unbelieving the unbelievable: Some problems in the rejection of false information.
Glisky, E. L., Rubin, S. R., & Davidson, P. S. (2001). Source memory in older adults: An encoding or retrieval problem?
Guillory, J. J., & Geraci, L. (2013). Correcting erroneous inferences in memory: The role of source credibility.
Guzzetti, B. J. (2000). Learning counter-intuitive science concepts: What have we learned from over a decade of research?
Guzzetti, B. J., Snyder, T. E., Glass, G. V., & Gamas, W. S. (1993). Promoting conceptual change in science: A comparative meta-analysis of instructional interventions from reading education and science education.
Hardt, O., Einarsson, E. Ö., & Nader, K. (2010). A bridge over troubled water: Reconsolidation as a link between cognitive and neuroscientific memory research traditions.
Hart, P. S., & Nisbet, E. C. (2012). Boomerang effects in science communication.
Herron, J. E., & Rugg, M. D. (2003). Strategic influences on recollection in the exclusion task: Electrophysiological evidence.
Hoaglin, D. C., & Iglewicz, B. (1987). Fine-tuning some resistant rules for outlier labeling.
Hunter, J. E., & Schmidt, F. L. (2004).
Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory.
Johnson, H. M., & Seifert, C. M. (1994). Sources of the continued influence effect: When misinformation in memory affects later inferences.
Kendeou, P., Walsh, E. K., Smith, E. R., & O’Brien, E. J. (2014). Knowledge revision processes in refutation texts.
Knowlton, B. J., & Squire, L. R. (1995). Remembering and knowing: Two different expressions of declarative memory.
Koutstaal, W., & Schacter, D. L. (1997). Gist-based false recognition of pictures in older and younger adults.
Kowalski, P., & Taylor, A. K. (2009). The effect of refuting misconceptions in the introductory psychology class.
Lavoipierre, A. (2017). ‘Fake news’ named 2016 Word of the Year by Macquarie Dictionary.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing.
Lewandowsky, S., Stritzke, W. G. K., Freund, A. M., Oberauer, K., & Krueger, J. I. (2013). Misinformation, disinformation, and violent conflict: From Iraq and the “War on Terror” to future threats to peace.
Lewandowsky, S., Stritzke, W. G. K., Oberauer, K., & Morales, M. (2005). Memory for fact, fiction, and misinformation: The Iraq War 2003.
Lilienfeld, S. O., Marshall, J., Todd, J. T., & Shane, H. C. (2014). The persistence of fad interventions in the face of negative scientific evidence: Facilitated communication for autism as a case example.
Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control group designs.
Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs.
Nasreddine, Z. S., Phillips, N. A., Bédirian, V., Charbonneau, S., Whitehead, V., Collin, I., . . .Chertkow, H. (2005). The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment.
Naveh-Benjamin, M. (2000). Adult age differences in memory performance: Tests of an associative deficit hypothesis.
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions.
Nyhan, B., & Reifler, J. (2015). Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information.
Nyhan, B., Reifler, J., Richey, S., & Freed, G. L. (2014). Effective messages in vaccine promotion: A randomized trial.
Ozubko, J. D., & Fugelsang, J. (2011). Remembering makes evidence compelling: Retrieval from memory can give rise to the illusion of truth.
Pashler, H., Kang, S. H., & Mozer, M. C. (2013). Reviewing erroneous information facilitates memory updating.
Peter, C., & Koch, T. (2016). When debunking scientific myths fails (and when it does not): The backfire effect in the context of journalistic coverage and immediate judgments as prevention strategy.
Pietschnig, J., Voracek, M., & Formann, A. K. (2010). Mozart effect - Shmozart effect: A meta-analysis.
Poland, G. A., & Spier, R. (2010). Fear, misinformation, and innumerates: How the Wakefield paper, the press, and advocacy groups damaged the public health.
Prull, M. W., Dawes, L. L. C., Martin, A. M., III, Rosenberg, H. F., & Light, L. L. (2006). Recollection and familiarity in recognition memory: Adult age differences and neuropsychological test correlates.
Putnam, A. L., Wahlheim, C. N., & Jacoby, L. L. (2014). Memory for flip-flopping: Detection and recollection of political contradictions.
Rauscher, F. H., Shaw, G. L., & Ky, K. N. (1993). Music and spatial task performance.
Reyna, V. F., & Lloyd, F. (1997). Theories of false memory in children and adults.
Roediger, H. L., III, Watson, J. M., McDermott, K. B., & Gallo, D. A. (2001). Factors that determine false recall: A multiple regression analysis.
Rugg, M. D., & Curran, T. (2007). Event-related potentials and recognition memory.
Schultz, D. D. E. P. (2012).
Schwarz, N., Newman, E., & Leach, W. (2016). Making the truth stick and the myths fade: Lessons from cognitive psychology.
Schwarz, N., Sanna, L. J., Skurnik, I., & Yoon, C. (2007). Metacognitive experiences and the intricacies of setting people straight: Implications for debiasing and public information campaigns.
Seifert, C. M. (2002). The continued influence of misinformation in memory: What makes a correction effective?
Silverman, C. (2016). Fake news expert on how false stories spread and why people believe them.
Skurnik, I., Yoon, C., Park, D. C., & Schwarz, N. (2005). How warnings about false claims become recommendations.
Skurnik, I., Yoon, C., & Schwarz, N. (2007).
Stadtler, M., Scharrer, L., Brummernhenrich, B., & Bromme, R. (2013). Dealing with uncertainty: Readers’ memory for and use of conflicting information from science texts as function of presentation format and source expertise.
Tippett, C. D. (2010). Refutation text in science education: A review of two decades of research.
Toth, J. P. (1996). Conceptual automaticity in recognition memory: Levels-of-processing effects on familiarity.
Trevors, G. J., Muis, K. R., Pekrun, R., Sinatra, G. M., & Winne, P. H. (2016). Identity and epistemic emotions during knowledge revision: A potential account for the backfire effect.
Wang, W.-C., Brashier, N. M., Wing, E. A., Marsh, E. J., & Cabeza, R. (2016). On known unknowns: Fluency and the neural mechanisms of illusory truth.
Weaver, K., Garcia, S. M., Schwarz, N., & Miller, D. T. (2007). Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus.
Wilkes, A. L., & Leatherbarrow, M. (1988). Editing episodic memory following the identification of error.
Yonelinas, A. P. (2002). The nature of recollection and familiarity: A review of 30 years of research.
Yonelinas, A. P., & Jacoby, L. L. (2012). The process-dissociation approach two decades later: Convergence, boundary conditions, and new directions.
Zimmer, H. D., & Ecker, U. K. H. (2010). Remembering perceptual features unequally bound in object and episodic tokens: Neural mechanisms and their electrophysiological correlates.