In which of the following types of bias does a person have the tendency to believe falsely after the outcome is known that the person would have accurately predicted it?

Confirmation bias is a well-characterized phenomenon: the tendency to search for or interpret information in a way that confirms one’s preconceptions.

From: Misleading DNA Evidence, 2014

Rigor in Forensic Science

Tania Simoncelli, in Blinding as a Solution to Bias, 2017

Confirmation bias—like other forms of cognitive bias—is not unique to latent print examination, but is rather a fixture of human experience. The role of bias in human decision making was generally well understood at the time of the Madrid bombing investigation. It was also understood that analyses that rely on subjective methods and depend on a high degree of human judgment are especially vulnerable to cognitive bias. Many, if not most, forensic methods are of this nature, yet the forensic science community generally had shown little regard for the dangers posed by these sources of error. As the Mayfield case made painfully clear, even the FBI’s laboratory at Quantico—arguably the most sophisticated forensic science laboratory in the country—had failed to implement rigorous practices to mitigate bias in procedures the lab was performing every day.

URL: https://www.sciencedirect.com/science/article/pii/B9780128024607020040

Emerging Issues and Future Directions

Caleb W. Lack, Jacques Rousseau, in Comprehensive Clinical Psychology (Second Edition), 2022

11.04.4.1.1 Confirmation Bias

Confirmation bias is among the most frequently encountered, frustrating, and yet understandable biases (Nickerson, 1998). It is the tendency of individuals to favor information that confirms their beliefs or ideas and to discount information that does not. This means that, when confronted with new information, we tend to do one of two things. If the information confirms what we already believe, our natural instinct is to accept it as true, accurate, and unbiased. We accept it unreservedly and are happy to have been shown it. Even if it has some problems, we forgive and forget them and quickly incorporate the new information into our beliefs and schemas. We are also more likely to recall this information later, to buttress our belief during an argument. If, on the other hand, the newly encountered information contradicts what we already believe, our natural response is very different. We immediately become highly critical and defensive, nitpicking any possible flaw in the information, even though we would ignore the same flaw if the information confirmed our beliefs. The information also fades quickly from memory, so that later we cannot even recall having been exposed to it.

As an example, suppose you believe that corporal punishment, such as spanking, is an effective way to discipline a child who is acting out. When you see a family member spank a child for not doing what they are told, and the child then complies, your brain latches onto that, and you say to yourself, "I knew it works!" But later you are scrolling through your preferred social media feed and see that a friend has shared a meta-analysis spanning five decades of research, which concludes that the more children are spanked, the more likely they are to be defiant toward their parents and to show increases in antisocial and aggressive behavior, mental health problems, and cognitive difficulties (Gershoff and Grogan-Kaylor, 2016). Since that doesn't fit your already formed belief, you are likely to discount it in some way (e.g., "I was hit and I turned out just fine!" or "They must have ignored all the studies that support spanking in their meta-analysis!").

In many ways, confirmation bias undergirds the entire reason scientific methodology needed to be developed in the first place. We naturally try to find information that supports and proves our beliefs, which can in turn lead to the wholesale discounting or ignoring of contradictory evidence. Science, in contrast, actively tries to disprove ideas. The scientific method allows for increased confidence in our findings and makes scientists less prone to confirmation bias (at least theoretically, and in their scientific work). But humans do not naturally think in a scientific manner, which helps make pop and pseudo-psychology so much easier to understand and absorb. And once a belief is formed, it can be very difficult to shift (Ahluwalia, 2000; Nyhan and Reifler, 2010). But how do we get to that belief in the first place?

URL: https://www.sciencedirect.com/science/article/pii/B9780128186978000522

The Use of Standardized Rating Scales in Clinical Practice

R. MICHAEL BAGBY, ... FIONA S.M. SCHULTE, in Psychiatric Clinical Skills, 2006

Clinical Judgment Biases and Heuristic Errors

The confirmatory bias is the tendency of clinicians to search for information that confirms existing beliefs or hypotheses. Once a diagnostic decision has been made, therefore, you engage in confirmatory hypothesis testing. Subsequent probing throughout the assessment, and the resulting information provided by the patient, tend to be assimilated in ways that only confirm the initial impression. For example, if you have concluded that a patient is suffering from anxiety, the confirmatory bias predicts that you will formulate your pattern of questioning to elicit responses in accordance with your hypothesis, while simultaneously construing the patient's responses to align with it. Clearly, some measure of this is necessary in fleshing out a clinical history based on presenting complaints and clinical hypotheses; however, the risks of this approach on its own should be evident as well.

Another bias recognized to influence clinical judgment is the hindsight bias, which refers to the way in which an impression or perception can change after the actual outcome of an event is learned.5 In other words, it is the tendency for people with outcome knowledge to believe falsely that they would have predicted the reported outcome of an event. In clinical practice, the hindsight bias can interfere when a patient has been referred to you with a speculative diagnosis already reported: clinicians exaggerate the extent to which they had foreseen the likelihood of its occurrence. For example, learning that an outcome has occurred, such as the attempted suicide of a patient, might lead you to perceive your initial formulation, perhaps of suicidal thoughts, as having been correct.5

Heuristics, or rules that guide cognitive processing to help make judgments more quickly, introduce another source of error in human judgment. Because clinicians are often pressured by time constraints in everyday practice, it is not unusual for heuristics to be employed to help make decisions; indeed, you would be completely lost clinically without them. However, while providing ease in assessment, heuristics often sacrifice accuracy of judgment for speed. For example, the availability heuristic is the tendency for decisions to be influenced by the ease with which objects and events can be remembered. Applied to clinical practice, the availability heuristic predicts that you might be more likely to make a diagnosis of depression, as opposed to anxiety, if you can more readily recall patients diagnosed with depression. Coinciding with the availability heuristic is the tendency for people to be influenced by more graphic or dramatic events rather than by real-life probabilities, otherwise known as the "base-rate fallacy." Thus, disorders that receive considerable media attention tend to be perceived as occurring more often than they actually do. This is especially problematic given that the media tend to be fascinated by the rarer disorders, implanting the view that these disorders occur with greater frequency than is actually true.6
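To make the base-rate point concrete, here is a minimal worked sketch (the disorder labels, prevalences, and symptom-fit numbers are invented for illustration, not taken from the chapter): even when a rare, heavily publicized disorder fits the presentation slightly better, the common disorder remains far more probable once prevalence is factored in.

```python
# Toy Bayes calculation: two candidate diagnoses (assumed, for
# simplicity, to be the only candidates) for one presentation.
# All numbers are hypothetical.
base_rate = {"rare_publicized_disorder": 0.01, "common_disorder": 0.10}
fit = {"rare_publicized_disorder": 0.80, "common_disorder": 0.70}  # P(symptoms | disorder)

joint = {d: base_rate[d] * fit[d] for d in base_rate}
total = sum(joint.values())
for d in joint:
    print(f"P({d} | symptoms) = {joint[d] / total:.2f}")
# -> rare disorder ~0.10, common disorder ~0.90: judging frequency by
#    memorability or media salience inverts this ordering.
```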

The representative heuristic occurs when a decision is made based on whether a person is representative of a particular category. In other words, when making a decision as to whether a patient might be diagnosed with borderline personality disorder, you may compare this patient's behavior and experiences to what has been understood as the typology of a borderline patient to determine whether the situations can be considered similar.

It is clear that many factors might influence your perception on any given day. Error in human judgment is inevitable, regardless of the amount of training or the years of expertise a clinician has obtained. Standardized rating scales, therefore, are one means of reducing this inevitable threat of error in human decision-making.

URL: https://www.sciencedirect.com/science/article/pii/B9780323031233500077

Core Network Principles

Warren W. Tryon, in Cognitive Neuroscience and Psychotherapy, 2014

Confirmation Bias

Illusory correlation is also driven by confirmation bias, another defective heuristic that operates outside of awareness (Baron, 2000). Confirmation bias refers to our tendency to let subsequent information confirm our first impressions (Baron, 2000). Hence, the same subsequent information can confirm different points of view depending upon what our first impression was. Alternatively stated, we are preferentially sensitive to, and cherry-pick, facts that justify the decisions we make and the hypotheses we favor, and we are similarly insensitive to facts that fail to support or that contradict those decisions and hypotheses. And the best part is that all of this operates continuously and unconsciously, outside of our awareness. This heuristic has been called the Positive Test Strategy and is illustrated next.

Snyder and Cantor (1979) described a fictitious person named Jane. To one group, Jane was described as an extravert; to another, as an introvert. A couple of days later, half the participants were asked to evaluate Jane for the extraverted job of real estate broker and half for the introverted job of librarian. Evaluations for the real estate job contained more references to Jane's extraversion, whereas evaluations for the librarian job contained more references to her introversion. This finding implies the use of a positive test strategy when trying to remember things about Jane. This cognitive heuristic is also driven by the neural network property of preferring consonance and coherence over dissonance, which we will discuss as Principle 7.

URL: https://www.sciencedirect.com/science/article/pii/B978012420071500003X

What Does It Mean to be Biased

Ulrike Hahn, Adam J.L. Harris, in Psychology of Learning and Motivation, 2014

2.1 Understanding Bias: Scope, Sources, and Systematicity

We begin our example-based discussion with a very general bias which, if robust, would provide direct evidence of motivated reasoning, namely "wishful thinking." Under this heading, researchers (mostly in the field of judgment and decision-making) group evidence for systematic overestimation in the perceived probability of outcomes that are viewed as desirable, as opposed to undesirable.

In actual fact, robust evidence for such a biasing effect of utilities or values on judgments of probability has been hard to come by, despite decades of interest, and the phenomenon has been dubbed "the elusive wishful thinking effect" (Bar-Hillel & Budescu, 1995). Research on wishful thinking in probability judgment has generally failed to find evidence of wishful thinking under well-controlled laboratory conditions (for results and critical discussion of previous research, see, e.g., Bar-Hillel & Budescu, 1995; Bar-Hillel, Budescu, & Amar, 2008; Harris, Corner, & Hahn, 2009). There have been observations of the "wishful thinking effect" outside the laboratory (e.g., Babad & Katz, 1991; Simmons & Massey, 2012). These, however, seem well explained as "an unbiased evaluation of a biased body of evidence" (Bar-Hillel & Budescu, 1995, p. 100; see also Gordon, Franklin, & Beck, 2005; Kunda, 1990; Morlock, 1967; Radzevick & Moore, 2008; Slovic, 1966). For example, Bar-Hillel et al. (2008) observed potential evidence of wishful thinking in the prediction of results in the 2002 and 2006 football World Cups. Further investigation, however, showed that these results were more parsimoniously explained as resulting from a salience effect than from a "magical wishful thinking effect" (Bar-Hillel et al., 2008, p. 282). Specifically, they seemed to stem from a shift in focus that biases information accumulation, not from any direct biasing effect of desirability. Hence, there is little evidence for a general "I wish for, therefore I believe…" relationship (Bar-Hillel et al., 2008, p. 283) between desirability and estimates of probability. Krizan and Windschitl's (2007) review concludes that while there are circumstances that can lead to desirability indirectly influencing probability estimates through a number of potential mediators, there is little evidence that desirability directly biases estimates of probability.

What is at issue here is the systematicity of the putative bias—the difficulty of establishing the presence of the bias across a range of circumstances. The range of contexts in which a systematic deviation between true and estimated value will be observed depends directly on the underlying process that gives rise to that mismatch. Bar-Hillel and Budescu's (1995) contrast between "an unbiased evaluation of a biased body of evidence" and a "magical wishful thinking effect" reflects Macdougall's (1906) distinction between "primary" and "secondary" bias, namely a contrast between selective information uptake and a judgmental distortion of information so acquired.

Both may, in principle, give rise to systematic deviations between (expected) estimate and true value; however, judgmental distortion is more pernicious in that it will produce the expected deviation much more reliably. This follows readily from the fact that selective uptake of information cannot, by definition, guarantee the content of that information. Selectivity in where to look may have some degree of correlation with content, and hence lead to a selective (and truth distorting) evidential basis. However, that relationship must be less than perfect, simply because information uptake on the basis of the content of the evidence itself would require processing of that content, and thus fall under “judgmental distortion” (as a decision to neglect information already “acquired”).

In fact, selective attention to some sources over others can have a systematic effect on information content only where sources and content are systematically aligned and can be identified in advance.

Nevertheless, selectivity in search may lead to measurable decrements in accuracy if it means that information search does not maximize the expected value of information. In other words, even though a search strategy cannot guarantee the content of my beliefs (because there is no way of knowing whether the evidence, once obtained, will actually favor or disfavor my preferred hypothesis), my beliefs may systematically be less accurate because I have not obtained the evidence that could be expected to be most informative.

This is the idea behind Wason's (1960) confirmation bias. Though the term “confirmation bias,” as noted, now includes phenomena that do not concern information search (see earlier, Fischhoff & Beyth-Marom, 1983), but rather information evaluation (e.g., a potential tendency to reinterpret or discredit information that goes against a current belief, e.g., Lord et al., 1979; Nisbett & Ross, 1980; Ross & Lepper, 1980), Wason's original meaning concerns information acquisition. In that context, Klayman and Ha (1989) point out that it is essential to distinguish two notions of “seeking confirmation”:

1. examining instances most expected to verify, rather than falsify, the (currently) preferred hypothesis;

2. examining instances that—if the currently preferred hypothesis is true—will fall under its scope.

Concerning the first sense, "disconfirmation" is more powerful in deterministic environments, because a single counter-example will rule out a hypothesis, whereas confirming evidence is not sufficient to establish the truth of an inductively derived hypothesis. This logic, which underlies Popper's (1959) call for falsificationist strategies in science, does not apply, however, in probabilistic environments where feedback is noisy. Here, the optimal strategy is to select information so as to maximize its expected value (see, e.g., Edwards, 1965; on the general issue in the context of science, see, e.g., Howson & Urbach, 1996). In neither the deterministic nor the probabilistic case, however, is it necessarily wrong to seek confirmation in the second sense—that is, in the form of a positive test strategy. Though such a strategy led to poorer performance in Wason's (1960) study, this is not generally the case: for many (and realistic) hypotheses and environments, a positive test strategy is, in fact, more effective (see also Oaksford & Chater, 1994). This both limits the accuracy costs of any "confirmation bias" and makes a link with "motivated reasoning" questionable.
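The "maximize expected value of information" idea can be made concrete with a small sketch. This is an illustrative toy, not Wason's task: the hypothetical likelihoods below are chosen so that the two hypotheses disagree strongly about instances inside the focal hypothesis's scope and only weakly about instances outside it, in which case the positive test carries the higher expected information gain.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli(p) belief."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(prior, p_yes_h1, p_yes_h2):
    """Expected entropy reduction about H1 vs. H2 from one binary query,
    where p_yes_h* = P(query instance is a category member | hypothesis)."""
    p_yes = prior * p_yes_h1 + (1 - prior) * p_yes_h2
    post_yes = prior * p_yes_h1 / p_yes if p_yes > 0 else prior
    post_no = prior * (1 - p_yes_h1) / (1 - p_yes) if p_yes < 1 else prior
    return entropy(prior) - (p_yes * entropy(post_yes)
                             + (1 - p_yes) * entropy(post_no))

# Positive test: probe an instance the focal hypothesis says is in scope.
print(expected_info_gain(0.5, p_yes_h1=0.90, p_yes_h2=0.30))  # ~0.30 bits
# Negative test: probe an instance it says is out of scope.
print(expected_info_gain(0.5, p_yes_h1=0.05, p_yes_h2=0.15))  # ~0.02 bits
```

Under these assumed conditions the positive test is the information-maximizing choice, which is the sense in which a positive test strategy need not be a "bias" at all.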

Consideration of the systematicity and scope of a putative bias consequently necessitates a clear distinction between the different component processes that go into the formation of a judgment and its subsequent report (whether in an experiment or in the real world). Figure 2.4 distinguishes the three main components of a judgment: evidence accumulation; aggregation and evaluation of that evidence to form an internal estimate; and report of that estimate. In the context of wishful thinking, biasing effects of outcome utility (the desirability/undesirability of an outcome) can arise at each of these stages (readers familiar with Funder's (1995) realistic accuracy model of person perception will detect the parallels; likewise, motivated reasoning research distinguishes between motivational effects on information accumulation and memory as opposed to effects of processing; see, e.g., Kunda, 1990). Figure 2.4 provides examples of studies concerned with biasing effects of outcome desirability on judgment for each of these component processes. For instance, demonstrations that participants use information about real-world base rates (Dai et al., 2008) or real-world "representativeness" (Mandel, 2008) in judging the probability of events exemplify effects of outcome utility on the information available for the judgment: events that are extremely bad or extremely good are less likely in the real world than ones of moderate desirability, so outcome utility provides information about frequency of occurrence which can be used to supplement judgments where participants are uncertain about their estimates.

Figure 2.4. Locating indirect effects of utility (outcome desirability/undesirability) in the probability estimation process. Framed boxes indicate the distinct stages of the judgment formation process. Ovals indicate factors influencing those stages via which outcome utility can come to exert an effect on judgment. Numbers indicate experimental studies providing evidence for a biasing influence of that factor. Note that Dai, Wertenbroch, and Brendl (2008), Mandel (2008), and Harris et al. (2009) all find higher estimates for undesirable outcomes (i.e., “pessimism”).

Figure adapted from Harris et al. (2009).

Confirming our observations about the relative reliability of primary and secondary bias in generating systematic deviations, the different components of the judgment process vary in the extent to which they generally produce "wishful thinking," and several of the studies listed (see Figure 2.4) have actually found "anti" wishful thinking effects, whereby undesirable events were perceived to be more likely.

Such mixed, seemingly conflicting, findings are, as we have noted repeatedly, a typical feature of research on biases (see e.g., Table 1 in Krueger & Funder, 2004). However, only when research has established that a deviation is systematic has the existence of a bias been confirmed and only then can the nature of that bias be examined. The example of base rate neglect above illustrated how examination of only a selective range of base rates (just low prior probabilities or just high prior probabilities) would have led to directly conflicting “biases.” The same applies to other putative biases.
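The point about selective ranges can be illustrated with a minimal simulation (a toy model invented here, not one from the chapter): a single "regressive" judge who blends observed frequencies with a 0.5 anchor looks like an overestimator if tested only on rare events and an underestimator if tested only on common ones.

```python
import random

def regressive_estimate(true_p, weight=0.6, n=50):
    """Toy judge: blends the observed relative frequency of an event
    with a 0.5 anchor -- one mechanism, not two separate 'biases'."""
    freq = sum(random.random() < true_p for _ in range(n)) / n
    return weight * freq + (1 - weight) * 0.5

random.seed(1)
for true_p in (0.05, 0.25, 0.50, 0.75, 0.95):
    mean_est = sum(regressive_estimate(true_p) for _ in range(5000)) / 5000
    print(f"true = {true_p:.2f}   mean estimate = {mean_est:.3f}")
# Low probabilities come out overestimated (~0.23 for 0.05) and high
# ones underestimated (~0.77 for 0.95); sampling only one end of the
# range would suggest two directly conflicting "biases".
```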

In general, names of biases typically imply a putative scope: "wishful thinking" implies that, across a broad range of circumstances, thinking is "wishful." Likewise, "optimistic bias" (a particular type of wishful thinking, see Sharot, 2012) implies that individuals' assessments of their future are generally "optimistic." Researchers have been keen to posit broad-scope biases that subsequently do not seem to hold over the full range of contexts implied by their names. This suggests, first and foremost, that no such general bias exists.

To qualify as optimistically biased, for example, participants should demonstrate a tendency to be optimistic across a gamut of judgments, or at least across a particular class of judgments such as probability judgments about future life events (e.g., Weinstein, 1980; in keeping with Weinstein's original work, we restrict the term "optimistic bias" to judgments about future life events in the remainder). However, while people typically seem optimistic for rare negative events and common positive events, the same measures show pessimism for common negative events and rare positive events (Chambers et al., 2003; Kruger & Burrus, 2004). Likewise, for the better-than-average effect (e.g., Dunning, Heath, & Suls, 2004; Svenson, 1981), people typically think that they are better than their peers at easy tasks but worse than their peers at difficult tasks (Kruger, 1999; Moore, 2007), and the false consensus effect (whereby people overestimate the extent to which others share their opinions; Ross, Greene, & House, 1977) is mirrored by the false uniqueness effect (Frable, 1993; Mullen, Dovidio, Johnson, & Copper, 1992; Suls, Wan, & Sanders, 1988).

One (popular) strategy for responding to such conflicting findings is to retain the generality of the bias but to consider it to manifest only in exactly those situations in which it occurs. Circumstances of seemingly contradictory findings then become “moderators,” which require understanding before one can have a full appreciation of the phenomenon under investigation (e.g., Kruger & Savitsky, 2004): in the case of the better-than-average effect therefore that moderator would be the difficulty of the task.

2.1.1 The Pitfalls of Moderators

Moderators can clearly be very influential in theory development, but they must be theoretically derived. Post hoc moderator claims can render a theory unfalsifiable, or at least make findings pitifully trivial. Consider the result—reported in the Dutch Daily News (August 30th, 2011)—that thinking about meat results in more selfish behavior. As this study has since been retracted—its author, Stapel, admitting that the data were fabricated—it is likely that this result would not have replicated. After (say) 50 failed replication attempts, what is the most parsimonious conclusion? One can either conclude that the effect does not truly exist or posit moderators. After enough replication attempts across multiple situations, the latter strategy will come down to specifying moderators such as "the date, time and experimenter," none of which could be predicted on the basis of an "interesting" underlying theory.

This example is clearly an extreme one; the moderators proposed for the optimism bias and better-than-average effects are clearly more sensible and more general. It remains the case, however, that these moderators must be theoretically justified. If they are not, "moderators" may prop up a bias that does not exist, thus obscuring the true underlying explanation (much as in the toy example above). In a recent review of the literature, Shepperd, Klein, Waters, and Weinstein (2013) argue for the general ubiquity of unrealistic optimism, defined as "a favorable difference between the risk estimate a person makes for him- or herself and the risk estimate suggested by a relevant, objective standard…Unrealistic optimism also includes comparing oneself to others in an unduly favorable manner," but state that this definition makes "no assumption about why the difference exists. The difference may originate from motivational forces…or from cognitive sources, such as…egocentric thinking" (Shepperd et al., 2013, p. 396).

However, the question of why the difference exists is critical for understanding what is meant by the term unrealistic optimism, especially in the presence of findings that clearly appear inconsistent with certain accounts. The finding that rare negative events invoke comparative optimism while common negative events invoke comparative pessimism seems entirely inconsistent with a motivational account: if people are motivated to see their futures as "rosy," why should this not be the case for common negative events (or rare positive events) (Chambers, Windschitl, & Suls, 2003; Kruger & Burrus, 2004)? One can say that comparative optimism is moderated by the interaction of event rarity and valence, such that for half the space of possible events pessimism is in fact observed, but would one really want to call this "unrealistic optimism" or an "optimistic bias"? Rather, a more appropriate explanation seems to be that people focus overly on the self when making comparative judgments (e.g., Chambers et al., 2003; Kruger & Burrus, 2004; see Harris & Hahn, 2009 for an alternative account that can likewise predict this complete pattern of data)—a process that simply has optimism as a by-product in certain situations. It might be that such overfocus on the self gives rise to bias, but through a correct understanding of it one can better predict its implications, and one is in a better position to judge its potential costs.

In summary, when bias is understood in a statistical sense as a property of an expectation, demonstration of deviation across a range of values is essential to establishing the existence of a bias in the first place, let alone to understanding its nature. Conflicting findings across a range of values (e.g., rare vs. common events in the case of optimism) suggest an initial misconception of the bias, and any search for moderators must take care to avoid perpetuating that misconception by—unjustifiedly—splitting up into distinct circumstances one common underlying phenomenon (i.e., one bias) that has different effects in different circumstances (on the better-than-average/worse-than-average effect, see, e.g., Benoit & Dubra, 2011; Galesic, Olsson, & Rieskamp, 2012; Kruger, 1999; Kruger, Windschitl, Burrus, Fessel, & Chambers, 2008; Moore & Healy, 2008; Moore & Small, 2007; Roy, Liersch, & Broomell, 2013; on the false uniqueness/false consensus effect, see Galesic, Olsson, & Rieskamp, 2013; more generally, see also Hilbert, 2012).

URL: https://www.sciencedirect.com/science/article/pii/B9780128002834000022

Professional Forensic Expert Practice

Mark Page, in Forensic Testimony, 2014

4.5.2.3 Target shifting

This phenomenon works in a similar way to information sharing and confirmation bias. Target shifting occurs when the forensic examiner is presented with a priori information regarding what a suspected "match" may look like; the examiner is then likely to resolve ambiguities in the interpretation of the original sample toward the pattern already seen or expected from knowledge of the reference sample. The name derives from the notion of "painting a target around an arrow." Even DNA analysis has been the subject of criticism regarding subjective interpretation and confirmation bias: the use of low copy number analysis, partial samples, and mixtures in order to obtain a DNA profile suggests that ambiguity, and hence subjective interpretation, probably arises in more than a trivial fraction of DNA casework (Whitman and Koppl, 2010). Ambiguity regarding which peaks belong to which donor, in addition to the problems of allelic drop-out (and drop-in), often requires the analyst to make a judgment call on the significance of electropherogram peaks. If the analyst has prior knowledge of a suspect's profile, as commonly occurs in many laboratories, then they may be more inclined to include some ambiguous readings and to dismiss others as artifacts. This "target shifting" naturally occurs in favor of supporting the prosecution theory, as the profile used for comparison is usually that of the defendant (Thompson, 2009). Further published evidence suggests that this effect is potentially very real in DNA casework, particularly in mixed-sample cases where potentially biasing information is known to the examiners (Dror and Hampikian, 2011).

The NAS report, discussing this issue in relation to tool marks, notes that the a priori stipulation of what features may or may not be considered suitable for analysis might not be possible, and hence examination of the tool in question might be warranted prior to analysis of the mark itself. This comment also applies to other disciplines such as bite mark and fingerprint analysis. It stands in contrast to most laboratory DNA techniques, arguably some of the most objective forensic analyses possible, which have attempted to correct for this phenomenon by initially "blinding" the examiner to the reference sample. This limiting of a priori knowledge represents a more scientifically justified series of steps for reaching conclusions about the source of forensic samples.

URL: https://www.sciencedirect.com/science/article/pii/B9780123970053000049

The Psychology of Learning and Motivation

Klaus Fiedler, in Psychology of Learning and Motivation, 2012

6.3 Sample-Size Neglect in Hypothesis Testing

One intriguing consequence of self-induced differences in sample size is confirmation bias in hypothesis testing. When asked to test the hypothesis that girls are superior in language and boys are superior in science, teachers engage in positive testing strategies (Klayman & Ha, 1987): they mostly sample from the targets that are the focus of the hypothesis. As a consequence, smart girls in language and smart boys in science are rated more positively, due to enhanced sample size, than girls in science and boys in language, whose equally high achievement is visible only in smaller samples.

The causal factor that drives this repeatedly demonstrated bias (cf. Fiedler et al., 2002b; Fiedler, Freytag, & Unkelbach, 2007; Fiedler, Walther, & Nickel, 1999) is in fact n, or myopia for n, rather than common gender stereotypes. Thus, if the hypothesis points in a stereotype-inconsistent direction, calling for a test of whether girls excel in science and boys in language, most participants still engage in positive testing and solicit larger samples from, and provide more positive ratings of, girls in science and boys in language. Similarly, when participants are exposed to a stimulus series that entails negative testing (i.e., a small rate of observations about the hypothesis target), a reversal is obtained: reduced samples yield more regressive, less pronounced judgments (Fiedler et al., 1999), highlighting the causal role of n.

More generally, the metacognitive myopia (MM) approach offers an alternative account for a variety of so-called confirmation biases (Klayman & Ha, 1987; Nickerson, 1998). Hypothesis testers – in everyday life as in science – sample more observations about a focal hypothesis H_focal than about alternative hypotheses H_alt. Provided that at least some evidence can be found to support any hypothesis, the unequal n gives a learning advantage to H_focal. No processing bias or motivated bias is necessary: if each observation has the same impact on memory, unequal n will bias subsequent judgments toward the focal hypothesis.
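A minimal sketch of that claim (an illustrative toy, not Fiedler's actual paradigm): hold the true confirmation rate fixed for both hypotheses, give the focal hypothesis a larger sample via positive testing, and tally raw confirmations the way a memory that weights every observation equally would.

```python
import random

def confirmations(p_confirm, n):
    """Count confirming observations in a sample of size n, each
    observation confirming independently with probability p_confirm."""
    return sum(random.random() < p_confirm for _ in range(n))

random.seed(0)
p = 0.7  # identical true confirmation rate for both hypotheses
focal = confirmations(p, n=40)  # positive testing -> large sample
alt = confirmations(p, n=10)    # alternative -> small sample
print(f"raw support: focal = {focal}, alternative = {alt}")
# Tallying confirmations while neglecting n favors H_focal even though
# the underlying rates are identical; dividing by n would remove the
# advantage, which is exactly the correction MM prevents.
```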

MM prevents judges from monitoring and controlling for n differences, which reflect their own information-search strategies. Metacognitively, they should ignore n for two reasons. First, if the task calls for estimations rather than choices, they should not engage in a Bayesian competition over whether H_focal or H_alt receives more support, but rather try to provide unbiased estimations (e.g., of the confirmation rate for all hypotheses); in this case, the impact of n has to be discounted anyway. Second, even in a competitive hypothesis test or choice, the enhanced n in favor of H_focal does not imply enhanced diagnosticity if it reflects the judge's own search bias toward H_focal, which creates stochastic dependencies in the sample.

URL: https://www.sciencedirect.com/science/article/pii/B9780123942937000017

Anger Control Therapy

Raymond W. Novaco, in Encyclopedia of Psychotherapy, 2002

II.A. Anger and Threat

The conception of anger as a product of threat perceptions, as having confirmatory bias characteristics (i.e., the perception of events is biased toward fit with existing anger schemas), as being primed by aversive precursors, and as having social distancing effects (i.e., expressing anger keeps people away) can be found in the writings of Lucius Seneca, who was Nero's tutor in first-century Rome. Seneca was the first to write systematically about anger control. Like other Stoic philosophers, who negated the value of emotions, his view of anger was almost exclusively negative. Although his idea of anger control was largely that of suppression, Seneca recognized the powerful role of cognition as a determinant of anger, advocated cognitive shifting and reframing to minimize anger, and saw the merit of a calm response to outrageous insult. However, he discounted the functional value of anger, which led him to miss the principle of regulation.

Since the writings of Charles Darwin, William James, and Walter B. Cannon, anger has been viewed in terms of the engagement of the organism's survival systems in response to threat and the interplay of cognitive, physiological, and behavioral components. It is an elementary Darwinian notion that the adaptive value of a characteristic is entailed by its fitness for the environment; if the environment changes, that characteristic may lose its adaptive value, and the organism must adjust. The activation of anger may usefully serve to engage aggression in combat and to overcome fear, but in most everyday contexts, anger is often maladaptive.

Many theories of emotion have enlarged upon the Darwinian view of emotions as reactions to basic survival problems created by the environment and upon Cannon's idea that internal changes prepare the body for fight or flight behavior. Thus, emotion has commonly been viewed as an action disposition. Some contemporary theorists postulate that emotion is controlled by appetitive and aversive motive systems in the brain, with the amygdala serving as a key site for the aversive motivational system, and neurobiological mechanisms associated with amygdala involvement in aversive emotion and trauma are being studied in various laboratories. Most generally, when people are exposed to stimuli signifying present danger or reminders of trauma, they are primed for anger reactions. Anger is intrinsically connected to threat perception.

URL: https://www.sciencedirect.com/science/article/pii/B0123430100000064

Professional Issues

I. Leon Smith, Sandra Greenberg, in Comprehensive Clinical Psychology, 1998

(i) Social cognition and perception (e.g., attribution theory and biases, information integration, confirmation bias, person perception, development of stereotypes, racism).

(ii) Social interaction (e.g., interpersonal relationships, aggression, altruism, attraction).

(iii) Group dynamics and organizational structures (e.g., school systems, gang behavior, family systems, group thinking, cultural behavior, conformity, compliance, obedience, persuasion) and social influences on individual functioning.

(iv) Environmental/ecological psychology (e.g., person–environment fit, crowding, pollution, noise).

(v) Theories of personality that describe behavior and the etiology of atypical behavior. Includes knowledge of limitations in existing theories for understanding the effect of diversity (e.g., age, ethnicity, gender).

(vi) Multicultural and multiethnic diversity (e.g., racial/ethnic minorities, gender, age, disability, sexual orientation, religious groups, between- and within-group differences).

(vii) Theories of identity development of multicultural/multiethnic groups (e.g., acculturation theories, racial/ethnic identity).

(viii) Role that race, ethnicity, gender, sexual orientation, disability, and other cultural differences play in the psychosocial, political, and economic development of individuals/groups.

(ix) Sexual orientation issues (e.g., sexual identity, gay/lesbian/bisexual, family issues).

(x) Psychology of gender (e.g., psychology of women, psychology of men, gender identity development).

(xi) Disability and rehabilitation issues (e.g., inclusion, psychological impact of disability).

URL: https://www.sciencedirect.com/science/article/pii/B0080427073000389

A Primer on the Psychology of Cognitive Bias

Carla L. MacLean, Itiel E. Dror, in Blinding as a Solution to Bias, 2016

Judgments

Context that is consistent with a correct situational assessment will facilitate the formation of an accurate hunch or hypothesis. However, confirmation bias demonstrates that an inaccurate initial understanding of the situation can be a significantly compromising first step for experts attempting to reach correct decisions (Kassin et al., 2013; Nickerson, 1998). Once initial impressions are formed, individuals tend to seek and interpret additional information that matches their expectations (Findley and Scott, 2006). People tend to give greater weight to information consistent with their expectations, to ignore, discredit, or assign very little weight to inconsistent information, and to interpret ambiguous information as consistent with their working theory (see Ask et al., 2008 for a discussion of the elasticity of evidence).

An erroneous initial impression does not ensure that the decision maker will pursue a biased investigative trajectory; however, research does indicate that the initial impression can be a central precursor to distorted final judgments (O'Brien, 2009). Once in motion, the momentum of confirmation bias can build quickly, because people generally require less hypothesis-consistent evidence to convince themselves that their initial theories are accurate than hypothesis-inconsistent evidence to reject those theories. Contributing to such momentum are motivational factors such as personal goals, organizational norms, and the cognitive effort required for the decision. For instance, people were shown to increase their scrutiny of information in a simulated investigation not only because the information conflicted with their initial hypotheses, but also because it conflicted with their goal of solving the case (Marksteiner et al., 2011). Research that asked participants to assume a norm of "efficiency"—versus "thoroughness"—in a simulated investigation found that efficient participants were less rigorous in processing the evidence and less open to information presented later in the investigation (Ask et al., 2011).

In a study with physicians, Redelmeier and Shafir (1995) found that, of participants who had decided on a referral and were then informed that one more medication could be tried with the patient prior to surgery, 53% opted to stay with their original plan of just the referral. By contrast, 72% of physicians who were informed that there were two medications that could be tried chose to proceed with just the referral. In essence, the added effort of deciding between two medications rather than one resulted in a higher percentage of physicians defaulting to their original referral plans.

URL: https://www.sciencedirect.com/science/article/pii/B9780128024607000012

In which of the following types of bias does a person's mind appear to give a disproportionate amount of emphasis to the first information it receives?

The anchoring bias. Anchoring bias occurs because our mind appears to give a disproportionate amount of emphasis to the first information it receives. (By contrast, a person's tendency to believe he or she can predict the outcome of random events is known as the illusion of control, not the self-serving bias.)

Which bias occurs only after the true outcome of an event is known?

Which bias occurs only after the true outcome of an event is known? The hindsight bias.

Which of the following is an example of the hindsight bias?

Examples of the hindsight bias include a person believing, after the fact, that they had predicted who would win an election or sporting event. Students might likewise assume, once they have seen an exam, that they could have predicted its questions and answers, which can affect how much effort they devote to studying.

Which of the following describes the self-serving bias?

A self-serving bias is the common habit of a person taking credit for positive events or outcomes, but blaming outside factors for negative events. This can be affected by age, culture, clinical diagnosis, and more.