Archive | Decision-making and behavioral biases


Mere exposure effect

The mere exposure effect is a psychological phenomenon by which people tend to develop a preference for things merely because they are familiar with them. In social psychology, this effect is sometimes called the familiarity principle. In studies of interpersonal attraction, the more often a person is seen by someone, the more pleasing and likeable that person appears to be.


In the 1960s, a series of laboratory experiments by Robert Zajonc demonstrated that simply exposing subjects to an unfamiliar stimulus led them to rate it more positively than other, similar stimuli that had not been presented. Researchers have used words, Chinese characters, paintings, pictures of faces, geometric figures, and auditory stimuli in these experiments. In one variation, subjects were shown an image on a tachistoscope for a duration too brief to be perceived consciously. This subliminal exposure produced the same effect, though subliminal effects are generally weak and unlikely to occur outside controlled laboratory conditions. According to Zajonc, the exposure effect can take place without conscious cognition: “preferences need no inferences.”

A meta-analysis of 208 experiments found that the exposure effect is robust and reliable, with an effect size of r = 0.26. The analysis found that the effect is strongest when unfamiliar stimuli are presented briefly. Mere exposure typically reaches its maximum effect within 10-20 presentations, and some studies even show that liking may decline after a longer series of exposures. For example, people generally like a song more after they have heard it a few times, but many repetitions can reduce this preference. A delay between exposure and the measurement of liking actually tends to increase the strength of the effect. Curiously, the effect is weaker in children, and weaker for drawings and paintings than for other types of stimuli. One social psychology experiment showed that exposure to people we initially dislike makes us dislike them even more.


Although the exposure effect appears to have a natural place in advertising, research has been mixed as to how effective it is at enhancing consumer attitudes toward particular companies and products. According to one study, higher levels of media exposure are associated with lower reputations for companies, even when the exposure is mostly positive. A subsequent review of the research concluded that exposure leads to ambivalence because it brings about a large number of associations, which tend to be both favorable and unfavorable. Exposure is most likely to be helpful when a company or product is new and unfamiliar to consumers.

Posted in Decision-making and behavioral biases | 0 Comments


Congruence bias

Congruence bias is a type of cognitive bias similar to confirmation bias. Congruence bias occurs due to people’s overreliance on direct testing of a given hypothesis, and their corresponding neglect of indirect testing.

Suppose that in an experiment, a subject is presented with two buttons and is told that pressing one of them, but not the other, will open a door. The subject adopts the hypothesis that the left button opens the door. A direct test of this hypothesis is pressing the left button; an indirect test is pressing the right button. The indirect test is still valid: if pressing the right button leaves the door closed, the left button must be the one that opens it.

This idea of direct and indirect testing extends to more complicated experiments, and it explains the presence of congruence bias: in an experiment, subjects tend to test their own, usually naive, hypothesis again and again instead of trying to disprove it.

The classic demonstration of congruence bias comes from an experiment by Peter Wason. The experimenter gave subjects the number sequence “2, 4, 6,” told them that the sequence followed a particular rule, and instructed them to discover that rule. Subjects proposed their own number sequences as tests, trying to ascertain which numbers the rule admitted and which it did not.

Most subjects quickly decided that the underlying rule was “numbers ascending by 2” and offered as tests only sequences concordant with this rule, such as “3, 5, 7,” or even “pi plus 2, plus 4, plus 6.” Each of these sequences satisfies the rule the experimenter had in mind, even though “numbers ascending by 2” was not the actual criterion. Because their tests of this single principle kept succeeding, subjects naively concluded that their hypothesis was correct. When a subject announced the hypothesis “numbers ascending by 2” and was told it was wrong, much confusion usually ensued. At this point, many subjects attempted to reword the rule without changing its meaning, and even those who switched to indirect testing had trouble letting go of the “+ 2” convention, producing rules as idiosyncratic as “the first two numbers in the sequence are random, and the third number is the second number plus two.” Many subjects never realized that the actual rule was simply any list of ascending numbers, because they were unable to consider indirect tests of their hypotheses.
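The difference between confirmatory and disconfirming tests in this task can be sketched in a few lines of code. This is a minimal illustration; the rule definitions and the specific test sequences below are our own choices, not Wason’s materials:

```python
def true_rule(seq):
    """The experimenter's actual rule: any strictly ascending sequence."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def hypothesis(seq):
    """The subject's typical hypothesis: each number is the previous plus 2."""
    return all(b - a == 2 for a, b in zip(seq, seq[1:]))

# Direct (confirmatory) tests: sequences chosen because they fit the hypothesis.
direct_tests = [(3, 5, 7), (10, 12, 14), (-4, -2, 0)]

# Indirect tests: sequences that would violate the hypothesis.
indirect_tests = [(1, 2, 3), (5, 10, 20), (3, 2, 1)]

for seq in direct_tests:
    # Both rules answer "yes" here, so these tests can never tell them apart.
    assert true_rule(seq) and hypothesis(seq)

# Only indirect tests can be informative: sequences on which the two
# rules give different answers, e.g. (1, 2, 3) ascends but not by 2.
informative = [s for s in indirect_tests if true_rule(s) != hypothesis(s)]
print(informative)  # [(1, 2, 3), (5, 10, 20)]
```

No matter how many direct tests a subject runs, every one of them returns “yes” under both rules, which is exactly why repeated confirmation feels like proof while providing no evidence at all.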

Wason attributed this failure of subjects to an inability to consider alternative hypotheses, which is the root of the congruence bias. Jonathan Baron explains that subjects could be said to be using a “congruence heuristic,” wherein a hypothesis is tested only by thinking of results that would be found if that hypothesis is true. This heuristic, which many people seem to use, ignores alternative hypotheses.

To avoid falling into the trap of the congruence bias, Baron suggests that the following two heuristics be used:

1. Ask “How likely is a yes answer, if I assume that my hypothesis is false?” Choose a test that has a high probability of giving a yes answer if the hypothesis is true, and a low probability of giving one if it is false.

2. “Try to think of alternative hypotheses; then choose a test most likely to distinguish them — a test that will probably give different results depending on which is true.” The need for this heuristic can be seen in a doctor attempting to diagnose appendicitis: assessing the white blood cell count would not assist in the diagnosis, because an elevated count is associated with a number of maladies.
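Baron’s second heuristic amounts to preferring the test whose outcome differs most between the competing hypotheses. A toy sketch of that idea, with all probabilities invented purely for illustration:

```python
def diagnosticity(p_yes_if_h, p_yes_if_not_h):
    """How well a yes/no test separates H from not-H (0 = useless)."""
    return abs(p_yes_if_h - p_yes_if_not_h)

# Hypothetical tests for suspected appendicitis (made-up numbers):
tests = {
    # An elevated white cell count is common in appendicitis but also in
    # many other conditions, so it barely discriminates.
    "white blood cell count": diagnosticity(0.80, 0.70),
    # A (hypothetical) more specific clinical finding.
    "localized rebound tenderness": diagnosticity(0.85, 0.20),
}

best = max(tests, key=tests.get)
print(best)  # localized rebound tenderness
print(round(tests["white blood cell count"], 2))  # 0.1, nearly uninformative
```

The point is not the particular numbers but the comparison: a test that says “yes” about as often whether or not the hypothesis is true cannot distinguish it from the alternatives.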

Posted in Decision-making and behavioral biases | 0 Comments


Neglect of probability

The “neglect of probability bias” is the tendency to disregard probability when making decisions that involve a degree of uncertainty. It is one of the simplest ways in which people violate the normative rules of decision making.

This is not the only cognitive bias related to probability; others include the gambler’s fallacy, the hindsight bias, and the neglect of prior base rates effect. Neglect of probability differs notably from these, however, in that the decision-maker disregards probability altogether rather than applying it incorrectly.

A 1993 study examined this bias by asking children the following question:

Susan and Jennifer are arguing about whether they should wear seat belts when they ride in a car. Susan says that you should. Jennifer says you shouldn’t… Jennifer says that she heard of an accident where a car fell into a lake and a woman was kept from getting out in time because of wearing her seat belt, and another accident where a seat belt kept someone from getting out of the car in time when there was a fire. What do you think about this?

One subject responded as below:

A: Well, in that case I don’t think you should wear a seat belt.
Q (interviewer): How do you know when that’s going to happen?
A: Like, just hope it doesn’t!
Q: So, should you or shouldn’t you wear seat belts?
A: Well, tell-you-the-truth we should wear seat belts.
Q: How come?
A: Just in case of an accident. You won’t get hurt as much as you will if you didn’t wear a seat belt.
Q: OK, well what about these kinds of things, when people get trapped?
A: I don’t think you should, in that case.

Here we can see that the subject completely disregards the probability of an accident occurring relative to the probability of being harmed by the seat belt. A normative model for this decision would use expected-utility theory: weight the utility of each possible outcome by the probability that it occurs, and choose the option with the higher expected utility. That probability weighting is precisely the step the subject skips.
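The normative calculation can be made concrete with a small expected-utility sketch. The probabilities and utilities below are invented purely for illustration:

```python
# Invented numbers for illustration only.
p_crash = 0.01               # chance of any accident on a given trip
p_trap_given_crash = 0.02    # chance a belt traps the occupant, given a crash

# Utilities (0 = unharmed; more negative = worse outcome)
u_unharmed = 0.0
u_injured_unbelted = -100.0
u_injured_belted = -20.0
u_trapped = -150.0

# Expected utility of wearing a seat belt: each outcome's utility is
# weighted by its probability of occurring.
eu_belt = (1 - p_crash) * u_unharmed + p_crash * (
    p_trap_given_crash * u_trapped
    + (1 - p_trap_given_crash) * u_injured_belted
)

# Expected utility of not wearing one.
eu_no_belt = (1 - p_crash) * u_unharmed + p_crash * u_injured_unbelted

print(eu_belt > eu_no_belt)  # True: probability weighting favors the belt
```

Even though being trapped is the worst single outcome, its tiny probability means it barely moves the expected utility, which is exactly the weighting the subject in the dialogue refuses to perform.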

Another subject responded to the same question:

A: If you have a long trip, you wear seat belts half way.
Q: Which is more likely?
A: That you’ll go flyin’ through the windshield.
Q: Doesn’t that mean you should wear them all the time?
A: No, it doesn’t mean that.
Q: How do you know if you’re gonna have one kind of accident or the other?
A: You don’t know. You just hope and pray that you don’t.

Here again the subject disregards probability in making the decision, treating each possible outcome as equally likely.

It has been suggested that adults can suffer from this bias as well, especially when facing decisions under uncertainty, such as medical choices. The bias can lead people to violate expected-utility theory drastically in their decision making, particularly when one possible outcome has a much higher or lower utility but only a small probability of occurring (as in medical or gambling situations). In this respect, the neglect of probability bias is similar to the neglect of prior base rates effect.

In another interesting example of near-total neglect of probability, one study found that the average person was willing to pay $7 to avoid a 1% chance of a painful electric shock, but only $10 to avoid a 99% chance of the same shock. The suggestion is that probability is especially likely to be neglected when the potential outcomes arouse strong emotion.
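The arithmetic shows how far these payments depart from a risk-neutral expected-value model. As a sketch, suppose the shock were worth some fixed monetary amount to avoid (the $10 figure below is a hypothetical stand-in, not a value from the study):

```python
shock_cost = 10.0  # hypothetical monetary equivalent of the shock

# Under risk-neutral expected value, willingness to pay should scale
# with the probability of the shock.
normative_wtp_1 = 0.01 * shock_cost    # avoid a 1% chance
normative_wtp_99 = 0.99 * shock_cost   # avoid a 99% chance
print(round(normative_wtp_99 / normative_wtp_1))  # 99: a 99-fold difference

# What subjects actually paid differed by far less.
observed_ratio = 10 / 7
print(round(observed_ratio, 2))  # 1.43: nearly flat across probabilities
```

A normative decision-maker would pay roughly 99 times as much to avoid the near-certain shock; the observed payments differ by less than a factor of 1.5, which is what near-total neglect of probability looks like in the data.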

Posted in Decision-making and behavioral biases | 1 Comment