Thursday, November 14, 2013

McDermott and Shleifer double-team Taleb and Kahneman

Not really. But I still enjoyed reading the following passage from Andrei Shleifer's review of Daniel Kahneman's (superb) Thinking, Fast and Slow:
The fourth assumption of Prospect Theory is quite important. [i.e. In assessing lotteries, individuals convert objective probabilities into decision weights that overweight low probability events and underweight high probability ones.] The evidence used to justify this assumption is the excessive weights people attach to highly unlikely but extreme events: they pay too much for lottery tickets, overpay for flight insurance at the airport, or fret about accidents at nuclear power plants. Kahneman and Tversky use probability weighting heavily in their paper, adding several functional form assumptions (subcertainty, subadditivity) to explain various forms of the Allais paradox. In the book, Kahneman does not talk about these extra assumptions, but without them Prospect Theory explains less.
To me, the stable probability weighting function is problematic. Take low probability events. Some of the time, as in the cases of plane crashes or jackpot winnings, people put excessive weight on them, a phenomenon incorporated into Prospect Theory that Kahneman connects to the availability heuristic. Other times, as when investors buy AAA-rated mortgage-backed securities, they neglect low probability events, a phenomenon sometimes described as black swans (Taleb 2007). Whether we are in the probability weighting or the black swan world depends on the context: whether or not people recall and are focused on the low probability outcome. [Emphasis mine.]
This is exactly the issue I was trying to point out here. Sometimes people greatly overweight the risks of low probability events (as suggested by Kahneman and Prospect Theory)... other times they completely underestimate them (as suggested by Taleb's black swan metaphor). As a result, we should be cautious in trying to make generalisable statements about human behaviour from either one of these theories alone.
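
As a rough illustration of what "overweight low probabilities, underweight high ones" means in practice, here is a minimal Python sketch using the one-parameter weighting function from Tversky and Kahneman's later (1992) cumulative version of the theory. The curvature value gamma = 0.61 is only illustrative, and the original 1979 paper does not commit to this particular functional form.

    def decision_weight(p, gamma=0.61):
        # Tversky-Kahneman (1992) one-parameter weighting function:
        #   w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma)
        # For gamma < 1, small probabilities are overweighted and
        # moderate-to-large probabilities are underweighted.
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    for p in [0.001, 0.01, 0.10, 0.50, 0.90, 0.99]:
        print(f"p = {p:5.3f}  ->  w(p) = {decision_weight(p):.3f}")

With gamma = 0.61, a 0.1% objective probability gets a decision weight of roughly 1.4%, while a 99% probability is weighted at roughly 91%. That is the overweighting/underweighting pattern described above. Taleb's black swan cases are the ones this curve misses entirely: the rare event that is not even on the decision-maker's radar.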

You may also recall that -- for my temerity in pointing out this apparent tension between Kahneman's and Taleb's theories -- I was labelled an "idiot" by none other than Taleb himself. As I coyly suggested in that second post, Taleb's affinity for labelling others as idiots meant that I was at least likely to be in good company. I am sure of that now, having read Shleifer's article.

2 comments:

  1. I'd think that people overestimate low probabilities when it's less excusable. Meaning, we can take a frequentist approach to the probability of an airplane crash, and this is probably a pretty accurate number. But the kind of low probability events that Taleb has in mind are those where the frequentist approach is less useful (there are fewer trials and less information), so there's no base probability to judge against. A similar way of saying the same thing is that people tend to overestimate probability when there's a lot of information available on that class of events, but they tend to underestimate probability when there isn't a lot of information available.

    Or, am I completely wrong? (I haven't read Taleb's book; I have read Kahneman's.)

    Replies
    1. I think that's a fair assessment, Jonathan.

      The complication I would add is that the information needed for individual decision-making may be limited in both cases. For example, most laymen would have no real idea about the true number of airplane crashes a year, even if there is good objective data out there on which to base expert opinion. The fact that these decisions are based on (mistaken) heuristics is one reason why I like the comparison with Taleb's viewpoint.

      OTOH, one major difference that your comment correctly identifies is the applicability of the frequentist approach when we do have data. This is a point that Taleb makes effectively: the illusion of statistical "certainty" promoted by experts in cases where the data is not actually rich enough to justify it. However, should we then revert to our heuristics when they tend to bias too far in the other direction? I don't think the answer is clear...
