
Thursday, November 14, 2013

McDermott and Shleifer double-team Taleb and Kahneman

Not really. But I still enjoyed reading the following passage from Andrei Shleifer's review of Daniel Kahneman's (superb) Thinking, Fast and Slow:
The fourth assumption of Prospect Theory is quite important. [i.e. In assessing lotteries, individuals convert objective probabilities into decision weights that overweight low probability events and underweight high probability ones.] The evidence used to justify this assumption is the excessive weights people attach to highly unlikely but extreme events: they pay too much for lottery tickets, overpay for flight insurance at the airport, or fret about accidents at nuclear power plants. Kahneman and Tversky use probability weighting heavily in their paper, adding several functional form assumptions (subcertainty, subadditivity) to explain various forms of the Allais paradox. In the book, Kahneman does not talk about these extra assumptions, but without them Prospect Theory explains less.
To me, the stable probability weighting function is problematic. Take low probability events. Some of the time, as in the cases of plane crashes or jackpot winnings, people put excessive weight on them, a phenomenon incorporated into Prospect Theory that Kahneman connects to the availability heuristic. Other times, as when investors buy AAA-rated mortgage-backed securities, they neglect low probability events, a phenomenon sometimes described as black swans (Taleb 2007). Whether we are in the probability weighting or the black swan world depends on the context: whether or not people recall and are focused on the low probability outcome. [Emphasis mine.]
This is exactly the issue I was trying to point out here. Sometimes people greatly overweight the risks of low probability events (as suggested by Kahneman and Prospect Theory)... other times they completely underestimate them (as suggested by Taleb's black swan metaphor). As a result, we should be cautious about making generalisable statements about human behaviour from either one of these theories alone.
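To make the overweighting concrete, here is a minimal sketch of the one-parameter probability weighting function from Tversky and Kahneman's (1992) cumulative version of Prospect Theory, using their estimated parameter for gains (γ ≈ 0.61). With γ < 1, the function overweights small probabilities and underweights large ones, which is precisely the "fourth assumption" Shleifer describes:

```python
# Tversky-Kahneman (1992) probability weighting function:
#   w(p) = p^g / (p^g + (1 - p)^g)^(1/g)
# With g < 1 this overweights rare events and underweights likely ones.

def weight(p: float, g: float = 0.61) -> float:
    """Decision weight attached to objective probability p (0 < p < 1)."""
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

# A 1-in-1000 event receives far more decision weight than its
# objective probability warrants...
print(weight(0.001) > 0.001)   # overweighting of rare events
# ...while a near-certain event is underweighted.
print(weight(0.95) < 0.95)
```

Note that this stable functional form is exactly what Shleifer finds problematic: the same person who overweights p = 0.001 at the lottery counter may ignore it entirely when buying AAA-rated securities.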

You may also recall that -- for my temerity in pointing out this apparent tension between Kahneman and Taleb's theories -- I was labelled an "idiot" by none other than Taleb himself. As I coyly suggested in that second post, Taleb's affinity for labelling others as idiotic meant that I was at least likely to be in good company. I am sure of that now having read Shleifer's article.

Monday, October 28, 2013

TEDxBergen

I mentioned the other day that I acted as moderator for the recent TEDxBergen conference. Videos of the various talks have now been posted online, but here are two that I particularly enjoyed as a sample.

1) Mads Nordmo gave a talk on moral psychology, which challenges the traditional "transactional" view of behaviour favoured by a lot of economic theory.

Mads is actually doing a PhD with me -- albeit in the strategy department -- and also has a degree in clinical psychology. His opening remark about showing that "it wasn't just beginner's luck" was in reference to a quip that I made about him winning a 'Best lecturer' award from NHH bachelor students. (Link in Norwegian.)

He used various examples to underscore his points, including the growing popularity of CrossFit and the paleo diet.[*] For instance, a purely transactional view provides us with very little insight into why people pay such exorbitant sums of money to join CrossFit gyms. The exercises mostly require far less equipment than ordinary gyms and we could all do as many sit-ups and push-ups as we want at home (for free!). However, Mads argued that these "movements" actually constitute a quasi-religious experience -- much like we would encounter at a rock concert or sports match -- where the sense of communal spirit and exaltation actually enable participants to achieve some kind of transcendence.

In the Q&A afterwards (not shown), I suggested that economics would normally explain the high membership fees paid to CrossFit gyms as a commitment device. Mads agreed that this too is an important psychological driver. However, there is in any case no reason to regard such phenomena as mutually exclusive. (Interestingly, he also said that psychology is moving closer to economics... not simply the other way around, as is often asserted in some heterodox circles.) Anyway, he is a smart and funny guy, and I think that both traits are evident in his talk. Check it out:




2) The Grammy-nominated violinist, Peter Sheppard Skærved, talked about reinvention and finding new purposes for old tools. Peter is a fascinating person -- the Library of Congress has described him as a polymath -- and I thoroughly enjoyed chatting to him about a range of topics, from anthropology to haptic technology, over the course of the day. In this video, he not only makes a compelling case for preserving "museum pieces" by actively using them as much as possible, but also treats the audience to a range of music pieces from across the ages.

___
[*] As someone who has a number of friends into (at least one of) CrossFit and the paleo diet, I freely admit that I am predisposed towards finding this discussion both amusing and enlightening.

Friday, July 5, 2013

It's not every day that you're called an idiot by Nassim Taleb

Or a "bloggist" for that matter.

Here and here.

To be fair, Taleb has charged that many minds superior to my own are beset by idiocy, so I'm in reasonable company. More seriously, he did at least tone down his bombast when I pointed out that he had misunderstood what I was asking.

The background is this post, where I wondered (quite respectful like!) what Taleb made of the research that shows people have a tendency to overestimate the likelihood of low-probability events if they were framed in highly dramatic terms. This seemed to run counter to a recurring theme in his writings, which is that people are blind to "black swans"... basically that they consistently underestimate low-prob, high impact events.

Taleb pointed me towards a short paper on "binary" (up vs down) versus "vanilla" (+500 vs +5,000,000 vs -5,000,000) outcomes, which was supposed to refute the relevance of such studies. However, I remain rather unconvinced. Consider the key figure in my previous post:

Perceived versus actual fatalities. Adapted from Lichtenstein et al. (1978).

As I wrote back then: What we see here is that people have a clear tendency to overstate -- by several orders of magnitude -- the relative likelihood of death arising from "unusual and sensational" causes (tornadoes, floods, etc.). The opposite is true for more mundane causes of death like degenerative disease (diabetes, stomach cancer, etc.).

Now, I certainly agree with Taleb that it is important to distinguish between binary and continuous outcomes. Asking whether a stock will go up or down is a much less interesting (and less complex) question than asking by how much it will go up or down. You are clearly not comparing apples with apples if you say that a stock will go up by 5% versus 500%. In short, binary and continuous ("vanilla") outcomes are incommensurable in terms of evaluating payoffs.
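The incommensurability is easy to see with a toy calculation (my own illustrative numbers, not Taleb's): two bets can be identical as binary questions yet have wildly different values as vanilla bets.

```python
# A toy illustration of why binary and "vanilla" outcomes are
# incommensurable. Both positions below have the same 50% chance of
# ending "up" -- identical as binary bets -- yet their expected
# payoffs differ by orders of magnitude.

p_up = 0.5

# A binary option cares only about the probability of "up".
binary_value = p_up

# Vanilla payoffs depend on *how far* the outcome moves.
ev_small = p_up * 500 + (1 - p_up) * (-500)          # modest stakes
ev_large = p_up * 5_000_000 + (1 - p_up) * (-500)    # same odds, huge upside

print(binary_value, ev_small, ev_large)
```

Any forecasting rule judged only on binary accuracy would score these two positions identically, even though one of them carries essentially all of the payoff relevance.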

However, the studies that I linked to are interesting exactly because they are comparing the *same* outcome (i.e. death). It makes no sense to say that death by tornado equals five times death by stroke. They are obviously equivalent. The "payoff" is thus the same because the outcome is the same. Further, I'm not claiming that the insights from these particular studies are fully generalisable to all other low probability, high-impact outcomes (especially those in finance). Yet they do show that underestimation of black swan events is hardly a universal phenomenon either... In fact, people here are shown to rely on heuristics that lead them to a diametrically opposite conclusion! I was ultimately interested in hearing from Taleb whether he thinks these heuristics are efficient or not. I didn't get an answer unfortunately, so I guess we'll have to judge for ourselves.

A final observation is that I disagree with the paper's assertion that "binary is limited to probability". (In other words, that binary outcomes say nothing about the size of a payoff.) This is certainly true in many cases -- again, especially in finance -- but not always. In some instances, binary outcomes imply payoffs directly. The obvious example is the one that we have been discussing in this very post, i.e. death. Indeed, I would think that Taleb probably agrees with me, given that one of his favourite analogies is that of a turkey being fattened up in preparation for Thanksgiving.

What Taleb calls his "classical metaphor". A turkey on his way to becoming dinner. (Source)

With apologies to Monty Python, you might say that the prospect of becoming an ex-turkey implies a very obvious payoff indeed.

UPDATE: Andrei Shleifer agrees.

Thursday, February 7, 2013

A question for Nassim Taleb fans

I read an interesting article last night, detailing a public exchange between Daniel Kahneman and Nassim Taleb.
[E]ach man was asked to write a biography of seven words or less. Taleb described himself as: “Convexity. Mental probabilistic heuristics approach to uncertainty.” Kahneman apparently pleaded with the moderator to only use five words, which were: “Endlessly amused by people’s minds.” Not surprisingly these two autobiographies are descriptive of the two men’s bodies of work. Much of the discussion at this event, however, was not about making decisions under uncertainty, but a sort of tit for tat, with Kahneman asking probing questions and making pointed observations of Taleb. Little of the Nobel laureate’s [i.e. Kahneman's] work was discussed.
It would seem that Kahneman had Taleb on the back foot at various times during the exchange, pointing out (among other things) that the latter's framing of situations suffered from a clear "anchoring" bias. 

The above article also reminded me of a lingering question that I have about Taleb's work -- not least of all because it relates to the type of research that made Kahneman famous (i.e. the limits of heuristics in the face of statistical problems). Having failed to get any responses to my query on Twitter, I'd like to try and flesh it out here.

Let me state up front that I have yet to read, in full, any of Taleb's books. (They are patiently waiting on my kindle.) However, I have read several chapters from them and, moreover, a number of the articles that Taleb has penned in different media outlets. For instance, this essay for Edge magazine, which seems to sum up his position nicely.

So, I'm reasonably confident that I know where Taleb is coming from. I should also say that I think some of his points are very well made. Such as the "inverse problem of rare events" -- basically, that it is incredibly difficult to gauge the impact of extremely rare events exactly because they occur so infrequently. We lack the very observations that are needed to build up a decent idea of the probability distribution of their associated impact. As Taleb explains in the Edge essay: "If small probability events carry large impacts, and (at the same time) these small probability events are more difficult to compute from past data itself, then: our empirical knowledge about the potential contribution -- or role -- of rare events (probability × consequence) is inversely proportional to their impact."[*]
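The statistical core of that point can be sketched with a back-of-the-envelope calculation (my construction, not Taleb's): with a fixed amount of data, the relative error in an empirical frequency estimate grows as the event gets rarer. For a binomial sample of size n, the relative standard error of the estimated probability is roughly sqrt((1 - p) / (n p)), which blows up as p approaches zero.

```python
# Sketch of the "inverse problem of rare events": holding the sample
# size fixed, our empirical knowledge of an event's probability
# deteriorates as the event becomes rarer.

import math

def relative_se(p: float, n: int) -> float:
    """Relative standard error of the empirical frequency estimate of p,
    i.e. sqrt(p(1-p)/n) divided by p."""
    return math.sqrt((1 - p) / (n * p))

n = 10_000  # a fixed budget of observations
for p in (0.1, 0.01, 0.001, 0.0001):
    print(f"p = {p}: relative error ~ {relative_se(p, n):.2f}")
# The rarer the event, the noisier our estimate of it -- and that is
# before considering the (possibly huge) uncertainty about its impact.
```

With 10,000 observations, a 1-in-10,000 event yields a relative error of around 100%: we essentially cannot distinguish it from an event twice or half as likely, which is Taleb's point about rare events being hardest to learn from data.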

My reading of Taleb also leads me to think that he more or less regards everyone as blind to "black swan" (low probability, high impact) events. If that is true, however, I'm wondering how he squares that notion with the consistent empirical finding that people tend to overestimate the likelihood of low probability, high impact events. (And vice versa for more common, low impact events.) Consider the following chart, for example, which was originally produced in a seminal study by Lichtenstein et al. (1978):

Relationship between judged frequency and actual number of fatalities per year for 41 causes of death.
What we see here is that people have a clear tendency to overstate -- by several orders of magnitude -- the relative likelihood of death arising from "unusual and sensational" causes (tornadoes, floods, etc.). The opposite is true for more mundane causes of death like degenerative disease (diabetes, stomach cancer, etc.).

Similarly, have a look at Table 2 (p. 19) in this follow-up study by the same authors, where various groups of people were asked to rank the relative risks of different technologies. We clearly see a discrepancy between the opinions of experts and those expressed by laymen. For example, nuclear power is perceived as far riskier by members of the general public than by those familiar with the actual number of fatalities and diseases brought on by this technology.

Now, Taleb might respond by saying that these are exactly the type of misleading comparisons that he is talking about! He could argue that the "actual" observed fatalities are not necessarily an accurate representation of the underlying risks. After all, a single major event could significantly alter the average number of deaths from any particular cause (e.g. nuclear meltdown)...

Well, perhaps, but I'm not totally convinced. For one thing, that says very little about the flipside of this problem, which is the degree to which "normal" causes of death are underestimated -- both in absolute terms and relative to more sensational outcomes. Second, by now we have accumulated decent data on numerous low-probability events that have occurred (rare as they are), from the outbreak of plague to massive natural disasters. Third, even disregarding my previous points, it doesn't seem at all obvious to me that the public is guilty of consistently underplaying the role of black swan events. Indeed, if anything, they appear to be using a heuristic which causes them to significantly overestimate the likelihood of rare events... perhaps as a way of adjusting for the -- unquantifiable? -- impact that these outcomes could have if they do occur?

To restate my question, then, to those of you who know Taleb better than I do: Does he ever integrate (or reconcile) his theory about the ignorance of black swan events with the empirical evidence that people consistently overestimate the likelihood of low probability, dramatic outcomes?

UPDATE: This post appears to have invoked Taleb's ire in somewhat amusing fashion. See follow-up here.
UPDATE 2: Second follow-up and some big name support of my basic point here.

___
[*] This type of unquantifiable uncertainty happens to be a big area of research in the climate change literature. In particular, the 'dismal theorem' proposed by Marty Weitzman, whom I have mentioned numerous times before on this blog. See here for more.

Monday, September 10, 2012

Sam Harris on "Life Without Free Will"

He is on top form in this one.

Here is a passage that resonates particularly strongly with my own meta-views of morality:
If we cannot assign blame to the workings of the universe, how can evil people be held responsible for their actions? In the deepest sense, it seems, they can’t be. But in a practical sense, they must be. I see no contradiction in this. In fact, I think that keeping the deep causes of human behavior in view would only improve our practical response to evil. The feeling that people are deeply responsible for who they are does nothing but produce moral illusions and psychological suffering.
Indeed. For more on these ideas, see this old post, which quotes liberally from an outstanding article by Frans de Waal.

Back to Harris, there's some dark humour mixed in with the profundity:
[M]y wife and I recently took our three-year-old daughter on an airplane for the first time. She loves to fly! As it happens, her joy was made possible in part because we neglected to tell her that airplanes occasionally malfunction and fall out of the sky, killing everyone on board.

Wednesday, March 14, 2012

Great TED talk on the evolutionary basis for religion

..., spirituality and co-operation by the psychologist Jonathan Haidt. My economist self was particularly interested in his thoughts on free-riding. The rest of me was particularly interested in the discussion as a whole.



It's curious to reflect on the things that give meaning and purpose to our lives. On that note, I've often wondered whether the appeal of atheism would diminish if there were no believers left to convince otherwise.

UPDATE: Haidt has just published a new book that is getting rave reviews.

Wednesday, March 23, 2011

Free will in a world of Mad Men

Mark Thoma links to a study discussing the limits -- or "origins" might be more correct -- of free will. I'll just highlight the snippet on implications:
[The findings] indicate that some activity in our brains may significantly precede our awareness of wanting to [act]. Libet suggested that free will works by vetoing: volition (the will to act) arises in neurons before conscious experience does, but conscious will can override it and prevent unwanted movements. 
Other interpretations might require that we reconstruct our idea of free will. Rather than a linear process in which decision leads to action, our behavior may be the bottom-line result of many simultaneous processes: We are constantly faced with a multitude of options for what to do right now – switch the channel? Take a sip from our drink? Get up and go to the bathroom? But our set of options is not unlimited (i.e., the set of options we just mentioned is unlikely to include “launch a ballistic missile”). Deciding what to do and when to do it may be the result of a process in which all the currently-available options are assessed and weighted. Rather than free will being the ability to do anything at all, it might be an act of selection from the present range of options. And the decision might be made before you are even aware of it. ...
Thoma himself is sceptical about whether this actually says much about free will. Among the more interesting comments on the thread, from my perspective, are those pointing out the mistakes of trying to separate the conscious from the subconscious.

Now, I'm no neuroscientist (clearly). However, I have read a fair bit of research related to neuroeconomics.[*] My understanding is that many of our decisions and actions are formed at a level that involves very little conscious cognitive thought. Indeed, our brains tend to shift activities from the cognitive, "thinking" cortex... to the affective, "instinctual" cortex as we become familiar with repeated actions. In other words, there's a deeper truth in the meme "practice makes perfect": Our minds (bodies) begin to respond to external stimuli in a far more efficient way over time, simply because we spend less time thinking about our best course of action and instead just react according to some (pre-)programmed optimal response.

In this regard, there is a fascinating body of research on the psychology and mental processes of chess players. In particular, what separates the top-ranked players from the rest of us? The answers are rather surprising. Grandmasters, for instance, spend far less time calculating their moves; instead, they simply recognise patterns in play. For their part, players of lesser rank typically analyse and consider a wider variety of possible moves (and their consequences) at each stage of the game, but this is unfortunately much less efficient. The superiority of top chess players does not lie in intelligence per se, but in the ability to recognise meaningful patterns and respond accordingly. [An interesting side note: Grandmasters and other top-ranked players have a tremendous capacity to memorise a multitude of "plays" and board positions. However, arrange chess pieces in unfamiliar positions and their memory advantage regresses to that of ordinary punters.]

Added to all this is the fact that humans are fantastic rationalisers. We naturally seek order. Not only do we have an innate ability to seek out patterns and coincidences, but we look to provide (ex post) justification for our actions and even the actions of others. Along these lines, one of the most interesting findings to come out of hypnosis is the phenomenon of rationalisation under post-hypnotic suggestion. A hypnotised patient can be made to (unwittingly) perform an action on a given cue; for example, to open a window when the hypnotist claps his hands. Unaware of the true underlying causes, when the patient is asked by the hypnotist why he opened the window, the former will strive to provide plausible -- yet invalid -- reasons (e.g. "I was getting hot"). I believe that the subject of ex post rationalisation also underpins a lot of research in the area of addiction studies...

Anyway, all this reminds me of a great Derren Brown clip that I saw a while ago. The influence of subliminal advertising is well publicised (if not entirely understood), but this is perhaps the most impressive exposition that I've seen of it. What makes it all the sweeter is that he is turning the tables on advertising execs here:


(I note that there is a US version of the same set-up here.)

THOUGHT FOR THE DAY: Fascinating stuff. Somewhere, Don Draper is smiling. And boozing. And womanizing. Damn his smooth ways!

"Yes, I believe you heard me correctly. I own your mind.
And I slept with your wife."

[*] If you're interested in reading more about neuroeconomics, this paper by Camerer and Loewenstein (2004) is the standard reference point in the literature.