Wednesday, November 16, 2016
Quote of the day 16th November 2016
(Janan Ganesh in the Financial Times)
Human brains evolved to a significant extent as pattern-spotting machines - recognising patterns which can indicate danger, and avoiding it, recognising patterns which can indicate the presence of food or shelter and carefully checking them out.
A difficulty this creates for us in the modern world is that, in our evolutionary past, the penalty for erring on the side of caution when estimating danger, or on the side of checking things out when estimating potential benefit, was far smaller than the likely penalty for failing to spot a risk or benefit.
For example, the potential loss from identifying a risk which was not in fact there was avoiding somewhere you did not need to avoid, or carrying a weapon you did not need to bring; the potential loss from failing to identify a risk which really was there was getting eaten and/or killed.
Consequently, as a statistician would put it, when it comes to spotting patterns we have evolved to give higher priority to avoiding type II errors than type I errors: better to be wary of a danger, or look out for an opportunity, that does not exist than to miss one that does.
When scientists test a hypothesis, a type I error is the incorrect rejection of a true null hypothesis (a "false positive"), while a type II error is incorrectly retaining a false null hypothesis (a "false negative"). The null hypothesis is that there is no pattern, that nothing is there. So the null hypothesis might be "there is no tiger hiding in that patch of thick grass."
Obviously the consequences of accepting that particular null hypothesis when it is in fact wrong and there IS a tiger were far more serious than the consequences of rejecting it, or even just taking seriously the possibility that it might be false, when it is correct and there is no tiger.
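The tiger example maps neatly onto the standard type I / type II error table. A minimal sketch (my own illustration, not from the post or from Ganesh's column) making the mapping explicit in code:

```python
# Null hypothesis H0: "there is no tiger hiding in that patch of thick grass."
# Rejecting H0 means acting as if a tiger is there (fleeing, raising the alarm).

def classify(tiger_present: bool, reject_h0: bool) -> str:
    """Label the outcome of a decision against the null hypothesis H0."""
    if reject_h0 and not tiger_present:
        # Cried "tiger" when there was none: cheap mistake.
        return "type I error (false positive)"
    if not reject_h0 and tiger_present:
        # Walked past a real tiger: potentially fatal mistake.
        return "type II error (false negative)"
    return "correct decision"

# Enumerate all four combinations of reality and decision.
for tiger in (False, True):
    for reject in (False, True):
        print(f"tiger={tiger}, act-as-if-tiger={reject}: "
              f"{classify(tiger, reject)}")
```

The asymmetry the post describes is simply that the two error branches carry very different costs, so a decision rule tuned by natural selection tolerates many false positives to avoid a single false negative.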
The very existence of the proverb about "the boy who cried wolf" is an illustration that humans have understood for a very long time the possibility of making both kinds of mistake. It also demonstrates that more astute humans have been aware for a very long time that one of the most serious potential consequences of repeated type I errors - e.g. crying "wolf" when there isn't one - can be that it subsequently leads yourself or others to make type II errors - e.g. not believing you when you then cry "wolf" when there is.
Which brings me back to Janan Ganesh's point. I picked it as my quote of the day because his point that we tend to ascribe very great significance to current events, and try to fit those events into a pattern which may or may not be there, is worth thinking about.
But I do not reach the same conclusion.
I think he is right to warn against being too quick to adopt simplistic and cataclysmic explanations for events like the British general election of 2015, the Brexit vote, and America's election of Donald Trump.
But I think the warning that we should not be too quick to think we understand what caused such upsets applies to other simplistic explanations as well. You have to find a lot of "particularities" to explain the Brexit vote or Trump's election.
And in particular, the idea that you can adequately explain those events by pointing out that Ed Miliband, Jeremy Corbyn and Hillary Clinton were and are very poor standard-bearers for the cause of the mainstream left - although they were - is not remotely credible.