Bayesian tale of two cities

This season of Christmas and New Year is particularly well celebrated in major cities like London and New York. I remember people celebrating the New Year in both cities by taking Concorde from London to New York and taking advantage of the time difference.

Part of the theme of my book is in working with the juxtaposition of polarities of difference – and I kick off with a reference to Charles Dickens's "A Tale of Two Cities": "it was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way". I suggest that as we stand in the gap between difference, it can be uncomfortable but rewarding.

And this talk of two cities reminds me of the kinds of illusions that can throw us off the scent when we evaluate risk. It's about Bayesian logic, which is the basis of one of my favourite jokes – and that joke is how I start the next extract from my book.

[Extracts from “Risky Strategy” to be published in 2016]


One of my favourite "one-line" jokes goes like this: "Did you know, I am more likely to be mugged in London than I am in New York… [Pause for effect] … That's because I hardly ever go to New York."

This is an example of Bayesian probability. The Reverend Thomas Bayes in the eighteenth century came up with a formula for calculating conditional probability. Put more simply, it describes what happens to overall likelihood when you combine two sources of variability. So in our joke example, we have the variability of getting mugged in either London or New York, and the variability of the amount of time I spend in either London or New York. Given I am in New York, my likelihood of getting mugged is say 5%; whereas, given I am in London, my likelihood of getting mugged is say 1%. However, I only spend 10% of my time in New York, and 90% in London. So overall the chances on any given day that I am mugged in New York are 0.5% (5% x 10%), whereas the chance that I am mugged in London is 0.9% (1% x 90%), i.e. London is higher than New York.
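As a quick sanity check, the arithmetic behind the joke can be sketched in a few lines of Python. The figures are the illustrative ones from the text above, not real crime statistics:

```python
# Illustrative figures from the joke (not real data):
# P(mugged | New York) = 5%, P(mugged | London) = 1%
# P(I am in New York) = 10%, P(I am in London) = 90%
p_mugged_given_ny = 0.05
p_mugged_given_london = 0.01
p_in_ny = 0.10
p_in_london = 0.90

# Joint probability: chance of being in a city AND getting mugged there
p_mugged_in_ny = p_mugged_given_ny * p_in_ny              # 5% x 10% = 0.5%
p_mugged_in_london = p_mugged_given_london * p_in_london  # 1% x 90% = 0.9%

print(f"Mugged in New York: {p_mugged_in_ny:.1%}")  # prints 0.5%
print(f"Mugged in London:  {p_mugged_in_london:.1%}")  # prints 0.9%
```

The punchline drops out of the last two lines: the joint probability for London (0.9%) is higher than for New York (0.5%), purely because of where the time is spent.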


This might sound like a trivial example which doesn't matter that much, but I believe we see Bayesian confusion created by authoritative voices in society, particularly when it comes to medical issues. Taleb picks up on this particular example in "Fooled by Randomness" (Taleb N. N., Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets).

You have a disease which affects 1 in 1000 people (i.e. 0.1%), and the test for the disease is 95% accurate, which means there is a 5% false positive rate. That means that in a sample of 100 test results, 5 of those will indicate a disease which isn't there. If someone gets a positive test result, what are the chances they actually have the disease? Many would say 95%. Actually it's much lower: about 2%. That's a surprise to many of us. This is how it can be explained. In a random group of 1000 people, all of whom happen to have been tested, only 1 probably has the disease. However, of the remaining 999, about 50 (5% of 999) will have tested positive for the disease. So out of those roughly 50 positives, the chance that you are the one who has the disease is about 1 in 50, i.e. 2%. This is quite a powerful illusion. I wonder how many people who have been tested for a fairly rare disease with what appears to be a fairly reliable test, and tested positive, have been told there is a high chance they have the disease.
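The same counting argument can be sketched in Python. This uses the figures from the paragraph above, plus one simplifying assumption the text leaves implicit: that the test always detects a genuine case:

```python
# Figures from the example: prevalence 1 in 1000, 5% false positive rate.
# Assumption (implicit in the text): the test never misses a real case.
population = 1000
true_cases = 1                                        # 0.1% prevalence
false_positives = 0.05 * (population - true_cases)    # about 50 people

# Bayes: P(disease | positive test) = true positives / all positives
p_disease_given_positive = true_cases / (true_cases + false_positives)

print(f"{p_disease_given_positive:.1%}")  # prints 2.0%
```

Out of roughly 51 people with a positive result, only one actually has the disease – which is why the answer is about 2%, not 95%.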