
Why humans are still smarter than AI (and will be for some time)

“The problem isn’t the rise of ‘smart’ machines but the dumbing down of humanity” (Astra Taylor)

Gerd Gigerenzer has spent his life studying heuristics and “fast and frugal” decision-making. In How to Stay Smart in a Smart World he argues that while artificial intelligence is sometimes very good at what it does, it is not yet “intelligent” in the human sense.

Using examples from online dating, self-driving cars, privacy issues, and the psychology of online addiction, he explains why complex algorithms work best “in well-defined, stable situations where large amounts of data are available”. Human intelligence works differently: it has evolved to deal with a high degree of uncertainty and with whatever data happens to be available.

Gigerenzer identifies four superpowers of human intelligence: causal thinking, intuitive psychology, intuitive physics, and intuitive sociality (our ‘common sense’). He draws an analogy between AI and Solomon Shereshevsky, a Russian mnemonist whose memory seemed to have no limits of capacity or duration.

Shereshevsky could read a page and instantly recall it word for word, backwards or forwards, and recall it again days, weeks, months and years later. However, he had no understanding of what he recalled, nor could he summarise the main themes. Evolution learned long ago that understanding the ‘gist’ of something is far more important than remembering every detail and made the appropriate trade-off.

One of the most prominent examples in the book is the story of Google Flu Trends, which was launched in 2008 and had some initial success. For a while the use of AI and big data was lauded, but then its predictions became less and less accurate, and Google Flu Trends was finally closed down in 2015.

Gerd Gigerenzer and his co-workers compared the accuracy of Google Flu Trends with a very simple heuristic (recency: use the previous week’s flu searches to predict the next week’s). On data from 2007 to 2015, the simple heuristic was twice as accurate as Google Flu Trends with its much more complicated (and opaque) algorithm.
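To make the recency heuristic concrete, here is a minimal sketch in Python. The weekly counts are invented for illustration and the error metric is just one reasonable choice; this is not Gigerenzer’s analysis or data.

```python
# A minimal sketch of the "recency" heuristic: forecast next week's value
# with this week's value. The weekly counts below are invented for
# illustration only; they are not data from the book.

weekly_flu_searches = [120, 135, 150, 180, 210, 205, 190, 160]

def recency_forecast(series):
    """Forecast each week as simply the previous week's observation."""
    return [series[i - 1] for i in range(1, len(series))]

forecasts = recency_forecast(weekly_flu_searches)
actuals = weekly_flu_searches[1:]

# Mean absolute error of the zero-parameter heuristic.
mae = sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)
print("Recency forecasts:", forecasts)
print(f"Mean absolute error: {mae:.1f}")
```

The point is not that such a rule beats every model, but that a transparent, one-line heuristic can be a surprisingly strong baseline in an unstable environment.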

While Google Flu Trends did a good job of fitting its initial data from 2007 (using sophisticated versions of correlation), it had not built a causal model of why flu searches happen. As a result, its predictive power became more and more limited over time.

And here lies a key challenge with AI. It lacks the causal thinking that makes humans more intelligent and treats correlation as if it were causation (as do many proponents of AI and big data). Unless you understand the underlying model of why something happens, relying on correlation alone is dangerous.

Another great example in the book comes from New York’s Mount Sinai hospital, where an algorithm analysing X-rays predicted which patients were at risk of pneumonia with great success. That success did not translate to other hospitals, and when the X-rays were examined it was found that the algorithm was using the machine type (which reflected location) to make its predictions. Patients were more likely to be at risk when they had already been hospitalized!

This recalls other examples, such as classifying dogs versus wolves using the background colours (is there snow in the picture?). Algorithms use whatever information is available to classify, and sometimes it is not the information we expect them to be using. This helps explain many of the issues of embedded racism and sexism in AI that are exposed in the documentary Coded Bias (recommended).
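As a toy illustration of this failure mode, here is a hypothetical sketch with purely synthetic data (the feature names are invented, not taken from the book). A classifier given a spurious feature that is perfectly correlated with the label in training will happily lean on it, and its accuracy drops as soon as that correlation breaks:

```python
# Toy "wolf vs. dog" sketch: the label is driven by a genuine feature,
# but a spurious one (snowy background) is perfectly correlated with it
# in training. All data is synthetic; feature names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training set: wolves (label 1) are always photographed in snow.
is_wolf = rng.integers(0, 2, n)
snout_length = is_wolf + rng.normal(0, 1.0, n)   # genuine but noisy signal
snowy_background = is_wolf.copy()                # spurious, perfectly correlated
X_train = np.column_stack([snout_length, snowy_background])

model = LogisticRegression().fit(X_train, is_wolf)

# Test set: the background no longer tracks the animal.
is_wolf_test = rng.integers(0, 2, n)
snout_test = is_wolf_test + rng.normal(0, 1.0, n)
snow_test = rng.integers(0, 2, n)                # independent of the label
X_test = np.column_stack([snout_test, snow_test])

print("Train accuracy:", model.score(X_train, is_wolf))
print("Test accuracy: ", model.score(X_test, is_wolf_test))
print("Weights [snout, snow]:", model.coef_[0])
```

Under these assumptions the model looks excellent in training but loses much of its accuracy once the background stops predicting the label, which is essentially what happened with the machine-type signal at Mount Sinai.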

Algorithms and AI are easily deceived because they lack humanity’s common sense and our more holistic thinking. While human thinking has many flaws and is subject to many well-documented biases, we can always fall back on our common sense. Until we can program that, we are better off limiting AI to “well-defined, stable situations”, in Gigerenzer’s words.
