AI-powered robots won’t run the world any time soon

The term “artificial intelligence” was coined in 1956 by John McCarthy, then a young professor, who hypothesized that such software could address “every aspect of learning or any other feature of intelligence that can in principle be so precisely described that a machine can be made to simulate it.”

Then in 1958, Frank Rosenblatt’s idea of building software “neural” networks – inspired by the way the brain works – led to the parallel processing of data that is now called “deep learning.”

It was a big idea. In 1958 The New York Times quoted Rosenblatt as saying such a machine would be the first to think like the human brain.

How has that vision fared – will computers ever emulate the human brain?

As computing capabilities have steadily improved since the 1950s, so have the capabilities of software, which is now an essential part of virtually every activity. For examples of its assistance in medicine, see The physician’s helper: AI software, Asia Times, December 13, 2019.

Watching this remarkable progress, it is tempting to extrapolate that computers with artificial intelligence will supersede human activity. Imagine autonomous robots running the industrial world. Can that happen, or are there fundamental limits on computer “intelligence” that keep humankind unique?

A recent review discusses the limitations of AI software trained on massive amounts of experience. Quoting Yoshua Bengio, the scientific director of Mila – Quebec AI Institute: “In terms of how much progress we’ve made over the past two decades: I don’t think we’re anywhere close today to the level of intelligence of a two-year-old child. But maybe we have algorithms that are equivalent to lower animals for perception.”

Among the important shortcomings of AI systems discussed in the review article are the following: