The shifting goalposts of AI
Publication date: 11 August 2010
Originally published 2010 in Atomic: Maximum Power Computing
Last modified 03-Dec-2011.
We are surrounded by artificial intelligence, every day of our lives.
Or there is no AI at all, and there never will be.
Which, if either, of these statements is true depends on your definition of artificial intelligence. And the definition of AI has changed a lot, over the centuries.
People today often think of AI as meaning "a machine that's intelligent, like we are". This presents the problem that we're not exactly sure what our own intelligence is - and don't even ask about what consciousness is. But an artificial brain that accepts the input our brains accept, and produces similar output, is a pretty graspable concept, even if you can't precisely place its boundaries.
This concept is an old one, too.
You might think that the idea of a thinking machine is a pretty recent development, going back maybe to Karel Capek's invention of the word "robot" in 1921. But it actually goes back through pretty much all of recorded history. For thousands of years, people have been inventing mechanical men that walk and sing and flirt with ladies, metal heads that can answer any question, and a wide variety of other amazingly realistic automata.
The actual form of the "invention" here has, of course, usually been "just making up a story" or "hiding a midget inside the alleged chess-playing robot".
King Mu of Zhou was supposed to have encountered a mechanical man with a full suite of removable internal organs, that all perfectly controlled the parts of the body that Chinese scholars of the time incorrectly assumed those organs actually controlled. Note, however, that King Mu was also said to have taken a chariot-ride to heaven to taste the peaches of immortality. (This is quite restrained, compared with the things the Yellow Emperor is said to have done. One of the less impressive of the Yellow Emperor's long list of putative inventions is the South Pointing Chariot, a real and pretty simple device which provides a great example of a machine which could be taken by the uneducated to have some sort of supernatural intelligence.)
Other historical tales of amazing robots probably grew from non-fraudulent things like karakuri automata. They're ingenious clockwork toys, but they aren't actually any more "intelligent" than a Tamagotchi.
Before the Enlightenment, these sorts of creations were usually assumed to be animated by magic. By the time you get to Jacques de Vaucanson's Digesting Duck in 1739, though, "miraculously lifelike" mechanical creatures, operating in purely physical ways, were relatively commonplace. Or, at least, embroidered stories about them were.
But actual human-brain-like "strong AI", as I've written before, is still a long way away. We're not even sure in what direction.
In the past, though, there have been a lot of much more modest definitions of artificial intelligence. They were often phrased in the form "a machine will never...":
"A machine will never be able to read the written word."
"A machine will never understand speech."
"A machine will never be able to look at something and figure out what 3D shape it is."
"A machine will never drive a car."
"A machine will never play chess."
"A machine will never play chess well."
"A machine will never beat a chess Grandmaster."
"A machine will never beat my favourite chess Grandmaster."
Go back far enough and you can find people making these same sorts of predictions about tasks that seem simple today. Arithmetic, algebra, spell-checking - all were clearly Things Only the Mind of Man (and of a Few Unusually Intelligent Women, Bless 'Em) Could Ever Do.
But a funny thing always happens, right after a machine does whatever it is that people previously declared a machine would never do. What happens is, that particular act is demoted from the rarefied world of "artificial intelligence", to mere "automation" or "software engineering".
Apparently, you see, when they said "a machine will never be able to spot-weld a car together", they meant to say "a machine will never be aware that it's welding a car together". So all of those production-line robots aren't actually a triumph of artificial intelligence at all, any more than aircraft autopilots or optical character recognition or the square-root button on a calculator - which, after all, merely duplicated a perfectly obvious slide-rule operation - are.
I sound as if I'm cross about this, but I'm not, really. I find illogical reasoning and unfounded assumptions about "AI" being a priori impossible or not worthy of government research funding mildly annoying, but I don't find it any more irritating than the reverse phenomenon, in which marketing departments insist that the washing machine they're trying to sell is smarter than HAL.
I think it's an interesting mental exercise, though.
Think, for a moment, about the things that you currently reckon computers will never be able to do. I'm sure you can come up with some.
I mean, never mind strong AI that thinks just like a human. How about writing music? How about creating whole movies, or just writing novels, from scratch? I don't mean your typical computer-generated aphasic wind-chime randomness, either - I mean good stuff. Or, at least, chart-topping, summer-blockbuster, bestselling stuff.
Bear in mind that we've already got machines rendering near-photorealistic 3D environments on the fly, and pick-and-place robots whipping through tedious item selection and packaging tasks faster than any five trained humans could. Not to mention farmers dozing in the air-conditioned cabin of their tractor while GPS helps it to plough the field autonomously.
This still might not be that much of a test, though.
Note that some classic examples of "computer-generated" artworks were actually strongly "guided" by humans. Racter's book The Policeman's Beard Is Half Constructed made quite a splash in the Omni-magazine demographic when it came out in 1984, but what little sense it made would have been entirely lost if humans hadn't edited the book together from a much larger amount of mindless chatterbot output.
A more recent example of this sort of thing is Markov-chain text generation, where whatever meaning the output has is either coincidental, or copied directly from input data that was created by humans. There are also various "evolutionary" computer-art systems, in which a human is asked to choose between differently-mutated versions of a picture, tune or whatever. This can produce art that the human selector would never have thought to make, but that doesn't mean the computer's an artist.
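To make the point concrete, here's a minimal sketch of word-level Markov-chain text generation (not any particular system's code, just the general technique): record which words follow which in a human-written corpus, then walk those statistics at random. Everything coherent in the output is, as noted above, borrowed directly from the input.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain, picking a random recorded successor at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    while len(out) < length:
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:
            break  # dead end: this word only ever appeared at the end
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), length=8, seed=42))
```

With a toy corpus like this the output is plainly just the input reshuffled; feed it a novel and the reshuffling merely gets harder to spot.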