THE standard joke about artificial intelligence (AI) is that, like nuclear fusion, it has been the future for more than half a century now. In 1958 the New York Times reported that the “Perceptron”, an early AI machine developed at Cornell University with military money, was “the embryo of an electronic computer that [the American Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” Five decades later self-aware battleships remain conspicuous by their absence. Yet alongside the hype, there has been spectacular progress: computers are now better than any human at chess, for instance. Computers can process human speech and read even messy handwriting. Automated telephone-answering systems may infuriate cynical moderns, but they would seem like magic to someone from the 1950s. These days AI is in the news again, for there has been impressive progress in the past few years in a particular subfield of AI called machine learning. But what exactly is that?
Machine learning is exactly what it sounds like: an attempt to perform a trick that even very primitive animals are capable of, namely learning from experience. Computers are hyper-literal, ornery beasts: anyone who has tried programming one will tell you that the difficulty comes from dealing with the fact that a computer will do exactly and precisely what you tell it to, stupid mistakes and all. For tasks that can be boiled down into simple, unambiguous rules—such as crunching through difficult mathematics—that is fine. For woollier jobs, it is a serious problem, especially since humans themselves might struggle to articulate clear rules. In 1964 Potter Stewart, an American Supreme Court judge, found it impossible to frame a legally watertight definition of pornography. Frustrated, he famously wrote that, although he could not define porn as such, “I know it when I see it.” Machine learning aims to help computers discover such fuzzy rules by themselves, without having to be explicitly instructed every step of the way by human programmers.
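The contrast between explicit instruction and learning from experience can be sketched in a few lines of Python. The data here are invented for illustration: each message is boiled down to a single made-up feature, a count of a telltale word, with label 1 for spam and 0 for not-spam.

```python
# Hand-coded rule vs. a rule learned from examples (toy, invented data).
# Feature: how many times a telltale word appears in a message.
counts = [0, 1, 0, 4, 5, 3, 0, 6]
labels = [0, 0, 0, 1, 1, 1, 0, 1]  # 1 = spam, 0 = not spam

def hand_coded(count):
    # Explicit programming: a human guesses and hard-codes the rule.
    return 1 if count >= 2 else 0

def learn_threshold(counts, labels):
    # Machine learning: search for the threshold that best fits the examples,
    # rather than being told the rule in advance.
    best_t, best_correct = 0, -1
    for t in range(max(counts) + 2):
        correct = sum((c >= t) == bool(y) for c, y in zip(counts, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

t = learn_threshold(counts, labels)
print(t)  # the rule the machine discovered for itself -> 2
```

Real machine-learning systems search far richer families of rules than a single threshold, but the principle is the same: the rule comes from the examples, not from the programmer.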
There are many different kinds of machine learning. But the one that is grabbing headlines at the moment is called “deep learning”. It uses neural networks—simple computer simulations of how biological neurons behave—to extract rules and patterns from sets of data. Show a neural network enough pictures of cats, for instance, or have it listen to enough German speech, and it will be able to tell you whether a picture or sound recording it has never encountered before is a cat, or in German. The general approach is not new (the Perceptron, mentioned above, was one of the first neural networks). But the ever-increasing power of computers has allowed deep-learning machines to simulate billions of neurons. At the same time, the huge quantity of information available on the internet has provided the algorithms with an unprecedented quantity of data to chew on. The results can be impressive. Facebook’s DeepFace algorithm, for instance, is about as good as a human being when it comes to recognising specific faces, even if they are poorly lit, or seen from a strange angle. Email spam is much less of a problem than it used to be, because the vast quantities of it circulating online have allowed computers to learn what a spam email looks like, and divert it before it ever reaches your inbox.
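The Perceptron idea itself fits in a few lines: a single artificial neuron takes a weighted sum of its inputs and nudges its weights whenever it gets an example wrong. The sketch below uses invented two-number "pictures" (the features and labels are made up for illustration); real deep-learning systems stack millions of such neurons in layers, but the training loop is recognisably the same.

```python
# A minimal perceptron: one artificial neuron trained on made-up data.
# Each "picture" is two invented features; label 1 = "cat", 0 = "not cat".

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights and a bias from labelled examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Fire (output 1) if the weighted sum crosses the threshold.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Nudge the weights towards the correct answer when wrong.
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

samples = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)
print(predict(w, b, [0.85, 0.75]))  # a "picture" it has never seen -> 1
```

A single neuron can only learn rules that amount to drawing one straight line through the data; stacking layers of them is what lets deep networks capture the fuzzier patterns in faces and speech.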
Big firms like Google, Baidu and Microsoft are pouring resources into AI development, aiming to improve search results, build computers you can talk to, and more. A wave of startups wants to use the techniques for everything from analysing medical images for tumours to automating back-office work like the preparation of sales reports. The appeal of automated voice- or facial-recognition for spies and policemen is obvious, and they are taking a keen interest. This rapid progress has spawned prophets of doom, who worry that computers could become cleverer than their human masters and perhaps even displace them. Such worries are not entirely without foundation. Even now, scientists do not really understand how the brain works. But there is nothing supernatural about it—and that implies that building something similar inside a machine should be possible in principle. Some conceptual breakthrough, or the steady rise in computing power, might one day give rise to hyper-intelligent, self-aware computers. But for now, and for the foreseeable future, deep-learning machines will remain pattern-recognition engines. They are not going to take over the world. But they will shake up the world of work.