Artificial intelligence: What does it mean for machines – and humans?
Scientists and sci-fi screenwriters talk about what goes into teaching machines to “think,” and what differentiates machines from humans
By David Levine Posted on 31 March 2016
Artificial intelligence came of age on March 12, when Korean Go master Lee Sedol lost his third match in a row to DeepMind’s AlphaGo, a computer program designed to play Go, a game thought to have originated in China between 2,500 and 4,000 years ago.
The games took place in Seoul and were streamed live on DeepMind’s YouTube channel. It was the most anticipated match between a computer and a human since 1997, when the IBM-developed supercomputer Deep Blue beat world chess champion Garry Kasparov. Although Sedol won the fourth match, the outcome had already been decided: the $1 million contest was a best-of-five series. (Sedol lost the fifth match as well.) DeepMind is donating its winnings to UNICEF, STEM charities and Go organizations.
The technology of artificial intelligence
Last month, DeepMind co-founder and CEO Dr. Demis Hassabis spoke confidently about the upcoming contest at the American Association for the Advancement of Science (AAAS) annual meeting in Washington, DC. His London-based start-up – which became the world’s leading company for general artificial intelligence (AI) – was acquired by Google in 2014, reportedly for more than $400 million.
Dr. Hassabis was one of four panelists on a session called Artificial Intelligence: Imagining the Future, which drew a standing-room-only audience in the large auditorium.
Dr. Hassabis said that even though an earlier version of AlphaGo had beaten the reigning three-time European Champion Fan Hui in October 2015 (the results were revealed in the Jan. 27 edition of Nature), “no one thought a machine could beat Lee Sedol, the Roger Federer of Go.” In fact, the international Go community put the odds of a machine beating Sedol at about 5 percent. “There are more possible positions in Go than there are atoms in the universe,” Dr. Hassabis said.
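Dr. Hassabis’s claim is easy to sanity-check with back-of-the-envelope arithmetic. Each of the 361 points on a 19x19 Go board can be black, white or empty, giving 3^361 board colorings – an upper bound on the number of positions (only a fraction of these are legal) – versus the commonly cited estimate of roughly 10^80 atoms in the observable universe:

```python
# Quick check of "more Go positions than atoms in the universe."
# 3**361 is an upper bound on Go positions (not all colorings are legal).
import math

board_points = 19 * 19                     # 361 intersections on a Go board
upper_bound = 3 ** board_points            # every point black, white, or empty
digits = int(math.log10(upper_bound)) + 1  # decimal digits in that number

atoms_in_universe = 10 ** 80               # common order-of-magnitude estimate

print(digits)                              # 173 -> the bound is ~10^172
print(upper_bound > atoms_in_universe)     # True, by ~90 orders of magnitude
```

Even this loose upper bound dwarfs 10^80, which is why Go resisted the brute-force search that worked for chess.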
[pullquote align="alignright"]Our artificial intelligence machines don’t know anything about the games they play. They learn by playing games over and over again. So in theory they could master any game.[/pullquote]
Dr. Hassabis did not reveal much about how the latest version of AlphaGo would beat Sedol, saying he didn’t want to tip his hand before the contest. But he explained that the team at DeepMind takes a very different approach to artificial intelligence than IBM did when it designed Deep Blue. “Deep Blue was programmed to play chess. That’s all it knows how to do. It could not win a game of tic-tac-toe,” Dr. Hassabis said. In contrast, AlphaGo is not preprogrammed to do anything. “Our artificial intelligence machines don’t know anything about the games they play. They learn by playing games over and over again. So in theory they could master any game.”
The machines use reinforcement learning: they learn from the mistakes they make while playing against themselves, and they are capable of narrowing down the search space to find the best possible move. Dr. Hassabis amused the audience by showing how his early machines learned to beat the Atari games Space Invaders and Breakout. “The first day they lost every game. After a short time, they beat the games faster than any human could.”
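The trial-and-error idea behind reinforcement learning can be sketched in a few lines. This is a deliberate simplification – DeepMind’s systems combine deep neural networks with tree search, not the tabular Q-learning shown here – but the core loop is the same: the agent starts knowing nothing about its toy “game” (walking right along a five-cell corridor to reach a reward) and improves purely by playing episodes over and over:

```python
# Minimal tabular Q-learning sketch (illustrative only, not AlphaGo's algorithm).
# The agent must walk right along a 5-cell corridor; reaching the end pays +1.
import random

random.seed(0)
n_states, actions = 5, [0, 1]          # action 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move in the corridor; the last cell pays +1 and ends the episode."""
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    done = nxt == n_states - 1
    return nxt, (1.0 if done else 0.0), done

for episode in range(500):             # play the "game" over and over again
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what we know, sometimes explore
        a = random.choice(actions) if random.random() < epsilon else \
            max(actions, key=lambda x: Q[state][x])
        nxt, reward, done = step(state, a)
        # nudge the estimate toward (reward + discounted best future value)
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the learned greedy policy is "always go right"
print([max(actions, key=lambda x: Q[s][x]) for s in range(n_states - 1)])
```

The “narrowing down the search space” Dr. Hassabis described corresponds to the agent’s value estimates steering it away from moves that experience has shown to be poor.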
In a press conference, Dr. Hassabis said DeepMind’s win over the best Go player in the world should not be seen as a loss for humanity. “Our hope is that in the long run we will be able to use these techniques for many other problems, including helping scientists solve some of the world’s biggest challenges in healthcare and other areas.”
Consciousness vs. intelligence
The extent to which digital machines can or cannot be conscious was the topic of the next presenter, Dr. Christof Koch, President and Chief Scientific Officer of the Allen Institute for Brain Science (started by Paul G. Allen, the co-founder of Microsoft). Dr. Koch explored the relationship between the brain, behavior, and consciousness; what is known about the biology and neurology of consciousness; the limits to our knowledge – and whether machines can be conscious.
Dr. Koch stressed that when talking about artificial intelligence, one has to keep in mind that intelligence and consciousness are not the same. “Intelligence is the ability to understand new ideas, to adapt to new environments, to learn from experience, to think abstractly, to plan and to reason,” he said. “It can be decomposed into crystallized and fluid intelligence and can be measured through instruments such as IQ tests.”
Consciousness is different. “It’s the ability to experience something, to see, hear, feel angry, or explicitly recall an event,” he explained. “Many animals besides humans experience the sights and sounds of the world. Consciousness is associated with some complex, adaptive, biological networks. It can be dissociated from emotion, selective attention, long-term memory and language. Self-consciousness is one of many aspects of consciousness, highly developed in adult neuro-typical humans, less so in infants, certain patients and non-human animals.”
Consciousness is what differentiates us from machines, no matter how intelligent they are. It also raises many ethical questions.
It is not intelligence that makes us human; it is consciousness. Stroke victims and brain-damaged individuals have rights, and so do animals: they have the ability to experience pain or pleasure. Having the ability to experience pain and pleasure – that is, to be a subject – implies a set of minimal rights and ethical obligations on the part of its creator and/or owner.
Just like a computer simulating a rainstorm will not get wet, a computer simulating conscious behavior will not be conscious. For a computer to have human-level consciousness, it will have to have the causal powers of the human brain. The question of whether computer software and robots can be conscious will be a critical challenge in the years to come.
A TV show that combines AI with human nature
The final speakers, Samuel Vincent and Jonathan Brackley, are the writers of the TV show Humans, the highest rated drama on Channel 4 in the UK. (In the US, it can be seen on AMC.) The show takes place in a world where humans can hire “Synths,” highly evolved robots, to be a nanny, a servant, a caretaker or even a friend. “Unlike researchers,” Vincent said, “we have the luxury of using our imaginations.”
One of the storylines involves a widower, George Millican (William Hurt), who has had a stroke and whose caretaker, Odi – a Synth supplied by Britain’s National Health Service – has flaws. “Odi forgets things,” Vincent said. “He doesn’t remember that George had a wife. But George doesn’t want to report Odi’s flaws because he doesn’t want to lose his Synth.”
Brackley said writing flaws into the Synths was not accidental:
Flaws humanize the characters. In fact, polls show that our viewers like the Synths better than our ‘human’ characters. Think about having a nanny who never gets tired, never loses patience, and never gets angry. Is that what a child needs? Will the child prefer the Synth over her real mom? Will the mom resent the perfect nanny? These are the questions we are asking.
In an interview on the AMC website, Brackley said their goal is to engender debate: “We purposely avoided trying to pass too much judgment on whether Synths are going to be a good thing or a bad thing, if they existed.” Vincent added: “We don’t want to present it as a utopia or a dystopia, and we want the audience to make up their own minds. We don’t really mind what the water cooler talk is, so long as there’s plenty of it. We want people to ask each other, Well, would you get one?”
Previews and interviews with the writers and casts are available on the AMC website, and you can download or stream season one on Google Play, iTunes and Amazon.
Do human-like machines pose a threat?
During the question and answer period at the end of the talks, there were many questions about the warnings from Stephen Hawking, Elon Musk (who has said he invested in DeepMind out of concern about Terminator-style scenarios) and Bill Gates that artificial intelligence machines could one day be capable of wiping out humanity – the fear being that they will decide they don’t need us. Dr. Koch said he is not worried about it for now. “I am a fan of Blade Runner, but it’s a movie.”
Dr. Hassabis noted that we are decades away from anything like human-level general intelligence and that right now the machines are playing games. He thinks people have been too influenced by movies that forecast an evil future. In a Feb. 16 article in The Guardian, he said, “As with all new powerful technologies, this (AI) has to be used ethically and responsibly, and that’s why we’re actively calling for debate and researching the issues now, so that when the time comes, we’ll be well prepared.”
Elsevier Connect Contributor
David Levine (@Dlloydlevine) is co-chairman of Science Writers in New York (SWINY) and a member of the National Association of Science Writers (NASW). He served as director of media relations at the American Cancer Society and as senior director of communications at the NYC Health and Hospitals Corp. He has written for Scientific American, the Los Angeles Times, The New York Times, More magazine, and Good Housekeeping, and was a contributing editor at Physician's Weekly for 10 years. He has a BA and MA from The Johns Hopkins University.