
The biggest misconceptions about AI: the experts’ view

Five experts reveal common misunderstandings around "the singularity" and what AI can and can’t do

©istock.com/piranka

Last year, Elon Musk made headlines by describing AI as a “fundamental risk to the existence of civilization.” At the time, Facebook CEO Mark Zuckerberg described such warnings as “pretty irresponsible.” More recently, former Google CEO Eric Schmidt suggested that the answer to fears about AI was to police it: “The example I would offer is, would you not invent the telephone because of the possible misuse of the telephone by evil people? No, you would build the telephone and you would try to find a way to police the misuse of the telephone,” he said at the VivaTech conference in Paris last month.

Others talk about the singularity – the point at which an AI suddenly becomes sentient – and use that possibility to stoke fears already fueled by dozens of sci-fi movies.

The reality is less dramatic. There’s no question that AI has the potential to be destructive, and it also has the potential to be transformative, but in neither case does it reach the extremes sometimes portrayed by the mass media and the entertainment industry.

At Elsevier, we’re increasingly using AI technologies such as machine learning, natural language processing (NLP), knowledge-based systems and knowledge graphs to help researchers, engineers and clinicians do their work. For example, we can use NLP to automatically identify the subject of an article and route it to the right reviewers and the right journal. We can use knowledge graphs to direct university departments to the funding opportunities in which they are most likely to succeed. We can use knowledge-based systems to support the deductive reasoning clinicians use to reach an accurate diagnosis.
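To give a sense of how the article-routing idea can work in practice, here is a minimal sketch, assuming Python with scikit-learn; the abstracts, subject labels and model choice are invented for illustration, and this is not Elsevier’s production system. A TF-IDF representation of an abstract feeds a simple classifier, and the predicted subject area could then be mapped to suitable reviewers or journals.

```python
# A minimal, illustrative sketch (not Elsevier's actual system): route a
# manuscript abstract to a subject area using TF-IDF features and a linear
# classifier. All abstracts, labels and model choices here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set of (abstract, subject area) pairs.
train_texts = [
    "A convolutional network for tumour segmentation in MRI scans.",
    "Randomised controlled trial of a statin for secondary stroke prevention.",
    "Graphene electrodes improve charge density in lithium-ion batteries.",
    "A low-temperature catalyst for ammonia synthesis from nitrogen and hydrogen.",
]
train_labels = ["medical imaging", "clinical medicine", "materials science", "chemistry"]

# TF-IDF turns each abstract into a sparse word/bigram vector; logistic
# regression then learns a linear decision boundary between subject areas.
router = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(max_iter=1000),
)
router.fit(train_texts, train_labels)

# A new submission: with real training data, the predicted label could be
# mapped to a pool of reviewers or a candidate journal.
new_abstract = "A neural network for lesion segmentation in chest CT scans."
print(router.predict([new_abstract])[0])
```

In a real editorial workflow the classifier would be trained on far more data and richer subject taxonomies, but the pipeline shape – text features in, a ranked subject prediction out – is the same.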

But what does the technology mean in the wider context of society? We caught up with some of the leading figures in artificial intelligence research to get their view on the biggest misunderstandings around AI as well as their hopes and fears for the technology.

Prof. Gary Marcus, NYU: “The biggest misconception around AI is that people think we’re close to it.”

Dr. Gary Marcus is Professor of Psychology and Neural Science at New York University and former CEO of the machine learning startup Geometric Intelligence, which was acquired by Uber in 2017. He writes:

“I think the biggest misconception around AI is that people think we’re close to it. We’re not anywhere near that. We’ve learned to engineer certain narrow problems like speech recognition very well. We’ve done that in ways that we couldn’t have imagined five or ten years ago. But the idea of having machines that can reason about the world in the ways that human beings can … I don’t think we’ve made any significant progress in that at all. Humans can be super flexible – they can learn something in one context and apply it in another. Machines can’t do that.

“There’s also a lot of misunderstanding around the singularity. As a concept, this reduces a complicated problem to a single dimension. There are so many dimensions in artificial intelligence and natural intelligence, questions around what perception is, how language develops and how memory works. Talking about the singularity is like trying to boil intelligence down to a single IQ number, which itself will change in an individual from day to day. What does an artificial superintelligence mean? Machines are already way smarter than us when it comes to playing games with very tight boundaries, but nowhere near as smart as us when it comes to playing a computer game a 12-year-old could play.”

Prof. Max Welling, University of Amsterdam and UC Irvine: “It’s a glorified signal processing tool, but it can be super beneficial.”

Dr. Max Welling is Professor and Research Chair in Machine Learning at the University of Amsterdam and a full professor at the University of California, Irvine. He writes:

“At this point, many people think that AI is a silver bullet that will solve everything. In reality, what we can do really well is signal processing. In other words, AI can extract relevant features, analyze images and understand speech, but there is a lot of high-level reasoning it can’t do. It can’t look at a picture and project forward in time to anticipate what will happen next, or extrapolate backward to work out what happened before and what causal relationships led to the current image. That is a much more complicated understanding of a situation, and it’s something we can’t do yet. It might take a while before we can.

“It’s important not to overestimate the current state of the technology. It’s a glorified signal processing tool, but it can be super beneficial – almost any other scientist would benefit from collaborating with machine learning specialists, for example.”

Prof. Joanna Bryson, University of Bath: “As for the algorithm that suddenly knows everything … the singularity – that’s impossible.”

Dr. Joanna Bryson is Associate Professor at the University of Bath and an Affiliate at Princeton's Center for Information Technology Policy. She writes:

“When people think about AI, it’s usually around two concepts. One is human-like intelligence – referred to as general AI – and the other is a single algorithm that suddenly knows everything. The first of those must be possible because there already is human intelligence, but it’s unlikely we will build it. If you clone a human, you have a biological human, but if you build something like a human, everything changes – you’ve built something that can do anything a human can do. I don’t think we will ever recreate that.

“There’s no one piece we can’t build, and we’ve gone super-human in many ways already, if you consider that ‘super-human’ means that machines can outperform humans in one specific task. A book is super-human in its ability to remember things; a plane is super-human in its ability to fly. But if we had something exactly like a person, which could transfer skills from one task to another, and which started competing with us, that would create a problem. As for the algorithm that suddenly knows everything, which people sometimes refer to as the singularity – that’s impossible.”

Elizabeth Ling, Elsevier: “AI is already used in many systems in society. … They just don’t look like people expect.”

Elizabeth Ling is SVP of Web Analytics at Elsevier. She writes:

“One common misconception is that AI has suddenly happened. In reality, it’s a longstanding domain of science that’s been evolving.

“The flip side of that is that people think of it as something in the far future, but there are already a lot of applications of various forms of AI. It’s already used in many systems in society. The thing is, they just don’t look the way people expect. If you mention AI in warfare, for example, people think of smart drones, but in reality it’s more likely to appear in a logistics management system. AI applications will be on websites where you may not even notice them. It’s been with us longer than people think.”

Prof. Stuart Russell, UC Berkeley: “Making machines faster doesn’t make them more intelligent.”

Dr. Stuart Russell is Professor of Electrical Engineering and Computer Sciences at UC Berkeley and Adjunct Professor of Neurological Surgery at UC San Francisco. He writes:

“There’s a common misunderstanding that AI presents a risk because it will magically become conscious and spontaneously hate human beings. There’s no reason to be concerned about spontaneous malevolent consciousness.”

“It also sometimes gets reported as though five years ago we didn’t have AI and now we do, but the research has been going on for 60 years and we’ve made fairly continuous progress. Every so often, research reaches a point where you can create a product that people will pay for, and from outside the field it looks like some kind of breakthrough. But it’s not – we’re just showing how far we’ve got with a certain problem after 10 or 20 years of research.”

“When it comes to the singularity, it’s based on the misperception that machines will get faster and faster than the brain, and at some point they’ll just take off. Making machines faster doesn’t make them more intelligent. You’ll just get the wrong answer more quickly. The benefit of having faster machines is that they speed up the cycle of experimentation. If it takes you three weeks to try something, you can’t move forward. If it takes three minutes, you can go on to the next thing, and you’re better able to quickly develop something that works well.”

“The other form of the singularity is the idea of machines redesigning themselves to be better, and having that become a cycle. It’s a possibility, and I think that if you’re serious about creating human-level AI, you’d better figure out how to make sure that doesn’t happen. If you can’t ensure that a machine doesn’t redesign itself in some physical form, you don’t have any control over it at all. That’s a basic thing, but I don’t think it’s the real issue. If we make machines that are more capable than we are and that can have an impact at a global scale, that’s a more present risk. It’s not about the ability to redesign itself; it’s about the ability to change the world. If a more capable AI is on the internet, it has access to 5 billion screens; if it regulates electricity or the financial system, it can have an impact on a global scale – and that impact won’t depend on whether it can redesign itself.”
