
Hopes and fears for AI: the experts' view

It won’t end the world or single-handedly cure cancer, but AI is a powerful tool

AI experts, clockwise from top left: Prof. Gary Marcus, PhD (NYU), Elizabeth Ling (Elsevier), Prof. Max Welling, PhD (University of Amsterdam/UC Irvine), Prof. Stuart Russell, PhD (UC Berkeley), and Prof. Joanna Bryson, PhD (University of Bath). (Background image ©istock.com/from2015)

When we encounter artificial intelligence in the media, it’s often discussed at extremes. At one end, there are films, books, games and even news commentary that paint a picture of a world-ending intelligence. At the other end, people picture algorithms so powerful they can solve every major problem facing mankind.

In reality, AI’s capabilities lie somewhere in between. For example, some of Elsevier’s products use machine-learning driven image identification to better diagnose life-threatening illnesses – but these tools are designed to aid the deductive work of human experts, not replace them. Similarly, while one of our collaborations shows that AI can be used to help generate a scientific hypothesis, it essentially acts as a guide for a human researcher.

So AI remains a tool – albeit a powerful one – and like any tool, whether it harms or helps is largely up to the user. Here, we draw on the expertise of five experts in AI, asking them about their hopes and fears for the technology.


Prof. Gary Marcus, NYU: “My greatest fear is that it would be used to solve more nefarious problems.”

Dr. Gary Marcus is Professor of Psychology and Neural Science at New York University and former CEO of the machine learning startup Geometric Intelligence, acquired by Uber in 2017. He writes:

“My greatest hope for the technology is that we will be able to do automated scientific reasoning, and use it to solve problems like cancer that, because of the sheer number of molecular interactions, are too complex for humans to understand. One of the reasons we can’t fully interrogate the genome is the thousands and thousands of genes interacting. We can understand how one gene works, but we can’t do the maths for how they all work together. Machines can be very good at that.

“My greatest fear is that it would be used to solve more nefarious problems. It’s a powerful tool and we need to have a system of regulation and a clear idea of what we would do when these rules were violated.”
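Marcus’s point about scale holds up to a quick back-of-the-envelope check. The Python sketch below assumes the commonly cited figure of roughly 20,000 protein-coding genes and counts only simple gene subsets, not real molecular mechanisms, yet the numbers already outgrow anything a human could work through unaided:

```python
from math import comb

# Roughly 20,000 protein-coding genes in the human genome (approximate figure).
genes = 20_000

# Count the possible pairwise and three-way gene combinations.
print(f"pairs:   {comb(genes, 2):,}")  # 199,990,000 (~2 x 10^8)
print(f"triples: {comb(genes, 3):,}")  # ~1.3 x 10^12
```

Even enumerating the pairs is far beyond unaided human analysis – exactly the kind of exhaustive bookkeeping machines are good at.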

Prof. Max Welling, University of Amsterdam and UC Irvine: “My greatest hope is that AI will have a huge impact on healthcare.”

Dr. Max Welling is Professor and Research Chair in Machine Learning at the University of Amsterdam and a full professor at the University of California, Irvine. He writes:

“My greatest hope is that AI will have a huge impact on healthcare. It has the potential to revolutionize healthcare in areas such as genomic analysis or medical image analysis. That’s something that could be around the corner.

“My greatest fear is that we underestimate certain privacy aspects: we census our world, our cities and our own lives, and we don’t think about what all that data could mean if it falls into the wrong hands – or if we do realize it, we realize it too late. We need to think now about the implications of giving our data to companies, and the amount of information we pass on.”

Prof. Joanna Bryson, University of Bath: “My fear is that we’re using these technologies to enforce conformity …”

Dr. Joanna Bryson is Associate Professor at the University of Bath and Affiliate at Princeton's Center for Information Technology Policy. She writes:

“My biggest hope would be that AI could solve some of the biggest problems we have, like reasonable wealth distribution – how do we reward merit and provide motivation while ensuring that enough people have enough money to employ each other and keep societies together? That’s been a really hard problem to solve for 1,000 years.

“Elsewhere, for millions of years, you’ve had the problem of sustainability – it’s not in our best interest to wipe out all other mammals, but that’s what we’re doing unintentionally. When we started doing agriculture, our numbers increased exponentially and we started changing the entire biomass of this planet to our own vision. We need to get a harness on that, and my hope is that we can use AI to coordinate ourselves.

“My fear is that we’re using these technologies to enforce conformity because we think that in conformity there’s security. That is false. It’s easier to reason if everyone is the same, but it creates fragility. You want diversity in order to be able to solve a lot of different problems. We need divergent education, we need free thinking. We don’t need invasions of privacy at the level we are getting. That mass surveillance just enforces conformity to current tech standards, which at best is unnecessary and at worst can be weaponised through tactics such as troll farms.”

Elizabeth Ling, Elsevier: “My greatest hope is that it will lead to big changes in medicine and education.”

Elizabeth Ling is SVP of Web Analytics at Elsevier. She writes:

“My greatest hope is that it will lead to big changes in medicine and education. There are so many advances, but they take so long to be applied and make a difference to people’s health. AI can help accelerate that.

“In education, you could apply it to issues around gender representation and access to education by including it in the design of Massive Open Online Courses (MOOCs), or use it to enable distributed learning so that kids at the top of the class aren’t bored and others aren’t left behind. You can find a way to apply it to so many areas.”

Prof. Stuart Russell, UC Berkeley: “I hope we do succeed in creating something like human-level AI. … It would be the biggest event in human history.”

Dr. Stuart Russell is Professor of Electrical Engineering and Computer Sciences at UC Berkeley and Adjunct Professor of Neurological Surgery at UC San Francisco. He writes:

“As far as hopes and fears go, I hope we do succeed in creating something like human-level AI. If we succeed, the potential upside is enormous – it would be the biggest event in human history.

“But we need to be careful about the objectives we set. If you were a gorilla and you were having a discussion with fellow gorillas about whether your ancestors should have created humans, you’d have unanimous agreement that humans were really bad for gorillas. We don’t want to be in that situation, but that’s what would happen if we made something that could operate directly on the real world more effectively than us.

“Often, we program AI with an objective and it carries out that objective. As King Midas found out, if you don’t state your objective correctly, you can end up in an irreversibly bad place.

“However, if we create systems that don’t know what the objective is, you get a different result – they ask permission to do things because they are aware they may be making a mistake, or they allow themselves to be switched off. If we imbue the entire field with this approach, we can avoid the problems of objective-driven systems.”
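Russell’s idea has a concrete algorithmic core: an agent that maintains uncertainty over what the true objective is and defers to a human when its hypotheses disagree. The Python toy below is a minimal sketch of that behaviour, not Russell’s actual formalism; the action names and reward hypotheses are invented purely for illustration.

```python
# Toy sketch of an objective-uncertain agent (illustrative only, not
# Russell's actual formalism). The agent holds several hypotheses about
# the human's reward function; when they disagree about the best action,
# it asks permission instead of optimizing a possibly mistaken goal.

ACTIONS = ["action_a", "action_b"]

# Hypothetical candidate reward functions, equally likely a priori.
REWARD_HYPOTHESES = [
    {"action_a": 1.0, "action_b": 0.0},  # hypothesis: the human wants action_a
    {"action_a": 0.0, "action_b": 1.0},  # hypothesis: the human wants action_b
]

def choose_action(hypotheses, actions):
    # Find the best action under each hypothesis about the objective.
    favorites = {max(actions, key=h.get) for h in hypotheses}
    if len(favorites) > 1:
        # The hypotheses disagree: the agent knows it may have the
        # objective wrong, so it defers to the human.
        return "ask_human"
    return favorites.pop()

print(choose_action(REWARD_HYPOTHESES, ACTIONS))  # -> "ask_human"
```

The same logic underpins the off-switch point: an agent that assigns genuine probability to being wrong about its objective has an incentive to let itself be corrected or shut down rather than to resist.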
