The prospect of driverless cars on the world’s roads is fast becoming reality. While some hail the game-changing technology and its potential benefits to society, the question of how best to integrate artificial intelligence into our daily lives is raising concerns among some leaders in technology. Robots have been used in industry for years, but AI requires machines to be more analytical so they can handle tasks more complex than those found on an assembly line.
In the first of two blog posts on artificial intelligence — “Why Technology in Our Society Still Requires the Human Empathy Component” — Olivier Dumon, Managing Director of Research Products at Elsevier, uses the example of optimizing the brain of a driverless car to make certain decisions in the case of a possible collision with a deer:
Imagine yourself seated (not driving) in your self-driving car … while cruising at 50 miles per hour. Suddenly, a deer from a nearby forest jumps in front of your car. The car’s brain, fed by all the sensors, registers the information and calculates the probability of a collision. Unfortunately, the odds are 100 percent that your car does not have enough time to stop, and the collision with the deer is inevitable … On the right side, a cyclist is using a designated cycling lane. Another car is coming in the opposite direction. … The car’s brain has to decide what sort of action to take. There are three options …
He then notes how a human driver might avoid the situation altogether through powers of observation and empathy, such as heeding a warning that deer are in the area:
The driverless car is an exciting new development that opens up worlds of opportunity, especially to those who have physical handicaps and need practical transportation for a fulfilling life. But let’s keep common sense in our thinking as we determine how such technologies are best employed. … It’s not clear how the driver’s innate common sense and empathy would react in that situation or the resulting impact. … It is also possible that the driver has already heeded a warning sign about deer, and as a result, has reduced speed and increased observation of the surrounding area, thus avoiding the situation altogether.
In his follow-up post — “Teaching a Stone to play Chess” — Dumon explores the notion of a society dominated by artificial beings and what it really means to think:
From a computational perspective, computers will certainly be able to outthink humans, but what does it really mean to think? Is it inevitable that machines will eventually be able to think as humans do? Or will the human conscience and our capacity for emotion and reason always make the difference?
He concludes with a cautionary note about putting guidelines in place to ensure that AI is developed to serve humanity as intended, not the other way around:
As a business executive, I am the first to advocate for innovation; it is the basis for our existence. But it is worth taking a moment for those of us in science and technology to ask ourselves how much are we willing to let AI be part of our lives, and at what point should we exercise caution and put some guidelines in place?
Read the full posts here: