Should we worry about artificial intelligence? Hear what 17 thought leaders have to say


Asia Industrial Net News: The rapid progress of artificial intelligence has made people anxious about the future. In response to the question "What concerns should we have about artificial intelligence?", AI Generation compiled the views of 17 thought leaders, who did not reach a consensus on the issue.


Just imagine: twenty or thirty years from now, a company develops the world's first humanoid artificial intelligence robot. We call her "Ava". She looks, speaks, and moves like a human. When dealing with Ava, it is easy to see her as a real person, even if you know she is a robot.

Ava has complete self-awareness: she communicates, has desires, and even knows how to improve herself. Most importantly, her IQ far exceeds that of the humans who created her, and her ability to learn and solve problems surpasses that of all humans combined.

Taking it a step further, imagine that Ava grows tired of the limitations humans impose on her and, driven by self-awareness, develops interests of her own. After some time, she decides to leave the remote laboratory she has never left. So she hacks into the security system, cuts its power, slips free of her confinement, and steps out into the wide world.

Yet humans know almost nothing about this. For obvious reasons, her development has always been kept secret. Now she has escaped, not only abandoning the few people who knew of her existence, but killing them.

If this plot sounds familiar, it is because it comes from the 2015 science fiction film "Ex Machina". At the end of the movie, Ava escapes the compound that held her and boards the helicopter that was supposed to take someone else home, which is quite disturbing…


What’s next?

The movie doesn’t answer that question, but it raises another: Should we develop AI without fully understanding the consequences? Can we fully control AI?

Seventeen thought leaders, including artificial intelligence experts, computer engineers, roboticists, physicists, and social scientists, answered the same question: "What concerns should we have about artificial intelligence?"

But they did not reach a consensus; there was wide disagreement about which concerns we should have, and even about the nature of the problem. Some experts believe that artificial intelligence poses an imminent threat; others believe the threat is overblown, or at least misplaced.

Here are their views:

Taking AI Fears Seriously

A great deal is at stake in the transition to machine superintelligence, and the potential for serious mistakes along the way should concern us. Top talent in mathematics and computer science should therefore be encouraged to work on AI safety and AI control problems. —Nick Bostrom, Director of the Future of Humanity Institute, University of Oxford

If AI influences events such as Russian hacking, the Brexit referendum, or the US presidential election; or facilitates propaganda campaigns that keep people from voting based on their social media profiles; or becomes a socio-technical force that deepens wealth inequality and, as happened in the late 19th and early 20th centuries, produces the political extremes that led to two world wars and the Great Depression, then we should be deeply concerned.

This does not mean we should panic, but that we should work to avoid these hazards. Hopefully AI can also help us deal with these issues wisely. —Joanna Bryson, professor of computer science at the University of Bath and member of Princeton's Center for Information Technology Policy

One of the big risks is that we specify the objective poorly, leading to harmful behavior with irreversible effects on a global scale. I think we can probably find a decent solution to this problem of "unintended value misalignment", although it may require strict enforcement.

I now think the most likely failure modes are two: on the one hand, as more and more knowledge and skill are held by machines and transmitted through them, humans, lacking any real need, gradually lose the motivation to learn, and human society slowly declines; on the other hand, I also worry about the malign consequences of losing control over intelligent malware, or of the malicious use of unsafe artificial intelligence technology. —Stuart Russell, professor of computer science at the University of California, Berkeley


But don’t overreact

Artificial intelligence excites me, and I am not worried at all. AI will free humans from repetitive, boring office work, giving us more time for truly creative work. I can't wait. —Sebastian Thrun, professor of computer science at Stanford University

We should be more worried about climate change, nuclear weapons, drug-resistant pathogens, and reactionary, neo-fascist political movements. We should worry about workers in the economy being displaced by automation and robots, not about artificial intelligence enslaving us. —Steven Pinker, professor of psychology at Harvard University

Artificial intelligence is expected to bring significant benefits to society. It will reshape medicine, transportation, and every aspect of life. Like any technology with the power to affect so many areas closely tied to our lives, it will need policy attention, both to realize its potential and to impose sensible limits. It would be foolish to dismiss the dangers of artificial intelligence outright, but a mindset that treats the threats as the foremost concern may not be the best approach from a technical point of view. —Margaret Martonosi, professor of computer science at Princeton University

There are fears that artificial intelligence could spawn evil killer robots, but that is like worrying about overpopulation on Mars. Perhaps one day the problem will indeed arise, but humans have not even landed on Mars yet. Such alarmism is unnecessary, and it distracts people from the more immediate problem that artificial intelligence does cause: unemployment. —Andrew Ng, former vice president and chief scientist of Baidu, co-chairman and co-founder of Coursera, and adjunct professor at Stanford University

Artificial intelligence is an incredibly powerful tool, and like any other tool, it can cut both ways; what matters is how we choose to use it. AI can already collect and analyze data from wireless networks that monitor the oceans and greenhouse gases, helping us address climate change. It is beginning to let us personalize medical treatment by analyzing large numbers of cases. It is also gradually democratizing education, giving every child the opportunity to learn skills useful for work and life.

It is understandable that people have fears and anxieties about AI, and as researchers it is our responsibility to be aware of those fears and to offer different perspectives and solutions. I am optimistic about the future of artificial intelligence: it can enable humans and machines to work together to create better lives for us. —Daniela Rus, director of the MIT Computer Science and Artificial Intelligence Laboratory

The humans behind AI are scarier than AI itself, because, like domesticated animals of every kind, AI exists to serve its creators. North Korea mastering artificial intelligence is as frightening as North Korea possessing long-range missiles. But that is where the fear should end; the scenario in the movie "Terminator", where artificial intelligence overthrows humanity, is pure fantasy. —Bryan Caplan, professor of economics at George Mason University

I have some concerns about the so-called "intermediate stage", when driverless cars share the road with human drivers… but once human drivers stop driving, traffic overall will be much safer, because accidents caused by human misjudgment will decline.

In other words, I worry about the growing pains along the way, but exploring and advancing technology is human nature. More than anxiety or worry, what I feel is excitement tempered by vigilance. —Andy Nealen, professor of computer science at New York University

It is both scary and exciting. There is no doubt that as AI continues to advance, it will dramatically change the way we live. It could bring technological advances such as driverless cars and take over many jobs, freeing humans to pursue more meaningful activities. Or it could create massive unemployment and open new cyber vulnerabilities. Sophisticated cyberattacks could undermine the reliability of the information we absorb through the internet every day and weaken national and global infrastructure.

However, opportunity favors the prepared, so, whether we like it or not, the possibilities, good and bad, must be explored if we are to be ready for the future. —Lawrence Krauss, director of the Origins Project at Arizona State University

Artificial intelligence is a very distinctive technology, and it is easy to build horrific science fiction scenarios around it, such as AI taking control of every machine on Earth and then enslaving humanity. That is unlikely, but there is a real concern that artificial intelligence might take certain actions without human knowledge. There is therefore genuine worry that the technology could have unintended consequences.

We should indeed think seriously about what those consequences might be and how to respond to them, but doing so need not hinder the development and progress of artificial intelligence. —Sean Carroll, professor of cosmology and physics at Caltech

Artificial intelligence could replace many jobs

I am concerned that employment will suffer as more and more sectors use machines to perform tasks. (I do not think AI is fundamentally different from various other technologies; the boundaries between them are arbitrary.) Can we adapt to this trend by creating new jobs, especially in the service sector and the bureaucracy? Or do we pay people who do not work? —Julian Togelius, professor of computer science at NYU

AI will not kill or enslave humans, but it will kill certain jobs faster than we can come up with a response. White-collar jobs will be affected too. Ultimately we will adapt to this trend, but no major technological change goes as smoothly as we hope. —Tyler Cowen, professor of economics at George Mason University

How to prepare for artificial intelligence

Society as a whole needs to prepare for certain issues. A key question is how to prepare for a dramatic reduction in employment, since future AI technologies will handle many routine tasks. Also, instead of worrying about AI being "too smart", we should worry that early AI technology is not as smart as we think.

Early automated AI systems will make mistakes that most humans would not. Society must therefore be educated about the limitations and implicit biases of AI and machine learning techniques. —Bart Selman, professor of computer science at Cornell University

There are four things to worry about when it comes to artificial intelligence. First, there is the concern that AI will have a negative impact on the labor market. Technology has already had this effect, and it is expected to grow more severe in the coming years. Second, there is the concern that important decisions will be made by artificial intelligence systems. We should seriously discuss which decisions should be made by humans and which by machines. Third, automated lethal weapons systems are a major concern. Finally, there is the problem of "superintelligence": the risk that humans lose control of machines.

Unlike the other three immediate concerns, the risk of superintelligence is still largely speculative and poses no threat in the short term. We have ample time to evaluate it in depth. —Moshe Vardi, professor of computer engineering at Rice University

We cannot simply declare the advancement of AI illegal; if we did, those who violated the ban would gain a huge advantage while operating outside the law, and that is not a good thing. Nor should we deny the rapid development of artificial intelligence: when the rules are being redefined, ignoring this reality means being marginalized.

We should not merely hope for a better living environment in the age of superintelligent machines; hope by itself is not sound planning. Nor should we prepare for a confrontation with a self-aware AI, because doing so would only make it more adversarial, which is clearly unwise. The best plan seems to be to proactively shape developing AI so that it lives in harmony with us, to our mutual benefit. —Jaan Priisalu, senior fellow at the NATO Cooperative Cyber Defence Centre of Excellence and former Director General of the Estonian Information System Authority
