AI is still a long way from true full autonomy


With the rise of artificial intelligence, AI's existing problems have gradually been exposed. Decisions made by AI still fall short of the best human decisions and often contain bias. So where does the problem lie? In a recent article, author Marianne Bellotti explains why and offers her own views on AI design principles. Let's take a look at the specifics of the article.

According to experts, data scientists spend about 80% of their time cleaning data, and the key to enabling centralized AI-driven decision-making is breaking down barriers between jobs and creating interoperable processes for AI models. Yet in the current AI field, no matter how much time and money is spent, it is still impossible to achieve anything close to the global situational awareness of the human brain. As data science and artificial intelligence advance, the amount of data required to build AI models keeps growing. Autonomous-driving companies have invested tens of billions of dollars and still have not achieved fully autonomous driving; social media companies have invested billions trying to use AI to clean up harmful content, yet still rely heavily on humans to moderate their platforms. AI does not yet have the ability to make optimal decisions. Moreover, rather than eliminating human bias when building AI models, people keep trying to build the “perfect” AI model from an ever-increasing amount of data, but that data is spotty.

▍The relationship between decisions and data

When trying to solve a difficult problem, the first thing to do is break it down: What assumptions are being made? How do those assumptions frame the problem that needs to be solved? If the assumptions were different, would a different problem be solved? What is the relationship between the problem you want to solve and the outcome you want to achieve? For AI, producing better decisions is obviously the point, and overall situational awareness is assumed to matter: the premise is that access to more data is the key to better decisions, and that better decisions mean fewer negative impacts. In real life, decision makers often optimize in order to save costs. Ultimately, though, a decision is judged good or bad by its results, and even correct analysis needs a little luck. Even the most carefully and thoroughly constructed strategy, backed by excellent data, cannot guarantee that the decision is right until the results are in. The decision-making process is therefore not an objective analysis of data but an active negotiation among stakeholders about risk tolerance and priorities. Data is often used not to provide insight but as a shield to protect stakeholders from blame, and “perfect” information can actually reduce the quality of decisions by raising the noise level.

This may seem counterintuitive: shouldn’t perfect information automatically improve the decision-making process? In fact, more information may change the organizational strategy behind the decision. AI can correctly identify content, but decisions based on that content are heavily influenced by the norms and expectations of users and organizations. The best way to improve team decision-making is not to get more data, but to improve communication between stakeholders. So do people really need to spend billions cleansing or expanding their data to benefit from AI?

▍Poorly designed AI can lead to huge security risks

Currently, the way people evaluate data quality is misleading. “Clean” data is assumed to be accurate, unbiased, and reusable. But in reality, clean is not the same as accurate, and accurate is not the same as actionable. Problems in any of these three aspects can seriously degrade the performance of an AI model and distort the quality of its results. Some data problems are obvious, such as incorrect values, corruption, or non-standard formats. Others are more subtle: data acquired under specific circumstances and then reused inappropriately; data at the wrong level of granularity for the model; or data that is not standardized, so the same fact is represented or described in different ways. Solving any one of these problems with a single source of truth would be very difficult; solving all of them is practically impossible when a determined attacker is trying to inject bad data into large systems to corrupt the model. What cannot be ignored is that while AI creates new opportunities, it also brings new vulnerabilities: new ways to attack and to be attacked. AI may give rise to a new generation of attack tools, such as spoofing of satellite positioning data (location spoofing). Techniques for fooling or misleading AI systems by corrupting their data are being developed alongside AI techniques themselves.
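To make the distinction concrete, here is a minimal sketch (in Python, not from the original article) of the kind of automated checks that catch only the obvious problems, such as missing values, duplicate rows, and the same fact spelled several ways, while saying nothing about whether the data has the right granularity or was collected in a context that makes it reusable. All field names and the sample data are hypothetical.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame) -> dict:
    """Surface only the *obvious* data problems; "clean" here does not mean accurate or actionable."""
    report = {}
    # Missing or corrupted values are easy to count...
    report["missing_ratio"] = df.isna().mean().to_dict()
    # ...and so are exact duplicate rows.
    report["duplicate_rows"] = int(df.duplicated().sum())
    # Non-standardized labels: the same fact written different ways
    # (e.g. "NYC" vs "New York City") shows up as suspiciously similar categories.
    if "city" in df.columns:
        labels = df["city"].astype(str).str.strip().str.lower().unique()
        report["distinct_city_labels"] = sorted(labels.tolist())
    return report

# Hypothetical usage: the report flags formatting noise, but it cannot tell you
# whether the data matches the granularity of the decision the model supports,
# or whether the circumstances of collection make it appropriate to reuse here.
if __name__ == "__main__":
    df = pd.DataFrame({"city": ["NYC", "New York City", "nyc", None],
                       "sales": [100, 100, 105, 90]})
    print(basic_quality_report(df))
```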

Current AI systems depend entirely on the quality of their data, so AI is flawed not because the technology is immature, but because it was designed in this vulnerable form from the start. An AI system must therefore be designed to be flexible enough to cope with bad data. What would it mean to change the design so that the risk of attack is reduced? It would mean making AI “antifragile”.

▍What is antifragile AI?

“Antifragile” means that AI systems can not only recover from failure but become stronger and more effective after experiencing it. Building AI systems around the factors that actually improve decision-making creates the opportunity for antifragile AI. Cognitive science suggests that good decision-making is the product of proactively articulating hypotheses, designing tests to validate assumptions, and establishing clear channels of communication among stakeholders. Many of the cognitive biases behind “human error” are failures in these three areas: when people do not clearly articulate their assumptions, they apply solutions that are unsuitable for the actual conditions; when people do not test their assumptions, they have no way to adjust otherwise correct decisions to changing conditions; and when information cannot be effectively shared among operators, opportunities to notice changing conditions and challenge assumptions are lost, to everyone’s detriment. AI is vulnerable to bad data because current research overemphasizes its use for classification and identification and undervalues its use for suggestion and contextualization, and decisions made that way are easily corrupted. Designing antifragile AI is difficult because there is a big difference between treating the output of an algorithm as a conclusion and treating it as a suggestion or a prompt. Decision makers may treat AI output as a conclusion in order to save costs, and this is a catastrophic mistake that already occurs in applying artificial intelligence today.

By contrast, AI systems in medicine can improve the quality of decision-making precisely because many diagnoses do not have a single correct answer. In medical diagnosis, any set of symptoms has a range of possible causes with varying probabilities. A clinician builds, in their mind, a decision tree of every possible cause they can think of and designs tests to rule some of those causes out. Medical diagnosis is a cyclic process of forming hypotheses, running tests, and further narrowing the set of possible causes until a solution is found.
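A minimal sketch of that cycle, again not from the original article and with entirely hypothetical conditions, tests, and rule-out relationships: the AI’s advisory role is to keep track of the remaining candidates and suggest which test would narrow the set the most, while the human orders the test and interprets the result.

```python
# A hedged sketch of the "form hypotheses, run tests, narrow the set" loop.
# The conditions, test names, and rule-out table are hypothetical illustrations.
CANDIDATES = {"flu", "strep throat", "mononucleosis", "allergies"}

# Which candidate causes each (hypothetical) test rules out when it comes back negative.
RULES_OUT_IF_NEGATIVE = {
    "rapid strep test": {"strep throat"},
    "monospot test": {"mononucleosis"},
    "allergy panel": {"allergies"},
}

def suggest_next_test(candidates: set) -> str | None:
    """Suggest the test that could eliminate the most remaining candidates (the AI's advisory role)."""
    scored = [(len(ruled_out & candidates), test)
              for test, ruled_out in RULES_OUT_IF_NEGATIVE.items()]
    best_score, best_test = max(scored)
    return best_test if best_score > 0 else None

def narrow(candidates: set, test: str, result_negative: bool) -> set:
    """The clinician reports the result; a negative result removes the causes that test rules out."""
    if result_negative:
        return candidates - RULES_OUT_IF_NEGATIVE[test]
    return candidates

# Hypothetical usage: keep suggesting tests until one cause remains or no test can help.
remaining = set(CANDIDATES)
while len(remaining) > 1:
    test = suggest_next_test(remaining)
    if test is None:
        break
    # In practice the result comes from the clinician; here we pretend every test is negative.
    remaining = narrow(remaining, test, result_negative=True)
print(remaining)
```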

Even when the data is poor, the diagnostic process can be accelerated by prompting doctors to consider other possible causes. In this context, AI can improve communication and knowledge sharing among medical professionals and surface relevant patient information at critical moments. By contrast, AI products that try to outperform doctors at distinguishing benign from malignant tumors have been plagued by poor-data problems.

▍Powerful AI under bad data

Before reaching for the cutting edge of artificial intelligence, researchers and developers should first think about how the problem to be solved is defined. If AI is used to improve decision-making, it should guide decision makers through hypothesis testing rather than try to outperform the experts. An AI that tries to outperform experts becomes completely dependent on the quality of its data, creating a set of vulnerabilities that attackers can easily exploit. When AI is trained not to be the expert but to improve and support human decision-making, it is resilient to bad data and can become antifragile. In this case AI does not make the decision; instead, it helps people articulate the assumptions behind a decision, communicates those assumptions to the people involved, and alerts decision makers when the actual conditions underlying those assumptions change significantly. AI can help decision makers figure out which states are possible, or under which conditions they are possible. Such a solution strengthens the overall capability of the decision-making team by addressing existing weaknesses rather than creating new ones through bad data.
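As a rough illustration of that “assumption monitor” role, here is a short sketch, not from the original article, in which the stated assumptions behind a plan are made explicit and checked against current observations; the variable names, bounds, and sample numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One explicitly stated assumption behind a decision (name and bounds are hypothetical)."""
    name: str
    lower: float
    upper: float

    def holds(self, observed: float) -> bool:
        return self.lower <= observed <= self.upper

def check_assumptions(assumptions, observations):
    """Return alerts for assumptions the current observations no longer support.
    The AI's job here is to surface the drift; the decision itself stays with the humans."""
    alerts = []
    for a in assumptions:
        observed = observations.get(a.name)
        if observed is not None and not a.holds(observed):
            alerts.append(f"Assumption '{a.name}' no longer holds: "
                          f"observed {observed}, expected {a.lower}-{a.upper}")
    return alerts

# Hypothetical usage: the plan assumed 900-1100 units of daily demand and a supplier
# lead time under 5 days; the monitor flags the assumption that has drifted.
assumptions = [Assumption("daily_demand", 900, 1100), Assumption("lead_time_days", 0, 5)]
print(check_assumptions(assumptions, {"daily_demand": 1450, "lead_time_days": 4}))
```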

▍Artificial intelligence is not yet “intelligent”

After the article was published, many netizens expressed agreement with the author’s point of view. One said: “This is one of the most sensible articles I have read on the topic of AI in recent years, and it will benefit workers in related fields.”


Others made a similar point to the author: “People are so fascinated by using AI to automate human work that they forget AI has far greater potential to assist humans.”


Some netizens believe that the key to the success of artificial intelligence is not a large amount of data, but a small amount of data drawn from successful experience.


In addition, some netizens said: “Artificial intelligence has nothing to do with human ‘intelligence’; it is really just computer information that still needs to be interpreted by people.”


It seems that artificial intelligence is still a long way from true full autonomy. What do you think of this?
