Ain Interactive Wang Shoukun: AI and Dialogue Robots
As early as the 1950s, shortly after the birth of electronic computers, scientists put forward the concept of “human-like intelligence”.
Within artificial intelligence there is a school known as symbolism, which simulates human intelligent behavior through logical reasoning. Its most representative result was the heuristic program called the Logic Theorist, which scientists used to prove 38 mathematical theorems. In addition, against the Cold War backdrop of the time, the governments of both the United States and the former Soviet Union invested heavily in artificial intelligence, so the field enjoyed its first golden age in the 1960s, and people were very optimistic about its prospects.
Herbert Simon, the famous American economist and Nobel laureate in economics, was an early artificial intelligence scholar. He believed that by the end of the 20th century artificial intelligence would replace human intelligence and machines would handle most of people's daily work.
□ Stills from the movie “Artificial Intelligence”
From the mid-1980s to the 1990s, artificial intelligence suffered a major setback. People found that although it could handle complex reasoning, it could not do simple things well: there had been no progress in speech recognition or image recognition, and researchers could not even find a direction for development.
At the same time, with the end of the Cold War, government investment gradually declined, and there was no money left for artificial intelligence. People even coined the term "AI winter," by analogy with nuclear winter. Although AI saw a small renaissance in the early-to-mid 1990s, driven mainly by the rise of expert systems, it still did not climb out of the trough, which lasted until around 2000.
Since 2006, with the gradual rise of deep neural networks, and especially around 2011, when deep neural networks achieved major breakthroughs on a series of traditional machine learning tasks, artificial intelligence has entered the fast lane of revival. In the past two years, especially since AlphaGo defeated Lee Sedol in March 2016, artificial intelligence has once again entered the public eye.
The current research on unsupervised learning is far from enough
So far, the most significant breakthrough in the field of artificial intelligence has been machine learning.
Machine learning can be roughly divided into three categories.
The first category is supervised learning: use a certain amount of labeled data to learn a model, then use that model to classify unlabeled data.
Supervised learning can be used for classification and regression.
Classification means assigning data to categories such as A, B, and C; examples include credit card anti-fraud, face recognition, voice recognition, and fingerprint recognition.
Regression, by contrast, predicts a numerical value: anything whose answer is a number, such as predicting the weather or stock prices, can be handled with regression.
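As a sketch of the regression idea (the numbers are invented for illustration, not taken from the article), a least-squares line fit in plain Python predicts a numeric value from past observations:

```python
# Minimal regression sketch: fit a line y = slope * x + intercept to
# observed (x, y) pairs, then predict a value for a new x.

def fit_line(xs, ys):
    """Ordinary least squares for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

xs = [1.0, 2.0, 3.0, 4.0]   # e.g. day index (hypothetical)
ys = [2.1, 3.9, 6.0, 8.1]   # e.g. observed price, roughly y = 2x
slope, intercept = fit_line(xs, ys)
prediction = slope * 5.0 + intercept  # predicted value for day 5
```

The same two-stage pattern (fit a model on known data, then query it) carries over to the classification stages described next.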
At present, the most thoroughly studied problem in machine learning is classification, which can be roughly divided into two stages:
The first stage: gather labeled data. For example, you tell the machine that this picture contains a face, or that a certain sentence expresses a particular intent; the sentence's voice signal together with its corresponding text is the labeled data.
The second stage: feed the labeled data into a machine learning algorithm for training, generating a model, and then use the model to make predictions. For example, we use photos labeled as containing faces to train a model; later, given a new, unlabeled photo, the model can determine whether it contains a face.
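The two stages above can be sketched in a few lines of plain Python. This is a deliberately simplified, hypothetical classifier — a nearest-centroid rule on a made-up one-dimensional "face score" feature — not a real face recognizer:

```python
# Stage 1 + Stage 2 of supervised classification, in miniature.

def train_centroids(labeled_data):
    """Training stage: compute the mean feature value for each label."""
    sums, counts = {}, {}
    for x, label in labeled_data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Prediction stage: pick the label whose centroid is closest."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Labeled data: (feature, label) pairs. Values are invented placeholders.
labeled = [(0.9, "face"), (0.8, "face"), (0.1, "no_face"), (0.2, "no_face")]
model = train_centroids(labeled)   # stage 2: train
result = predict(model, 0.85)      # classify a new, unlabeled point
```

Real systems use far richer features and models, but the split — learn from labeled examples, then score unlabeled ones — is the same.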
□ Stills from the movie “Machine Girl”
The second category is unsupervised learning, which finds patterns or structure in unlabeled data. Common unsupervised learning tasks include anomaly detection, clustering, and association analysis.
Anomaly detection finds anomalous points or patterns, such as peaks or troughs, in a series of data; clustering groups the similar parts of a dataset together.
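As a sketch of clustering (toy values, assuming exactly two groups), a minimal one-dimensional k-means in plain Python gathers similar numbers together with no labels involved:

```python
# Minimal 1-D k-means sketch: group similar values without any labels.

def kmeans_1d(values, k=2, iters=20):
    centers = [min(values), max(values)]  # crude initialization for k=2
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two obvious clumps of invented numbers; the algorithm finds them itself.
values = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
centers, groups = kmeans_1d(values)
```

Note that, unlike classification, nothing here says what the groups *mean* — which is exactly the evaluation difficulty the article discusses below.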
The following case is an example of anomaly detection:
Oil transported through thousands of kilometers of pipeline develops a problem. The cause might be pipeline damage from a natural disaster, or a hole pried open in the pipe. How do you find the point where the problem occurred?
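One simple, hypothetical way to approach such a case is statistical anomaly detection: flag any sensor reading that lies far from the overall distribution. The readings below are invented for illustration:

```python
import statistics

# Hypothetical pressure readings sampled along the pipeline; a leak or
# breach would show up as a reading far from the rest.
readings = [5.1, 5.0, 5.2, 4.9, 5.1, 2.3, 5.0, 5.2]

mean = statistics.mean(readings)
stdev = statistics.pstdev(readings)

# Flag readings more than two standard deviations from the mean.
anomalies = [i for i, r in enumerate(readings)
             if abs(r - mean) > 2 * stdev]
```

Here `anomalies` holds the indices of suspicious sensors; in practice, more robust methods are needed since a large anomaly inflates the standard deviation it is measured against.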
Although it is widely recognized that unsupervised learning is more important than supervised learning, because the former can discover new things — things not previously known or seen — in fact, at the level artificial intelligence has currently reached, more than 90% of the effort is concentrated on supervised learning.
In supervised learning, prediction accuracy and recall are hard metrics. Given the same photos containing faces and the same recall rate, if model A achieves 90% accuracy and model B achieves 95%, then B is unambiguously better than A.
The human process of exploring knowledge is an unsupervised learning process, and while important, it has no such hard metrics. For example, in clustering, if A produces five clusters and B produces six, how can we determine theoretically that A is better than B, that five clusters must beat six? Likewise, if A flags a certain phenomenon as an anomaly while B sees the same phenomenon but does not call it one, which makes more sense?
From an industry perspective, we hope that a large number of such learning processes can help us understand the world, but in fact our current research on unsupervised learning is far from enough.
Talking with colleagues, I noticed a very obvious phenomenon: most researchers in artificial intelligence study supervised learning, because achievements there are easily recognized by the academic community — as long as your results on the data are good, they will be acknowledged. Unsupervised learning, the automatic discovery and accumulation of patterns and knowledge, attracts few people, even though everyone generally agrees it is very important.
□ Stills from the sci-fi movie “Artificial Intelligence”
The third category, reinforcement learning, promises an even more compelling goal: learning from feedback, that is, learning in a changing environment. The term was originally used to describe a scenario in a casino.
Suppose you enter a casino with 1,000 yuan, and the casino has 1,000 slot machines. Each slot machine takes a 1-yuan bet, but each machine has a different probability of winning or losing. How should you play to win the most money?
A reasonable strategy is to spend one-third of your money exploring: pick a slot machine and try it; if you win, keep playing it; if you lose, switch to another. By the time a third of your money is spent, you may have tried dozens or hundreds of slot machines. Then spend the remaining two-thirds on the machine with the highest observed chance of winning.
This strategy can be learned through reinforcement learning, whose way of thinking is to use feedback to learn an optimal strategy in an uncertain environment so as to maximize returns. Reinforcement learning is used in AlphaGo's algorithm. It is very close to real life and can even help us solve some real-life problems, so research in this area keeps growing.
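The explore-then-exploit strategy described above can be sketched as follows. The machine payout probabilities are invented for illustration, and this simple scheme is only a stand-in for the general multi-armed bandit setting, not a full reinforcement learning algorithm:

```python
import random

random.seed(0)

# Hypothetical slot machines: each pays 1 yuan with its own hidden probability.
true_probs = [0.2, 0.5, 0.8]

def pull(machine):
    """One 1-yuan bet: feedback is a win (1.0) or a loss (0.0)."""
    return 1.0 if random.random() < true_probs[machine] else 0.0

# Exploration phase: spend part of the budget estimating each machine's
# payout rate from feedback alone (the true probabilities stay hidden).
explore_pulls = 100
estimates = []
for m in range(len(true_probs)):
    estimates.append(sum(pull(m) for _ in range(explore_pulls)) / explore_pulls)

# Exploitation phase: bet the rest of the budget on the best estimate.
best = max(range(len(estimates)), key=lambda m: estimates[m])
```

With enough exploration pulls, `best` identifies the highest-paying machine; balancing how much to explore versus exploit is exactly the trade-off reinforcement learning formalizes.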
The power of deep learning
A major breakthrough in machine learning today is the much-discussed deep learning (Deep Learning). The deep neural networks it uses bear some resemblance to how the human brain works.
The human brain has some 15 billion neurons, divided into different regions. The neurons in each region are not themselves different; the division of labor arises from function and location. For example, the neurons near our eyes are trained to sense signals from the eyes: they do not respond to human language, but they can process visual signals and pass them on to the brain.
But this does not mean the neurons themselves differ. In fact, scientists have transplanted neuronal cells from mouse embryos into the damaged optic-nerve area of adult mice, and the two successfully fused and established connections. The same holds for the deep neural network technology we use in AI: the neurons are alike, and their outputs differ only because of layer and position.
Why is everyone willing to use deep neural networks? An important reason is that with deep neural networks we no longer have to spend great effort on feature engineering (Feature Engineering), that is, hand-picking features.
You can feed in all the features you can find, as long as there are enough layers and enough computing resources.
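To illustrate the point that the units themselves are interchangeable, here is a minimal forward pass through a tiny fully connected network in plain Python: every neuron performs the exact same computation, and only its weights and position in the network differ. All numbers are arbitrary placeholders:

```python
import math

def neuron(inputs, weights, bias):
    """Every unit is the same computation: weighted sum, then a sigmoid."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_matrix, biases):
    """A layer is just many identical neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A hypothetical 2-3-1 network: raw features go straight in,
# with no hand-picked feature engineering.
x = [0.5, -1.2]
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.8], [0.7, -0.2]], [0.0, 0.1, -0.1])
output = layer(hidden, [[0.5, -0.6, 0.9]], [0.2])
```

In a real deep network, training adjusts the weights so that early layers learn to extract useful features on their own — which is what makes manual feature engineering largely unnecessary.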