Why are the major powers competing over land combat robots?

In an arms race, once one country takes the first step, a second and a third will follow, until a tense and opaque web of great-power rivalry takes shape.

Amid the greatest wave of introspection in human history, what do the world's armies intend by developing combat robots? Viewed against the long arc of military reform, how far has combat-robot technology actually progressed? Will artificial intelligence become a central front in the arms race? Is it the arms race itself that frightens us, or the prospect that robotics at the frontier might one day acquire something like autonomy?
From the perspective of technological development, many high technologies were developed on the back of military opportunities. In 1969, the ARPANET was born. A widespread account holds that ARPANET was a communication network built by the US Department of Defense to survive a Soviet nuclear attack: even if some command nodes were destroyed, the remaining nodes could still communicate normally. The network was originally used by the U.S. Defense Advanced Research Projects Agency for military research purposes and is considered the predecessor of today's Internet. From there, networking technology developed from local interconnection to wide-area interconnection, and from military use to civilian use.
However, looking at the AI deployment projects with the most notable results worldwide, artificial intelligence has played its largest role in healthcare. The reason is not only that the medical field offers large amounts of data for machine learning, but more importantly that medical problems have clear boundaries and well-defined, rule-like procedures. In this data-intensive, knowledge-intensive, mentally laborious field, machine learning works more like goal-directed search and deduction: exhaustive computation backed by powerful supercomputers. That is still a long way from what people would recognize as genuine "intelligence".
In the real world, there are few problems with well-defined boundaries.
Military terrain is complex, and the combat environment shifts with the maneuvers of both sides. Can a robot raised under both soft and hard rule constraints improve its autonomous decision-making under the special constraints of military operations, and grow into a steel intelligence that fights alongside soldiers?
Britain is planning to build an army of robots for the next generation of warfare. According to the UK's Chief of the Defence Staff, General Nick Carter, by the 2030s about a quarter of the British military could be robotic, with reports describing a future force of some 120,000 in which as many as 30,000 would be "Terminator"-style robot soldiers.
At the same time, the United States is developing robotic combat vehicles to enhance the Army's fighting strength. By the 2030s, these fast, well-armed vehicles are expected to patrol the battlefield and fight alongside soldiers.

That developed countries are the first to make robots part of their armaments surprises few people. Robots may make up for recruitment shortfalls, reduce dependence on human soldiers, and diversify a country's military power. Military investment in robotics has risen in many countries, led by the United States, and robots are likely to become a key part of national arsenals in the future.
Seen historically, when the social form evolves toward intelligence, the form of war inevitably evolves with it: from the cold-weapon era of knives, spears, swords, and halberds, which turned on close combat between soldiers, to the hot-weapon era of guns and artillery, which elevated strategy and tactics. Are we now about to enter the era of information warfare, signal warfare, and unmanned combat?
Currently, the U.S. Army Research Laboratory (ARL) is training robots to test autonomous navigation techniques on rough terrain, with the goal of having them cooperate with human teammates. ARL is also developing manipulation robots that can interact with objects, so that human soldiers need not be exposed to those tasks.

But has the underlying technology of these robots reached a point where we should be daunted? With these questions, IEEE Spectrum senior editor Evan Ackerman recently traveled to the Adelphi Laboratory Center in Maryland and wrote this first-person account. AI Technology Review has edited it to explore, with you, the real capabilities of military ground combat robots.
01 Robots perform poorly in cluttered environments
"I probably shouldn't be standing this close," I said to myself as the robot slowly approached a large branch on the floor in front of me. It isn't the size of the branch that makes me nervous; it's the autonomous robot. I know what it is supposed to do, but I am not at all sure what it will do next.

If all goes as ARL's roboticists planned, the robot will recognize the branch, grab it, and drag it to the side of the road. The robot knew exactly what it was doing, but standing in front of it I was still on edge, so I took a small step back.
Named "RoMan", for Robotic Manipulation, the robot is about the size of a large lawnmower, with a tracked base that can handle most road conditions. At the front it has a short torso fitted with cameras and depth sensors, and a pair of arms modeled on the disaster-response robot RoboSimian, which NASA's Jet Propulsion Laboratory (JPL) originally developed for the DARPA Robotics Challenge to perform disaster-related tasks.

Today, RoMan's task is to clear the road. This is a multi-step task that ARL wants the robot to accomplish as autonomously as possible: instead of instructing the robot how to grab a particular object or where to move it, the operator simply tells RoMan to "clear a path" and lets the robot decide on its own how to complete the task.
"The ability to make autonomous decisions" is what earns these machines the name "robot". We value robots for their ability to sense what is happening around them, make decisions based on that sensed information, and then take effective action without human intervention. In the past, robot decisions followed highly structured rules. Robots work well under rules in a structured environment like a factory, but in a chaotic, unfamiliar, or poorly defined environment such as a battlefield, reliance on rules makes a robot "clunky", because it cannot forecast precisely and plan everything ahead of time.
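To make the contrast concrete, here is a minimal sketch of the sense-decide-act loop described above, with a hard-coded rule table standing in for "highly structured rules"; all names and percepts are hypothetical. The moment a percept falls outside the table, the rule-based robot has no answer.

```python
# Minimal sense-decide-act loop (illustrative only, not ARL code).
# The rule table works in a structured environment; in a cluttered one,
# percepts stop matching any rule and the robot is left without an action.

RULES = {
    "path_clear": "drive_forward",
    "obstacle_small": "push_aside",
    "obstacle_large": "stop_and_wait",
}

def sense() -> str:
    """Stand-in for the perception stack; returns a symbolic percept."""
    return "obstacle_unknown_shape"  # a percept the rules never anticipated

def decide(percept: str) -> str:
    # Rule-based decision: fine in a factory, brittle on a battlefield.
    return RULES.get(percept, "no_rule_available")

def act(action: str) -> None:
    print(f"executing: {action}")

if __name__ == "__main__":
    act(decide(sense()))  # prints "executing: no_rule_available"
```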
02 Deep learning: a “stumbling block”
Like many robots, including household vacuums, drones, and self-driving cars, RoMan uses artificial neural networks to tackle the challenges of semi-structured environments. About a decade ago, artificial neural networks began to be applied to a wide variety of semi-structured data, which until then had been a puzzle for computers operating on rule-based programming (also known as "symbolic reasoning").
Rather than matching specific data structures, artificial neural networks recognize patterns in data, identifying new data that is similar but not identical to what the network has encountered before. Part of their appeal is that they are trained by example: a neural network learns from labeled data and forms its own patterns of recognition. A neural network with multiple layers of abstraction is called "deep learning".
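As a toy illustration of "training on examples" with "multiple layers of abstraction", here is a minimal two-layer network that learns the XOR pattern from four labeled examples. This is a generic from-scratch sketch, not anything RoMan runs.

```python
import numpy as np

# A tiny two-layer neural network learning XOR from labeled examples --
# a toy illustration of training by example, not production code.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels

W1 = rng.normal(size=(2, 8))   # first layer: raw inputs -> hidden features
W2 = rng.normal(size=(8, 1))   # second layer: hidden features -> prediction

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer builds a more abstract representation.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: nudge weights to better match the labeled examples.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out, 2))  # should be close to [[0], [1], [1], [0]]
```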
Although humans are involved in the training process, and artificial neural networks were inspired by the neural networks of the human brain, deep learning systems fundamentally recognize patterns in a different way than humans see the world. We often cannot understand the relationship between a deep learning system's input data and its output, which is why such systems are often called "black box" models.

This "black box" opacity in decision-making causes problems for robots like RoMan and for labs like ARL's, and it means we have to be careful with robots that rely on deep learning systems.
Deep learning systems are good at recognizing patterns, but they lack the human understanding of the world that lets humans make rational decisions. That is why deep learning performs best in well-defined, narrowly scoped applications.
"Deep learning is useful when you have both well-formed inputs and outputs, and you can fully express your problem in those inputs and outputs," said Tom Howard, director of the Robotics and Artificial Intelligence Laboratory at the University of Rochester, who has developed natural language interaction algorithms for RoMan and other ground robots. "The question is, when programming intelligent robots, how large in practice can the problems we hand to these deep learning systems get?"

Howard explained that when you apply deep learning to higher-level problems, the range of possible inputs can become very large, and processing data at that scale is hard. The stakes are especially high when a 170-kilogram, two-armed military robot behaves unpredictably or inexplicably in the middle of a mission.
A few minutes later, RoMan still hadn't moved; it sat there, brooding over the branch, its arms wiggling like a mantis. For the past 10 years, ARL's Robotics Collaborative Technology Alliance (RCTA) has worked with researchers from Carnegie Mellon University, Florida State University, General Dynamics Land Systems, JPL, MIT, QinetiQ North America, the University of Central Florida, the University of Pennsylvania, and other top institutions to develop robotic autonomy for future ground operations. RoMan is one representative of that larger project.
The "clear a path" task RoMan is mulling over is difficult for robots because it is so abstract. RoMan needs to identify objects that might be in the way, infer the physical properties of those objects, figure out how to grab them, decide which maneuver (push, pull, lift, and so on) is best, and then execute that behavior in full. For a robot with limited knowledge of the world, the task has too many steps and too many unknowns.
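One way to picture why the task is "too many steps" is to write the decomposition out. The sketch below is a hypothetical pipeline with made-up helper names, in which each stage is a separate decision that can fail and feed its error into the next.

```python
# A hypothetical decomposition of "clear a path" into the steps the text
# describes -- detection, physical reasoning, grasp selection, maneuver
# choice. Names and thresholds are illustrative, not ARL's actual stack.

def clear_path(scene):
    for obj in detect_obstacles(scene):              # 1. what is in the way?
        props = estimate_physical_properties(obj)    # 2. how heavy/rigid is it?
        grasp = plan_grasp(obj, props)               # 3. where to grab it?
        maneuver = choose_maneuver(props)            # 4. push, pull, or lift?
        execute(grasp, maneuver)

def detect_obstacles(scene):
    return [o for o in scene if o["blocks_path"]]

def estimate_physical_properties(obj):
    return {"mass_kg": obj.get("mass_kg", 5.0), "rigid": True}

def plan_grasp(obj, props):
    return {"target": obj["name"], "point": "midpoint"}

def choose_maneuver(props):
    # Each branch is a separate decision the robot must get right;
    # an error at any step cascades into the next one.
    return "drag" if props["mass_kg"] > 10 else "lift"

def execute(grasp, maneuver):
    print(f"{maneuver} {grasp['target']} at {grasp['point']}")

clear_path([{"name": "branch", "blocks_path": True, "mass_kg": 12.0}])
```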
03 A "modular" understanding of the world
Ethan Stump, chief artificial intelligence scientist for ARL’s Manipulation and Mobility Program, said: “Making robots gradually understand the world is what sets ARL’s robots apart from other robots that rely on deep learning.”
"The Army may be called on to operate anywhere in the world, and we cannot possibly collect detailed data on every terrain a robot might face. We may be sent to a forest on the other side of the world that we've never set foot in, and we'll have to perform as well there as in our own backyard," he says. Most deep learning systems, however, only operate reliably within the domains and environments they were trained on. And when a combat robot's deep learning system performs poorly, simply collecting more data is not a fix, because the amount of data that can be collected is limited.
ARL's robots also need a broad awareness of what they are doing. "In the standard order for executing a mission, you have goals, constraints, and a phrase expressing the commander's intent," Stump explained. In other words, RoMan may need to clear a path quickly, or it may need to clear a path quietly, depending on the specific requirements of the mission. That is a tall order even for the most advanced robots today.
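A rough way to see what "goals, constraints, and commander's intent" could mean in code: the same task yields different plans depending on the constraint parameters. The class, field names, and thresholds below are invented for illustration.

```python
# Sketch of "commander's intent" as planner constraints (hypothetical
# names): the same task, clearing a path, is parameterized differently
# depending on whether speed or stealth is the priority.

from dataclasses import dataclass

@dataclass
class MissionConstraints:
    max_noise_db: float   # how quietly the task must be done
    time_budget_s: float  # how quickly the task must be done

def plan_clearing(constraints: MissionConstraints) -> str:
    if constraints.max_noise_db < 60:
        return "drag branch slowly, avoid dropping it"   # quiet variant
    if constraints.time_budget_s < 120:
        return "lift and throw branch aside"             # fast variant
    return "drag branch at normal speed"

print(plan_clearing(MissionConstraints(max_noise_db=55, time_budget_s=600)))
print(plan_clearing(MissionConstraints(max_noise_db=90, time_budget_s=60)))
```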
As I watched, RoMan set about moving the branch again. ARL's approach to autonomy is modular, combining deep learning with other techniques, with RoMan helping ARL determine which tasks are appropriate for which techniques.
Currently, RoMan is testing two different methods of identifying objects in 3D sensor data: the University of Pennsylvania's approach is based on deep learning, while Carnegie Mellon University uses a perception-through-search method that relies on a more traditional database of 3D models. Perception-through-search works only if the objects to look for are determined in advance, but it is much faster to train, since each object needs just one model; it can also identify objects accurately even when they are hard to perceive, for example partly occluded or upside down. ARL runs the two methods concurrently, letting them compete, and selects whichever proves more general and effective.
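The compete-and-select idea might look something like the sketch below, with both recognizers reduced to stubs returning made-up scores; the real pipelines are of course far more involved.

```python
# Illustrative arbitration between two object-recognition pipelines, as
# the text describes: a learned detector and a model-database search.
# Both functions here are stubs with invented scores, not the real systems.

def deep_learning_detector(cloud):
    # Generalizes to unseen objects, but confidence can be miscalibrated.
    return {"label": "branch", "confidence": 0.71}

def search_against_model_db(cloud, known_models=("branch", "rock", "crate")):
    # Only works for objects modeled in advance, but is robust to
    # occlusion and odd orientations when it does match.
    best = {"label": "branch", "confidence": 0.88}
    return best if best["label"] in known_models else None

def perceive(cloud):
    candidates = [deep_learning_detector(cloud), search_against_model_db(cloud)]
    candidates = [c for c in candidates if c is not None]
    return max(candidates, key=lambda c: c["confidence"])

print(perceive(None))  # {'label': 'branch', 'confidence': 0.88}
```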
Perception is one of the things deep learning does best. Maggie Wigness, a computer scientist at ARL, said: "Thanks to deep learning, the field of computer vision has made great progress, and we have successfully generalized some deep learning models trained in a single environment so that they work well in new environments."
ARL's modular approach combines the strengths of several techniques. For example, a perception system that classifies terrain using deep learning vision can work alongside an autonomous driving system based on inverse reinforcement learning. With inverse reinforcement learning, a model can be quickly created or refined from observations of human soldiers, whereas traditional reinforcement learning optimizes a solution against a given reward function; inverse reinforcement learning is typically used when you are not sure what the best behavior even is. This fits the Army's operational mindset, which generally assumes that a well-trained human nearby, guiding the robot, is the right way to operate.
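To show the difference from ordinary reinforcement learning, here is a toy linear inverse-RL update that infers reward weights from a human demonstration instead of from a hand-written reward function; the features, numbers, and perceptron-style rule are all illustrative.

```python
import numpy as np

# A toy linear inverse-reinforcement-learning update (illustrative): infer
# reward weights from a human demonstration rather than hand-writing a
# reward function. Features are invented: [speed, roughness, exposure].

demo = np.array([0.3, 0.1, 0.0])  # feature averages of the soldier's driving

candidates = np.array([            # feature averages of robot trajectories
    [0.9, 0.7, 0.5],               # fast but rough and exposed
    [0.3, 0.2, 0.1],               # close to the human's style
    [0.1, 0.0, 0.9],               # slow and exposed
])

w = np.zeros(3)
for _ in range(100):
    best = candidates[np.argmax(candidates @ w)]  # robot's current favorite
    # Perceptron-style step: make the demonstrated behavior score higher
    # than whatever the current reward weights prefer.
    w += 0.1 * (demo - best)

print(np.round(w, 2))                   # learned reward weights
print(int(np.argmax(candidates @ w)))  # -> 1, the human-like trajectory
```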
"So we want a technique that lets soldiers intervene, combined with a handful of battlefield examples: if we need a new behavior, we can update the system then and there. Deep learning alone would require far more data and time," Wigness said.
04 How to operate safely
Deep learning must contend not only with data sparsity and rapid adaptation, but also with robustness, interpretability, and safety. "These problems aren't unique to combat robots, but they are especially important in military operations, where the consequences can be lethal," Stump said. To be clear, ARL is not currently studying lethal autonomous weapon systems; it is laying the groundwork for autonomous systems across the U.S. military. In the future, combat robots may well act like RoMan.
Stump also said that safety will always be a priority, but there is currently no clear way to guarantee the safety of a deep learning system. "Deep learning under safety constraints is an important research effort, but adding those constraints to a system is genuinely difficult, because you don't know where the constraints already embedded in the system come from. So when the mission changes, or the environment changes, those constraints become hard to deal with."
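One generic pattern for imposing safety constraints on a learned policy (not necessarily ARL's approach) is a "shield" that filters the network's proposed action through hand-written, auditable rules, as in this sketch with invented action names and thresholds.

```python
# A generic safety "shield" around a learned policy (illustrative only):
# the network proposes, a hand-written constraint checker disposes. The
# constraint stays auditable even when the policy is a black box.

SAFE_ACTIONS = {"stop", "slow_forward", "fast_forward",
                "turn_left", "turn_right"}

def learned_policy(observation):
    # Stand-in for a deep network's output; could be anything.
    return "fast_forward"

def shield(action, clearance_m, min_clearance_m=2.0):
    if action not in SAFE_ACTIONS:
        return "stop"          # unknown action: fail safe
    if action == "fast_forward" and clearance_m < min_clearance_m:
        return "slow_forward"  # hard rule: never move fast near an obstacle
    return action

obs = {"clearance_m": 0.8}
print(shield(learned_policy(obs), obs["clearance_m"]))  # -> "slow_forward"
```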
It is not even a data problem; it is an architecture problem. Whether a module in ARL's architecture uses deep learning for perception or inverse reinforcement learning for driving, each can form part of a broader autonomous system that meets the military's requirements for safety and adaptability.
Can an integrated deep learning system fight?
Nicholas Roy heads the Robust Robotics Group at MIT. Describing himself as an "instigator" because he feels deep learning should not be deified, he agrees with ARL's roboticists that deep learning methods are often not up to the challenges the military faces.

"The Army is constantly entering new environments, and the adversary is always trying to change the environment, so the training process robots go through simply won't match what the Army needs," Roy said. "To a large extent, the demands of deep networks are mismatched with the Army's missions, and that's a problem."
In his RCTA work, Roy has emphasized abstract reasoning for ground robots. He argues that deep learning is a useful technique when applied to problems with well-defined functional relationships, but once you start working on abstract concepts, it is not clear that deep learning is even viable.

"I'm very interested in finding out how neural networks and deep learning can be assembled in a way that supports higher-level reasoning," Roy said. "Ultimately it comes down to how to combine multiple low-level neural networks to express higher-level concepts, and we do not yet know how to do that."
Roy gives the example of two separate neural networks, one that detects cars and one that detects red objects. Combining the two into one larger network that detects red cars is much harder than composing them in a symbolic reasoning system built on structured, logical rules. "A lot of people are working on this question, but I haven't seen research that has successfully advanced this kind of abstract reasoning."
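Roy's example is easy to state symbolically, which is exactly his point: with two detector stubs (hypothetical and trivially simplified here), the composition is a one-line conjunction, whereas training a single network to represent "red car" remains the open problem.

```python
# The composition problem in miniature (hypothetical stubs): symbolically,
# "red car" is just a conjunction over two detectors' outputs; making one
# network represent the combined concept is what remains hard.

def car_detector(region):      # stand-in for a trained car network
    return region.get("is_car_score", 0.0) > 0.5

def red_detector(region):      # stand-in for a trained color network
    return region.get("redness", 0.0) > 0.5

def red_car(region):
    # Symbolic composition: trivially correct, trivially interpretable.
    return car_detector(region) and red_detector(region)

print(red_car({"is_car_score": 0.9, "redness": 0.8}))  # True
print(red_car({"is_car_score": 0.9, "redness": 0.1}))  # False
```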
For the foreseeable future, ARL will keep its autonomous systems safe and robust by keeping humans in the loop for high-level reasoning and occasional low-level advice. Humans may not always be directly involved in a robotic system, but humans and robots are more effective when they work together as a team. When the latest phase of the Robotics Collaborative Technology Alliance program began in 2009, the Army had already been in Iraq and Afghanistan for many years, where robots were often used as tools. "We've been wondering what we can do to take the robot from a tool to a teammate on the squad."
RoMan did get a little help when a human pointed out the region of the branch where grasping would be most effective. The robot has no real knowledge of what a tree branch is, and this lack of world knowledge (what we usually call "common sense") is a common failing of autonomous decision-making systems of every kind. Having someone who can draw on vast human experience give RoMan a small hint makes its job much easier. And this time, RoMan did manage to grab the branch and drag it away.

Turning a robot into a good teammate is hard, because it is tricky to decide how much autonomy to give it. Too little autonomy, and managing the robot takes substantial human effort, which is justified in special cases like explosive ordnance disposal but inefficient otherwise. Too much autonomy, and you run into hidden dangers of trust, safety, and explainability.
"I think the standard we're aiming for is robots that operate like working dogs," Stump explained. "They understand exactly what we need them to do in a limited set of circumstances; they have a small amount of flexibility and creativity if they face a new environment, but we don't expect them to solve problems in innovative ways. And if they need help, they turn to us."
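Stump's "working dog" standard suggests a simple sliding-autonomy pattern: act when confident, ask the human teammate when not. The threshold and names below are assumptions for illustration.

```python
# A sketch of the "working dog" standard (hypothetical thresholds): act
# autonomously when confident, ask the human teammate when not.

def choose_grasp(candidates, ask_human, confidence_floor=0.75):
    best = max(candidates, key=lambda c: c["confidence"])
    if best["confidence"] >= confidence_floor:
        return best                  # familiar situation: just do it
    return ask_human(candidates)     # novel situation: turn to us

def operator_picks(candidates):
    print("robot asks: which grasp point?")
    return candidates[0]             # stand-in for the human's answer

grasps = [{"point": "mid-branch", "confidence": 0.4},
          {"point": "branch-end", "confidence": 0.3}]
print(choose_grasp(grasps, ask_human=operator_picks))
```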
05 The exploration of autonomous systems will continue
Even as part of a human team, RoMan is unlikely to head off and perform tasks independently in the wild anytime soon. It is better understood as a research platform for exploring a series of hard deep learning problems. In parallel, ARL is developing software for RoMan and other robots called Adaptive Planner Parameter Learning (APPL), which will likely be used first in autonomous driving and later in more complex robotic systems, including mobile manipulators like RoMan.
APPL layers different machine learning techniques (including inverse reinforcement learning and deep learning) beneath a classical autonomous navigation system, so that high-level goals and constraints can be applied on top of low-level programming. Humans can use teleoperated demonstrations, corrective interventions, and evaluative feedback to help robots adapt to new environments, while the robots themselves can use unsupervised reinforcement learning to adjust their own behavioral parameters.
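A heavily simplified sketch of the planner-parameter-learning idea (parameter names and the update rule are invented, not APPL's actual design): human feedback adjusts the knobs of a classical planner rather than replacing the planner itself.

```python
# Toy planner-parameter learning (illustrative, not the real APPL):
# human assessments nudge the parameters of a classical planner, which
# itself runs unchanged.

planner_params = {"max_speed": 1.0, "obstacle_margin_m": 0.5}

def apply_feedback(params, feedback):
    # Corrective interventions map to parameter adjustments.
    if feedback == "too_fast":
        params["max_speed"] *= 0.8
    elif feedback == "too_cautious":
        params["obstacle_margin_m"] *= 0.9
    return params

for fb in ["too_fast", "too_fast", "too_cautious"]:  # human's assessments
    planner_params = apply_feedback(planner_params, fb)

print(planner_params)  # the classical planner keeps running, with tuned knobs
```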
The result is an autonomous system that combines many of the advantages of machine learning while also providing the safety and explainability the military needs. With APPL, a learning-based system like RoMan can operate predictably even under uncertainty, falling back on human tuning or human demonstration if it ends up in an environment too different from the one it trained in.
The rapid development of commercial and industrial autonomous systems, such as self-driving cars, inevitably prompts the question: amid this flood of advanced technology, why does the military seem to lag? Stump's view is that autonomy poses many different problems, and the military's are not the same as industry's. The military, for example, does not get to operate robots in structured environments with massive amounts of data. For the foreseeable future, humans are likely to remain a key part of the autonomy framework ARL is developing.
From the above, it is clear that military robotics research worldwide is not stagnating but actively advancing. What military robot developers most need is to find the right balance between combat capability and intelligent automation.

The trajectory of our era is the genuine integration of human and machine. From the robot's perspective, the human-machine relationship progresses through assistance, cooperation, substitution, and extension. Assistance and cooperation have already been realized, and the dominant position of human beings will be given ever greater prominence.