New York University researchers blaze new paths for robots - IEEE Spectrum



New York University Tandon will launch a new robotics program focused on collaboration and improving urban life

Solo 8 is an open-source research quadruped robot developed by Ludovic Righetti, one of the four researchers leading a new robotics initiative at NYU Tandon.

This is a sponsored article brought to you by NYU Tandon School of Engineering.

The NYU Tandon School of Engineering is about to launch a new robotics program that is expected to take a unique approach to research and teaching across engineering disciplines, building on the school's decades of work in robotics.

Four robotics experts are serving as the main organizers of the new robotics program at NYU Tandon. Clockwise from top left: Giuseppe Loianno, S. Farokh Atashzar, Ludovic Righetti, and Chen Feng.

With planning details being finalized after several years of work, we had the opportunity to talk with the four robotics experts who are the main organizers of the new program, which will build on Tandon's existing strengths of more than a dozen robotics faculty and will eventually seek to enlist other researchers from Tandon and other NYU schools.

These researchers were recruited to NYU Tandon as part of a "cluster hire" between 2017 and 2019 to support Tandon Dean Jelena Kovačević's vision of cross-departmental collaboration on a wide range of robotics projects.

Although their work frequently overlaps and they often collaborate on projects, each of these researchers looks at robotics from a unique perspective.

Giuseppe Loianno has a background in perception, learning, and control for autonomous robots, and has explored autonomy for drones and other aerial robots in particular. He leads the Agile Robotics and Perception Laboratory (ARPL) and is also a member of NYU WIRELESS and NYU CUSP. The laboratory conducts fundamental and applied research in robot autonomy to create agile autonomous machines that operate in unstructured and dynamically changing environments without relying on any external infrastructure, and that can learn from experience to improve their autonomous behavior. Through projects such as the NSF-supported Aerial Co-Workers effort in collaboration with Atashzar and Feng, as well as collaborations with the Army Research Laboratory and several companies, his lab is also studying ways to make robots more agile and collaborative, both with one another and with humans. Read more about his work in IEEE Spectrum's coverage published earlier this year.

S. Farokh Atashzar has devoted most of his career to developing cyber-physical systems and robotics for medical and health applications, and currently focuses on remote surgery and remote rehabilitation combined with next-generation telecommunications capabilities. He recently received an equipment donation from the Intuitive Foundation that includes a da Vinci Research Kit, a surgical system that will allow his team to devise methods for surgeons in one location to operate on patients in another city, region, or even continent. As part of his work leading the Medical Robotics and Interactive Intelligent Technologies (MERIIT) lab, within NYU WIRELESS and NYU CUSP, he is also working on cutting-edge human-machine interface technology to realize nerve-to-device functionality and apply it directly to exoskeletons, next-generation prosthetics, and rehabilitation robots. He has active partnerships with the NYU School of Medicine and the U.S. Food and Drug Administration (FDA), and his research is supported by the National Science Foundation.

Ludovic Righetti heads the Machines in Motion Laboratory at NYU Tandon. There, his team invents algorithms to make robots that walk and manipulate objects more autonomous, more versatile, and safer to interact with. His new approaches in machine learning and optimal control allow robots to "understand" when and how to interact with the environment and with various objects, using different forces and intensities depending on an object's material, function, and purpose. Beyond creating new possibilities for autonomous machines, he has also made robots accessible to more researchers through the Solo 8 and Solo 12 projects, low-cost, open-source alternatives to expensive quadruped robots. His laboratory also works at the intersection of robotics and wireless telecommunications at NYU WIRELESS, including designing cloud-based whole-body controllers that drive legged robots over 5G links.

Chen Feng applies his background in civil, electrical, and geospatial engineering to computer vision and robotic perception applications in construction and manufacturing. With funding from the NSF and from C2SMART, a Tier 1 University Transportation Center at NYU Tandon, he has used his expertise in visual simultaneous localization and mapping (vSLAM) and deep learning to develop technologies for autonomous driving, assisted living, and construction robotics, and he holds several patents on algorithms for these applications. As the leader of the multidisciplinary research group Automation and Intelligence for Civil Engineering (AI4CE), he is advancing robot vision and machine learning through fundamental research inspired by multidisciplinary use cases. One example, collective additive manufacturing, is a collaborative project aimed at developing the theory and systems that would allow a group of autonomous mobile robots to jointly print large 3D structures. Another collaborative project, ARM4MOD, aims to use quadruped robots that project complex visual maps onto physical surfaces to streamline modular building construction, from design to manufacturing to installation. He is also affiliated with the Center for Urban Science and Progress (CUSP).

Here is our conversation with the four researchers.

Q: Can you talk about how you each found your way to Tandon and what motivated you to take part in launching a new robotics program? How did you find one another, and how did the idea for the initiative develop?

Although several of us also have appointments in other departments, all four of us have appointments in mechanical engineering, and we have worked together a great deal. We all joined NYU Tandon within a few years of one another and have collaborated from the beginning. We have three things in common, and they form the basis of this new initiative.

One thing we envisioned from the beginning was a robotics facility shared among the four of us and the new faculty and staff who will join us next year. The experimental facility we want is more than just the sum of our laboratories. The school is therefore now investing in a new facility where we will have more than 4,000 square feet of laboratory space. The four of us designed it not as separate laboratories but as a truly collective space, allowing us to do more than each laboratory could alone.

Another thing we share is that all four of us study the algorithmic foundations of robotics, including control, planning, learning, human-robot interaction, and perception.

Finally, we are all committed to complementary applications of robotics, which can be meaningfully used to improve people's lives.

We came to Tandon knowing that the leadership had strong enthusiasm and support, and planned to create a unique shared space. From the first day we joined NYU, we began discussing how to collaborate on different aspects of this new program. Being jointly appointed across multiple departments makes us a bridge for cooperation among Tandon's various departments focused on robotics.

My view of this initiative is that it is not just about space or a summary of the work we do. Creating a shared physical center brings together everything we do. It is the work we have done together and our interactions that have produced new projects, new concepts, and new visions.

A university with a close connection between engineering and medicine was also very important to me, so NYU was one of the best candidates. NYU is also designed as part of the urban fabric; the fact that New York City is our campus makes it very unique and special. We make robots for the smart and connected society of the future.

One thing that makes our faculty group interesting is that we are all jointly appointed. For example, Ludovic, Giuseppe, and Farokh are jointly appointed in electrical and computer engineering and mechanical engineering, and I am jointly appointed in civil and mechanical engineering. We are all part of several different centers. For example, all four of us are part of the NYU Center for Urban Science and Progress (CUSP), and some of us are also part of other centers, such as NYU WIRELESS and C2SMART. This is important because the future of robotics involves far more than robotics itself; it is the intersection of robotics with advanced artificial intelligence, with wireless communications, with biomedical engineering, and with civil engineering.

In fact, we have established connections with many existing departments and centers at NYU, which is crucial because we understand what is happening in different departments and have already collaborated with a large number of faculty members. This design helps us jointly use our different networks and resources within Tandon to identify possible collaborations and to bring all of those resources to bear on the program, this space, and the success of this work.

Before joining NYU Tandon, I was at the University of Pennsylvania looking for the next step in my career. There is no doubt that NYU Tandon was, and still is, the right place. It is passionate about robotics and has a bright future, with strong potential for research and technological impact, a supportive environment, a growing multicultural urban setting with great potential, and strong external partnerships.

Another aspect of launching the initiative here that I think is very important, and will play an important role scientifically, is the fact that we are in New York City. The technology ecosystem is constantly evolving here, and there are signs that over the next five years it may become as dynamic as places like Silicon Valley. This is a huge opportunity for us, because it allows us to create start-up companies and to interact with the world inside and outside academia. For example, we are close to the Brooklyn Navy Yard, which can act as an incubator and has a lot of space available to us.

In addition, what makes the program unique is that we have brought a new, forward-looking vision to the school for how to carry out research and education that may shape robotics over the next 10 years.

A mobile 3D printing robot at work in Chen Feng's Automation and Intelligence for Civil Engineering (AI4CE) lab.

Q: What do you think is unique about NYU Tandon's robotics program? How do you see it differing from other universities' robotics research centers and programs?

In terms of scale, you cannot compare us to the larger robotics institutes. What is unique, however, is the physical space and shared infrastructure. In our research, we want to be able to reproduce the situations you would encounter in the city, so we are studying how to take robots out of the laboratory and test them in real environments throughout the city.

The other thing we all mentioned is that the program has been, and will be, a true collaboration. We share grants, and we all serve on one another's doctoral committees. We conceived new courses together, and together we launched a robotics minor.

We are four young roboticists with minimal overlap, just enough to collaborate effectively. This works very well because we all fully understand the scientific language we share, but our applications are very different.

What sets us apart is our integrated education program. We have a unique robotics curriculum; I believe few schools in this country offer as many graduate- and doctoral-level robotics courses. I think we have a total of seven graduate-level robotics courses. We also offer a minor in robotics, and we teach four different robotics courses for undergraduates.

Our vision covers not only research but also the education we provide, and how research and education influence each other in both directions. We are designing a broad and deep research and education program for undergraduates, master's students, and doctoral students, one that will grow over time.

Before joining Tandon, I worked as a research scientist at Mitsubishi Electric Research Laboratories (MERL) in Cambridge. I wanted to return to academia because I like interacting with students, and I wanted to do more fundamental robotics research, which really requires academic freedom.

In addition, I was trained in civil engineering: I received my Ph.D. in civil engineering, and my research field is construction robotics. When I finished my Ph.D., not many schools were investing in construction robotics, so I had almost given up on returning to academia in this field. Then, suddenly, I saw that NYU was investing in this area and had become one of the first civil engineering departments in the country to invest in construction automation.

This ability to focus on construction automation is what really made the initiative attractive to me. Historically, construction has been a low-tech industry that has not benefited from automation. But now, with the recently passed federal infrastructure bill, the entire country will spend a lot of money on infrastructure, and we will see growing opportunities for automation and robotics to help maintain and renew the country's civil infrastructure. I feel that, compared with other large schools that focus on robotics, NYU's robotics program puts us in a unique position because of where we are located. In a dense urban environment we encounter many real-world civil infrastructure problems, which lets us think about how to use robotics to improve these infrastructure projects and ultimately improve the quality of life of the people living here. So this is very exciting for me, and I think it is unique.

As I said before, because of our location the opportunity is right in front of us: there are many investors in New York, and we are very close to spaces that can be used directly to build start-ups, like our future laboratory and the Brooklyn Navy Yard.

The unique aspect of this is how we connect our research, which starts with basic concepts and then applies to urban problems.

Max Planck Institute for Intelligent Systems

Solo 8 is an open-source research quadruped robot that can perform a variety of body movements. The robot was developed by a team led by Ludovic Righetti, associate professor of electrical and computer engineering and of mechanical and aerospace engineering at NYU Tandon.

Q: Can you talk about the academic and commercial landscape of robotics today, and what challenges each of you thinks need to be solved in these areas to better advance robotics as a whole?

Many people think that robotics is useful, but only for very narrow tasks. They do not see the benefits from a long-term perspective.

New York City is a unique environment that you cannot find elsewhere in the United States, or even anywhere else in the world. It demonstrates that robotics can not only play a role in everyday tasks but can also be applied to complex urban scenarios where many dimensions come together. We have just been through a pandemic, and construction is changing. For example, you need a lot of monitoring and inspection to ensure safety, as well as infrastructure monitoring. This is a very unique environment, with all these aspects integrated in a multicultural urban setting. In terms of people, ideas, capital, and space, there is a great deal of potential, and together these can truly demonstrate the larger promise of robotics. I think this makes it a very interesting place for investors who want to understand where the technology goes next.

There is a lot of hype surrounding robotics. Some companies are promising things that we know may not happen, because we do not have the technology to make robots completely autonomous in unstructured environments. It is important that what we say will happen matches what will actually happen. We need to perform a reality check on what we can actually do, or can reasonably expect to do in the near future, so that we do not disappoint the public and industry with unrealistic expectations.

Robots are excellent in environments you can control. When robots have to be autonomous in environments beyond our control, they become much less capable. This means that when there are people around, when the construction site around them is messy, and when there is a disaster, they are not very reliable.

We need to transform our work and how we formulate problems so that we can strive for meaningful and trustworthy automation. We need to create key robotics technologies that are truly useful to people and useful in real environments, not just make promises, but actually try to think about how we can solve the specific problems people encounter. From this perspective we need a dual point of view: before we commercialize what we develop, we have to ask how it will affect people's lives. Does it actually make their lives better or not?

One problem with robotics is that people either overestimate or underestimate the capabilities of current technology. We believe our initiative can help address this through our educational approach. We bring STEM students from different backgrounds together to take our robotics courses. Through these courses, they can better understand what the technology can and cannot do now, and what it will be able to do in the near future and in the longer term.

Helping them better understand the technology's capabilities will help set realistic commercial expectations. They will not over-promise, and I think that is healthier for the robotics industry in the long run.

A large part of commercialization, or industry-focused research, is basically testing and evaluating the performance of a system. To perform that testing and evaluation you need infrastructure, and expensive infrastructure in the center of the city that lets you interact with the city and connect with what is happening there. I think this puts our initiative in a unique position.

A state-of-the-art human-machine interface module with a wearable controller is one of many multimodal technologies tested in S. Farokh Atashzar's MERIIT lab at NYU Tandon.

Q: Can each of you share a specific research area you are interested in that will become an early focus area of the new program?

I feel strongly about a concept called mobile 3D printing, which really needs support from robotics. Our idea is to use mobile robots and mobile manipulators for 3D printing, such as concrete 3D printing.

We can even consider sending these robotic printers to the moon and Mars as our manufacturing base. This is something that Ludovic and I have been working on for the past few years. We are building a theoretical basis for this, and it does have great commercial potential. This is something I am really passionate about, and I want to take the time to solve this problem.

I mainly work in the field of aerial robotics. Our main goal is to make our uncrewed robots smaller, more agile, more flexible, and more collaborative. We are studying problems related to safe and fast navigation in unknown environments and how multiple vehicles can cooperate with one another. For example, not only in terms of swarms, but also in how multiple vehicles physically interact with one another on problems such as transporting or manipulating large objects, and how they collaborate with one another and with humans.

My main goal is to improve the autonomy of such machines, making them smarter, faster, and more collaborative. This has an impact on a wide range of issues related to safety, search and rescue, and cargo transportation. For example, you can imagine cargo transportation or even urban delivery after a natural disaster, which is now basically done using ground vehicles. One day, it may be done using autonomous drones.

I really want to understand the algorithmic foundations of motion. I am studying how to make robots move and "do things" reliably. By doing things, I mean I work with large robots, quadrupeds or bipeds, that can move around. They should be able not only to move around but also to manipulate objects in any type of environment and perform any type of task, and if something goes wrong, they should be able to detect it, perhaps learn from it, and improve over time. How do we achieve that? I am very excited about figuring this out.

I am interested in human-centered robotics in the field of medical robotics, so my laboratory has three main focuses. One more fundamental focus is autonomous networked robots. We are trying to connect robots through the network: we work on multi-agent networks of robots and study how distributed delays affect reliability, efficacy, and performance, and how we can use local autonomy to share performance between machines and humans. Here we draw on artificial intelligence, nonlinear control, and information theory.

The second part of my work is rehabilitation for stroke and spinal cord injury patients, and how we can build robotic systems to help them recover lost sensorimotor function. Here we focus on building an algorithmic bridge between neurorobotic intelligence and human cognition.

The third aspect of my work is surgical robotics. Speaking of surgical robots, I am very interested in autonomous surgery. Like any other technology in its infancy (think of autonomous driving 15 years ago), autonomous surgery still sounds like science fiction at this point. But even though it has not happened at the scale we hoped, it will happen, starting with situations where a surgeon's access is restricted (for example, operations in space).

Graduate students test drones they designed and built in Giuseppe Loianno's robotics laboratory at NYU Tandon.

Q: Each of you has brought a different robotics background to this new program. Can you talk about how you view the overall organization of the program and how each of your areas of expertise will play a role in the larger whole?

We want to avoid strict predefined groupings that may limit the potential of the planned work. The shared physical space will enable students and faculty to better collaborate and view each other’s work. This is a challenge at the moment because we are still in different physical spaces.

In the shared physical space, people can engage and start collaborations that cross-pollinate one another's expertise, in order to explore new concepts and make new contributions to science and technology.

Students are already in a unique position at NYU because they do not study only robotics and engineering courses. They have access to a large portfolio of courses in artificial intelligence, medicine, mathematics, the humanities, and more, because NYU's network is very large.

One way we think about loosely organizing the space is around the robot capabilities we hope to achieve. We have an area for field robots, an area for aerial robots, an area for service robots, and an area for medical robots. These are different functional areas, but we deliberately do not treat them as separate groups. Expertise from the different functional areas comes together in the shared space to realize the common vision of this initiative: improving the lives of people in cities.

Engineers challenge the limits of deep learning for battlefield robots

RoMan, the Army Research Laboratory's robotic manipulator, considers the best way to grasp and move a tree branch at the Adelphi Laboratory Center in Maryland.

This article is part of our special report on artificial intelligence, "The Great AI Reckoning."

"I probably shouldn't have stood so close," I thought to myself as the robot slowly approached a large branch on the floor in front of me. What makes me nervous is not the size of the branches-it is the robot running autonomously. Although I know what it should do, I am not entirely sure what it will do. If everything goes as expected by roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Maryland, the robot will recognize the branch, grab it, and drag it aside. These people knew what they were doing, but I spent enough time around the robot, and I took a small step back anyway.

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that came from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for the DARPA Robotics Challenge. RoMan's job today is to clear a path, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operator tells RoMan to "go clear a path." The robot then makes all the decisions necessary to achieve that objective.



The ability to make decisions autonomously is not just what makes robots useful, it's what makes robots robots. We value robots for their ability to sense what's going on around them, make decisions based on that information, and then take useful actions without our input. In the past, robotic decision making followed highly structured rules: if you sense this, then do that. In structured environments like factories, this works well enough. But in chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.
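To make that contrast concrete, here is a minimal sketch (mine, not from ARL) of the "if you sense this, then do that" style of control described above. The sensor labels and actions are hypothetical; the point is simply that any situation the rule author did not anticipate falls through to a do-nothing fallback.

```python
# A minimal, illustrative rule-based controller. Every situation must be
# anticipated by a hand-written rule, which is why this style works in a
# structured factory and breaks down in unstructured environments.

RULES = {
    "part_on_conveyor": "pick_up_part",
    "bin_full": "swap_bin",
    "path_clear": "drive_forward",
}

def decide(perceived_state: str) -> str:
    """Return an action for a perceived state, if a rule covers it."""
    try:
        return RULES[perceived_state]
    except KeyError:
        # Anything the rule author didn't foresee (a fallen branch, a person
        # stepping into the workspace, general clutter) has no rule, so the
        # best a purely rule-based system can do is stop and wait.
        return "halt_and_wait_for_human"

if __name__ == "__main__":
    print(decide("part_on_conveyor"))    # covered by a rule -> pick_up_part
    print(decide("fallen_tree_branch"))  # not anticipated -> halt_and_wait_for_human
```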

RoMan, along with many other robots including home vacuums, drones, and autonomous cars, handles the challenges of semistructured environments through artificial neural networks, a computing approach that loosely mimics the structure of neurons in biological brains. About a decade ago, artificial neural networks began to be applied to a wide variety of semistructured data that had previously been very difficult for computers running rules-based programming (generally referred to as symbolic reasoning) to interpret. Rather than recognizing specific data structures, an artificial neural network recognizes data patterns, identifying novel data that are similar (but not identical) to data the network has encountered before. Indeed, part of the appeal of artificial neural networks is that they are trained by example, by letting the network ingest annotated data and learn its own system of pattern recognition. For neural networks with multiple layers of abstraction, this technique is called deep learning.
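As a rough illustration of what "trained by example" means, the toy network below (plain NumPy, nothing to do with RoMan's software) ingests a handful of annotated inputs and adjusts its weights until it reproduces the labels; stacking many such layers is what the article calls deep learning.

```python
# A tiny two-layer neural network trained on annotated examples (XOR), a
# pattern no single linear rule can capture. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Annotated training data: inputs X with labels y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units; deep learning stacks many such layers.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge the weights toward the annotations.
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# After training, the outputs should approximate the labels 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```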

Even though humans are typically involved in the training process, and even though artificial neural networks were inspired by the neural networks in human brains, the kind of pattern recognition a deep learning system does is fundamentally different from the way humans see the world. It's often nearly impossible to understand the relationship between the data input into the system and the interpretation of the data that the system outputs. And that difference, the "black box" opacity of deep learning, poses a potential problem for robots like RoMan and for the Army Research Laboratory.

In a chaotic, unfamiliar or undefined environment, the reliance on rules makes robots notoriously bad at dealing with anything that cannot be accurately predicted and planned in advance.

This opacity means that robots that rely on deep learning have to be used carefully. A deep-learning system is good at recognizing patterns, but it lacks the world understanding that a human typically uses to make decisions, which is why such systems do best when their applications are well defined and narrow in scope. "When you have well-structured inputs and outputs, and you can encapsulate your problem in that kind of relationship, I think deep learning does very well," says Tom Howard, who directs the University of Rochester's Robotics and Artificial Intelligence Laboratory and has developed natural-language interaction algorithms for RoMan and other ground robots. "The question when programming an intelligent robot is, how big are those deep-learning building blocks?" Howard explains that when you apply deep learning to higher-level problems, the number of possible inputs becomes very large, and solving problems at that scale can be challenging. And the potential consequences of unexpected or unexplainable behavior are much more significant when that behavior is manifested through a 170-kilogram two-armed military robot.

After a few minutes, RoMan hasn't moved; it's still sitting there, pondering the tree branch, its arms poised like a praying mantis. For the past 10 years, the Army Research Laboratory's Robotics Collaborative Technology Alliance (RCTA) has been working with roboticists from Carnegie Mellon University, Florida State University, General Dynamics Land Systems, the Jet Propulsion Laboratory, the Massachusetts Institute of Technology, QinetiQ North America, the University of Central Florida, the University of Pennsylvania, and other top research institutions to develop robot autonomy for use in future ground-combat vehicles. RoMan is one part of that process.

The "clear path" task that RoMan is slowly thinking about is difficult for robots because the task is too abstract. RoMan needs to identify objects that may block the path, infer the physical properties of these objects, figure out how to grasp them and which manipulation technique (such as pushing, pulling or lifting) is most suitable to apply, and then implement it. For a robot with limited knowledge of the world, these are many steps and many unknowns.

This limited understanding is where the ARL robots begin to differ from other robots that rely on deep learning, says Ethan Stump, chief scientist of ARL's Artificial Intelligence for Maneuver and Mobility program. "Basically, the Army can be called upon to operate anywhere in the world. We do not have a mechanism for collecting data in all the different domains in which we might be operating. We may be deployed to some unknown forest on the other side of the world, but we'll be expected to perform just as well as we would in our own backyard," he says. Most deep-learning systems function reliably only within the domains and environments in which they've been trained. Even if the domain is something like "every drivable road in San Francisco," the robot will do fine, because that's a data set that has already been collected. But, Stump says, that's not an option for the military. If an Army deep-learning system doesn't perform well, they can't simply solve the problem by collecting more data.

ARL's robots also need to have a broad awareness of what they're doing. "In a standard operations order for a mission, you have goals, constraints, a paragraph on the commander's intent, basically a narrative of the purpose of the mission, which provides contextual info that humans can interpret and gives them the structure for when they need to make decisions and when they need to improvise," Stump explains. In other words, RoMan may need to clear a path quickly, or it may need to clear a path quietly, depending on the mission's broader objectives. That's a big ask for even the most advanced robot. "I can't think of a deep-learning approach that can deal with this kind of information," Stump says.

Army Research Laboratory robots test autonomous navigation techniques on rough terrain [above, middle], with the goal of being able to keep up with their human teammates. ARL is also developing manipulation capabilities [bottom] that would allow its robots to interact with objects so that humans don't have to. Evan Ackerman

While I watch, RoMan is reset for a second try at branch removal. ARL's approach to autonomy is modular, where deep learning is combined with other techniques, and the robot is helping ARL figure out which tasks are appropriate for which techniques. At the moment, RoMan is testing two different ways of identifying objects from 3D sensor data: UPenn's approach is deep-learning-based, while Carnegie Mellon is using a method called perception through search, which relies on a more traditional database of 3D models. Perception through search works only if you know exactly which objects you're looking for in advance, but training is much faster since you need only a single model per object. It can also be more accurate when perception of the object is difficult, for example if the object is partially hidden or upside-down. ARL is testing these strategies to determine which is the most versatile and effective, letting them run simultaneously and compete with each other.
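For readers unfamiliar with "perception through search," the sketch below shows the general flavor under my own simplifying assumptions: a sensed point cloud is scored against each entry in a small database of known 3D models, and the best-fitting model wins. This is an illustration of the idea only, not UPenn's or Carnegie Mellon's actual pipelines.

```python
# Illustrative sketch of perception through search: match sensed 3D points
# against a small database of known object models. Model names, data, and the
# scoring metric are simplified placeholders.
import numpy as np

def model_fit_error(sensed: np.ndarray, model: np.ndarray) -> float:
    """Mean distance from each sensed point to its nearest model point
    (a crude stand-in for a real registration/alignment score)."""
    d = np.linalg.norm(sensed[:, None, :] - model[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def identify_by_search(sensed: np.ndarray, model_db: dict) -> str:
    """Only works for objects already in the database, but needs just one
    model per object and no large training set."""
    scores = {name: model_fit_error(sensed, pts) for name, pts in model_db.items()}
    return min(scores, key=scores.get)

# Tiny fake "3D model database": a few points per known object.
model_db = {
    "branch": np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [1.0, 0.1, 0.0]]),
    "rock":   np.array([[0.0, 0.0, 0.0], [0.1, 0.1, 0.0], [0.0, 0.1, 0.1]]),
}

sensed_points = np.array([[0.05, 0.0, 0.0], [0.55, 0.02, 0.0], [0.95, 0.08, 0.0]])
print(identify_by_search(sensed_points, model_db))  # -> "branch"
```

A deep-learning detector would replace `identify_by_search` with a trained network, which handles objects it was never given an exact model for, at the cost of needing far more training data.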

Perception is one of the things deep learning tends to excel at. "The computer vision community has made crazy progress using deep learning for this stuff," says Maggie Wigness, a computer scientist at ARL. "We've had good success with some of these models that were trained in one environment generalizing to a new environment, and we intend to keep using deep learning for these sorts of tasks, because it's the state of the art."

ARL's modular approach might combine several techniques in ways that leverage their particular strengths. For example, a perception system that uses deep-learning-based vision to classify terrain could work alongside an autonomous driving system based on an approach called inverse reinforcement learning, where the model can rapidly be created or refined by observations from human soldiers. Traditional reinforcement learning optimizes a solution based on established reward functions, and is usually applied when you're not necessarily sure what optimal behavior looks like. This is less of a concern for the Army, which can generally assume that well-trained humans will be nearby to show a robot the right way to do things. "When we deploy these robots, things can change very quickly," Wigness says. "So we wanted a technique where we could have a soldier intervene, and with just a few examples from a user in the field, we can update the system if we need a new behavior." A deep-learning technique would require "a lot more data and time," she says.
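The sketch below illustrates, under assumptions of my own, the spirit of learning from a soldier's intervention rather than from a large data set: the planner's terrain-cost weights are nudged until the demonstrated path is preferred over the planner's current choice. The feature names and the perceptron-style update are hypothetical simplifications, not ARL's actual inverse-reinforcement-learning implementation.

```python
# Simplified inverse-RL-flavored update: learn terrain costs from one human
# demonstration. All features, paths, and step sizes are illustrative.
import numpy as np

FEATURES = ["grass", "mud", "pavement"]   # hypothetical terrain features
weights = np.array([1.0, 1.0, 1.0])       # initial cost per feature

def path_features(path):
    """Count how often each terrain feature appears along a path."""
    f = np.zeros(len(FEATURES))
    for cell in path:
        f[cell] += 1
    return f

def path_cost(path, w):
    return float(path_features(path) @ w)

# One demonstration: the soldier drives mostly on grass, avoiding mud.
demo_path    = [0, 0, 2, 2, 0]   # grass, grass, pavement, pavement, grass
planner_path = [1, 1, 2, 2, 1]   # the planner currently prefers the muddy shortcut

for _ in range(20):
    if path_cost(demo_path, weights) < path_cost(planner_path, weights):
        break  # the demonstrated behavior is now preferred
    # Raise the cost of features the planner overuses, lower those the human used.
    # (A real system would re-plan after each update; the path is held fixed
    # here only to keep the loop short.)
    weights += 0.1 * (path_features(planner_path) - path_features(demo_path))
    weights = np.clip(weights, 0.0, None)

print(dict(zip(FEATURES, np.round(weights, 2))))  # mud ends up more expensive
```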

It's not just data sparsity and fast adaptation that deep learning struggles with. There are also questions of robustness, explainability, and safety. "These questions aren't unique to the military," Stump says, "but it's especially important when we're talking about potentially lethal systems." To be clear, ARL is not currently working on lethal autonomous weapons systems, but the lab is helping to lay the groundwork for autonomous systems in the U.S. military more broadly, which means considering ways in which such systems may be used in the future.

The requirements of the deep network are largely inconsistent with the requirements of the Army mission, which is a problem.

Safety is an obvious priority, Stump says, and yet there isn't a clear way of making a deep-learning system verifiably safe. "Doing deep learning with safety constraints is a major research effort. It's hard to add those constraints into the system, because you don't know where the constraints already in the system came from. So when the mission changes, or the context changes, it's hard to deal with that. It's not even a data question; it's an architecture question." ARL's modular architecture lets each technique form part of a broader autonomous system that combines the safety and adaptability the military requires. Other modules in the system can operate at a higher level, using different techniques that are more verifiable or explainable and that can step in to protect the overall system from adverse unpredictable behaviors. "If other information comes in and changes what we need to do, there's a hierarchy there," Stump says. "It all happens in a rational way."

Nicholas Roy, who leads the Robust Robotics Group at MIT and describes himself as "a bit provocative" because of his skepticism about some of the claims made about the power of deep learning, agrees with the ARL roboticists that deep-learning approaches often can't handle the kinds of challenges the Army has to be prepared for. "The Army is always entering new environments, and the adversary is always going to be trying to change the environment so that the training process the robots went through simply won't match what they're seeing," Roy says. "So the requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that's a problem."

Roy, who has worked on abstract reasoning for ground robots as part of the RCTA, emphasizes that deep learning is a useful technique when applied to problems with clear functional relationships, but when you start looking at abstract concepts, it's not clear whether deep learning is a viable approach. "I'm very interested in finding how neural networks and deep learning could be assembled in a way that supports higher-level reasoning," Roy says. "I think it comes down to the notion of combining multiple low-level neural networks to express higher-level concepts, and I do not believe that we understand how to do that yet." Roy gives the example of using two separate neural networks, one to detect objects that are cars and the other to detect objects that are red. It's much harder to combine those two networks into one larger network that detects red cars than it would be if you were using a symbolic reasoning system based on structured rules with logical relationships. "Lots of people are working on this, but I haven't seen a real success that drives abstract reasoning of this kind."
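Roy's red-car example can be made concrete with a small sketch. The two "detectors" below are hypothetical stand-ins for trained networks; the point is that a symbolic system composes the concepts with one logical rule, while multiplying the networks' scores is only a heuristic, and there is no equally principled way to get a true red-car detector without combining and retraining the networks.

```python
# Illustrative only: the detectors are placeholders, not real trained networks.

def looks_like_car(image) -> float:
    """Pretend output of a car-detecting network, in [0, 1]."""
    return image.get("car_score", 0.0)

def looks_red(image) -> float:
    """Pretend output of a redness-detecting network, in [0, 1]."""
    return image.get("red_score", 0.0)

# Symbolic composition: one explicit logical rule, trivially extensible.
def is_red_car_symbolic(image, threshold=0.5) -> bool:
    return looks_like_car(image) > threshold and looks_red(image) > threshold

# Naive "neural" composition: multiply the two scores. This is a heuristic,
# not a network trained on red cars; it inherits both detectors' failure
# modes (e.g., a red stop sign next to a gray car can fool it).
def red_car_score_naive(image) -> float:
    return looks_like_car(image) * looks_red(image)

example = {"car_score": 0.9, "red_score": 0.8}
print(is_red_car_symbolic(example), round(red_car_score_naive(example), 2))
```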

For the foreseeable future, ARL is making sure that its autonomous systems are safe and robust by keeping humans around for both higher-level reasoning and occasional low-level advice. Humans might not be directly in the loop at all times, but the idea is that humans and robots are more effective when working together as a team. When the most recent phase of the Robotics Collaborative Technology Alliance program began in 2009, Stump says, "we'd already been in Iraq and Afghanistan for many years, where robots were often used as tools. We've been trying to figure out what we can do to transition robots from tools to acting more as teammates within the squad."

RoMan gets a bit of a hand when a human supervisor points out a region of the branch where grasping might be most effective. The robot doesn't have any fundamental knowledge about what a tree branch actually is, and this lack of world knowledge (what we think of as common sense) is a fundamental problem with autonomous systems of all kinds. Having a human leverage our vast experience with a small amount of guidance can make RoMan's job much easier. And indeed, this time RoMan manages to successfully grasp the branch and noisily drag it across the room.

Turning a robot into a good teammate can be difficult, because it can be tricky to find the right amount of autonomy. Too little, and it would take most or all of one person's attention to manage one robot, which may be appropriate in special situations like explosive-ordnance disposal but is otherwise not efficient. Too much autonomy, and you'd start to have issues with trust, safety, and explainability.

"I think the level we are looking for here is for the robot to operate at the level of a working dog," Stump explained. "They know exactly what we need them to do in limited situations. If they face a new situation, they have a small amount of flexibility and creativity, but we don't want them to solve problems creatively. If they need help, they will Come back to us."

RoMan is not likely to find itself out in the field on a mission anytime soon, even as part of a team with humans. It's very much a research platform. But the software being developed for RoMan and other robots at ARL, called Adaptive Planner Parameter Learning (APPL), will likely be used first in autonomous driving, and later in more complex robotic systems that could include mobile manipulators like RoMan. APPL combines different machine-learning techniques (including inverse reinforcement learning and deep learning) arranged hierarchically underneath classical autonomous navigation systems. That allows high-level goals and constraints to be applied on top of lower-level programming. Humans can use teleoperated demonstrations, corrective interventions, and evaluative feedback to help robots adjust to new environments, while the robots can use unsupervised reinforcement learning to adjust their behavior parameters on the fly. The result is an autonomy system that can enjoy many of the benefits of machine learning, while also providing the kind of safety and explainability that the Army needs. With APPL, a learning-based system like RoMan can operate in predictable ways even under uncertainty, falling back on human tuning or human demonstration if it ends up in an environment that's too different from what it trained on.
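As a loose sketch of the hierarchical idea described above, and emphatically not ARL's actual APPL code, the snippet below keeps a classical planner's tunable parameters inside a safe envelope, nudges them in response to human corrective feedback, and falls back on hand-tuned defaults when the environment looks too unfamiliar. All parameter names, ranges, and update rules are assumptions for illustration.

```python
# Hypothetical sketch: learning adjusts only the tunable parameters of a
# classical planner, so behavior stays predictable and defaults remain a
# safe fallback. Not the real APPL implementation.
from dataclasses import dataclass

@dataclass
class PlannerParams:
    max_speed: float = 1.0        # m/s
    obstacle_margin: float = 0.5  # m

def apply_human_correction(params: PlannerParams, correction: str) -> PlannerParams:
    """Nudge parameters toward a human's corrective intervention,
    clamped to a safe envelope."""
    if correction == "too_fast":
        params.max_speed = max(0.2, params.max_speed - 0.2)
    elif correction == "too_timid":
        params.max_speed = min(2.0, params.max_speed + 0.2)
    elif correction == "too_close_to_obstacles":
        params.obstacle_margin = min(1.5, params.obstacle_margin + 0.1)
    return params

def reset_if_out_of_distribution(params: PlannerParams, novelty: float) -> PlannerParams:
    """If the environment looks too different from anything seen before,
    fall back on the hand-tuned defaults (or ask a human for a demonstration)."""
    return PlannerParams() if novelty > 0.9 else params

params = PlannerParams()
for feedback in ["too_fast", "too_close_to_obstacles"]:
    params = apply_human_correction(params, feedback)
print(params)
```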

It's tempting to look at the rapid progress of commercial and industrial autonomous systems (autonomous cars being just one example) and wonder why the Army seems to be somewhat behind the state of the art. But as Stump finds himself having to explain to Army generals, when it comes to autonomous systems, "there are lots of hard problems, but industry's hard problems are different from the Army's hard problems." The Army doesn't have the luxury of operating its robots in structured environments with lots of data, which is why ARL has put so much effort into APPL, and into maintaining a place for humans. Going forward, humans are likely to remain a key part of the autonomous framework that ARL is developing. "That's what we're trying to build with our robotics systems," Stump says. "That's our bumper sticker: 'From tools to teammates.'"

This article appeared in the October 2021 print edition as "Deep Learning Entering Boot Camp".
