US Army-funded research boosts memory of AI systems

A U.S. Army-funded research project has developed a new framework for deep neural networks that allows artificial intelligence (AI) systems to better learn new tasks while forgetting less of what they have learned in previous tasks.

The North Carolina State University researchers, funded by the Army, have also demonstrated that using the framework to learn a new task can make the AI better at performing previous tasks, a phenomenon called backward transfer.

"The Army needs to be prepared to fight anywhere in theworld so its intelligent systems also need to be prepared," said Dr. MaryAnne Fields, program manager for Intelligent Systems at Army Research Office,an element of U.S. Army Combat Capabilities Development Command's Army ResearchLab. "We expect the Army's intelligent systems to continually acquire newskills as they conduct missions on battlefields around the world withoutforgetting skills that have already been trained. For instance, whileconducting an urban operation, a wheeled robot may learn new navigationparameters for dense urban cities, but it still needs to operate efficiently ina previously encountered environment like a forest."

The research team proposed a new framework, called Learn to Grow, for continual learning, which decouples network structure learning and model parameter learning. In experimental testing, it outperformed previous approaches to continual learning.

"Deep neural network AI systems are designed forlearning narrow tasks," said Xilai Li, a co-lead author of the paper and aPh.D. candidate at NC State. "As a result, one of several things canhappen when learning new tasks, systems can forget old tasks when learning newones, which is called catastrophic forgetting. Systems can forget some of thethings they knew about old tasks, while not learning to do new ones as well. Orsystems can fix old tasks in place while adding new tasks – which limitsimprovement and quickly leads to an AI system that is too large to operateefficiently. Continual learning, also called lifelong-learning orlearning-to-learn, is trying to address the issue."

To understand the Learn to Grow framework, think of deep neural networks as a pipe filled with multiple layers. Raw data goes into the top of the pipe, and task outputs come out the bottom. Every "layer" in the pipe is a computation that manipulates the data in order to help the network accomplish its task, such as identifying objects in a digital image. There are multiple ways of arranging the layers in the pipe, which correspond to different "architectures" of the network.
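
In code, the "pipe" is simply an ordered stack of layer computations, and rearranging or resizing the stack gives a different architecture. A minimal PyTorch sketch, with illustrative layer sizes and an assumed image-classification task:

```python
import torch
import torch.nn as nn

# One arrangement of the "pipe": a raw image goes in the top,
# object-class scores come out the bottom.
pipe = nn.Sequential(
    nn.Flatten(),             # unroll the digital image into a vector
    nn.Linear(28 * 28, 128),  # a layer: one learned computation on the data
    nn.ReLU(),                # a layer: a fixed nonlinear computation
    nn.Linear(128, 10),       # a layer: scores for 10 object classes
)

# Rearranging or resizing the layers yields a different "architecture"
# for the same task.
deeper_pipe = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

image = torch.randn(1, 1, 28, 28)  # a dummy image
print(pipe(image).shape, deeper_pipe(image).shape)  # both produce 10 scores
```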

When asking a deep neural network to learn a new task, the Learn to Grow framework begins by conducting an explicit neural architecture optimization via search. This means that as the network comes to each layer in its system, it can decide to do one of four things: skip the layer; use the layer in the same way that previous tasks used it; attach a lightweight adapter to the layer, which modifies it slightly; or create an entirely new layer.
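
Those four per-layer decisions define the search space. The sketch below writes them out as explicit Python objects; the class names, the adapter design, and the hand-picked choices are assumptions for illustration, since the actual framework makes these decisions automatically during the search:

```python
import torch
import torch.nn as nn
from enum import Enum

class LayerChoice(Enum):
    SKIP = "skip"    # pass the data through untouched
    REUSE = "reuse"  # share the existing layer's weights with older tasks
    ADAPT = "adapt"  # keep the existing layer, add a small trainable tweak
    NEW = "new"      # give this task its own fresh layer

class Adapter(nn.Module):
    # Hypothetical lightweight adapter: a small residual correction on top
    # of a shared layer (the paper's exact adapter design may differ).
    def __init__(self, shared, dim):
        super().__init__()
        self.shared, self.delta = shared, nn.Linear(dim, dim)

    def forward(self, x):
        return self.shared(x) + self.delta(x)

def realize(shared, choice, dim):
    # Build this task's version of one layer from the search decision.
    if choice is LayerChoice.SKIP:
        return nn.Identity()
    if choice is LayerChoice.REUSE:
        return shared
    if choice is LayerChoice.ADAPT:
        return Adapter(shared, dim)
    return nn.Linear(dim, dim)  # LayerChoice.NEW

# Example: a new task that reuses layer 1, adapts layer 2, and replaces layer 3.
old_layers = [nn.Linear(8, 8) for _ in range(3)]
choices = [LayerChoice.REUSE, LayerChoice.ADAPT, LayerChoice.NEW]
task_net = nn.Sequential(*(realize(l, c, 8) for l, c in zip(old_layers, choices)))
print(task_net(torch.randn(2, 8)).shape)  # (2, 8)
```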

This architecture optimization effectively lays out the best topology, or series of layers, needed to accomplish the new task. Once this is complete, the network uses the new topology to train itself on how to accomplish the task, just like any other deep learning AI system.
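
Put together, each new task triggers two phases: a structure search that fixes the topology, then ordinary gradient training of that topology's parameters. A minimal stand-in driver follows; the `search_topology` interface is hypothetical and stubbed here, whereas a real search would score the per-layer options (as in the previous sketch) on the new task's data:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

shared_layers = nn.ModuleList([nn.Linear(8, 8), nn.Linear(8, 8)])

def search_topology(x, y):
    # Phase 1 (stubbed): a real search would evaluate skip/reuse/adapt/new
    # for each layer on the new task's data; here one outcome is hard-coded.
    return ["reuse", "new"]

def build(topology, num_classes):
    # Skip/adapt are omitted for brevity; see the earlier sketch.
    layers = []
    for shared, choice in zip(shared_layers, topology):
        if choice == "reuse":
            layers.append(shared)           # weights shared with older tasks
        else:
            layers.append(nn.Linear(8, 8))  # "new": weights private to this task
        layers.append(nn.ReLU())
    layers.append(nn.Linear(8, num_classes))  # task-specific output head
    return nn.Sequential(*layers)

def learn_task(x, y, num_classes=2):
    topology = search_topology(x, y)            # phase 1: fix the structure
    net = build(topology, num_classes)
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(100):                        # phase 2: ordinary training
        opt.zero_grad()
        F.cross_entropy(net(x), y).backward()
        opt.step()
    return net

x, y = torch.randn(64, 8), torch.randint(0, 2, (64,))
net = learn_task(x, y)
print("trained new task with topology:", search_topology(x, y))
```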

"We've run experiments using several data sets, andwhat we've found is that the more similar a new task is to previous tasks, themore overlap there is in terms of the existing layers that are kept to performthe new task," Li said. "What is more interesting is that, with theoptimized – or "learned" topology – a network trained to perform newtasks forgets very little of what it needed to perform the older tasks, even ifthe older tasks were not similar."

The researchers also ran experiments comparing the Learn to Grow framework's ability to learn new tasks with that of several other continual learning methods, and found that the Learn to Grow framework achieved better accuracy when completing new tasks.

To test how much each network may have forgotten when learning the new task, the researchers then tested each system's accuracy at performing the older tasks, and the Learn to Grow framework again outperformed the other networks.

"In some cases, the Learn to Grow framework actuallygot better at performing the old tasks," said Caiming Xiong, the researchdirector of Salesforce Research and a co-author of the work. "This iscalled backward transfer, and occurs when you find that learning a new taskmakes you better at an old task. We see this in people all the time; not somuch with AI."

"This Army investment extends the current state-of-the-artmachine learning techniques that will guide our Army Research Laboratoryresearchers as they develop robotic applications, such as intelligent maneuverand learning to recognize novel objects," Fields said. "This researchbrings AI a step closer to providing our warfighters with effective unmannedsystems that can be deployed in the field."

The paper, "Learn to Grow: A Continual StructureLearning Framework for Overcoming Catastrophic Forgetting," will bepresented at the 36th International Conference on Machine Learning, being heldJune 9-15 in Long Beach, California. Co-lead authors of the paper are TianfuWu, Ph.D., an assistant professor of electrical and computer engineering at NCState; Xilai Li, a doctoral student at NC State; and Yingbo Zhou of SalesforceResearch. The paper was co-authored by Richard Socher and Caiming Xiong ofSalesforce Research.

The work was also supported by the National Science Foundation. Part of the work was done while Li was a summer intern at Salesforce AI Research.

Source: US Army
