$7.5M MURI to make dynamic AI smarter and safer

Researchers from four U.S. institutions aim to pull the best from control theory and machine learning to build safer mobile, intelligent systems.

A new $7.5M, five-year Multidisciplinary University Research Initiative (MURI) collaboration among researchers at four U.S. universities, led by Northeastern University, aims to take the best from both control theory and machine learning to create dynamic, intelligent systems that are safe, robust, and capable of performing complex tasks.

Of that total, $900K goes to the group of Necmiye Ozay, associate professor of electrical engineering & computer science and co-PI of the project at the University of Michigan. Her group will contribute to efforts to reverse-engineer neural networks, assess when machine learning or control approaches are most appropriate, and develop a new theory of AI that combines control theory and machine learning.

Autonomous vehicles are on some roads and improving all the time. Drones are expected to be used in healthcare, agriculture, weather forecasting, emergency response, the monitoring of systems and wildlife, and much more.

Researchers are in a race to make these autonomous dynamic systems safe and reliable – yet often use different techniques to achieve the desired results. The two leading approaches pull from the complementary areas of control theory and machine learning, with each approach informed by the latest advances in optimization and neuroscience.

With lives on the line, which is going to provide the best artificial intelligence (AI) solution?

The new collaboration, called “Control and Learning Enabled Verifiable Robust AI (CLEVR-AI),” will try to combine both.

“We want a better understanding of where the middle ground is between control and machine learning,” says Ozay. “When we take advantage of the powerful parts of both of these different areas, what can you expect to achieve?”

Control theory relies on physics and the geometric relationship between objects. It works well with dynamic systems that interact with the environment, such as adaptive cruise control in cars and planes. But providing the same reliability to more advanced systems, such as driverless cars and autonomous drones, has been elusive.

Computer science, on the other hand, has produced rather stunning results using machine learning techniques in AI applications. But it is impossible to know how far the same technique, which relies on learning from ever-growing amounts of data, can be pushed before things fall apart.

“I could make a car that learns the environment by bumping into buildings or bumping into people. That’s not acceptable,” said Mario Sznaier, program lead and Dennis Picard Trustee Professor of electrical and computer engineering at Northeastern University. “You need an AI that can guarantee that under no conditions will humans be harmed. That’s the problem we’re trying to solve.”

The reverse engineering of neural networks is an attempt to peer inside the so-called black box in an AI system and identify what’s actually happening. The black box, a name for the often mysterious workings of an AI process between the input and output, is considered robust if changes introduced into the system don’t dramatically impact the results.

Unfortunately, these systems are often not robust: small changes can, for example, turn the identification of a panda bear into a vulture in simple computer vision applications.

This can seem merely silly in the case of searching for images. But when this type of AI is used for mission-critical applications, where an autonomous car might misidentify a moving pedestrian or a red light, mistakes become deadly.

With other members of the team, Ozay will also attempt to provide an understanding of when machine learning methods are effective for dynamical systems, and when they should be replaced with control methods.

“Control tells us what type of neural network architectures would work for a given task and what types cannot work,” says Ozay.

For example, automated parallel parking with partial sensing requires memory – something not found in feedforward neural networks. Before determining whether to move forward or in reverse while parallel parking, it’s essential to know where you are in the process. Control offers this memory.

These studies will feed into a new and improved theory for artificial intelligence in dynamic systems, created by combining the most successful methods. “We want to use control to make learning and AI more clever,” Ozay said, “but we also want to use data to make control and formal methods better.”

Test cases will be carried out at state-of-the-art testing facilities, including the University of Michigan’s 10,000 sq. ft., four-story outdoor M-Air drone lab and the 32-acre Mcity facility, built for safe testing of the broad range of complexities automated and connected vehicles encounter in urban and suburban environments.

In addition to researchers from Northeastern University and the University of Michigan, faculty at Johns Hopkins University and the University of California, Berkeley are collaborating on the project. The team of eight people includes biologists who study electric fish, such as eels, that use sensors to gauge their immediate surroundings.

Sponsored by the Office of Naval Research, the project is one of 26 MURIs funded by the Department of Defense in 2021. The MURI program is designed to accelerate research progress by bringing together a critical mass of individuals with complementary areas of expertise.

“We’ve been a very coherent group from the very beginning,” said Ozay, “and we find new connections in each meeting. This is an exciting project with important implications for the future of AI.”

_______________________________________________________

Additional Information

AI can’t land a plane on the Hudson River in an emergency or safely drive a car. A team is using drones to try to fix that (Northeastern University)

Necmiye Ozay – website / news