Liquid neural network

Is a: Technology

Technology attributes

Created/Discovered by: MIT Computer Science and Artificial Intelligence Laboratory
Related Industries: Generative AI, Natural language processing (NLP), Machine learning, Artificial neural network, Robotics, Deep learning, Autonomous vehicle
Related Technology: Deep learning
Date Invented: 2018
Overview

A liquid neural network (LNN) is a time-continuous recurrent neural network built with a dynamic architecture of neurons. LNNs are able to process time-series data, make predictions based on observations, and continuously adapt to new inputs, learning even after the training phase. LNNs are designed to overcome some of the inherent challenges of traditional deep learning architectures, offering a more compact, adaptable, and efficient solution to certain artificial intelligence (AI) problems. Examples include edge devices, such as robots and self-driving cars, that lack the computation or storage to run large AI models.
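Concretely, "time-continuous" means each neuron's hidden state evolves according to an ordinary differential equation rather than a discrete update rule. The following is a simplified sketch of the liquid time-constant (LTC) dynamics described in the papers cited below, with notation condensed; the 2020 paper gives the exact formulation:

\frac{dx(t)}{dt} = -\left[ \frac{1}{\tau} + f\big(x(t), I(t), \theta\big) \right] x(t) + f\big(x(t), I(t), \theta\big)\, A

Here x(t) is the hidden state, I(t) the input, \tau a base time constant, A a bias vector, and f a bounded nonlinearity (for example, a tanh applied to a learned linear map of x and I). Because f also appears in the decay term, the effective time constant \tau / (1 + \tau f) shifts with the input, which is the sense in which the network is "liquid."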

This novel deep learning architecture was developed by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). The concept was inspired by the microscopic nematode Caenorhabditis elegans, a worm that can respond dynamically to its environment with only 302 neurons in its nervous system. LNNs were first introduced in a November 2018 research paper titled "Liquid Time-constant Recurrent Neural Networks as Universal Approximators," written by Ramin M. Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, and Radu Grosu. Lead author Hasani is Principal AI and Machine Learning Scientist at the Vanguard Group and a Research Affiliate at MIT CSAIL.

The new deep learning architecture became better known after a 2020 paper titled "Liquid Time-constant Networks" from the same authors and the subsequent presentation of their work to wider audiences through a series of lectures. A test using LNNs for autonomous vehicle navigation was presented in an October 2020 paper in Nature Machine Intelligence titled "Neural circuit policies enabling auditable autonomy." The test used onboard cameras to record human driving, passing the footage and the corresponding steering-wheel angles to a training platform that taught an LNN to map video of the road to steering commands. The LNN then used this learned mapping to steer the vehicle autonomously. In April 2023, MIT researchers demonstrated the use of LNNs to help aerial drones navigate to a given object while responding correctly to complex environments (e.g., forest and urban landscapes).

A key difference between LNNs and more traditional neural networks is the use of dynamic connections between neurons rather than fixed connections and weights. These flexible connections allow LNNs to continuously adapt and learn from new data inputs rather than being fixed by their training data. This makes LNNs superior at processing time-series data but less effective than other neural networks at processing static data. The dynamic architecture also requires fewer neurons overall, consuming less computing power and allowing LNNs to run on lightweight hardware, such as microcontrollers. LNNs are also more interpretable than larger, more complex black-box neural networks, as it is easier to see how data inputs influence outputs.
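To make the input-dependent dynamics concrete, below is a minimal, illustrative sketch of a single LTC-style cell in Python with NumPy. It uses the fused (semi-implicit) Euler step proposed in the 2020 LTC paper to integrate the equation above; the parameter shapes, random initialization, and tanh nonlinearity are simplifying assumptions for illustration, not the authors' released implementation.

import numpy as np

class LTCCell:
    """Illustrative liquid time-constant (LTC) cell (a sketch, not the authors' code).

    Integrates dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A with a
    fused semi-implicit Euler step, so the effective time constant
    tau / (1 + tau * f) varies with the input.
    """

    def __init__(self, n_inputs, n_neurons, dt=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.dt = dt
        self.tau = np.ones(n_neurons)                        # base time constants
        self.A = rng.normal(size=n_neurons)                  # bias vector A
        self.W_in = 0.1 * rng.normal(size=(n_inputs, n_neurons))
        self.W_rec = 0.1 * rng.normal(size=(n_neurons, n_neurons))
        self.b = np.zeros(n_neurons)

    def step(self, x, inputs):
        # Bounded nonlinearity shared by the decay and drive terms.
        f = np.tanh(inputs @ self.W_in + x @ self.W_rec + self.b)
        # Fused semi-implicit Euler update; stable even for stiff dynamics.
        return (x + self.dt * f * self.A) / (1.0 + self.dt * (1.0 / self.tau + f))

# Roll the cell over a toy three-channel time series.
cell = LTCCell(n_inputs=3, n_neurons=8)
x = np.zeros(8)
for t in range(100):
    u = np.sin(0.1 * t + np.arange(3))  # synthetic inputs
    x = cell.step(x, u)
print(x)

Because the denominator grows when f is large, a strongly driven neuron responds on a faster time scale while a weakly driven one decays slowly; this per-step adaptation is what lets a comparatively small network keep tracking shifting time-series behavior after training.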


Further Resources

  • Daniel Ackerman, "'Liquid' machine-learning system adapts to changing conditions," MIT News Office, January 28, 2021. https://news.mit.edu/2021/machine-learning-adapts-0128?ref=blog.roboflow.com
  • Ramin Hasani, "Liquid Neural Networks," TEDxMIT, January 19, 2023. https://www.youtube.com/watch?v=RI35E5ewBuI
  • Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu, "Liquid Time-constant Networks," arXiv, June 8, 2020. https://arxiv.org/abs/2006.04439
  • Ramin M. Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu, "Liquid Time-constant Recurrent Neural Networks as Universal Approximators," arXiv, November 1, 2018. https://arxiv.org/abs/1811.00321
