Technology attributes
A liquid neural network (LNN) is a time-continuous recurrent neural network built with a dynamic architecture of neurons. LNNs can process time-series data, make predictions based on observations, and continuously adapt to new inputs, learning even after the training phase. LNNs are designed to overcome some of the inherent challenges of traditional deep learning architectures, offering a more compact, adaptable, and efficient solution to certain artificial intelligence (AI) problems. Examples include edge devices, such as robots and self-driving cars, that do not have the computation or storage to run large AI models.
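At its core, the hidden state of an LNN cell evolves according to an ordinary differential equation whose effective time constant depends on the current input. The following is a minimal NumPy sketch of one forward-Euler update of a simplified liquid time-constant (LTC) style cell, unrolled over a toy time series; the sigmoid gate, the weight names (W_in, W_rec), and the fixed step size dt are illustrative simplifications rather than the exact formulation from the original papers.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.01):
        # The gate f depends on both the current input u and the state x,
        # so the effective time constant (1/tau + f) is input-dependent ("liquid").
        f = sigmoid(W_in @ u + W_rec @ x + b)
        dxdt = -(1.0 / tau + f) * x + f * A
        return x + dt * dxdt

    # Unroll the ODE step over a toy time series (illustrative values only).
    rng = np.random.default_rng(0)
    n_in, n_hidden = 3, 8
    W_in = rng.normal(size=(n_hidden, n_in))
    W_rec = 0.1 * rng.normal(size=(n_hidden, n_hidden))
    b, tau, A = np.zeros(n_hidden), np.ones(n_hidden), np.ones(n_hidden)

    x = np.zeros(n_hidden)
    for t in np.linspace(0.0, 1.0, 100):
        u = np.array([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t])
        x = ltc_step(x, u, W_in, W_rec, b, tau, A)
    print(np.round(x, 3))  # final hidden state after the sequence

In a full model, the gate would be a learned neural network and the differential equation would typically be solved with a fused or adaptive solver rather than plain Euler integration as in this sketch.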
This novel deep learning architecture was developed by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). The concept was inspired by the microscopic nematode Caenorhabditis elegans, a worm that responds dynamically to its environment with only 302 neurons in its nervous system. LNNs were first introduced in a November 2018 research paper titled "Liquid Time-constant Recurrent Neural Networks as Universal Approximators," written by Ramin M. Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, and Radu Grosu. Lead author Hasani is Principal AI and Machine Learning Scientist at the Vanguard Group and a Research Affiliate at MIT CSAIL.
The new deep learning architecture became better known after a 2020 paper titled "Liquid Time-constant Networks" from the same authors and the subsequent presentation of their work to wider audiences through a series of lectures. A test using LNNs for autonomous vehicle navigation was presented in an October 2020 paper in Nature Machine Intelligence titled "Neural circuit policies enabling auditable autonomy." The test used onboard cameras to record footage of the car driving while logging how human drivers held the steering wheel, passing the data to a training platform that taught an LNN to map the driving footage to steering wheel angle. From this data, the LNN learned to steer the vehicle autonomously. In April 2023, MIT researchers demonstrated the use of LNNs to help teach aerial drones to navigate to a given object while responding correctly to complex environments (e.g., forest and urban landscapes).
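As a rough illustration of the kind of supervised setup described above, the sketch below trains a small continuous-time recurrent model to map a sequence of camera-frame features to a steering angle. It is a hypothetical PyTorch example with random stand-in data; the feature dimensions, network sizes, and the simplified LTC-style cell are assumptions for illustration, not the architecture used in the Nature Machine Intelligence study.

    import torch
    import torch.nn as nn

    class LTCCell(nn.Module):
        """Simplified liquid time-constant style cell (illustrative only)."""
        def __init__(self, n_in, n_hidden, dt=0.05):
            super().__init__()
            self.gate = nn.Linear(n_in + n_hidden, n_hidden)
            self.tau = nn.Parameter(torch.ones(n_hidden))
            self.A = nn.Parameter(torch.ones(n_hidden))
            self.dt = dt

        def forward(self, u, x):
            f = torch.sigmoid(self.gate(torch.cat([u, x], dim=-1)))
            dxdt = -(1.0 / self.tau + f) * x + f * self.A
            return x + self.dt * dxdt

    class SteeringModel(nn.Module):
        """Maps a sequence of per-frame features to a single steering angle."""
        def __init__(self, n_features, n_hidden=16):
            super().__init__()
            self.cell = LTCCell(n_features, n_hidden)
            self.readout = nn.Linear(n_hidden, 1)

        def forward(self, frames):                      # frames: (batch, time, features)
            x = frames.new_zeros(frames.size(0), self.cell.tau.numel())
            for t in range(frames.size(1)):
                x = self.cell(frames[:, t], x)
            return self.readout(x).squeeze(-1)          # predicted steering angle

    # Hypothetical stand-in data: pre-extracted frame features and recorded angles.
    frames = torch.randn(32, 16, 64)   # 32 clips, 16 frames each, 64 features per frame
    steering = torch.randn(32)         # human steering angle per clip

    model = SteeringModel(n_features=64)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(frames), steering)
        loss.backward()
        opt.step()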
A key difference between LNNs and more traditional neural networks is the use of dynamic connections between neurons rather than fixed connections and weights. These flexible connections allow LNNs to continuously adapt and learn from new data inputs rather than remaining fixed by the training data. This makes LNNs superior at processing time-series data but less effective than other neural networks at processing static or fixed data. The dynamic architecture also requires fewer neurons overall, consuming less computing power and allowing LNNs to run on lightweight hardware such as microcontrollers. LNNs are also more interpretable than larger, more complex black-box neural networks, because it is easier to see how inputs influence outputs.
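To make the idea of dynamic connections concrete, the short NumPy sketch below computes the effective time constant of each neuron in a simplified LTC-style cell under two different inputs; all weights and sizes are made-up illustrative values. The per-neuron quantity 1/(1/tau + f) can be read off directly, which is part of what makes the behavior of a small LNN easier to inspect than that of a large black-box network.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Tiny illustrative cell: 4 neurons reading a 2-dimensional input (made-up weights).
    rng = np.random.default_rng(1)
    W_in = rng.normal(size=(4, 2))
    W_rec = 0.1 * rng.normal(size=(4, 4))
    b, tau = np.zeros(4), np.ones(4)
    x = np.zeros(4)

    def effective_time_constant(u):
        # tau_eff = 1 / (1/tau + f): not a fixed weight but a quantity that
        # shifts with the current input u and the state x.
        f = sigmoid(W_in @ u + W_rec @ x + b)
        return 1.0 / (1.0 / tau + f)

    print(np.round(effective_time_constant(np.zeros(2)), 3))            # quiet input
    print(np.round(effective_time_constant(np.array([3.0, -3.0])), 3))  # strong input

The same neurons respond on different time scales depending on what they are currently observing, which is the adaptive behavior the "liquid" name refers to.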