Reservoir computing

Reservoir computing is a framework for computation derived from recurrent neural network theory that maps input signals into computational spaces through a fixed, nonlinear system called a reservoir.


Reservoir computing is an approach to recurrent neural network design and training that maps input signals into higher-dimensional computational spaces through a fixed, nonlinear system called a reservoir. The reservoir is treated as a black box from which a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. Reservoir computing is well suited to processing temporal or sequential data. This setup features two key elements: a dynamical system that can respond to inputs (the reservoir) and a readout layer that is used to analyze the state of the system.

...
[Image: Model of a reservoir neural network.]

Reservoir computing differs from traditional recurrent neural network (RNN) learning techniques by making a conceptual and computational separation between the reservoir and the readout. In contrast to traditional supervised learning, where the error between the desired output and the computed output influences the weights of the entire network, training in reservoir computing adjusts only the weights of the readout layer; the input and reservoir weights are set at the start of learning and do not change.

The readout is a neural network layer that performs a linear transformation on the output of the reservoir. The weights of the readout layer are trained by analyzing the spatiotemporal patterns of the reservoir after excitation by known inputs, using training methods such as linear regression or ridge regression. Because the readout implementation depends on the reservoir patterns, the details of readout methods are tailored to a specific reservoir.
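Because only the readout is trained, the whole learning problem reduces to a linear regression on recorded reservoir states. The following is a minimal sketch, not from the source: the state matrix here is a random stand-in, and all names and sizes are illustrative.

```python
import numpy as np

# Hypothetical setup: T time steps of an N-unit reservoir, already collected.
T, N = 500, 100
rng = np.random.default_rng(0)
X = rng.standard_normal((T, N))   # stand-in for recorded reservoir states
y = rng.standard_normal(T)        # desired output at each time step

# Ridge-regression readout: w_out = (X^T X + lambda*I)^{-1} X^T y
lam = 1e-6
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)

y_hat = X @ w_out                 # the readout is just a linear map on states
print("training MSE:", np.mean((y - y_hat) ** 2))
```

Only `w_out` is ever learned; the reservoir that produced `X` is never touched.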

...

Nonlinearity refers to the fact that each unit's response to an input is nonlinear, and this, together with the time taken to respond, allows reservoir computers to solve complex problems. The reservoir stores information by connecting units in recurrent loops, so that a previous input affects the next response; this dependence of the response on the past is what allows the computer to be trained to complete specific tasks.
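The memory carried by the recurrent loops can be illustrated by driving the same small reservoir with two input histories that end identically but differ earlier: the final states differ, showing that past inputs are retained. A minimal sketch (all parameters illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = rng.standard_normal((n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale for stable, fading memory
w_in = rng.standard_normal(n)

def run(inputs):
    x = np.zeros(n)
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)        # nonlinear response of each unit
    return x

a = run([1.0, 0.0, 0.0, 0.5])                # two sequences with the same last input
b = run([-1.0, 0.0, 0.0, 0.5])
print(np.linalg.norm(a - b))                 # nonzero: earlier inputs still shape the state
```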

...

Reservoirs can be virtual or physical. Virtual reservoirs are randomly generated and designed like neural networks; they can further be designed to have nonlinearity and recurrent loops, but, unlike neural networks, the connections between units are randomized and remain unchanged during computation.

Physical reservoirs are possible because of the nonlinearity of certain natural systems. For example, the interaction between ripples on the surface of water contains the nonlinear dynamics required for reservoir creation, and a pattern recognition reservoir computer was developed by inputting ripples with electric motors and analyzing the ripples in the readout. The framework of exploiting physical systems as information-processing devices is especially suited for edge computing devices, in which information is processed at the edge in a decentralized manner to reduce adaptation delays caused by data transmission overhead.

...

An early example of reservoir computing, the context reverberation network architecture has an input layer that feeds into a high-dimensional dynamical system, which is read out by a trainable single-layer perceptron. In this network, two kinds of dynamical systems were described:

  1. A continuous reaction-diffusion system, inspired by Alan Turing's model of morphogenesis
...

Echo state networks (ESNs) provide an architecture and a supervised learning principle for recurrent neural networks. The echo state principle is to drive a random, large, fixed recurrent neural network with the input signal, thereby inducing a nonlinear response signal in each neuron of this 'reservoir' network, and then to obtain a desired output signal as a trainable linear combination of all of these response signals.
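Putting the pieces together, here is a toy ESN in the spirit just described: drive a fixed random reservoir with an input signal, record each neuron's nonlinear response, and fit a linear combination of the responses to a desired output. The sizes, the spectral-radius scaling, and the delay-recall target task are all illustrative assumptions, not details from the source.

```python
import numpy as np

rng = np.random.default_rng(42)
n, T, delay = 200, 2000, 3
W = rng.standard_normal((n, n))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))   # fixed, random recurrent weights (the reservoir)
w_in = rng.uniform(-0.5, 0.5, n)

u = rng.uniform(-1, 1, T)                    # input signal
y = np.roll(u, delay)                        # illustrative target: recall the input 3 steps back

x = np.zeros(n)
states = np.zeros((T, n))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])         # nonlinear response of every neuron
    states[t] = x

# Train only the linear combination of response signals, discarding a warm-up
# period (which also covers np.roll's wraparound at the start).
warm = 100
S, Y = states[warm:], y[warm:]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n), S.T @ Y)
print("MSE:", np.mean((S @ w_out - Y) ** 2))
```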

...

A liquid state machine (LSM) is a reservoir computer that uses a spiking neural network for computation. The name comes from an analogy to a stone dropped into a body of water or other liquid: the falling stone generates ripples, so the input has been converted into a pattern of liquid displacement. An LSM consists of a large collection of units (called nodes or neurons) that receive time-varying input from external sources and from other nodes, all of which are randomly connected to each other. The recurrent nature of the connections turns the time-varying input into a spatiotemporal pattern of activations in the network's nodes, which in turn are read out by linear discriminant units. The result is the computation of nonlinear functions on the input; given a large enough variety of such nonlinear functions, linear combinations of them can perform mathematical operations and achieve tasks such as speech recognition and computer vision.
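A heavily simplified, non-biological sketch of this idea (leaky integrate-and-fire units with sparse random connections; all constants are illustrative assumptions): a time-varying input drives the randomly connected spiking units, and the resulting spatiotemporal spike pattern is what a linear readout would consume.

```python
import numpy as np

rng = np.random.default_rng(7)
n, T = 100, 300
W = rng.normal(0, 0.4, (n, n)) * (rng.random((n, n)) < 0.1)  # sparse random recurrent weights
w_in = rng.normal(0, 1.0, n)
tau, v_th = 20.0, 1.0

v = np.zeros(n)                               # membrane potentials
spikes = np.zeros(n)
pattern = np.zeros((T, n))
for t in range(T):
    u = np.sin(0.1 * t)                       # time-varying external input
    v += -v / tau + w_in * u + W @ spikes     # leaky integration of input + recurrent spikes
    spikes = (v >= v_th).astype(float)        # units above threshold fire
    v[spikes > 0] = 0.0                       # reset units that fired
    pattern[t] = spikes                       # spatiotemporal pattern for a linear readout
```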

...

Quantum reservoir computing utilizes the nonlinear nature of quantum mechanical interactions or processes to form a reservoir. It may also be accomplished with linear reservoirs when the injection of the input into the reservoir creates the nonlinearity. In addition, the combination of quantum devices with machine learning could lead to the development of quantum neuromorphic computing.

...

Reservoir computing has several advantages over classical, fully trained recurrent neural networks. The reservoir computing paradigm has facilitated the practical application of recurrent neural networks, and reservoir-trained recurrent neural networks have outperformed classically trained recurrent neural networks in many tasks. The advantages of reservoir computing over other forms of recurrent neural network training include:

  • Training is performed only at the readout stage, which simplifies the overall training process.
  • The computational power of the reservoir comes from naturally available systems, either classical or quantum mechanical, which can be exploited to reduce computational cost.
...

Another advantage of reservoir computing is the ease of multi-tasking or sequential learning. In the backpropagation-through-time (BPTT) approach, an entire network is first optimized for one task and then additionally trained for other tasks, which can cause interference during the update of weights within the same network; there is also a danger that the network forgets previously learned tasks. In the reservoir computing framework, because training occurs only in the readout part, no interference occurs among tasks, and multi-tasking can be safely implemented.
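Because the reservoir is never modified, adding a second task just means fitting a second readout on the same recorded states; nothing learned for the first task is touched. A sketch under the same toy-ESN assumptions as above (tasks and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 100, 1000
W = rng.standard_normal((n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # fixed shared reservoir
w_in = rng.standard_normal(n)
u = rng.uniform(-1, 1, T)

x, states = np.zeros(n), np.zeros((T, n))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

def fit_readout(S, y, lam=1e-6):
    # each task gets its own independent ridge-regression readout
    return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ y)

w_task1 = fit_readout(states, np.roll(u, 1))  # task 1: recall the previous input
w_task2 = fit_readout(states, u ** 2)         # task 2: square the current input
# Training w_task2 cannot interfere with w_task1: there are no shared trainable weights.
```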

...

Given their relatively low computational complexity and ease of use, these systems are considered well suited for forecasting dynamical systems (a minimal forecasting sketch follows the list below). This could include training recurrent neural networks for:

...
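For forecasting, the readout is trained to map the reservoir state at time t to the signal value at time t+1; after training, the prediction can also be fed back in to run the reservoir as a generator. A minimal one-step-ahead sketch (the toy quasi-periodic signal and all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)
n, T = 200, 3000
W = rng.standard_normal((n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(n)

# Toy signal to forecast; one extra sample so every state has a "next value" target.
u = np.sin(0.3 * np.arange(T + 1)) * np.sin(0.05 * np.arange(T + 1))

x, states = np.zeros(n), np.zeros((T, n))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Readout maps state(t) -> u(t+1): one-step-ahead forecasting.
warm = 200
S, Y = states[warm:], u[warm + 1:T + 1]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n), S.T @ Y)
print("one-step forecast MSE:", np.mean((S @ w_out - Y) ** 2))
```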

The reservoir computing framework is used to test hardware systems for neuromorphic computing. One preferred task for benchmarking devices is speech recognition. This requires acoustic transformations from sound waveforms with varying amplitudes to frequency domain maps. The use of speech recognition has been shown to be an appropriate benchmark for different hardware, as the nonlinearity in acoustic transformation plays a critical role in the speech recognition success rate.

...

Photonic integrated circuits have been further proposed for performing prediction and classification tasks, with the use of integrated circuits being the main challenge for the miniaturization of photonic reservoir computing. Reservoir computing with a photonic integrated circuit has been demonstrated using a semiconductor laser and a short external cavity. A method for increasing the number of virtual nodes was also proposed, in which delayed feedback with short node intervals and outputs from multiple delay times is used. A photonic integrated circuit of this construction using optical feedback has been shown to outperform a similar photonic integrated circuit without optical feedback, specifically in prediction tasks.

...

In February 2021, a study by Nakajima et al. examined the possibility of an on-chip photonic implementation of a simplified recurrent neural network. The study used an integrated coherent linear photonic processor and, in contrast to previous approaches, encoded the input and recurrent weights in the spatiotemporal domain using photonic linear processing. This could enable computing beyond the input electrical bandwidth of traditional computing systems. The device was also capable of simultaneously processing multiple wavelength inputs over the telecom C-band. The tests showed good performance for chaotic time-series forecasting and image classification, and the study confirmed the potential of photonic neuromorphic processing towards peta-scale neuromorphic supercomputing on a photonic chip.

...

In machine learning, feed-forward structures, such as artificial neural networks, graphical Bayesian models, and kernel methods, have been studied for the processing of non-temporal problems. These methods are well understood due to their non-dynamic nature, and the feed-forward network is a fundamental building block of neural networks. However, part of the appeal of neural networks is the possibility of paralleling the human brain, whose network architecture is not feed-forward; this understanding led to recurrent neural networks. In 2001, amid difficulties in developing recurrent neural networks, a new approach to their design and training was proposed independently by Wolfgang Maass and Herbert Jaeger. These respective approaches were called Liquid State Machines and Echo State Networks.

...

The Liquid State Machine (LSM), proposed by Wolfgang Maass, was originally presented as a framework for performing real-time computation on temporal signals. Most descriptions use an abstract cortical microcolumn model, in which a 3D structured, locally connected network of spiking neurons is created using biologically inspired parameters and excited by external input spikes. The responses of all neurons are projected to the next cortical layer, where the training is performed. The readout is usually modeled as a simple linear regression function, although the LSM description supports more advanced readout layers such as parallel perceptrons. Because their biologically inspired parameters leave LSMs slow and computationally intensive, these systems have not been commonly used for engineering purposes.

...

The Echo State Network (ESN), developed around the same time as the Liquid State Machine, was introduced by Herbert Jaeger. The ESN consists of a random, recurrent network of analog neurons driven by a one-dimensional or multi-dimensional time signal, and the activations of the neurons are used for linear classification and regression tasks. The ESN was introduced as a better way to exploit the computational power of recurrent neural networks without needing to train the internal weights. In this view, the reservoir works as a complex nonlinear dynamic filter that transforms input signals through a temporal map. It is possible to solve several classification tasks on an input signal with an ESN by adding multiple readouts to a single reservoir. Because ESNs are motivated more by machine learning theory, they often use sigmoid neurons rather than the biologically inspired spiking models of LSMs.

...

Proposed by Schiller and Steil, the Backpropagation-Decorrelation algorithm was a possible new RNN training method that also treated the reservoir and readout layer separately, offering fast convergence and good practical results. The proposal also provided a conceptual bridge between traditional BPTT and the reservoir computing framework.

...

Due to the different underlying theories of the LSM and ESN models, the literature concerning them was spread across different domains, and the two communities rarely interacted, if at all. Once they did, it was proposed to combine the ideas into a common research stream, which was named reservoir computing. These methods, along with Backpropagation-Decorrelation, are now all considered reservoir computing.

...

The concept of reservoir computing used the recursive connections within neural networks to create a complex dynamical system, as a generalization of LSMs and ESNs. Recurrent neural networks had previously been found useful for language processing and dynamic system modeling, but training such networks was challenging and computationally expensive. Reservoir computing reduced these training-related challenges through its use of a dynamic reservoir and the need to train only the output. Reservoir computing was also shown to be able to use a variety of nonlinear dynamical systems as a reservoir to perform computations. The increased interest in reservoir computing has also led to research into the use of photonics and lasers for computation, in order to increase efficiency compared with electrical components.

...

Reservoir computing has since also been extended to physical, or natural, ways of building computing devices. For example, one experiment projected inputs into a bucket of water and recorded the resulting waves in order to train a pattern recognizer, and another used an E. coli bacterial colony in which chemical stimuli served as input and protein measurements as output. Both experiments showed that reservoir computing may be suitable for combining computational power with unconventional hardware materials.

Text is available under the Creative Commons Attribution-ShareAlike 4.0 license; additional terms apply.