An autoencoder is a neural network that learns through back-propagation in an unsupervised manner. The autoencoder tries to learn an approximation to the identity function; by placing constraints on the network, it is forced to capture the structure of the data.
Another way to describe an autoencoder is as an Unsupervised Pretrained Network, or UPN. Each autoencoder is composed of three layers: an input layer, a hidden (encoding) layer, and a decoding layer. This makes them similar to principal component analysis (PCA) in several respects: both are unsupervised ML algorithms, a linear autoencoder minimizes the same objective function as PCA, and an autoencoder is a neural network whose target output is the same as its input.
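The structure above can be sketched concretely. Below is a minimal single-hidden-layer autoencoder trained with plain gradient descent in NumPy; all sizes, the toy data, and the hyperparameters are illustrative assumptions, not a production recipe. Note how the target output in the loss is simply the input itself.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(200, 8))            # toy data: 200 samples, 8 features
n_hidden = 3                             # bottleneck narrower than the input

# Encoder and decoder weights (biases omitted for brevity)
W_enc = rng.normal(scale=0.1, size=(8, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 8))

def forward(X):
    H = np.tanh(X @ W_enc)               # hidden (encoding) layer
    X_hat = H @ W_dec                    # decoding layer reconstructs the input
    return H, X_hat

_, X_hat0 = forward(X)
mse_init = np.mean((X_hat0 - X) ** 2)    # reconstruction error before training

lr = 0.01
for _ in range(500):
    H, X_hat = forward(X)
    err = X_hat - X                      # target output equals the input
    # Back-propagate the reconstruction error through both layers
    grad_dec = H.T @ err / len(X)
    grad_H = err @ W_dec.T * (1 - H**2)  # derivative of tanh
    grad_enc = X.T @ grad_H / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

_, X_hat = forward(X)
mse = np.mean((X_hat - X) ** 2)          # reconstruction error after training
```

Because the bottleneck has fewer units than the input, the network cannot simply copy its input; it must learn a compressed representation, which is what makes the comparison with PCA apt.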
There are several types of autoencoders, such as denoising autoencoders, sparse autoencoders, variational autoencoders (VAEs), and contractive autoencoders (CAEs).