Variational autoencoder

Type of neural network that reconstructs its output from its input and consists of an encoder and a decoder

The variational autoencoder (VAE) is a type of generative model first introduced in 2013 by Diederik Kingma and Max Welling.



The variational autoencoder (VAE) is one approach to unsupervised learning of complicated distributions. Use cases for a VAE include compressing data, reconstructing noisy or corrupted data, interpolating between real data points, and discovering new concepts and connections from large amounts of unlabelled data. VAEs are built on top of neural networks (standard function approximators).
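
Formally (using the conventional notation from the VAE literature; these symbols are an addition here, not taken from elsewhere in this article), a VAE treats each data point x as generated from an unobserved latent variable z:

p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, dz, \qquad p(z) = \mathcal{N}(0, I)

The decoder network parameterizes p_\theta(x \mid z), while the encoder learns an approximation q_\phi(z \mid x) to the intractable posterior; training maximizes a lower bound (the ELBO) on the log-likelihood.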



VAEs can be trained with stochastic gradient descent. They consist of an encoder and a decoder, which encode and decode the data. When the encoder and decoder input and output the same data, the process works as follows: the encoder can be represented as a standard neural network function passed through an activation function, mapping the original data to a latent space; the decoder then maps the latent representation at the bottleneck back to the output (which is the same as the input). The same process applies when the output differs from the input, except that the decoder uses different weights, biases, and possibly different activation functions. Training a network to reconstruct its own input in this way is known as self-supervised learning.
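
As a concrete illustration of the encoder-decoder structure just described, the following is a minimal sketch in Python using PyTorch. The layer widths, the 784-dimensional input (a flattened 28x28 image), and the 20-dimensional latent space are illustrative assumptions, not details from this article:

import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE: an encoder mapping inputs to a latent space and a
    decoder mapping latent codes back to reconstructions."""

    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = torch.relu(self.enc(x))
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps, keeping sampling differentiable.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = torch.relu(self.dec1(z))
        return torch.sigmoid(self.dec2(h))  # outputs scaled to [0, 1]

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

The reparameterization step is what lets the whole pipeline, sampling included, be trained end to end with stochastic gradient descent, as noted above.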



When a variational autoencoder is used to change a photo of a female face into a male one, the VAE can draw random samples from the latent space over which it learned its data-generating distribution. Those samples are passed to the decoder network, which generates unique images with characteristics of both the input (the female face) and the output (the male face, or faces the network was trained on).
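
A hedged sketch of how such sampling and blending might look in code, reusing the hypothetical VAE class above (x_a and x_b are stand-ins for two flattened, [0, 1]-scaled face images; random tensors here, for illustration):

import torch

vae = VAE()  # hypothetical instance from the sketch above (untrained here)
x_a, x_b = torch.rand(1, 784), torch.rand(1, 784)

# Draw random codes from the prior and decode them into new images.
z = torch.randn(16, 20)
samples = vae.decode(z)

# Interpolate between the latent codes of two inputs to blend their traits.
mu_a, _ = vae.encode(x_a)
mu_b, _ = vae.encode(x_b)
blends = [vae.decode((1 - t) * mu_a + t * mu_b)
          for t in (0.0, 0.25, 0.5, 0.75, 1.0)]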



VAEs have shown results in generating many kinds of complicated data, including handwritten digits, faces, house numbers, images, physical models of scenes, segmentations, and predictions of the future from static images. What allows VAEs to generate such data is that training avoids over-fitting while ensuring the latent space has good properties suited to the generative process.
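
Those good latent-space properties come from the training objective, which balances reconstruction accuracy against a Kullback-Leibler penalty keeping the approximate posterior close to the prior. A minimal sketch of that objective, matching the hypothetical VAE class above (the binary cross-entropy term assumes inputs scaled to [0, 1]):

import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: how faithfully the decoder reproduces the input.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # KL divergence between the approximate posterior N(mu, sigma^2) and the
    # standard normal prior; this regularizer is what keeps the latent space
    # well behaved and discourages over-fitting.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl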

People

Name               Role
Diederik Kingma    Co-creator
Max Welling        Co-creator

Further reading

Auto-Encoding Variational Bayes. Diederik P Kingma, Max Welling. Academic paper, December 20, 2013.

Generating Thematic Chinese Poetry using Conditional Variational Autoencoders with Hybrid Decoders. Xiaopeng Yang, Xiaowen Lin, Shunda Suo, Ming Li. Academic paper.

GLSR-VAE: Geodesic Latent Space Regularization for Variational AutoEncoder Architectures. Gaëtan Hadjeres, Frank Nielsen, François Pachet. Academic paper.

InfoVAE: Information Maximizing Variational Autoencoders. Shengjia Zhao, Jiaming Song, Stefano Ermon. Academic paper.

Isolating Sources of Disentanglement in Variational Autoencoders. Tian Qi Chen, Xuechen Li, Roger Grosse, David Duvenaud. Academic paper.

Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders. Tiancheng Zhao, Ran Zhao, Maxine Eskenazi. Academic paper.

Tutorial on Variational Autoencoders. Carl Doersch. Academic paper.

TVAE: Triplet-Based Variational Autoencoder using Metric Learning. Haque Ishfaq, Assaf Hoogi, Daniel Rubin. Academic paper.
