Variational autoencoder

A type of neural network that reconstructs its output from its input and consists of an encoder and a decoder


The variational autoencoder is a type of generative model first introduced in 2013 by Diederik Kingma and Max Welling.

The variational autoencoder (VAE) is one approach to unsupervised learning of complicated distributions. Use cases for VAEs include compressing data, reconstructing noisy or corrupted data, interpolating between real data points, and discovering new concepts and connections in large amounts of unlabelled data. VAEs are built on top of neural networks (standard function approximators).

They can be trained with stochastic gradient descent. A VAE consists of an encoder and a decoder, which encode and decode the data. When the network is trained to reproduce its input at its output, the process works as follows: the encoder, a standard neural network layer passed through an activation function, maps the original data to a lower-dimensional latent space; the decoder then maps the latent representation at the bottleneck back to the output (which is the same as the input). The same process applies when the output differs from the input, except that the decoder uses different weights, biases, and possibly different activation functions. Training a network to reproduce its own input in this way is a form of self-supervised learning.
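As a concrete illustration of this encoder/decoder structure, the sketch below is a minimal PyTorch example with assumed layer sizes and a single hidden layer per network (not the exact architecture used by Kingma and Welling). The encoder maps the input to the parameters of a Gaussian over the latent space, and the decoder maps a latent vector back to the data space.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE sketch: encoder -> latent Gaussian -> decoder."""

    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: maps the input to the mean and log-variance of the latent Gaussian
        self.enc_hidden = nn.Linear(input_dim, hidden_dim)
        self.enc_mu = nn.Linear(hidden_dim, latent_dim)
        self.enc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: maps a latent vector back to the data space
        self.dec_hidden = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = torch.relu(self.enc_hidden(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients can flow through mu and logvar
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = torch.relu(self.dec_hidden(z))
        return torch.sigmoid(self.dec_out(h))  # reconstruction scaled to [0, 1]

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar
```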

When a variational autoencoder is used to change a photo of a female face into a male face, the VAE draws random samples from the latent space over which it has learned its data-generating distribution. These samples are passed to the decoder network, which generates new images that share characteristics of both the input (the female face) and the target (the male faces the network was trained on).
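A minimal sketch of this sampling step, assuming the hypothetical VAE class from the example above has already been trained on face images: random latent vectors are drawn from the prior and passed through the decoder to produce new images.

```python
import torch

# Assumes the hypothetical `VAE` class sketched above, already trained.
model = VAE()
model.eval()
with torch.no_grad():
    z = torch.randn(16, 20)       # 16 random points in the 20-dimensional latent space
    generated = model.decode(z)   # the decoder maps latent samples to new images
```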

VAEs have shown results in generating many kinds of complicated data, including handwritten digits, faces, house numbers, natural images, physical models of scenes, segmentation maps, and predictions of the future from static images. What allows VAEs to generate such data is a training objective that avoids over-fitting and ensures the latent space has properties suitable for generation.
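The part of the training objective that gives the latent space these properties is the Kullback-Leibler (KL) divergence term in the evidence lower bound. The sketch below, continuing the hypothetical PyTorch example above and assuming inputs scaled to [0, 1], shows the two terms of that objective: a reconstruction term and a KL term that pulls the approximate posterior toward a standard normal prior.

```python
import torch
import torch.nn.functional as F

def vae_loss(reconstruction, x, mu, logvar):
    # Reconstruction term: how well the decoder reproduces the input
    recon = F.binary_cross_entropy(reconstruction, x, reduction="sum")
    # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I);
    # this regularizes the latent space and discourages over-fitting
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```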


Further Resources

| Title | Author | Link | Type | Date |
| --- | --- | --- | --- | --- |
| Auto-Encoding Variational Bayes | Diederik P Kingma, Max Welling | https://arxiv.org/pdf/1312.6114.pdf | Academic paper | December 20, 2013 |
| Generating Thematic Chinese Poetry using Conditional Variational Autoencoders with Hybrid Decoders | Xiaopeng Yang, Xiaowen Lin, Shunda Suo, Ming Li | http://arxiv.org/abs/1711.07632v2 | Academic paper | |
| GLSR-VAE: Geodesic Latent Space Regularization for Variational AutoEncoder Architectures | Gaëtan Hadjeres, Frank Nielsen, François Pachet | http://arxiv.org/abs/1707.04588v1 | Academic paper | |
| InfoVAE: Information Maximizing Variational Autoencoders | Shengjia Zhao, Jiaming Song, Stefano Ermon | http://arxiv.org/abs/1706.02262v2 | Academic paper | |
| Isolating Sources of Disentanglement in Variational Autoencoders | Tian Qi Chen, Xuechen Li, Roger Grosse, David Duvenaud | http://arxiv.org/abs/1802.04942v1 | Academic paper | |
