A variational autoencoder (VAE) is one approach to unsupervised learning of complicated distributions. VAEs are built on top of neural networks (standard function approximators) and can be trained with stochastic gradient descent. A VAE consists of an encoder, which maps data to a latent representation, and a decoder, which reconstructs the data from that representation.
VAEs have shown promising results in generating many kinds of complicated data, including handwritten digits, faces, house numbers, images, physical models of scenes, segmentation maps, and predictions of the future from static images.
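To make the encoder/decoder picture concrete, below is a minimal NumPy sketch of a single VAE forward pass: the encoder produces a mean and log-variance for the approximate posterior, a latent code is drawn with the reparameterization trick, and the decoder reconstructs the input. The loss is the negative evidence lower bound (ELBO). All layer sizes, weight initializations, and the toy input are hypothetical, and the weights are untrained; this only illustrates the computation, not a working model.

```python
# Minimal VAE forward pass in NumPy (illustrative sketch, untrained weights).
import numpy as np

rng = np.random.default_rng(0)

x_dim, h_dim, z_dim = 8, 16, 2        # hypothetical sizes
x = rng.random((1, x_dim))            # one toy data point in [0, 1)

# Encoder: maps x to the mean and log-variance of q(z|x)
W_enc = rng.normal(0, 0.1, (x_dim, h_dim))
W_mu = rng.normal(0, 0.1, (h_dim, z_dim))
W_logvar = rng.normal(0, 0.1, (h_dim, z_dim))

h = np.tanh(x @ W_enc)
mu, logvar = h @ W_mu, h @ W_logvar

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# Decoder: maps z back to a reconstruction of x (sigmoid output)
W_dec = rng.normal(0, 0.1, (z_dim, h_dim))
W_out = rng.normal(0, 0.1, (h_dim, x_dim))
x_hat = 1.0 / (1.0 + np.exp(-(np.tanh(z @ W_dec) @ W_out)))

# Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I))
recon = -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))
kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
loss = recon + kl
print(f"negative ELBO: {loss:.3f}")
```

In practice the same computation is written with an autodiff framework so the loss can be minimized by stochastic gradient descent; the closed-form KL term used here holds for a Gaussian posterior against a standard-normal prior.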
Generating Thematic Chinese Poetry using Conditional Variational Autoencoders with Hybrid Decoders
Xiaopeng Yang, Xiaowen Lin, Shunda Suo, Ming Li
GLSR-VAE: Geodesic Latent Space Regularization for Variational AutoEncoder Architectures
Gaëtan Hadjeres, Frank Nielsen, François Pachet
InfoVAE: Information Maximizing Variational Autoencoders
Shengjia Zhao, Jiaming Song, Stefano Ermon
Isolating Sources of Disentanglement in Variational Autoencoders
Tian Qi Chen, Xuechen Li, Roger Grosse, David Duvenaud
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
Tiancheng Zhao, Ran Zhao, Maxine Eskenazi
Tutorial on Variational Autoencoders
TVAE: Triplet-Based Variational Autoencoder using Metric Learning
Haque Ishfaq, Assaf Hoogi, Daniel Rubin
Documentaries, videos and podcasts