The variational autoencoder (VAE) is one of the most popular approaches to unsupervised learning of complicated distributions. VAEs are built on top of neural networks (standard function approximators) and can be trained with stochastic gradient descent. A VAE consists of an encoder, which maps data to a distribution over latent codes, and a decoder, which maps latent codes back to data.
VAEs have shown promising results in generating many kinds of complicated data, including handwritten digits, faces, house numbers, natural images, physical models of scenes, segmentations, and predictions of the future from static images.
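The encoder described above outputs the mean and log-variance of a Gaussian over latent codes, and training with stochastic gradient descent relies on the reparameterization trick plus a closed-form KL penalty toward the standard-normal prior. Below is a minimal NumPy sketch of just those two pieces; the function names, batch size, and latent dimensionality are illustrative, not from any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); writing the sample
    # this way keeps it differentiable with respect to mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Toy "encoder" outputs for a batch of 4 inputs with a 2-D latent space.
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))  # log_var = 0 means sigma = 1 everywhere

z = reparameterize(mu, log_var)      # one latent sample per input
kl = kl_to_standard_normal(mu, log_var)
print(z.shape)  # (4, 2)
print(kl)       # all zeros here, since q(z|x) already equals the prior
```

In a full VAE these pieces sit inside the training loss: the KL term above is added to the decoder's reconstruction error, and gradients flow through `z` back into the encoder parameters.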
Tutorial on Variational Autoencoders
GLSR-VAE: Geodesic Latent Space Regularization for Variational AutoEncoder Architectures
Gaëtan Hadjeres, Frank Nielsen, François Pachet
TVAE: Triplet-Based Variational Autoencoder using Metric Learning
Haque Ishfaq, Assaf Hoogi, Daniel Rubin
InfoVAE: Information Maximizing Variational Autoencoders
Shengjia Zhao, Jiaming Song, Stefano Ermon
Isolating Sources of Disentanglement in Variational Autoencoders
Tian Qi Chen, Xuechen Li, Roger Grosse, David Duvenaud
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
Tiancheng Zhao, Ran Zhao, Maxine Eskenazi
Generating Thematic Chinese Poetry using Conditional Variational Autoencoders with Hybrid Decoders
Xiaopeng Yang, Xiaowen Lin, Shunda Suo, Ming Li