## Wasserstein variational autoencoders

Variational autoencoders (VAEs) are latent variable models. The idea is that a latent variable $z \in \mathbb{R}^{k}$ describes your original, higher-dimensional variables $x\in\mathbb{R}^d$ through a conditional model $p(x \mid z)$. Let’s assume that this distribution is given by a neural network with parameters $\theta$, so that $$x \mid z, \theta \sim N(g_\theta(z), 1).$$ Of course, in reality we don’t know $(z, \theta)$; we would like to infer them from the data.
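As a minimal sketch of what this generative model looks like, here is ancestral sampling with a small random two-layer network standing in for $g_\theta$ (the weights, dimensions, and architecture are illustrative assumptions, not a trained decoder):

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, h = 2, 8, 16  # latent dim, data dim, hidden width (illustrative choices)

# Random weights standing in for the decoder parameters theta.
W1, b1 = rng.normal(size=(h, k)), np.zeros(h)
W2, b2 = rng.normal(size=(d, h)), np.zeros(d)

def g_theta(z):
    """Decoder g_theta: maps a latent z in R^k to a mean vector in R^d."""
    return W2 @ np.tanh(W1 @ z + b1) + b2

# Ancestral sampling from the generative model:
# draw z from a standard normal prior, then x | z ~ N(g_theta(z), 1).
z = rng.normal(size=k)
x = g_theta(z) + rng.normal(size=d)
print(x.shape)  # (8,)
```

In a real VAE, $\theta$ would be fit to data; inferring $(z, \theta)$ is exactly the problem the rest of the post addresses.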