Sparse Communication via Mixed Distributions
Neural networks and other machine learning models compute continuous representations, while humans communicate mostly through discrete symbols. Reconciling these two forms of communication is desirable for generating human-readable interpretations or learning discrete latent variable models, while maintaining end-to-end differentiability. Some existing approaches (such as the Gumbel-Softmax transformation) build continuous relaxations that are discrete approximations in the zero-temperature limit, while others (such as sparsemax transformations and the Hard Concrete distribution) produce discrete/continuous hybrids. In this paper, we build rigorous theoretical foundations for these hybrids, which we call "mixed random variables." Our starting point is a new "direct sum" base measure defined on the face lattice of the probability simplex. From this measure, we introduce new entropy and Kullback-Leibler divergence functions that subsume the discrete and differential cases and have interpretations in terms of code optimality. Our framework suggests two strategies for representing and sampling mixed random variables, an extrinsic ("sample-and-project") and an intrinsic one (based on face stratification). We experiment with both approaches on an emergent communication benchmark and on modeling MNIST and Fashion-MNIST data with variational autoencoders with mixed latent variables.
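To illustrate the discrete/continuous hybrids the abstract contrasts with softmax-style relaxations, the sketch below implements the standard sparsemax transformation (Martins & Astudillo, 2016) mentioned above. This is a minimal NumPy version for intuition, not the paper's own code: sparsemax Euclidean-projects the logits onto the probability simplex, so some coordinates become exactly zero and the output lands on a lower-dimensional face of the simplex (a "mixed" point), whereas softmax always returns a dense interior point.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of logits z onto the probability simplex.

    Unlike softmax, the result can contain exact zeros, so the output
    lies on a face of the simplex: a discrete/continuous hybrid.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]            # sort logits in decreasing order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum    # coordinates kept in the support
    k_z = k[support][-1]                   # size of the support
    tau = (cumsum[support][-1] - 1) / k_z  # threshold
    return np.maximum(z - tau, 0.0)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([1.5, 1.0, -1.0])
print(softmax(logits))    # dense: every coordinate strictly positive
print(sparsemax(logits))  # [0.75, 0.25, 0.0] -- exact zero in the last slot
```

Because sparsemax assigns exact zeros with nonzero probability, the resulting random variable is neither purely discrete nor purely continuous, which is precisely the regime the paper's "direct sum" base measure and mixed entropy are designed to handle.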