Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions (Paper Explained)
Video: http://youtube.com/watch?v=W2UT8NjUqrk
#imle #backpropagation #discrete

Backpropagation is the workhorse of deep learning, but unfortunately it only works for continuous functions that are amenable to the chain rule of differentiation. Since discrete algorithms have no continuous derivative, deep networks that contain such algorithms cannot be trained effectively with backpropagation. This paper presents a method to incorporate a large class of algorithms, formulated as discrete exponential family distributions, into deep networks, and derives gradient estimates that can easily be used in end-to-end backpropagation. This lets components like combinatorial optimizers be a native part of a network's forward pass.

OUTLINE:
0:00 - Intro & Overview
4:25 - Sponsor: Weights & Biases
6:15 - Problem Setup & Contributions
8:50 - Recap: Straight-Through Estimator
13:25 - Encoding the discrete problem as an inner product
19:45 - From algorithm to distribution
23:15 - Substituting the gradient
26:50 - Defining a target distribution
38:30 - Approximating marginals via perturb-and-MAP
45:10 - Entire algorithm recap
56:45 - Github Page & Example

Paper: https://arxiv.org/abs/2106.01798
Code (TF): https://github.com/nec-research/tf-imle
Code (Torch): https://github.com/uclnlp/torch-imle
Our Discord: / discord

Sponsor: Weights & Biases
https://wandb.com

Abstract:
Combining discrete probability distributions and combinatorial optimization problems with neural network components has numerous applications but poses several challenges. We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components. I-MLE is widely applicable as it only requires the ability to compute the most probable states and does not rely on smooth relaxations. The framework encompasses several approaches, such as perturbation-based implicit differentiation and recent methods to differentiate through black-box combinatorial solvers. We introduce a novel class of noise distributions for approximating marginals via perturb-and-MAP. Moreover, we show that I-MLE simplifies to maximum likelihood estimation when used in some recently studied learning settings that involve combinatorial solvers. Experiments on several datasets suggest that I-MLE is competitive with and often outperforms existing approaches which rely on problem-specific relaxations.

Authors: Mathias Niepert, Pasquale Minervini, Luca Franceschi

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: / yannickilcher
Twitter: / ykilcher
Discord: / discord
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: / ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
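To make the idea in the abstract above concrete, here is a minimal, hypothetical sketch of an I-MLE-style gradient estimator in PyTorch. It is not the official tf-imle or torch-imle API: the solver is a simple top-k "MAP" routine, the noise is plain Gumbel rather than the paper's Sum-of-Gamma distribution, and names such as IMLETopK, lam, and noise_scale are made up for illustration. The forward pass does perturb-and-MAP; the backward pass builds a target distribution by shifting the parameters against the incoming gradient and returns the difference of the two MAP states as the gradient estimate.

```python
import torch

def map_topk(theta: torch.Tensor, k: int) -> torch.Tensor:
    """MAP state of a k-subset distribution: 0/1 mask of the k largest scores."""
    z = torch.zeros_like(theta)
    idx = torch.topk(theta, k, dim=-1).indices
    return z.scatter(-1, idx, 1.0)

class IMLETopK(torch.autograd.Function):
    """Forward: perturb-and-MAP sample.  Backward: I-MLE-style finite-difference estimate."""

    @staticmethod
    def forward(ctx, theta, k, lam, noise_scale):
        # Perturb the parameters with Gumbel noise, then solve the discrete problem.
        eps = noise_scale * -torch.log(-torch.log(torch.rand_like(theta)))
        z = map_topk(theta + eps, k)
        ctx.save_for_backward(theta, eps, z)
        ctx.k, ctx.lam = k, lam
        return z

    @staticmethod
    def backward(ctx, grad_z):
        theta, eps, z = ctx.saved_tensors
        # Target distribution: move the parameters against the incoming gradient,
        # then compare the two perturb-and-MAP states (approximate marginals).
        theta_prime = theta - ctx.lam * grad_z
        z_prime = map_topk(theta_prime + eps, ctx.k)
        grad_theta = (z - z_prime) / ctx.lam
        return grad_theta, None, None, None

# Toy usage: learn scores so that the selected k-subset matches a fixed target mask.
torch.manual_seed(0)
theta = torch.randn(10, requires_grad=True)
target = torch.zeros(10)
target[:3] = 1.0
opt = torch.optim.Adam([theta], lr=0.1)
for _ in range(200):
    z = IMLETopK.apply(theta, 3, 10.0, 1.0)
    loss = ((z - target) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(map_topk(theta.detach(), 3))
```

The point of the sketch is that the discrete solver (here, top-k) is only ever called as a black box, both in the forward pass and in the backward pass; no smooth relaxation of the argmax is needed, which is exactly the property the abstract highlights.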