Mirror Descent and Stochastic Variance Reduced Gradients

Optimization Algorithms, E1-260, Optimization for ML, IISc, 2021

In mirror descent, we replace the Euclidean distance commonly used in (projected) gradient descent with a Bregman divergence, which lets the update adapt to the geometry of the problem. The implementation of mirror descent is available here.
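
A minimal sketch of one common instance, entropic mirror descent (exponentiated gradient) on the probability simplex, where the Bregman divergence is the KL divergence; the function name and arguments are illustrative, not taken from the repository:

```python
import numpy as np

def mirror_descent_simplex(grad_fn, x0, step_size=0.1, n_iters=100):
    """Entropic mirror descent on the probability simplex.

    With the negative-entropy mirror map, the Bregman divergence is the KL
    divergence, and the update reduces to a multiplicative step followed by
    renormalization (a Bregman projection back onto the simplex).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad_fn(x)
        x = x * np.exp(-step_size * g)  # step in the dual (mirror) space
        x = x / x.sum()                 # Bregman projection onto the simplex
    return x

# Example: minimize a linear objective c^T x over the simplex
c = np.array([3.0, 1.0, 2.0])
x_star = mirror_descent_simplex(lambda x: c, np.ones(3) / 3)
```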

Plain SGD and mini-batch gradient descent are seen to converge slowly because of the variance of their stochastic gradient estimates. SVRG is a stochastic gradient method that can be shown mathematically to have lower variance than the aforesaid algorithms and is also seen to perform better in practice. This repository also contains the implementation of SVRG.
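
A minimal sketch of the SVRG update, assuming the objective is an average of per-sample losses supplied as a list of gradient functions; the signature and hyperparameter names are illustrative, not the repository's API:

```python
import numpy as np

def svrg(grads, x0, step_size=0.01, n_epochs=10, inner_steps=None):
    """SVRG: each epoch computes the full gradient at a snapshot point, then
    takes inner stochastic steps using the variance-reduced direction
    grad_i(x) - grad_i(x_snapshot) + full_grad(x_snapshot)."""
    n = len(grads)
    m = inner_steps if inner_steps is not None else 2 * n
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(0)
    for _ in range(n_epochs):
        x_snap = x.copy()
        full_grad = np.mean([g(x_snap) for g in grads], axis=0)  # full pass
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced gradient: unbiased, with variance that shrinks
            # as x approaches the snapshot point.
            v = grads[i](x) - grads[i](x_snap) + full_grad
            x = x - step_size * v
    return x
```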