04 / 2015

As part of the Computational Intelligence Lab course at ETH (master's program in computer science), I gave an almost self-contained two-hour introductory lecture on optimization. The material is very introductory, but meant to be useful for machine learning and potentially other applications.

We covered gradient methods for constrained and unconstrained optimization, convexity together with (Lagrange) duality, and a bit on matrix factorizations. Unfortunately we didn't have time to cover Frank-Wolfe, but hopefully next time...
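To give a flavor of the gradient methods covered, here is a minimal sketch of (unconstrained) gradient descent on a simple quadratic. This is just an illustration, not code from the course; the objective and step size are chosen for the example:

```python
# Minimal gradient descent sketch (illustrative, not from the course):
# minimize f(x) = (x - 3)^2, whose gradient is f'(x) = 2 * (x - 3).

def gradient_descent(grad, x0, step=0.1, iters=100):
    """Repeatedly take a step against the gradient direction."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

x_star = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_star, 4))  # converges to the minimizer x = 3
```

For a smooth convex objective like this one, a sufficiently small constant step size is enough for convergence to the global minimizer.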

Slides are available here (some slides courtesy of Gabriel Krummenacher, Dmitry Laptev, and Boyd & Vandenberghe):