Motivation


In optimization theory, you cannot prove useful convergence rates or guarantees without assuming some structure in your function.

If you can assume certain structure, such as convexity or smoothness, you can make claims about the global behavior of your function using only local information. This enables much more interesting and valuable proofs.

For example, if you know your function $f$ is convex, then $f$ is lower bounded everywhere by its first-order (tangent-line) approximation at any point $x$: $f(y) \ge f(x) + f'(x)(y - x)$ for all $y$. Exploiting this structure lets you make strong claims about optimization.

Example: $y = x^2$ is a convex function, so the derivative at $x = 1$ is $2$, and the tangent line at that point, $y = 2x - 1$, lower bounds the entire function.

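As a quick numerical sanity check of this first-order lower bound, here is a minimal Python sketch (the names `f`, `df`, and `tangent` are illustrative, not from any particular library):

```python
import numpy as np

# Convex function and its derivative
f = lambda x: x**2
df = lambda x: 2 * x

x0 = 1.0  # point of tangency
# Tangent line at x0: f(x0) + f'(x0) * (y - x0)
tangent = lambda y: f(x0) + df(x0) * (y - x0)

ys = np.linspace(-5, 5, 1001)
# First-order condition for convexity: f(y) >= tangent(y) everywhere
assert np.all(f(ys) >= tangent(ys))
print("Tangent line at x0 = 1 lower bounds x^2 on [-5, 5]")
```

The same inequality generalizes to higher dimensions, where the tangent line becomes a supporting hyperplane: $f(y) \ge f(x) + \nabla f(x)^\top (y - x)$ for all $y$.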

Useful Properties


Convexity

Continuity

Smoothness