There is a way to show that Chernoff bounding is, in a certain asymptotic sense, optimal. It relies on the so-called Sanov's theorem, which controls the large deviations of the empirical distribution (viewed as a random measure) in terms of its Kullback-Leibler divergence from the underlying measure.
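For reference, here is one common informal statement of Sanov's theorem (a sketch in my own notation, not taken from the post): for i.i.d. samples with law $P$ and empirical distribution $\hat P_n$, the probability that $\hat P_n$ lands in a sufficiently regular set $A$ of probability measures decays exponentially, with the rate given by the smallest KL divergence from $P$ achievable inside $A$.

```latex
% Sanov's theorem, informal version (finite alphabet, A regular enough
% that the infima over the interior and the closure of A coincide):
%
%   X_1, \dots, X_n \;\overset{\text{i.i.d.}}{\sim}\; P, \qquad
%   \hat P_n = \frac{1}{n} \sum_{i=1}^n \delta_{X_i},
%
%   \lim_{n \to \infty} \frac{1}{n} \log \Pr\bigl[\hat P_n \in A\bigr]
%     \;=\; -\inf_{Q \in A} D(Q \,\|\, P),
%
% where D(Q \| P) = \sum_x Q(x) \log \frac{Q(x)}{P(x)} is the
% Kullback-Leibler divergence.
```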

In this nice post, Tim van Erven demonstrates how the asymptotic optimality of Chernoff bounding (also called the Cramér-Chernoff theorem) is related to Sanov's theorem. Meanwhile, an elementary proof of Sanov's theorem may be found in Ramon van Handel's lecture notes.
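As a quick numerical illustration of this connection (my own toy example, not from either reference): for Bernoulli($p$) samples, the optimized Chernoff bound on $\Pr[\bar X_n \ge a]$ with $a > p$ equals $\exp(-n\,\mathrm{KL}(a\,\|\,p))$, where $\mathrm{KL}(a\,\|\,p)$ is the binary KL divergence, and Sanov's theorem says this exponent is asymptotically the right one.

```python
import numpy as np

def kl_bernoulli(a, p):
    """Binary KL divergence KL(Ber(a) || Ber(p))."""
    return a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))

# Toy setup (my choice of parameters): underlying mean p, threshold a > p.
p, a, n = 0.3, 0.5, 50
rng = np.random.default_rng(0)
trials = 300_000

# Monte Carlo estimate of the tail probability P(sample mean >= a).
means = rng.binomial(n, p, size=trials) / n
est_prob = np.mean(means >= a)

# Optimized Chernoff bound: exp(-n * KL(a || p)) for the Bernoulli case.
chernoff = np.exp(-n * kl_bernoulli(a, p))

print(f"estimated P(mean >= {a})  = {est_prob:.5f}")
print(f"Chernoff bound exp(-n*KL) = {chernoff:.5f}")
print(f"empirical rate -log(P)/n  = {-np.log(est_prob) / n:.4f}")
print(f"KL(a || p)                = {kl_bernoulli(a, p):.4f}")
```

The bound holds at every finite $n$, while the empirical rate $-\frac{1}{n}\log \Pr[\bar X_n \ge a]$ approaches $\mathrm{KL}(a\,\|\,p)$ only as $n$ grows (the gap at finite $n$ comes from a subexponential prefactor).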


## About Dmitry Ostrovsky

PhD student with interests in statistics, optimization, and machine learning.