Chernoff bounding is good enough

There is a way to demonstrate that Chernoff bounding is, in some sense, optimal, and it relies on Sanov's theorem, which controls the empirical distribution (viewed as a random measure) in terms of the Kullback-Leibler divergence from the underlying measure.
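For concreteness, here is the standard finite-alphabet (method-of-types) upper bound in Sanov's theorem, where $\hat P_n$ is the empirical measure of $n$ i.i.d. samples from $P$ on a finite alphabet $\mathcal{X}$, and $A$ is any set of probability measures on $\mathcal{X}$:

```latex
\Pr\bigl(\hat P_n \in A\bigr)
  \;\le\; (n+1)^{|\mathcal{X}|}
  \exp\Bigl(-\,n \inf_{Q \in A} D(Q \,\|\, P)\Bigr),
```

so the probability of an atypical empirical distribution decays exponentially at a rate given by the KL divergence of the closest measure in $A$ to $P$, up to a polynomial prefactor.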

In this nice post Tim van Erven demonstrates how the asymptotic optimality of Chernoff bounding (also called the Cramér-Chernoff theorem) is related to Sanov's theorem. Meanwhile, an elementary proof of Sanov's theorem may be found in Ramon van Handel's lecture notes.
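One can see this optimality numerically in the simplest case. Below is a small sketch (my own illustration, not from either reference) for Bernoulli variables: Chernoff gives $\Pr(S_n \ge na) \le e^{-n\,\mathrm{KL}(a\|p)}$, and the observed exponent $-\tfrac1n\log \Pr(S_n \ge na)$ should approach the KL rate from above as $n$ grows.

```python
import math

def kl_bernoulli(a, p):
    # KL divergence between Bernoulli(a) and Bernoulli(p)
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def binom_tail(n, k, p):
    # exact P(S_n >= k) for S_n ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

p, a = 0.3, 0.6  # true mean p, deviation threshold a > p
for n in (50, 200, 800):
    tail = binom_tail(n, math.ceil(a * n), p)
    rate = -math.log(tail) / n  # observed exponential rate
    print(n, round(rate, 4), ">=", round(kl_bernoulli(a, p), 4))
```

The printed rate always dominates the KL rate (that is the Chernoff bound itself) and the gap shrinks roughly like $\tfrac{\log n}{n}$, which is the asymptotic tightness that Cramér's theorem formalizes.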

About Dmitry Ostrovsky

PhD student with interests in statistics, optimization, and machine learning.
This entry was posted in Course on Concentration Inequalities. Bookmark the permalink.
