In the previous post of the mini-course, we proved the Azuma–Hoeffding inequality. It allowed us to relax the independence assumption, but we were still dealing only with sums. Now we demonstrate that the martingale method also lets us control more general functions (this time, of independent arguments). The idea is to apply the Azuma–Hoeffding inequality to a generic random sequence called the *Doob martingale*.

For $f : \mathcal{X}^n \to \mathbb{R}$, consider its “discrete partial derivatives”

$\displaystyle c_i = \sup_{x_1, \dots, x_n, \; x_i'} \left| f(x_1, \dots, x_i, \dots, x_n) - f(x_1, \dots, x_i', \dots, x_n) \right|, \qquad i = 1, \dots, n.$

It turns out that if all of them are uniformly bounded, then $f(X_1, \dots, X_n)$ is well concentrated around its mean.
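To make the definition concrete, here is a small brute-force sketch (the grid, the example function, and the helper name are all illustrative choices, not part of the original text) that estimates the discrete partial derivatives of the sample mean, for which each $c_i$ equals $1/n$:

```python
import itertools

def discrete_derivative(f, n, i, grid):
    """Largest change in f when only coordinate i varies over `grid`
    (a brute-force estimate of the i-th bounded difference c_i)."""
    worst = 0.0
    for x in itertools.product(grid, repeat=n):
        for xi_new in grid:
            y = list(x)
            y[i] = xi_new
            worst = max(worst, abs(f(list(x)) - f(y)))
    return worst

# Example: the sample mean of n points taking values in {0, 0.5, 1}
n = 3
mean = lambda x: sum(x) / len(x)
c = [discrete_derivative(mean, n, i, [0.0, 0.5, 1.0]) for i in range(n)]
print(c)  # each coordinate changes the mean by at most 1/n
```

For the mean, changing one coordinate within $[0,1]$ moves the value by at most $1/n$, so each estimated derivative is exactly $1/3$ here.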

**Theorem (McDiarmid’s inequality).** Let $X_1, \dots, X_n$ be independent, and let $f$ have discrete partial derivatives bounded by $c_1, \dots, c_n$. For any $t \ge 0$ it holds

$\displaystyle \mathbb{P}\left( f(X_1, \dots, X_n) - \mathbb{E} f(X_1, \dots, X_n) \ge t \right) \le \exp\left( -\frac{2t^2}{\sum_{i=1}^n c_i^2} \right).$

In other words, $f(X_1, \dots, X_n) - \mathbb{E} f(X_1, \dots, X_n)$ is $\frac{1}{4} \sum_{i=1}^n c_i^2$-subgaussian.
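A quick Monte Carlo sanity check (the choice of $f$ as the sample mean of uniforms, and all parameter values, are illustrative assumptions) confirms that the empirical tail sits below the stated bound:

```python
import math
import random

def mcdiarmid_bound(t, c):
    """McDiarmid tail bound exp(-2 t^2 / sum_i c_i^2)."""
    return math.exp(-2 * t**2 / sum(ci**2 for ci in c))

# f = sample mean of n uniforms on [0,1]: bounded differences c_i = 1/n
random.seed(0)
n, t, trials = 50, 0.1, 20000
c = [1.0 / n] * n
exceed = sum(
    (sum(random.random() for _ in range(n)) / n) - 0.5 >= t
    for _ in range(trials)
) / trials
bound = mcdiarmid_bound(t, c)
print(exceed, bound)  # empirical tail frequency vs. the McDiarmid bound
```

Here the bound evaluates to $e^{-1} \approx 0.37$, while the true tail probability of the mean of 50 uniforms is far smaller; the bound is valid, though not tight, in this regime.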

**Proof.** Note that $\mathbb{E}[f \mid X_1, \dots, X_n] = f$ and that conditioning on nothing gives $\mathbb{E} f$, which prompts the following telescoping expansion (here and below we write $f$ for $f(X_1, \dots, X_n)$):

$\displaystyle f - \mathbb{E} f = \sum_{i=1}^n \xi_i, \qquad \xi_i = \mathbb{E}[f \mid X_1, \dots, X_i] - \mathbb{E}[f \mid X_1, \dots, X_{i-1}].$

It is straightforward to check (using the tower property of conditional expectation) that $(\xi_i)$ is indeed an MDS with respect to the filtration $\mathcal{F}_i = \sigma(X_1, \dots, X_i)$; the corresponding martingale $M_i = \mathbb{E}[f \mid \mathcal{F}_i]$ is called the *Doob martingale*.

Note that $\xi_i$ is conditionally bounded: $a_i \le \xi_i \le b_i$ for some $\mathcal{F}_{i-1}$-measurable $a_i, b_i$ with $b_i - a_i \le c_i$ (indeed, conditionally on $X_1, \dots, X_{i-1}$, varying $X_i$ changes $\mathbb{E}[f \mid X_1, \dots, X_i]$ by at most $c_i$; here the independence of the $X_i$ is crucial). Applying the Azuma–Hoeffding inequality yields the result. ♦
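The two structural facts used in the proof — the martingale property of the increments and the telescoping identity — can be verified exactly on a toy example (fair coin flips and $f = \max$ are illustrative choices, not from the original text), since conditional expectations are finite averages there:

```python
from itertools import product

def doob_martingale(f, n):
    """For fair coin flips X_1..X_n, return M with M(prefix) = E[f | prefix],
    computed exactly by averaging f over the unrevealed coordinates."""
    def M(prefix):
        rest = n - len(prefix)
        vals = [f(list(prefix) + list(tail)) for tail in product([0, 1], repeat=rest)]
        return sum(vals) / len(vals)
    return M

f = lambda x: max(x)  # a simple function of all n flips
n = 4
M = doob_martingale(f, n)

for x in product([0, 1], repeat=n):
    # MDS property: given the first i-1 flips, the increment averages to zero
    for i in range(1, n + 1):
        avg = 0.5 * (M(x[:i-1] + (0,)) + M(x[:i-1] + (1,)))
        assert abs(avg - M(x[:i-1])) < 1e-12
    # Telescoping: the increments sum to f(x) - E f
    total = sum(M(x[:i]) - M(x[:i-1]) for i in range(1, n + 1))
    assert abs(total - (f(list(x)) - M(()))) < 1e-12
print("Doob martingale checks passed")
```

Here $M(()) = \mathbb{E} f$; for the maximum of four fair bits it equals $1 - 2^{-4} = 15/16$.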

**Remark 1.** McDiarmid’s inequality can be applied to control the deviations of an empirical process. Namely, let $\mathcal{F}$ be a class of functions $\mathcal{X} \to [0,1]$, let $X_1, \dots, X_n$ be i.i.d., and define

$\displaystyle Z = \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \left( f(X_i) - \mathbb{E} f(X_1) \right).$

Noting that $Z$, as a function of $X_1, \dots, X_n$, has discrete derivatives bounded by $1/n$, we obtain that with probability at least $1 - \delta$,

$\displaystyle Z \le \mathbb{E} Z + \sqrt{\frac{\log(1/\delta)}{2n}}.$

This result is often referred to as McDiarmid’s inequality in the machine learning community (since this is the formulation most often needed there). When $\mathcal{F}$ is a singleton, it is in fact the same bound as one gets for a single sum (Hoeffding’s inequality!).
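The key step — that the supremum $Z$ inherits the $1/n$ bounded-differences property from each individual sum — can be checked numerically. Below is a sketch for a hypothetical finite class of threshold indicators $\{x \mapsto \mathbf{1}\{x \le a\}\}$ (my own illustrative choice); replacing any single sample never moves $Z$ by more than $1/n$:

```python
import random

def Z(xs, grid):
    """Empirical process sup over the class {x -> 1{x <= a} : a in grid}
    for uniform X on [0,1]: Z = sup_a (1/n) * sum_i (1{x_i <= a} - a)."""
    n = len(xs)
    return max(sum(x <= a for x in xs) / n - a for a in grid)

random.seed(1)
n = 30
grid = [k / 10 for k in range(11)]
xs = [random.random() for _ in range(n)]

# Replace each sample by a few alternative values and record the worst change
base = Z(xs, grid)
worst = max(
    abs(Z(xs[:i] + [x_new] + xs[i+1:], grid) - base)
    for i in range(n)
    for x_new in [0.0, 0.25, 0.5, 0.75, 1.0]
)
print(worst)  # never exceeds 1/n
```

The reason is the one used in the proof: changing one $x_i$ moves each candidate sum by at most $1/n$, hence the supremum by at most $1/n$ as well.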

**Remark 2.** McDiarmid’s inequality, just as Hoeffding’s inequality, may be sharpened by taking the variance into account. The resulting inequality is called *Talagrand’s inequality*; we state here its empirical process version (in the form due to Bousquet). Under the same assumptions as in Remark 1, with probability at least $1 - \delta$,

$\displaystyle Z \le \mathbb{E} Z + \sqrt{\frac{2 v \log(1/\delta)}{n}} + \frac{\log(1/\delta)}{3n},$

where

$\displaystyle v = \sigma^2 + 2\, \mathbb{E} Z, \qquad \sigma^2 = \sup_{f \in \mathcal{F}} \operatorname{Var} f(X_1).$
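To see why the variance-dependent bound is an improvement, one can compare the two deviation terms numerically. The sketch below uses Bousquet's form of Talagrand's inequality with $v = \sigma^2 + 2\,\mathbb{E} Z$; the particular values of $\sigma^2$, $\mathbb{E} Z$, $n$, and $\delta$ are hypothetical, chosen to illustrate the small-variance regime:

```python
import math

def mcdiarmid_dev(n, delta):
    """Deviation term in McDiarmid's bound: sqrt(log(1/delta) / (2n))."""
    return math.sqrt(math.log(1 / delta) / (2 * n))

def talagrand_dev(n, delta, sigma2, EZ):
    """Deviation term in Bousquet's form of Talagrand's inequality,
    with variance proxy v = sigma^2 + 2 E Z."""
    v = sigma2 + 2 * EZ
    t = math.log(1 / delta)
    return math.sqrt(2 * v * t / n) + t / (3 * n)

n, delta = 10_000, 0.01
sigma2, EZ = 0.01, 0.005  # hypothetical small-variance regime
print(mcdiarmid_dev(n, delta), talagrand_dev(n, delta, sigma2, EZ))
```

When $\sigma^2$ and $\mathbb{E} Z$ are small, the Talagrand deviation term is several times smaller than McDiarmid's, which only sees the worst-case range of the functions, not their variance.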