Category Archives: Course on Concentration Inequalities
This is the first in a series of two posts aimed at providing a background on information theory. For today, the itinerary is as follows: we will start by introducing basic information quantities and proceed to prove tensorization statements for them, i.e. that they … Continue reading
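For reference, the prototypical information quantity in such arguments is the relative entropy (KL divergence); the notation below is mine, and the post's own definitions and list of quantities may differ:

\[
D(P \,\|\, Q) \;=\; \int \log\frac{dP}{dQ}\, dP ,
\]

which is nonnegative and vanishes if and only if \(P = Q\).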
In the previous post of the mini-course, we proved the Azuma–Hoeffding inequality. We were able to relax the assumption that the summands were independent, but we were dealing only with their sum. Now we will demonstrate that the martingale method also lets us control more general functions (now of independent arguments). … Continue reading
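The classical result of this type, obtained via the martingale (bounded differences) method, is McDiarmid's inequality; I state a standard form here for reference, with the bounded-difference constants \(c_i\) being my notation: if \(X_1,\dots,X_n\) are independent and changing the \(i\)-th argument of \(f\) changes its value by at most \(c_i\), then

\[
\mathbb{P}\big( |f(X_1,\dots,X_n) - \mathbb{E} f(X_1,\dots,X_n)| \ge t \big) \;\le\; 2\exp\!\Big( -\frac{2t^2}{\sum_{i=1}^n c_i^2} \Big).
\]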
In this post we will further expand the setting in which concentration inequalities are stated. Remember that we started from the arithmetic mean of random variables that were all independent. We first showed that if the summands are sufficiently “light-tailed” (subgaussian), then their average concentrates around its mean, with a remarkable special … Continue reading
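For reference, a standard form of that statement (my notation: the independent summands \(X_i\) are subgaussian with parameters \(\sigma_i\)) is

\[
\mathbb{P}\Big( \Big| \frac{1}{n}\sum_{i=1}^n (X_i - \mathbb{E}X_i) \Big| \ge t \Big) \;\le\; 2\exp\!\Big( -\frac{n^2 t^2}{2\sum_{i=1}^n \sigma_i^2} \Big),
\]

which follows because a sum of independent subgaussian variables is itself subgaussian with parameter \(\big(\sum_i \sigma_i^2\big)^{1/2}\).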
In this post we will finish our affair with subexponential distributions. We will consider yet another example where they can be applied, an important statement called the Johnson–Lindenstrauss lemma. Essentially, it states that points in a high-dimensional Euclidean space may be projected onto a subspace of dimension only … Continue reading
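Since the excerpt is cut off, here is the standard statement of the lemma for reference (the post's exact constants may differ): for any \(0 < \varepsilon < 1\) and any \(n\) points in \(\mathbb{R}^d\), there is a linear map into \(\mathbb{R}^k\) with

\[
k = O\big( \varepsilon^{-2} \log n \big)
\]

that preserves all pairwise distances up to a factor of \(1 \pm \varepsilon\).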
In the previous post we considered subexponential distributions and a concentration inequality, called Bernstein's inequality, for sums of independent subexponential random variables. In this post we will focus more thoroughly on the meaning of its parameters and the concept of deviation zones.
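For reference, one common form of that inequality (my notation; the post's parametrization may differ): for independent subexponential \(X_i\) with parameters \((\nu_i, b_i)\) and an absolute constant \(c > 0\),

\[
\mathbb{P}\Big( \Big| \sum_{i=1}^n (X_i - \mathbb{E}X_i) \Big| \ge t \Big) \;\le\; 2\exp\!\Big( -c\, \min\Big( \frac{t^2}{\sum_{i=1}^n \nu_i^2},\; \frac{t}{\max_i b_i} \Big) \Big).
\]

The two regimes of the minimum are precisely the deviation zones: a Gaussian-type zone for moderate \(t\) and an exponential-type zone for large \(t\).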
In this post we move on from subgaussian distributions to another important class of distributions called subexponential. The simplest and most common example of such distributions is the chi-square distribution. As usual, we are interested in tail probability bounds for such distributions, and … Continue reading
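For reference, one standard definition (my notation; the post may use an equivalent one): a random variable \(X\) is subexponential with parameters \((\nu, b)\) if

\[
\mathbb{E}\, e^{\lambda (X - \mathbb{E}X)} \;\le\; e^{\nu^2 \lambda^2 / 2} \qquad \text{for all } |\lambda| \le \tfrac{1}{b}.
\]

A chi-square random variable with \(k\) degrees of freedom satisfies this with \(\nu = 2\sqrt{k}\) and \(b = 4\), which is why it serves as the standard first example.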