Information content

In information theory, the information content, self-information, surprisal, or Shannon information is a basic quantity derived from the probability of a particular outcome of a random variable. It can be thought of as an alternative way of expressing probability, much like odds or log-odds, but one with particular mathematical advantages in the setting of information theory.
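In the standard definition, for a discrete random variable X with probability mass function p_X, the self-information of observing the outcome x is

    I_X(x) = -\log p_X(x) = \log\!\left(\frac{1}{p_X(x)}\right),

where the base of the logarithm fixes the unit of measurement.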

The Shannon information can be interpreted as quantifying the level of "surprise" of a particular outcome. As it is such a basic quantity, it also appears in several other settings, such as the length of a message needed to transmit the event given an optimal source coding of the random variable.
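For example, a fair coin landing heads has probability 1/2, so observing it carries one bit of information (-\log_2(1/2) = 1), whereas rolling double sixes with two fair dice has probability 1/36 and carries about 5.17 bits (-\log_2(1/36) ≈ 5.17): the less probable the outcome, the greater the surprisal.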

The Shannon information is closely related to entropy, which is the expected value of the self-information of a random variable, quantifying how surprising the random variable is "on average". This is the average amount of self-information an observer would expect to gain about a random variable when measuring it.[1]
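Explicitly, for a discrete random variable X with probability mass function p_X,

    H(X) = \mathrm{E}\left[I_X(X)\right] = -\sum_{x} p_X(x) \log p_X(x),

so the entropy is the probability-weighted average of the self-information of every possible outcome.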

The information content can be expressed in various units of information, of which the most common is the "bit" (more formally called the shannon), as explained below.
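The unit is determined by the base of the logarithm: base 2 gives bits (shannons), base e gives nats, and base 10 gives hartleys. As an illustration only, the following minimal Python sketch (the helper self_information is hypothetical, not part of any standard library) computes the value in whichever unit the chosen base selects:

    import math

    def self_information(p, base=2.0):
        # Self-information -log_base(p) of an event with probability p.
        # base = 2 gives bits (shannons), base = math.e gives nats, base = 10 gives hartleys.
        if not 0.0 < p <= 1.0:
            raise ValueError("p must be in (0, 1]")
        return -math.log(p, base)

    print(self_information(0.5))           # 1.0 bit: a fair coin flip
    print(self_information(1/36))          # ~5.17 bits: double sixes on fair dice
    print(self_information(0.5, math.e))   # ~0.693 nats: the same event in a different unit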

The term 'perplexity' has been used in language modelling to quantify the uncertainty inherent in a set of prospective events.
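When the entropy H(X) is measured in bits, the perplexity is conventionally 2^{H(X)}, so a perplexity of k corresponds to the uncertainty of choosing uniformly among k equally likely outcomes.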

  1. ^ Jones, D. S. (1979). Elementary Information Theory. Oxford: Clarendon Press. pp. 11–15.
