Variance is a measure of dispersion that describes how values are distributed around the mean. It is the square of the standard deviation. The variance is calculated by dividing the sum of the squared deviations of all measured values from the mean by the number of measured values. The symbol for the variance of a random variable is "σ²"; the symbol for the empirical variance of a sample is "s²".
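The definition above can be written as a formula (standard notation, not taken from the original text): for N measured values x₁, …, x_N with mean μ,

```latex
\sigma^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2
```

Each deviation is squared so that positive and negative deviations cannot cancel each other out.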
Consider the variable 'age' in a sample of 5 people. The measured values are 14, 17, 20, 24 and 25 years. The mean is therefore 100/5 = 20 years. Now we can calculate each value's deviation from the mean: −6, −3, 0, 4 and 5 years.
The squared deviations are 36, 9, 0, 16 and 25 – their sum is 86. The variance is therefore 86/5 = 17.2 years².
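The calculation in the example can be sketched in a few lines of Python (a minimal illustration; the variable names are our own):

```python
# Ages from the example above
ages = [14, 17, 20, 24, 25]

# Mean: sum of values divided by their count (100 / 5 = 20)
mean = sum(ages) / len(ages)

# Deviations from the mean: -6, -3, 0, 4, 5
deviations = [x - mean for x in ages]

# Variance: sum of squared deviations divided by the count (86 / 5 = 17.2)
variance = sum(d ** 2 for d in deviations) / len(ages)

print(mean)      # 20.0
print(variance)  # 17.2
```

Note that this divides by the full number of values, matching the definition given above; some sample-variance formulas divide by the count minus one instead.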
As the example shows, a disadvantage of the variance is that its unit differs from the unit of the measured values. At first glance, it gives no concrete information about how far the values are actually scattered. For easier interpretation, we therefore often use the standard deviation, which is the square root of the variance.
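Continuing the example, the standard deviation is obtained by taking the square root of the variance, which brings the result back into the original unit (a short sketch; variable names are our own):

```python
import math

# Same sample as in the example above
ages = [14, 17, 20, 24, 25]
mean = sum(ages) / len(ages)
variance = sum((x - mean) ** 2 for x in ages) / len(ages)  # 17.2 years²

# Square root of the variance: back in the original unit (years)
std_dev = math.sqrt(variance)

print(round(std_dev, 2))  # roughly 4.15 years
```

So instead of 17.2 years², we get a spread of roughly 4.15 years, which is much easier to relate to the measured ages.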
The definitions in our encyclopedia are simplified explanations of terms. Our aim is to make these definitions understandable to a broad audience. Consequently, some of them may not fully meet scientific standards.