The Greek astronomer Hipparchos is usually credited with originating the magnitude scale. He assigned the brightest stars he could see with the naked eye a magnitude of $1$ and the faintest a magnitude of $6$. However, in terms of the amount of energy received, a sixth magnitude star is not $6\times$ fainter than a first magnitude star, but roughly $100\times$ fainter, because the response of the human eye to light is non-linear (roughly logarithmic).

This led the English astronomer Norman Pogson to formalize the magnitude scale in 1856. He proposed that a sixth magnitude star should be precisely $100\times$ fainter than a first magnitude star, so that each magnitude corresponds to a change in brightness of a factor $100^{1/(6-1)} = 100^{1/5} \approx 2.512$. For example, a star of magnitude $2$ is $2.512\times$ fainter than a star of magnitude $1$, a star of magnitude $6$ is $2.512^2 \approx 6.3\times$ fainter than a star of magnitude $4$, and a star of magnitude $25$ is $2.512^5 = 100\times$ fainter than a star of magnitude $20$. Note that only the magnitude difference determines the brightness ratio of two stars, not the absolute values of their magnitudes.
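The examples above are all instances of a single relation. Writing $m_1$ and $m_2$ for the magnitudes of two stars and $F_1$ and $F_2$ for the corresponding amounts of energy received (notation introduced here for illustration), Pogson's definition can be written as $m_2 - m_1 = -2.5\,\log_{10}(F_2/F_1)$, or equivalently $F_1/F_2 = 100^{(m_2-m_1)/5} \approx 2.512^{\,m_2-m_1}$. A magnitude difference of $5$ therefore always corresponds to a brightness ratio of exactly $100$, whatever the two magnitudes themselves are.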