The radiative energy of a source can be given either in relative terms, via a magnitude scale, or in absolute terms, via physical radiative quantities such as flux or luminosity.
The Greek astronomer Hipparchus is usually credited with the origin of the magnitude scale. He assigned the brightest stars he could see with the naked eye a magnitude of $1$ and the faintest a magnitude of $6$. However, in terms of the amount of energy received, a sixth-magnitude star is not $6\times$ fainter than a first-magnitude star but roughly $100\times$ fainter: the human eye responds to light approximately logarithmically rather than linearly, so equal steps in perceived brightness correspond to equal ratios of flux.
This led the English astronomer Norman Pogson to formalize the magnitude system in 1856. He proposed that a sixth-magnitude star should be precisely $100\times$ fainter than a first-magnitude star, so that each magnitude step corresponds to a change in brightness by a factor of $100^{1/(6-1)} = 100^{1/5} \approx 2.512$. For example, a star of magnitude $2$ is $2.512^1 \approx 2.512\times$ fainter than a star of magnitude $1$, a star of magnitude $6$ is $2.512^2 \approx 6.3\times$ fainter than a star of magnitude $4$, and a star of magnitude $25$ is $2.512^5 \approx 100\times$ fainter than a star of magnitude $20$.
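These ratios are easy to verify with a few lines of Python; this is just an illustrative sketch, and the variable names are my own:

```python
# Check the magnitude-step arithmetic: each step of one magnitude
# corresponds to a brightness ratio of 100**(1/5) ~ 2.512.
pogson_ratio = 100 ** (1 / 5)

for delta_m in (1, 2, 5):
    # A difference of delta_m magnitudes corresponds to this flux ratio:
    ratio = pogson_ratio ** delta_m
    print(f"delta m = {delta_m}: fainter by a factor of {ratio:.3f}")

# Prints approximately 2.512, 6.310, and 100.000.
```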
Hence, Pogson's ratio of $2.512$ leads us to Pogson's Equation:
$$\frac{F_1}{F_2} = \biggl(100^{1/5} \biggr)^{-(m_1-m_2)}$$
where $F_1$ and $F_2$ are the fluxes of two stars, $m_1$ and $m_2$ are their magnitudes, and the minus sign in front of the exponent accounts for the fact that numerically larger magnitudes refer to fainter stars.
Taking logarithms of Pogson's Equation, we obtain:
$$\log_{10}\frac{F_1}{F_2} = -(m_1-m_2) \cdot \log_{10}(100^{1/5}) = -\frac{2}{5} (m_1-m_2)$$
More conveniently, we can write:
$$\frac{F_1}{F_2} = 10^{-\frac{2}{5}(m_1-m_2)}$$ and
$$m_1-m_2 = -2.5 \log_{10}\frac{F_1}{F_2}$$
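As a minimal sketch, these last two relations translate directly into Python (the function names are my own, chosen for illustration):

```python
import math

def flux_ratio(m1: float, m2: float) -> float:
    """Flux ratio F1/F2 for two sources with magnitudes m1 and m2 (Pogson's equation)."""
    return 10 ** (-0.4 * (m1 - m2))

def magnitude_difference(f1: float, f2: float) -> float:
    """Magnitude difference m1 - m2 for two sources with fluxes f1 and f2."""
    return -2.5 * math.log10(f1 / f2)

# A magnitude difference of 5 corresponds to a flux ratio of 100:
print(flux_ratio(1.0, 6.0))              # ~100.0 (the magnitude-1 star is 100x brighter)
print(magnitude_difference(100.0, 1.0))  # -5.0  (the 100x brighter source has a magnitude 5 lower)
```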