The Greek astronomer Hipparchus is usually credited with originating the magnitude scale. He assigned the brightest stars he could see with the naked eye a magnitude of 1 and the faintest a magnitude of 6. However, in terms of the amount of energy received, a sixth magnitude star is not 6× fainter than a first magnitude star, but more like 100× fainter, due to the non-linear response of the human eye to light.
This led the English astronomer Norman Pogson to formalize the magnitude system in 1856. He proposed that a sixth magnitude star should be precisely 100× fainter than a first magnitude star, so that each magnitude corresponds to a change in brightness of $100^{1/(6-1)} = 100^{1/5} \approx 2.512$. For example, a star of magnitude 2 is $2.512^1 = 2.512$ times fainter than a star of magnitude 1, a star of magnitude 6 is $2.512^2 \approx 6.3$ times fainter than a star of magnitude 4, and a star of magnitude 25 is $2.512^5 = 100$ times fainter than a star of magnitude 20.
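As a quick numerical check of these worked examples, the short Python snippet below (an illustrative sketch, not part of the original notes) evaluates Pogson's ratio raised to a few magnitude differences:

```python
# Pogson's ratio: the flux ratio corresponding to a difference of 1 magnitude.
ratio = 100 ** (1 / 5)  # ~2.512

for delta_m in (1, 2, 5):
    # A star delta_m magnitudes fainter delivers ratio**delta_m times less flux.
    print(f"dm = {delta_m}: {ratio ** delta_m:.3f}x fainter")

# Output:
# dm = 1: 2.512x fainter
# dm = 2: 6.310x fainter
# dm = 5: 100.000x fainter
```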
Hence, Pogson's ratio of 2.512 leads us to Pogson's equation:
$$\frac{F_1}{F_2} = \left(100^{1/5}\right)^{-(m_1 - m_2)}$$
where $F_1$ and $F_2$ are the fluxes of two stars, $m_1$ and $m_2$ are their magnitudes, and the minus sign in front of the exponent accounts for the fact that numerically larger magnitudes refer to fainter stars.
Taking logarithms of Pogson's equation, we obtain:
$$\log_{10}\frac{F_1}{F_2} = -(m_1 - m_2)\cdot\log_{10}\left(100^{1/5}\right) = -\frac{2}{5}(m_1 - m_2)$$
More conveniently, we can write:
$$\frac{F_1}{F_2} = 10^{-\frac{2}{5}(m_1 - m_2)}$$ and
$$m_1 - m_2 = -2.5\,\log_{10}\frac{F_1}{F_2}$$
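These two forms translate directly into code. The sketch below is illustrative only (the function names are mine, not from the source) and implements Pogson's equation in both directions:

```python
import math

def flux_ratio(m1, m2):
    """Flux ratio F1/F2 for two stars of magnitude m1 and m2 (Pogson's equation)."""
    return 10 ** (-0.4 * (m1 - m2))

def magnitude_difference(f1, f2):
    """Magnitude difference m1 - m2 for two stars with fluxes f1 and f2."""
    return -2.5 * math.log10(f1 / f2)

# Example: a magnitude 25 star compared with a magnitude 20 star.
print(flux_ratio(25, 20))               # 0.01, i.e. 100x fainter
print(magnitude_difference(0.01, 1.0))  # 5.0 magnitudes
```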
Sources
- Lifted with minor modifications from: http://www.vikdhillon.staff.shef.ac.uk/teaching/phy217/instruments/phy217_inst_mags.html