Magnitude Primer

A primer on astronomical magnitudes

An astronomical magnitude is a measure of the brightness of a light source. Both its overall scaling and its reversal from our usual sense of the word "brightness" date back to the Greeks, so we have them to thank for this confusing system.

Why learn magnitudes?

It may seem like an antiquated way of reporting measurements (it is!). So why bother with it? The major reason is that it is used by nearly all observational astronomers, so if you want to read research papers, you'll need to understand the conventions.

Some useful resources

Wikipedia article: https://en.wikipedia.org/wiki/Magnitude_(astronomy)

Solar absolute magnitudes: http://mips.as.arizona.edu/~cnaw/sun.html

Common photometric filters and their properties: http://www.astronomy.ohio-state.edu/~martini/usefuldata.html

Notation

Symbols

$m$ : apparent magnitude

$m_a$ : apparent magnitude in photometric band, $a$ (e.g., $m_B$ would be the apparent magnitude measured in the $B$ band)

$m_a - m_b$ : color, the difference in magnitudes between passbands $a$ and $b$

$M$ : absolute magnitude

$\mu$ : distance modulus

$f$ : light flux (luminosity per unit area)

$L$ : luminosity (energy per time)

$L_\odot$ : luminosity of the Sun

Acronyms

UV : ultraviolet, bluer than optical light

NIR : near-infrared, redder than optical light

IR : infrared, even redder than NIR light

Definition

Astronomical magnitudes are defined as shown below, such that a difference in magnitudes measures the ratio of the fluxes of two sources. When the magnitude is defined from a ratio of fluxes (as is the case here), we call it the apparent magnitude, in contrast to the absolute magnitude (detailed in a later section).

$$m_1 - m_2 = -2.5 \log_{10}\left(\frac{f_1}{f_2}\right)$$

Usually, the second source is chosen to be some reference object for which we set $m_2 = 0$. This then becomes the so-called zero-point of our magnitude system. With a chosen zero-point, the magnitude of a source is thus

$$m = -2.5 \log_{10} \left(\frac{f}{f_\text{ref}}\right)$$

where $f$ is the physical flux (energy per time per area) of the source, and $f_\text{ref}$ is the flux of a reference source (i.e., it is some fixed flux value). If we refer to a particular part of the spectrum when reporting these measurements (see Photometric Passbands below), then these flux measurements are technically flux density (energy per time per area per frequency). If instead, we are considering flux across the entire spectrum, then we call this the bolometric magnitude.

Of particular note:

  • the logarithmic scale (base 10) with the silly factor of 2.5 means that a difference of one magnitude corresponds to a flux ratio of $10^{0.4} \approx 2.5$
  • the negative sign in the definition means that a larger magnitude corresponds to a smaller flux, so going from a magnitude of 1 to a magnitude of 2 means the flux drops to about 40% of its original value (a decrease of roughly 60%); see the sketch below
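
As a quick sanity check on these conversions, here is a minimal Python sketch (the function names are just for illustration) that goes back and forth between a magnitude difference and a flux ratio using the definition above.

```python
import numpy as np

def mag_diff_to_flux_ratio(delta_m):
    """Flux ratio f1/f2 corresponding to a magnitude difference m1 - m2."""
    return 10 ** (-0.4 * delta_m)

def flux_ratio_to_mag_diff(flux_ratio):
    """Magnitude difference m1 - m2 corresponding to a flux ratio f1/f2."""
    return -2.5 * np.log10(flux_ratio)

# A source that is 1 magnitude fainter delivers ~40% of the flux
print(mag_diff_to_flux_ratio(1.0))     # ~0.398
# A source with 1/100 of the flux is exactly 5 magnitudes fainter
print(flux_ratio_to_mag_diff(1/100))   # 5.0
```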

The role of the reference flux can also be confusing. Another way you might see this written is

$$m = -2.5 \log_{10} \left(f\right) + m_\text{ref}$$

where the reference magnitude, $m_\text{ref}$, is the magnitude zero-point. I personally prefer the first way of writing the definition of the magnitude, since we can clearly see that we are taking the logarithm of a dimensionless value.
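
To see that the two forms are equivalent, expand the logarithm of the ratio in the first definition:

$$m = -2.5 \log_{10} \left(\frac{f}{f_\text{ref}}\right) = -2.5 \log_{10}\left(f\right) + 2.5 \log_{10}\left(f_\text{ref}\right)$$

so the two expressions agree with $m_\text{ref} = 2.5 \log_{10}\left(f_\text{ref}\right)$; the second form simply hides the dimensionful reference flux inside the zero-point.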

Magnitude reference systems

What are these reference fluxes? There are two common systems in use: Vega magnitudes and AB magnitudes. The first system sets the reference flux to be that of Vega, one of the brightest stars in the sky. The corresponding flux values for the Vega system can be found in the third linked reference at the top of the notebook.

The second system (AB) sets the reference flux to be some constant physical value (3631 Jy, where 1 Jy is $10^{-23}$ erg/s/cm$^2$/Hz), which is the same across the entire spectrum. Sometimes you may even see other systems in use, so be sure to note which is being reported when you look up measurements!
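
For the AB system specifically, the conversion from a flux density to a magnitude is just the definition above with $f_\text{ref} = 3631$ Jy. Below is a minimal sketch, assuming the flux density has already been measured in janskys.

```python
import numpy as np

F_AB_REF_JY = 3631.0  # AB zero-point flux density in janskys

def ab_magnitude(flux_density_jy):
    """AB magnitude of a source with the given flux density (in Jy)."""
    return -2.5 * np.log10(flux_density_jy / F_AB_REF_JY)

# A 1 Jy source has an AB magnitude of about 8.9
print(ab_magnitude(1.0))  # ~8.90
```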

Photometric passbands and colors

Most photometry (brightness measurements using imaging) is done using passbands (or filters) which restrict the allowed light to be within some subset of the electromagnetic spectrum. There are various commonly used filter sets which cover the spectrum, from the ultraviolet through the optical to the infrared. For example, the Sloan Digital Sky Survey (SDSS) uses the $ugriz$ filter set. This is a 5-band photometric system where $u$ is an ultraviolet (UV) filter, $g$ is blue/green, $r$ is red, $i$ is near-infrared (NIR), and $z$ is farther red into the near-infrared. Another example is the Johnson/Bessell system of $UBVRI$, where $U$ is UV, $B$ is blue, $V$ is in the green part of the optical spectrum (the $V$ stands for visual), $R$ is red, and $I$ is NIR. The transmittance curves (the efficiency at which light is transmitted as a function of wavelength) of these filters are shown below.

Since the flux from a source typically varies with wavelength (i.e., its spectrum is not flat), the magnitude value of a source in one band will be different than that in another passband. We can quantify the difference between the magnitudes measured in two passbands ($a$ and $b$) with the color, $m_a - m_b$. This is often abbreviated using just the filter labels, e.g.,

$$V - I \equiv m_V - m_I$$

By convention, the redder band is subtracted from the bluer band. Recall that a magnitude is a logarithm of the flux, so a difference in magnitudes represents a ratio of fluxes. Because of the negative sign in the magnitude definition, holding to this convention means that a source with a positive color value is brighter in the redder band than in the bluer band, i.e., it has more red light than blue light (and vice versa).
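
As a concrete illustration, here is a small Python sketch (the band labels and flux values are made up for the example, and equal zero-point fluxes are assumed in both bands for simplicity) that computes a color from fluxes measured in two passbands:

```python
import numpy as np

def magnitude(flux, flux_ref):
    """Apparent magnitude for a given flux and reference flux (same units)."""
    return -2.5 * np.log10(flux / flux_ref)

# Hypothetical fluxes in arbitrary but consistent units
f_V, f_I = 1.0, 2.0          # twice as much flux in the redder (I) band
f_ref_V, f_ref_I = 1.0, 1.0  # assume equal zero-point fluxes for simplicity

color_VI = magnitude(f_V, f_ref_V) - magnitude(f_I, f_ref_I)
print(color_VI)  # ~+0.75: a positive color, so the source is red
```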

Absolute magnitudes and distance

The apparent magnitude is an observed quantity which depends on where in the universe we make our measurement. This is because the flux of light that we measure is inversely proportional to the square of the distance between the observer and the emitter. If we want to relate this to the intrinsic luminosity of the object, we need to know how far away it is. For a perfectly isotropic light emitter, the flux is given by

$$f = \frac{L}{4\pi d^2}$$

where $L$ is the intrinsic luminosity of the source and $d$ is the physical distance between the source and the observer.

In the wacky world of magnitudes, the luminosity of a source can be represented by the absolute magnitude, which is defined as the apparent magnitude that an object would have if observed from 10 parsecs (1 parsec $\approx 3\times 10^{18}$ cm $\approx 2\times 10^5$ AU is the distance from the Sun at which an object would show a parallax of 1 arcsecond as observed from Earth). For many applications, this distance scale is entirely inappropriate. For instance, observing the Milky Way Galaxy from 10 pc away makes no sense, since such a vantage point would still be inside the Galaxy. Nevertheless, it's the convention which has been used for nearly everything from the Sun to the most distant galaxies.

The distance can be translated into its effect on the magnitude by computing what is called the distance modulus. Since the flux falls off as the square of the distance, moving a source from 10 pc out to a distance $d$ changes its apparent magnitude by $-2.5\log_{10}\left[\left(10\text{ pc}/d\right)^2\right]$, so

$$\mu = 5\log_{10}\left(\frac{d}{10\text{ pc}}\right)$$

Note that the distance modulus (thank goodness) monotonically increases with distance.

If you have a measurement of the magnitude (flux) of a source and its distance, then the absolute magnitude is simply given by

$$M = m - \mu$$
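
Putting the last two equations together, here is a minimal Python sketch (function names are illustrative only) that computes the distance modulus and the absolute magnitude from an apparent magnitude and a distance in parsecs:

```python
import numpy as np

def distance_modulus(distance_pc):
    """Distance modulus mu for a distance given in parsecs."""
    return 5.0 * np.log10(distance_pc / 10.0)

def absolute_magnitude(apparent_mag, distance_pc):
    """Absolute magnitude M = m - mu."""
    return apparent_mag - distance_modulus(distance_pc)

# Example: a source with apparent magnitude 15 at a distance of 1 kpc
print(distance_modulus(1000.0))          # 10.0
print(absolute_magnitude(15.0, 1000.0))  # 5.0
```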

Alternatively, if you know the intrinsic luminosity of a source, then the absolute magnitude is defined analogously to the apparent magnitude, and so the difference in absolute magnitudes is a measure of the ratio of luminosities.

$$M = -2.5\log_{10}\left(\frac{L}{L_\text{ref}}\right)$$

We can use this definition to calculate the luminosity of a source. Often, luminosities are reported in solar units (i.e., $L_\odot \approx 4\times 10^{33} \text{ erg s}^{-1}$). The Sun then becomes our new standard of reference, and we can look up the zero-point offsets as the absolute magnitude of the Sun in our particular passband. For example, the absolute magnitude of the Sun in the $B$ band (in the Vega reference system) is about 5.4 (see the solar absolute magnitudes link above). If we measure the $B$ band absolute magnitude of a source as described above, then we can use the definition of magnitudes:

$$ M_B - M_{B, \odot} = -2.5\log_{10}\left(\frac{L_B}{L_{B, \odot}}\right)$$

where $M_{B, \odot} = 5.4$, and thus

$$ \frac{L_B}{L_{B, \odot}} = 10^{-0.4 (M_B - M_{B, \odot})}$$
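
A minimal Python sketch of this last conversion (the solar absolute magnitude here is the approximate $B$ band Vega value quoted above; swap in the appropriate value for whichever band and reference system you are using):

```python
def luminosity_solar_units(abs_mag, abs_mag_sun):
    """Band luminosity in units of the Sun's luminosity in that same band."""
    return 10 ** (-0.4 * (abs_mag - abs_mag_sun))

M_B_SUN = 5.4  # approximate B band (Vega) absolute magnitude of the Sun

# A source with M_B = 0.4 is 100 times as luminous as the Sun in the B band
print(luminosity_solar_units(0.4, M_B_SUN))  # ~100
```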

The brightness scales of the universe

All these definitions are well and good, but what range of values do "real life" magnitudes occupy?

On the scale of human eyesight, the brightest stars we see in the night sky are 0th to 1st magnitude, and the faintest stars you can see with your naked eye are 6th magnitude. The Sun during daytime has a magnitude of $\approx -26$. If the Sun were 10 parsecs away from the Earth, it would appear to be a fairly dim magnitude 5 star (since $\mu = 0$ at 10 pc, this is just the Sun's absolute magnitude of $\approx 4.8$ in the $V$ band).

The most distant galaxies we can see generally have apparent magnitudes of $\approx 30$.

Exercises

1) A star in the solar neighborhood has a measured $R$ band apparent magnitude of 5 and a parallax-derived distance of $100 \text{ pc}$. What is its absolute magnitude? What is this in solar luminosities?

2) Though we can learn much more about our own Milky Way Galaxy than about more distant galaxies, observing the Milky Way from inside the Galaxy means that certain measurements are more difficult than they are for nearby galaxies. Here we'll do a rough estimate of the luminosity of the Milky Way Galaxy and compare it to that of the Andromeda Galaxy (our nearest non-dwarf neighbor).

a. What would the luminosity of the Milky Way be? There are about $10^{11}$ stars in the Milky Way; as a gross simplification, assume that all of them are like the Sun.

b. What would the absolute magnitude of the Milky Way be? The absolute bolometric magnitude of the Sun is 4.74.

c. How does this compare to that of the Andromeda Galaxy (at around $2.6\times 10^{10} \ L_\odot$)?
