Yesterday I wrote about the Alcock-Paczynski cosmological test, and how it narrowed the field of cosmological models down to two broad choices: an expanding universe with dark matter and dark energy, or a static universe that exhibits what is known as “tired light”. Now you might think that adding “tiredness” to light is no worse than inventing “dark matter” and “dark energy” to fit observational data. From a theoretical standpoint you’d be right. So why do astronomers accept dark matter and dark energy rather than tired light?
The short answer is that it’s where the evidence has led us, but let’s look at the details.
The idea of tired light was first proposed¹ by Fritz Zwicky in 1929 (pdf here), soon after Edwin Hubble discovered the relation between the distance of a galaxy and its redshift (now known as Hubble’s law). The basic idea was that rather than galactic redshift being due to an expansion of the universe, as Hubble suggested, the redshift could be due to some other mechanism, such as gravitational interaction with galaxies or collisions with diffuse gas between galaxies. The farther away the galaxy, the more of these interactions the light undergoes, and the greater its redshift.
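In the simplest version of this picture, light loses a fixed fraction of its energy per unit distance traveled, so the photon energy decays exponentially with distance. Since redshift is just the ratio of emitted to received energy, that gives 1 + z = exp(d/D) for some attenuation length D, and for nearby galaxies (d much smaller than D) the redshift grows linearly with distance, mimicking Hubble’s law. A minimal sketch of that arithmetic (the value of D here is purely illustrative, not a fitted parameter):

```python
import math

def tired_light_redshift(d, D):
    """Redshift after a photon travels distance d, assuming it loses a
    fixed fraction of its energy per unit distance (attenuation length D).
    E(d) = E0 * exp(-d/D), and since E scales as 1/wavelength,
    1 + z = E0 / E(d) = exp(d/D)."""
    return math.exp(d / D) - 1.0

# For d much smaller than D, z grows nearly linearly with distance,
# which is how the model can mimic Hubble's law for nearby galaxies.
D = 4000.0  # illustrative attenuation length in Mpc (assumed)
for d in (40.0, 400.0, 4000.0):
    print(d, "Mpc ->", round(tired_light_redshift(d, D), 5))
```

Note how doubling a small distance roughly doubles the redshift, while at distances comparable to D the relation turns noticeably nonlinear.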
This is not a crazy idea. Gravitational redshift does occur, and there is a reddening of light due to diffuse gas in intergalactic space. But we now know, for several reasons, that these effects are far too small to account for the observed redshift.
One prediction of tired light is that any type of collision or gravitational interaction that causes a redshift would cause the light to lose energy. This would also change the momentum of the light, which would not only lengthen the wavelength (redden the light), it would also scatter the light slightly. This means distant galaxies would not only appear redder, they would also appear blurry, and the more distant the galaxy, the blurrier it would appear. We don’t see that at all. Galaxies near and far appear sharp.
Another prediction is that the redshift we observe is not due to cosmic expansion; instead, the universe is essentially static. If that were the case, then a particular kind of supernova, known as type Ia, should brighten and fade in the same way regardless of its distance. Distant supernovae would appear dimmer, but the rate at which they brighten and fade should be the same as that of nearby supernovae. What we actually observe is that distant supernovae brighten and fade more slowly than closer ones, and the more distant the supernova, the slower the process. This is exactly what cosmic expansion predicts: if distant galaxies are moving away from us faster than closer ones, their light should be time dilated by their motion relative to us. The more distant the galaxy, the greater the time dilation, which is exactly what we see.
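The expansion prediction is simple to state quantitatively: a light curve lasting t days in the supernova’s rest frame is observed stretched to t × (1 + z) days. A quick sketch (the 20-day rest-frame duration and redshift values are illustrative only):

```python
def observed_duration(rest_days, z):
    """Cosmic expansion stretches a supernova light curve by a factor (1 + z)."""
    return rest_days * (1.0 + z)

# A type Ia supernova brightening and fading over ~20 rest-frame days
# appears to take longer the more distant (higher-redshift) it is:
for z in (0.0, 0.5, 1.0):
    print("z =", z, "->", observed_duration(20.0, z), "days")

# Tired light, by contrast, predicts no stretching at all:
# the observed duration would be 20 days at every redshift.
```

Observations of type Ia light curves follow the (1 + z) stretch, not the flat tired-light prediction.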
Perhaps the most convincing evidence that the tired light model doesn’t work can be seen in the cosmic microwave background (CMB). In the standard model the CMB is the heat remnant of the big bang. As the universe expanded, it cooled to a temperature of about 2.7 Kelvin. Just as a hot piece of iron glows red due to its heat, the universe glows due to its 2.7 Kelvin temperature. It just mainly glows at microwave wavelengths instead of visible ones. If the CMB is from an early hot period of the universe, then its brightness at different wavelengths should follow a very specific function known as the blackbody curve. If instead the cosmic background is due to the scattered remnants of tired light, it will follow a different curve.
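The blackbody curve in question is Planck’s law. A short stdlib-only sketch evaluating it at the CMB temperature shows the spectrum peaking in the microwave band, near 160 GHz (the physical constants are standard CODATA values; the 1 GHz grid spacing is an arbitrary choice):

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
K = 1.380649e-23     # Boltzmann constant (J/K)
C = 2.99792458e8     # speed of light (m/s)

def planck(nu, T):
    """Spectral radiance B_nu of a blackbody at temperature T (W m^-2 Hz^-1 sr^-1)."""
    return (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (K * T))

# Scan 1 GHz to 1000 GHz and find where the 2.725 K spectrum peaks.
T = 2.725
freqs = [1e9 * n for n in range(1, 1001)]
peak = max(freqs, key=lambda nu: planck(nu, T))
print(peak / 1e9, "GHz")  # peaks near 160 GHz -- squarely in the microwave band
```

This is why a 2.7 K universe “glows” in microwaves rather than visible light: the peak of the Planck curve at that temperature sits around 160 GHz, a wavelength of about 2 mm.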
In the image here, the observational data of the CMB are plotted. The error bars on the graph are 400 sigma. Scientific studies usually use 6 sigma as the cutoff for “certainty”, meaning the odds of a result that far from the mean arising by chance are about 2 in a billion (99.9999998%). Just for fun, I calculated 400 sigma in Mathematica to see how many 9s I would have in the percentage. The result came back as 100% to the limit of what Mathematica can calculate. Basically, if your theory doesn’t fall within those error bars, it is so abysmally wrong that the discrepancy can’t even be quantified in Mathematica. The black line shows the blackbody curve expected if the universe is expanding. The red curve is the one predicted by tired light. It’s not remotely close.
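You can reproduce the flavor of that calculation with nothing but the standard library: the probability that a normally distributed result lies within n sigma of the mean is erf(n/√2), and at 400 sigma the leftover tail underflows ordinary double precision entirely. This uses Python’s math.erfc rather than Mathematica, so it is a rough stand-in for the calculation described above:

```python
import math

def within_sigma(n):
    """Probability that a normally distributed result lies within n sigma of the mean."""
    return 1.0 - math.erfc(n / math.sqrt(2.0))

print(within_sigma(6))                   # ~0.999999998, the six-sigma figure
print(math.erfc(400 / math.sqrt(2.0)))   # 0.0 -- the tail underflows double precision
```

At 400 sigma the tail probability is on the order of 10^-34000, far below the smallest representable double (~10^-308), so any finite-precision calculation simply reports “100%”.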
So even though tired light agrees with the Alcock-Paczynski test, we can throw it out because of its disagreement with other evidence. Tired light is an example of a neat model that just doesn’t hold up to the evidence.
And in the end, the evidence wins.
Note: You can also check out this link for more details and images about tired light and why it fails to match the evidence.
Zwicky, Fritz. “On the redshift of spectral lines through interstellar space.” Proceedings of the National Academy of Sciences of the United States of America 15.10 (1929): 773. ↩︎