The standard model of cosmology is the Lambda Cold Dark Matter model (LCDM). It describes an expanding universe containing ordinary matter, dark matter, and dark energy. We have a confluence of evidence showing that this model works, so much of the current effort is focused on determining various parameters within the model, such as the density of dark matter or the sum of the neutrino masses. One of these parameters is known as the Hubble parameter (or Hubble constant), and it determines the rate at which our Universe is expanding. Since the accelerating expansion of the cosmos is driven by dark energy, this parameter also tells us things about dark energy. We thought we had a pretty good handle on the value of this parameter, but new research could make us question that assumption.
One thing that makes us confident in the LCDM model is the fact that we can determine its parameters in multiple ways. The Hubble parameter can be measured by looking at fluctuations in the cosmic microwave background, by the scale at which galaxies cluster, and by comparing the redshift of distant galaxies with their distance as determined by Cepheid variables and supernovae. It’s this last method that led to the initial discovery of dark energy. Each of these methods provides an independent measure of the Hubble parameter, and their agreement with each other is a way to validate the LCDM model.
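The distance-ladder approach boils down to the Hubble law, v = H0 × d: a galaxy’s recession velocity (from its redshift) is proportional to its distance (from standard candles), and the slope of that relation is the Hubble constant. Here is a minimal sketch of the idea using made-up, purely illustrative galaxy data — the numbers are not from any real survey:

```python
# Illustrative sketch of a Hubble-law fit, v = H0 * d.
# Distances (Mpc) would come from standard candles (Cepheids, supernovae);
# velocities (km/s) would come from redshifts. These values are hypothetical.
distances_mpc = [30.0, 50.0, 80.0, 120.0, 200.0]
velocities_km_s = [2100.0, 3400.0, 5700.0, 8300.0, 14200.0]

# Least-squares slope of a line through the origin:
# H0 = sum(v * d) / sum(d * d)
numerator = sum(v * d for v, d in zip(velocities_km_s, distances_mpc))
denominator = sum(d * d for d in distances_mpc)
h0 = numerator / denominator

print(f"H0 estimate: {h0:.1f} (km/s)/Mpc")
```

Real analyses are far more involved (peculiar velocities, calibration of the candles, and so on), but the core of the measurement really is this slope.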
Of course, these different measurements don’t all give exactly the same value. There’s a bit of variation between them, which is known as tension in the model. You would think that as our measurements improve the different methods would converge to a specific value, but that isn’t what’s happening. The latest results from the Planck spacecraft (which observes the CMB) give a Hubble constant of about 67 – 68 (km/s)/Mpc, while new results from Cepheid variables and supernovae give a value of about 71 – 75 (km/s)/Mpc. What’s troubling is that both of these measurements are based on good data, so they should both be accurate to within a few percent. Given the uncertainty of the measurements you could say that they “agree” statistically, but the best values from these methods clearly don’t agree well. This tension has always been lingering in the data, but it seems to get worse as the data gets better.
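The usual way to quantify this kind of tension is to ask how many standard deviations apart the two central values are, assuming the errors are independent and Gaussian. A small sketch, using illustrative numbers roughly consistent with the ranges quoted above (a Planck-style 67.8 ± 0.9 and a local-measurement-style 73.2 ± 1.7):

```python
import math

# Illustrative H0 values and 1-sigma uncertainties, in (km/s)/Mpc.
# These are rough stand-ins for the measurements discussed above.
h0_cmb, sigma_cmb = 67.8, 0.9      # CMB-based (Planck-style) value
h0_local, sigma_local = 73.2, 1.7  # distance-ladder (Cepheid/supernova) value

# Assuming independent Gaussian errors, combine uncertainties in quadrature,
# then express the difference in units of that combined uncertainty.
sigma_combined = math.hypot(sigma_cmb, sigma_local)
tension_sigma = abs(h0_local - h0_cmb) / sigma_combined

print(f"tension: {tension_sigma:.1f} sigma")
```

With numbers like these the discrepancy comes out at roughly the three-sigma level: individually each measurement looks fine, but together they are uncomfortably far apart for two estimates of the same quantity.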
So what’s going on? The short answer is we aren’t sure. It could be that there is some bias in one or both of the methods that we haven’t accounted for. Planck, for example, has to account for gas and dust between us and the cosmic background, and that may be skewing the results. It could be that the supernovae we use as standard candles to measure galactic distance aren’t as standard as we think. It could also be that our cosmological model isn’t quite right. The current model presumes that the universe is flat, and that cosmic expansion is driven by a cosmological constant. We have measurements to support those assumptions, but if they are slightly wrong that could account for the discrepancy as well.
This disagreement between measurements isn’t enough to make us discard the LCDM model entirely. All of the observations we have support the idea that the model is broadly correct. But there’s something in the details we aren’t getting right, and until we figure out what it is there will always be a bit of tension.
Paper: Adam G. Riess, et al. A 2.4% Determination of the Local Value of the Hubble Constant. arXiv:1604.01424 [astro-ph.CO] (2016)