Dark energy was “discovered” unexpectedly in 1997 (published in 1998), when surveys of distant Type Ia supernovae found something strange. These supernovae should all have about the same intrinsic luminosity. But instead, under assumptions current at the time about the rate of expansion of the universe, the farther away these supernovae were, the less their intrinsic luminosity would have to be.
If in fact these supernovae have pretty much the same intrinsic luminosity in all cases, then they are “standard candles”. Hence a simple application of the inverse square law (apparent luminosity is proportional to the inverse square of the distance) would determine how far away the objects should be. However, by another chain of reasoning, the distance can be calculated using “Hubble’s law” from the observed redshift in the objects’ spectra, because the law states that relative velocity (which is inferred from redshift) is proportional to distance, with a constant of proportionality called “Hubble’s constant”. (The law is named after Edwin Hubble, who published the observational evidence for it in 1929.)
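These two distance estimates can be sketched numerically. The figures below are illustrative only: the value of the Hubble constant and the low-redshift approximation v ≈ cz are assumptions of this sketch, not numbers from the article.

```python
import math

H0 = 70.0          # Hubble constant, km/s/Mpc (approximate, illustrative)
C = 299_792.458    # speed of light, km/s

def distance_from_luminosity(L_intrinsic, flux):
    """Inverse square law: flux = L / (4*pi*d**2), so d = sqrt(L / (4*pi*flux))."""
    return math.sqrt(L_intrinsic / (4.0 * math.pi * flux))

def distance_from_redshift(z):
    """Hubble's law with v ~ c*z (valid only for small redshift): D = v / H0, in Mpc."""
    return C * z / H0

# A redshift of 0.023 corresponds to roughly 100 Mpc under these assumptions.
print(distance_from_redshift(0.023))
```

The anomaly described in the article is precisely a disagreement between these two estimators: for the distant supernovae, the inverse-square distance came out larger than the Hubble-law distance.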
The problem was that the distance predicted by Hubble’s law was too small to account for how dim these supernovae appeared. Otherwise stated, if the remote supernovae were actually as close as the distances calculated by Hubble’s law, they could not be as luminous intrinsically as the more nearby supernovae. However, this problem would go away if there were an error in calculating distances from Hubble’s law – which could happen if something now called “dark energy” existed.
In order to verify this hypothesis of dark energy, astrophysicists want another type of standard candle besides Type Ia supernovae for gauging very large distances – ideally one that doesn’t depend on the behavior of Type Ia supernovae and that works out to much larger distances. They have suspected that another kind of cosmic explosion, one responsible for a much more energetic pulse of electromagnetic radiation – a gamma-ray burst – can fill the bill.
Research just announced has studied the properties of relatively nearby gamma-ray bursts and identified certain characteristics that allow predicting the intrinsic brightness of the burst. Comparing that brightness to what’s actually observed determines the distance of the event.
Dark energy is the dominant constituent of the Universe today, the one responsible for its accelerated expansion. Although astronomers observe the cosmological effects of dark energy, they still do not know exactly what it is. A new method for measuring the largest distances in the Universe, developed by scientists from the Faculty of Physics, University of Warsaw and the University of Naples Federico II, helps solve the mystery. A key role is played by the most powerful cosmic explosions – gamma-ray bursts.
What is the nature of dark energy, a recently discovered dominant constituent of the Universe today? Is expansion-accelerating dark energy an intrinsic property of space-time itself or rather a field unknown to science? A new distance-measuring method developed by scientists from the Faculty of Physics, University of Warsaw (FUW) and the University of Naples Federico II can provide the answer. “We are able to determine the distance of an explosion on the basis of the properties of the radiation emitted during gamma-ray bursts. Given that some of these explosions are related to the most remote objects in space that we know about, we are able, for the first time, to assess the speed of space-time expansion even in the relatively early periods after the Big Bang,” says Prof. Marek Demiański (FUW). The method was used to verify models of the structure of the Universe containing dark energy.
Since the discrepancy between distances predicted by Hubble’s law and by the inverse square law (assuming no actual difference in intrinsic luminosity) increased with redshift, and there was no theoretical reason for supernovae to behave differently at different redshifts, the alternative hypothesis that Hubble’s law was wrong over large distances had to be considered.
To do that, better means of estimating very large distances were needed. There are several problems with using Type Ia supernovae as standard candles over long distances. One is the possibility that they really aren’t that standard, and can vary a lot more than supposed in intrinsic luminosity. Most studies of relatively nearby supernovae, whose distances can be estimated by other conventional techniques, don’t support such variability. However, we don’t actually have observations of Type Ia supernovae in the early, much more distant universe, so there could be unknown reasons for greater variability.
We don’t have such observations because, as bright as Type Ia supernovae are, they aren’t bright enough to be observed at really large distances. Therefore, the idea can’t be ruled out that they might have behaved differently much earlier in the history of the universe. Furthermore, while the deviations from Hubble’s Law in distance predictions seem real, they’re still not that large out to distances that can be reliably gauged with Type Ia supernovae.
Not only is another way of gauging very large distances needed in order to provide an independent check on results obtained by using Type Ia supernovae, but it’s also needed to better determine the properties of dark energy, if in fact it’s the right explanation for the accelerating expansion of the universe. The detailed properties of dark energy need to be determined in order to more accurately account for the varying rates of expansion of the universe in the past – and in the future.
Let’s back up and look at how Hubble’s Law was formerly understood, and how it needed to be modified.
Redshift, which can actually be observed and measured, is not directly a measure of distance (or of velocity either, for that matter). Hubble’s Law was derived empirically from observations of rather nearby objects, and it assumed that redshift is strictly a result of an object’s motion relative to us. It states that an object’s velocity relative to us (as inferred from redshift) is proportional to the object’s distance (measured independently, somehow). The constant of proportionality, the Hubble constant, is denoted by H0. Symbolically, v = H0×D, where D is the distance and v is the velocity. Equivalently, D = v/H0.
Now suppose that H0 isn’t actually a constant – it only seems to be in the nearby universe – and instead it is a parameter that varies with time. Suppose further that the value of the parameter was less in the past than it is now. Then the measured value H0 used in the equation is actually too large, so we will underestimate the value of D. This in fact seems to be what was found: the supernova standard candles were actually farther away than expected because of the overestimated H0. And since they were actually farther away, they appeared dimmer than expected – which is exactly the anomaly that was observed.
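The underestimation argument can be made concrete with made-up numbers (purely illustrative; the rates and velocity below are not from the article):

```python
H0_TODAY = 70.0   # km/s/Mpc, hypothetical local measurement
H_EFF = 63.0      # hypothetical lower effective rate along the light path
v = 21_000.0      # recession velocity in km/s, inferred from redshift

d_assumed = v / H0_TODAY   # Hubble-law distance using today's rate: 300 Mpc
d_actual = v / H_EFF       # larger distance if the effective rate was lower

# Inverse square law: the object looks dimmer than expected by this flux ratio
dimming = (d_assumed / d_actual) ** 2
print(d_assumed, d_actual, dimming)
```

With these numbers the supernova sits about 11% farther away than the naive Hubble-law estimate, and so delivers only about 81% of the expected flux – dimmer than expected, just as observed.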
If the expansion rate of the universe that we observe now is greater than it was in the past, the expansion is accelerating. (Strictly speaking, “accelerating expansion” refers to the growth of the scale factor, the distance between objects; in detailed models H itself need not increase with time, but the simplified picture is adequate here.) Mathematically, there’s a differential equation, derived from General Relativity, that describes the expansion of the universe and allows H to vary with time rather than remain constant. It’s called the Friedmann equation, and it contains various time-dependent parameters. The expansion rate H is one of them (H0 being the value of H measured in our immediate vicinity, at the present time).
To account for this behavior of H, another term in the equation represents a quantity called “dark energy” (just to pick a name). The mathematical expression for this dark energy can take various forms, but the simplest is that the dark energy contained in a volume of space is proportional to the volume. The constant of proportionality is denoted by Λ – the “cosmological constant”, which happens to have been suggested by Einstein himself in order to “explain” why the universe did not collapse under its own gravity – before Hubble discovered that the universe is in fact expanding rather than collapsing.
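For reference, the Friedmann equation with a cosmological constant, written here in its standard textbook form for a spatially flat universe (a general illustration, not a formula specific to the research described):

```latex
H^{2} \equiv \left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho + \frac{\Lambda c^{2}}{3}
```

Here a is the scale factor, ȧ its time derivative, and ρ the matter density. As the universe expands, ρ falls off as 1/a³ while the Λ term stays constant, so the Λ term eventually dominates and the expansion stops slowing the way matter alone would dictate.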
Although a cosmological constant gives mathematically the simplest form of dark energy, in which the amount of dark energy in a volume of space is proportional to the volume (so that there is more of it all the time, as the universe expands), there are other forms it could take. As explained by the authors of the present research,
To this day no one knows exactly what dark energy is. There are two models explaining its nature. According to the first one, dark energy is a property described by the famous cosmological constant introduced by Albert Einstein. According to the second model, the accelerated expansion is caused by some unknown scalar field. “In other words, it is either-or: either space-time expands by itself or is expanded by a scalar physical field inside it,” says Prof. Demiański.
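One standard way cosmologists distinguish the two models is the dark-energy equation-of-state parameter w, which is background not mentioned in the text above: a cosmological constant has exactly w = −1 and a density that never changes, while a dynamical scalar field generally has w ≠ −1 and a density that evolves as the universe expands. A minimal sketch of the standard parameterization:

```python
def dark_energy_density(a, w, rho0=1.0):
    """Standard scaling: rho(a) = rho0 * a**(-3*(1+w)).
    w = -1 reproduces a cosmological constant (density independent of a)."""
    return rho0 * a ** (-3.0 * (1.0 + w))

print(dark_energy_density(2.0, -1.0))   # cosmological constant: stays 1.0
print(dark_energy_density(2.0, -0.9))   # evolving field: density decreases
```

Measuring distances at high redshift, as the gamma-ray-burst method aims to do, constrains how the dark-energy density evolved, and hence which of the two models is right.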