The Bolshoi (Большой, Russian for “big” or “great”) supercomputer simulation models a substantial portion of the universe: a cube about one billion light-years on a side. Any simulation is limited in the level of detail (the resolution) it can handle, which depends on the computing power used to run the model, and it also relies on the most accurately measured values available for a number of parameters. The new Bolshoi simulation has almost ten times the resolution of the best previous simulation, and it uses an updated, more accurate set of parameters.
The Bolshoi supercomputer simulation, the most accurate and detailed large cosmological simulation run to date, gives physicists and astronomers a powerful new tool for understanding such cosmic mysteries as galaxy formation, dark matter, and dark energy.
The simulation traces the evolution of the large-scale structure of the universe, including the evolution and distribution of the dark matter halos in which galaxies coalesced and grew. Initial studies show good agreement between the simulation’s predictions and astronomers’ observations.
So what does a sophisticated simulation of this kind actually tell us about the history of the universe, including parts we can’t observe directly?
The simulation assumes a particular model describing the physics of the system, i.e., the universe. In this case that model is Lambda Cold Dark Matter (ΛCDM), which hypothesizes that the universe consists of a specific mix of ingredients: estimated proportions of ordinary (baryonic) matter, cold dark matter, and dark energy (the Lambda in the name). It’s legitimate to question whether this is the “right” model and list of ingredients, but testing these basic assumptions is exactly the purpose of running the model. If the evolution of the model universe produces results that agree with observations, that’s good evidence that the model’s assumptions are valid. It’s not conclusive “proof” that the assumptions are correct, but it is strong evidence.
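As a concrete sketch of what “a specific mix of ingredients” means, here is a minimal Python illustration using round-number ΛCDM parameter values of the kind simulations of this era adopted (the figures shown are approximate WMAP-era values, not quoted from the Bolshoi papers), together with the Friedmann equation that turns them into an expansion rate:

```python
import math

# Approximate WMAP-era flat-LCDM parameters (illustrative round numbers,
# not the exact published Bolshoi inputs)
H0      = 70.0    # Hubble constant, km/s/Mpc
omega_m = 0.27    # total matter fraction (dark + baryonic)
omega_b = 0.0469  # baryonic matter (a subset of omega_m)
omega_l = 0.73    # dark energy (the Lambda term)

def hubble(z):
    """Expansion rate H(z) from the flat-LCDM Friedmann equation."""
    return H0 * math.sqrt(omega_m * (1 + z) ** 3 + omega_l)

# Once the ingredient fractions are fixed, the expansion history follows.
print(hubble(0))  # expansion rate today: 70.0 km/s/Mpc
print(hubble(1))  # expansion rate at redshift 1: 119.0 km/s/Mpc
```

The point is that once the ingredient proportions are fixed, quantities like the expansion rate at any epoch follow mechanically; the simulation does the analogous thing for the full gravitational evolution of structure.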
Any simulation model is really just a way of computing the predictions of a theory, given certain parameters, so that they can be tested against observations. The model is then considered to be as “good” as the accuracy of its predictions. Some of the input parameters are based on observational measurements, while others are “free” parameters that can be varied to explore the outcomes at different values. These free parameters may actually be known to some extent, though not very precisely; examples for a universe simulation include the neutrino masses and the parameter w in the dark-energy “equation of state”. Running the model with slightly different values could help narrow down the range of possibilities that are consistent with observations. (Because of the extensive amount of computer time required, such experiments can’t easily be done yet.)
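To illustrate how varying a free parameter like w changes a model’s predictions, here is a small sketch (illustrative parameter values, not the actual Bolshoi pipeline) of the expansion rate at redshift 1 for a few values of w; w = -1 corresponds to a cosmological constant:

```python
import math

# Illustrative flat-cosmology parameters (not the published Bolshoi inputs)
H0, omega_m, omega_de = 70.0, 0.27, 0.73

def hubble_w(z, w):
    """H(z) for a flat cosmology with constant dark-energy equation of state w.

    Dark-energy density scales as (1+z)**(3*(1+w)); for w = -1 it is
    constant, recovering the cosmological-constant (Lambda) case.
    """
    de = omega_de * (1 + z) ** (3 * (1 + w))
    return H0 * math.sqrt(omega_m * (1 + z) ** 3 + de)

# Slightly different w values give measurably different expansion rates,
# which is what lets observations narrow the allowed range.
for w in (-0.8, -1.0, -1.2):
    print(w, round(hubble_w(1.0, w), 1))
```

Comparing such predicted curves against observed expansion data is exactly the kind of test that constrains a free parameter.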
Although such adjustments of free parameters might seem like cheating, in fact it is very unlikely that the model’s predicted behavior can match observations unless the assumptions (physics and parameters) are good. Because the system being modeled (the universe) is so complex, bad assumptions are likely to produce model behavior wildly different from what is observed, since the outcome depends very sensitively on the initial conditions.
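Sensitive dependence on initial conditions is easy to demonstrate with a toy chaotic system; the sketch below uses the logistic map purely as a generic illustration of the phenomenon, not as a model of cosmic structure:

```python
# Generic illustration of sensitive dependence on initial conditions,
# using the logistic map (a standard chaotic system; unrelated to the
# Bolshoi physics).
def logistic_map(x, r=4.0):
    """One step of the logistic map; r = 4 puts it in the chaotic regime."""
    return r * x * (1 - x)

def max_divergence(x, y, steps):
    """Largest separation reached by two trajectories started at x and y."""
    worst = abs(x - y)
    for _ in range(steps):
        x, y = logistic_map(x), logistic_map(y)
        worst = max(worst, abs(x - y))
    return worst

# A difference of one part in a million in the starting point grows to an
# order-one difference within a few dozen steps.
print(max_divergence(0.300000, 0.300001, 50))
```

By the same logic, a model built on wrong assumptions would not match the observed universe by accident.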
So how good are the predictions? It has to be understood that the simulation has produced a lot of raw data: about 90 terabytes’ worth. That’s roughly as much as 10,000 movie DVDs, each holding 9 gigabytes of data. The data will be released gradually to the astrophysical community, which then has the job of comparing the computed numbers with observations.
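The DVD comparison is simple arithmetic, shown here as a quick back-of-envelope check (decimal units assumed, 1 TB = 1000 GB):

```python
# Back-of-envelope check of the data-volume comparison in the text.
raw_output_gb = 90 * 1000   # 90 terabytes expressed in gigabytes
dvd_capacity_gb = 9         # capacity of one dual-layer movie DVD
print(raw_output_gb / dvd_capacity_gb)  # 10000.0 DVDs
```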
However, one of the papers just published already reports some important successful predictions. These include the relationship between luminosity and rotation rate for many common types of galaxies, and the spatial distribution of galaxies as compared with data from the Sloan Digital Sky Survey.
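The luminosity-rotation relationship mentioned here is commonly parameterized as the Tully-Fisher relation, roughly L proportional to v to some power alpha near 4; the sketch below uses an illustrative slope, not a value taken from the Bolshoi papers:

```python
# Tully-Fisher sketch: luminosity scales roughly as rotation speed to the
# power alpha, with alpha near 4 (the exact slope depends on the waveband
# observed; the value here is illustrative only).
ALPHA = 4.0

def luminosity_ratio(v_slow, v_fast, alpha=ALPHA):
    """Ratio of luminosities implied by two rotation speeds."""
    return (v_fast / v_slow) ** alpha

print(luminosity_ratio(100.0, 200.0))  # doubling rotation speed -> 16x brighter
```

A simulation that reproduces this kind of scaling for its model galaxies is passing a nontrivial observational test.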
These results may be more important than simple refinements of existing knowledge. For example, the finding that the actual spatial distribution of galaxies is in accord with the modeling would mean that claims to the contrary from a few months ago were premature, because the earlier modeling was inadequate.