Archive for October, 2011

October 30, 2011

Astronomers Pin Down Galaxy Collision Rate

Galaxies have been growing over most of the 13.7 billion year history of the universe. Some of the growth is due to intergalactic gas that is gradually swept up by an existing galaxy and then drives star formation within it. But another growth mechanism is the merger of two (and sometimes more) existing galaxies into one. In this case, star formation in the merged galaxies increases as the gas within them is stirred up during the merger.

Bursts of star formation are important, since in most cases our only indication of the size of a galaxy is the brightness of its visible stars. In general, there’s no good way to determine how much of a galaxy’s mass is in the form of dark matter or of gas outside of stars.

Astrophysicists generally assume that large, symmetrical spiral galaxies (such as our Milky Way) have not merged with other galaxies of similar size, though they may have incorporated several smaller galaxies. However, a large elliptical galaxy is expected to result from the merger of a large spiral and another galaxy of similar size.

For a large elliptical galaxy, it’s natural to wonder about the relative importance of the two possible growth mechanisms in the galaxy’s history – growth by merger and growth by accretion of intergalactic gas. That in turn naturally raises the question of how often mergers occur between galaxies of varying relative sizes.

New research that is soon to be published gives much better estimates of merger rates.

NASA – Astronomers Pin Down Galaxy Collision Rate

A new analysis of Hubble surveys, combined with simulations of galaxy interactions, reveals that the merger rate of galaxies over the last 8 billion to 9 billion years falls between the previous estimates.

The galaxy merger rate is one of the fundamental measures of galaxy evolution, yielding clues to how galaxies bulked up over time through encounters with other galaxies. And yet, a huge discrepancy exists over how often galaxies coalesced in the past. Measurements of galaxies in deep-field surveys made by NASA’s Hubble Space Telescope generated a broad range of results: anywhere from 5 percent to 25 percent of the galaxies were merging.

The study, led by Jennifer Lotz of the Space Telescope Science Institute in Baltimore, Md., analyzed galaxy interactions at different distances, allowing the astronomers to compare mergers over time. Lotz’s team found that galaxies gained quite a bit of mass through collisions with other galaxies. Large galaxies merged with each other on average once over the past 9 billion years. Small galaxies were coalescing with large galaxies more frequently. In one of the first measurements of smashups between dwarf and massive galaxies in the distant universe, Lotz’s team found these mergers happened three times more often than encounters between two hefty galaxies.

It has not been easy to get good estimates of these merger rates. The problem is that all we can observe, for any given possible merger, is a snapshot of the action.
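The “snapshot” problem can be made concrete with a toy calculation (a hedged sketch — the numbers and the function name are invented for illustration, not taken from the paper): simulations supply the length of time a merger remains visibly recognizable, which converts the observed fraction of disturbed galaxies into a rate.

```python
# Toy model (numbers invented for illustration, not the paper's figures):
# if a fraction f of galaxies appear visibly disturbed, and simulations say
# a merger stays recognizable for T_obs, the merger rate is roughly f / T_obs.

def mergers_per_galaxy(observed_fraction: float,
                       observable_gyr: float,
                       window_gyr: float) -> float:
    """Expected mergers per galaxy over a time window of window_gyr Gyr."""
    rate_per_gyr = observed_fraction / observable_gyr
    return rate_per_gyr * window_gyr

# If 10% of galaxies look disturbed and the disturbed phase lasts ~1 Gyr,
# then over a 9-billion-year window each galaxy merges about 0.9 times:
print(mergers_per_galaxy(0.10, 1.0, 9.0))
```

The key input, and the hard part in practice, is the observability timescale, which is why the simulations of galaxy interactions matter so much to the result.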

October 23, 2011

Diamonds for quantum networks and computing

There are more than a dozen promising ideas for technology to implement quantum computers. One of them uses nitrogen-vacancy centers in diamonds. There was recent news about a theoretical study of how this technology could be used to implement coupled qubits.

This technology is especially interesting as it can also be used for a closely related purpose: controlled emission of single photons for applications in quantum information networks. Research just published explains how nanoscale diamond crystals can be fabricated on a single chip in an efficient, scalable process.

Diamonds, Silver and the Quest for Single Photons

Building on earlier work, scientists and engineers recently developed a manufacturing process that allows them to craft an assortment of miniature, silver-plated diamond posts that enable greater control over the production of photons at the atomic scale. The research could prove important for future generations of quantum computers.

Prior research demonstrated how nanowires carved in impurity-laden diamond crystal could efficiently emit individual photons, an important discovery for using light to rapidly read and write quantum-based data.

Now, research shows that novel nanostructures – silver-plated diamond posts – can also control the speed at which the process emits individual photons.

The development supports efforts to create robust, room-temperature quantum computers by setting the stage for diamond-based microchips.

Further reading:

Progress in quantum computing, qubit by qubit

Enhanced single-photon emission from a diamond–silver aperture

October 22, 2011

Worms with Genes for Long Life Pass on Longevity to Offspring…Even Without the Genes

Epigenetic changes in your parents’ chromosomes could affect your lifespan. At least, that is, if you’re a simple roundworm.

Recent research has shown that some epigenetic changes in plant DNA can be inherited. (See here, here). However, these changes aren’t robust, and tend to drop out after a few generations.

The heritable epigenetic changes in plants involved DNA methylation. The new research on roundworms (Caenorhabditis elegans) concerns a slightly different type of epigenetic change: methylation of a histone protein. Histones are the protein component of chromatin, the scaffolding around which DNA is wrapped in chromosomes. There are four core histones, and two copies of each of them form a nucleosome. About 146 base pairs of DNA are wrapped around each nucleosome. The expression of genes whose DNA is wrapped around a nucleosome can be affected by the methylation state of the H3 histone.
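As a toy calculation (not from the research), the figures above give a rough sense of how many nucleosomes it takes to package a genome; the linker length between nucleosomes is an assumed round number, and real values vary.

```python
# Rough sketch: how many nucleosomes package a genome? Assumes ~146 bp
# wrapped per nucleosome plus an assumed ~50 bp linker between nucleosomes
# (linker lengths actually vary by organism and chromatin context).

def nucleosome_count(genome_bp: float, wrapped_bp: int = 146,
                     linker_bp: int = 50) -> int:
    """Approximate number of nucleosomes needed to package genome_bp of DNA."""
    return int(genome_bp // (wrapped_bp + linker_bp))

# The C. elegans genome is roughly 100 million base pairs:
print(nucleosome_count(100e6))  # on the order of 500,000 nucleosomes
```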

Previous research had found that decreased methylation of a specific site on the H3 histone extended the lifespan of C. elegans by up to 30%. More specifically, a protein complex known as the H3K4me3 complex (composed of ASH-2, WDR-5, and SET-2) does the job of methylating the critical location in the H3 histone, producing the mark denoted H3K4me3. Mutations in certain components of this complex were known to bring about the longevity-extension effect.

How Longevity Is Passed On

Anne Brunet, an associate professor of genetics at the Stanford School of Medicine, found that mutations in a chromatin-modifying complex also significantly increased lifespan in C. elegans. The complex, known as the histone H3 lysine 4 trimethylation (H3K4me3) complex, is responsible for methylating a chromatin packaging protein called histone H3. This methylation is often associated with the increased expression of genes in the vicinity.

When Brunet and her colleagues knocked down members of the H3K4me3 complex—such as WDR-5 and SET-2—they extended C. elegans life by up to 30 percent, suggesting that the epigenetic changes regulated by the complex controlled genes related to lifespan.

“Basically we think that the reason why those worms live longer is because they have less of this H3K4 mark at specific loci in the genome,” Brunet explained. “That probably results in changes in the expression of some genes,” such as those that regulate the aging process, she added.

That much was understood before the latest research. The new and rather surprising finding is that even if the mutations affecting H3K4me3 levels are eliminated in succeeding generations, the longevity-extending effect persists for two more generations. The research found that expression of certain genes affecting metabolism – which often in turn affects longevity – persisted across generations, suggesting that other, as yet unknown, epigenetic changes occurred due to the original changes in H3 methylation.

The research paper itself concludes:

Our observations are consistent with the notion that H3K4me3 at specific loci may not be completely erased and replenished. Alternatively, the ASH-2/WDR-5/SET-2 complex could control the expression of the genes responsible for the erasure and replenishment of histone methylation marks between generations. Modulation of H3K4me3 modifiers in parents may also affect an unidentified protein or RNA that could in turn be inherited and cause lifespan changes.

Further reading:

Long life passed down through generations

Live long, pass it on

Worms with Genes for Long Life Pass on Longevity to Offspring…Even Without the Genes

Transgenerational epigenetic inheritance of longevity in Caenorhabditis elegans

October 20, 2011

Distant Galaxies Reveal The Clearing of the Cosmic Fog

The first billion years after the big bang (out of the roughly 13.7 billion years since) were among the most interesting, in terms of giving birth to the kinds of objects that still dominate the scene today. Mostly that means stars and galaxies, plus a few exotica such as quasars. Unfortunately, it’s very difficult for astronomers to actually see what was going on back then.

There are three reasons for this difficulty. First, astronomers can detect objects at very early times only at very large distances from us, due to the light travel time. So such objects are likely to be very dim, if detectable at all. Second, due to the expansion of the universe, light emitted by objects in the very early universe will be shifted in wavelength to much higher values. This places a large portion of the light into the infrared part of the spectrum, which is difficult or impossible to observe with ground-based telescopes.
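The wavelength shift itself is simple to compute. The sketch below uses the standard relation between redshift and wavelength; the example redshift is chosen only for illustration.

```python
# Cosmological redshift stretches every emitted wavelength by (1 + z).

def observed_wavelength(rest_nm: float, z: float) -> float:
    """Wavelength (nm) at which light emitted at rest_nm is seen at redshift z."""
    return rest_nm * (1 + z)

# Lyman-alpha (121.6 nm, far ultraviolet) emitted by a galaxy at z = 7
# arrives near 973 nm -- in the infrared, where ground-based observation
# is difficult.
print(observed_wavelength(121.6, 7))
```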

The third reason that very early objects are difficult to observe is that a great deal of neutral hydrogen gas filled the space between galaxies in the early universe, obscuring light from the galaxies much as atmospheric fog does. This is not only unfortunate but also ironic, since a good determination of just how much “fog” was present is one of the key pieces of information that astronomers need in order to understand what objects in the early universe were really like. Astronomers need to know how much “fog” there was in order to correct for it, so that the visual characteristics of early galaxies can be determined. Yet a lack of understanding of when and how the “fog” cleared makes this effort rather frustrating.

Research that’s just been published gives new information that helps clarify things a little. It’s based on observations of just 5 very early galaxies, which are among the earliest, most distant galaxies known. What the research is telling us is that the “fog” was clearing rapidly at the time the light we now see from these 5 galaxies was actually emitted.

Distant Galaxies Reveal The Clearing of the Cosmic Fog

An international team of astronomers used the VLT as a time machine, to look back into the early Universe and observe several of the most distant galaxies ever detected. They have been able to measure their distances accurately and find that we are seeing them as they were between 780 million and a billion years after the Big Bang.

The new observations have allowed astronomers to establish a timeline for what is known as the age of reionisation for the first time. During this phase the fog of hydrogen gas in the early Universe was clearing, allowing ultraviolet light to pass unhindered for the first time.

The new results, which will appear in the Astrophysical Journal, build on a long and systematic search for distant galaxies that the team has carried out with the VLT over the last three years.

Astronomers are sure that they know what the “fog” consisted of: ordinary atomic hydrogen gas that is not ionized. A neutral (not ionized) hydrogen atom consists of an electron and a single proton. High-energy photons interact strongly with neutral hydrogen but not with ionized hydrogen, as will be explained below. In the early universe about 75% of the mass was in the form of hydrogen, and the rest was helium, with only a small trace of a few other elements. It’s known that this hydrogen “fog” dissipated as most hydrogen atoms became ionized. But it’s not known just when the process of ionization began or when it ended. Still less is known about exactly what caused the ionization. The new research, however, does indicate that the process was occurring rapidly at a specific point in the universe’s history.
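One relevant threshold is easy to work out from standard physical constants (a back-of-envelope sketch, not a figure from the article): only photons above hydrogen’s ionization energy of 13.6 eV can strip the electron from a neutral atom, which corresponds to a wavelength in the extreme ultraviolet.

```python
# Back-of-envelope: the longest wavelength of light that can ionize
# neutral hydrogen from its ground state. Constants are standard values.

def ionization_threshold_nm(energy_eV: float = 13.6) -> float:
    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s
    eV = 1.602e-19  # joules per electron-volt
    return h * c / (energy_eV * eV) * 1e9  # wavelength in nm

print(ionization_threshold_nm())  # ~91 nm -- extreme ultraviolet
```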

Let’s take a closer look at the details of the process as they are currently understood.

October 18, 2011

Crab nebula’s neutron star is pulsing with gamma rays

The Crab Nebula is a pretty strange crustacean. It’s a supernova remnant, from a supernova whose detonation was seen on Earth in 1054 CE. The explosion left behind a rapidly spinning neutron star and a large quantity of ejected matter that’s still expanding away from the blast site – and putting on an impressive show in optical and many other wavelengths.

Last year and earlier this year there were reports, based on detection by satellite observatories, of occasional very high-energy gamma-ray flares, cause unknown, having energies of at least 10^15 eV. These are the highest-energy particles detected in a discrete source, and the mechanism of their acceleration is unclear. (See here, here.)

And now very high-energy gamma-ray pulses – less energetic than the flares, but still with energies up to 400 GeV – have shown up, associated with the central pulsar itself, which spins and pulses about 30 times per second. Normally, energetic X-ray and gamma-ray photons are generated from high-speed charged particles in a strong magnetic field, by the process of synchrotron radiation (or the closely related “curvature radiation”). But that appears to be ruled out in this case, since the energy limit on gamma rays that could be produced by this process is about 100 GeV.

Crab nebula’s neutron star is pulsing with gamma rays

The new Science paper packages up 107 hours of Crab Nebula observations spread over the course of four years. When the data was analyzed, a clear pattern of pulses became apparent at energies above 120 GeV, and the timing of the pulses lined up nicely with observations at lower energies made using the Fermi space telescope. The object is thus pulsing at much higher energies than any previously detected.

The findings have a number of implications. For starters, they clearly demonstrate that the exponential decay in pulse energy predicted on the basis of observations of other pulsars isn’t happening here. Instead, the fade-off at higher energies follows what’s called a “broken power law,” with a far more gradual tailing off. The results also allow the authors to estimate the distance from the neutron star to the source of the gamma rays: between 10 and 40 stellar radii from the surface.

If the process responsible for the gamma rays isn’t synchrotron radiation, what is it? The research paper suggests inverse Compton scattering, which involves high-energy particles transferring some of their energy to lower energy photons. But there’s as yet no evidence to confirm the nature of the process. It’s not even known whether a single process accounts for most of the gamma-ray flux, or instead different processes are responsible for the lower and higher energy portions.

Whatever is happening in the Crab Nebula, it’s of more interest than simply a curiosity of that object. There could be some connection with the still unknown mechanism in which ultra-high-energy cosmic rays are produced. There might even be new phenomena, such as a minute dependence of the speed of the highest-energy photons on their energy.

Further reading:

Crab pulsar beams most energetic gamma rays ever detected from a pulsar

Crab Pulsar emits light at highest energies ever detected in a pulsar system, scientists report

Astrophysicists spot pulsed radiation from Crab Nebula that wasn’t supposed to be there

Crab pulsar dazzles astronomers with its gamma-ray beams

Star packs big gamma-ray jolt, researchers discover

Detection of Pulsed Gamma Rays Above 100 GeV from the Crab Pulsar

October 17, 2011

Universe’s “Standard Candles” Are White Dwarf Mergers

Supernovae are spectacular but fairly rare events, at least on the human time scale. In our own galaxy, only 5 have been seen in the last 2000 years, all necessarily with the naked eye, since the most recent predates the invention of the telescope in 1608. Since none have occurred in our galaxy while telescopes were available to study them, let alone the sophisticated instruments we have now, it’s not very surprising that there’s a lot that isn’t known about how these stellar explosions occur.

We do know that there are several types of supernovae, distinguishable by properties of their spectra (if they are close enough to Earth for spectra to be obtained). The most basic distinction is between supernovae that have hydrogen lines in their spectra (Type II) and those that don’t (Type I). Further subdivisions are possible, but the most important is between the subtype of Type I that has a prominent line due to silicon (Type Ia) and those that don’t (Type Ib and Type Ic).

For various reasons, Type Ia supernovae are especially interesting. One is that at their peak they have a fairly uniform intrinsic brightness (absolute magnitude -19.3, in the somewhat peculiar way that astronomers measure brightness). All other types have absolute magnitudes that vary over a range of about -17 to -18.5. One unit on this scale corresponds to a brightness factor of about 2.512, so, for example, a supernova of absolute magnitude -17 is only about an eighth as bright as the standard Type Ia supernova. The fact that all Type Ia supernovae have nearly the same absolute magnitude is what makes them useful as “standard candles” for measuring cosmic distances.
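The magnitude arithmetic is easy to check. This sketch simply encodes the definition of the scale: a difference of five magnitudes corresponds to a factor of exactly 100 in brightness.

```python
# The astronomical magnitude scale: 5 magnitudes = a factor of 100 in flux,
# so one magnitude = 100**(1/5) ~ 2.512. Smaller (more negative) magnitudes
# are brighter.

def brightness_ratio(mag_a: float, mag_b: float) -> float:
    """How many times brighter an object of magnitude mag_a is than one of mag_b."""
    return 100 ** ((mag_b - mag_a) / 5)

# A standard Type Ia supernova (absolute magnitude -19.3) versus a
# magnitude -17 supernova:
print(brightness_ratio(-19.3, -17.0))  # ~8.3, i.e. the -17 event is ~12% as bright
```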

That Type Ia supernovae have this uniform intrinsic brightness has been explained by a model in which a white dwarf star of mass below 1.38 solar masses (the Chandrasekhar limit) gradually (or perhaps suddenly) exceeds this limit, resulting in an explosion that destroys the star. Almost all other types of supernovae can be explained as occurring when a very massive star (more than 9 solar masses) runs out of nuclear fuel, resulting in a “core collapse,” since the whole mass of the star can no longer be supported by the pressure generated by nuclear reactions.

Two more detailed scenarios have been proposed for circumstances leading to a Type Ia supernova. One involves a binary system with one white dwarf and one normal star, portions of whose matter are gradually drawn off by the white dwarf. The second involves a binary system of two white dwarfs that eventually merge, as gravitational waves slowly dissipate their orbital kinetic energy. Historically, astrophysicists have favored the first scenario, though with rather little actual evidence.

Surprisingly (or perhaps not, since white dwarfs aren’t very luminous stars), it has never been possible to identify a progenitor for a Type Ia supernova – all that have been observed with telescopes are in other galaxies. Neither has it been possible to detect another star, afterwards, at the event location, which would be the remaining member of the binary pair. But now research has just been published that provides evidence, rather indirectly, for the second scenario, in which Type Ia supernovae in fact result from the merger of two white dwarfs.

Universe’s “Standard Candles” Are White Dwarf Mergers

A new survey of distant Type Ia supernovae suggests that many if not most of these supernovae – key to astronomers’ conclusion that dark energy is accelerating the expansion of the universe – result when two white dwarf stars merge and annihilate in a thermonuclear explosion.

Evidence that Type Ia supernovae are caused by the merger of two white dwarfs – the so-called double-degenerate theory – has been accumulating over the past two years, based on surveys by the Hubble Space Telescope and others. Before, astronomers favored the single-degenerate model: the idea that Type Ia’s result from the explosion of a white dwarf grown too fat by feeding on its normal stellar companion.

So, if all Type Ia events look so much alike, in spite of different models that could explain them, and if it’s difficult or impossible to observe directly what the system was like before the event, just how does the research reach its conclusions? The reasoning is somewhat indirect, but ingenious and not that hard to follow.

October 14, 2011

Researchers Take the Temperature of Mars’ Past

Suppose you were told that it was possible to determine the surface temperature and the presence of water on some planet other than Earth – at a time more than 4 billion years in the past? Sounds a little far-fetched, no? In fact, the deduction actually seems quite plausible, if you consider the reasoning.

The specific finding is that about 4 billion years ago, there was a spot on Mars that was wet and enjoying essentially shirt-sleeve weather – actually a bit warmer than a typical summer day on the beach at San Francisco.

How could anyone figure that out? Well, in the first place, a piece of Mars landed on Earth about 13,000 years ago in the form of a meteorite known as ALH84001. Standard methods of isotopic analysis established that minerals in the rock crystallized about 4 billion years ago.

But that’s only the beginning of the story. Carbonates in the minerals (which must have formed somewhat before the minerals themselves crystallized) contained rare isotopes of carbon and oxygen (carbon-13 and oxygen-18). The exact ratio of these two isotopes depends on the temperature at which the carbonates originally formed. That temperature must have been around 18° C. And the only way that carbonates could have formed at that temperature was by precipitation from liquid water. Voilà.
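The temperature dependence can be sketched numerically. This is a hedged illustration, not the paper’s actual method: “clumped isotope” carbonate thermometry measures an excess of bonds between the rare isotopes (a quantity called Δ47, in per mil), which falls with formation temperature roughly as a/T² + b; the coefficients below are approximate values from one published calibration (Ghosh et al. 2006) and should be treated as illustrative.

```python
import math

# Hedged sketch of "clumped isotope" carbonate thermometry: the measured
# excess of 13C-18O bonds (Delta47, per mil) decreases with formation
# temperature approximately as Delta47 = A / T**2 + B (T in kelvin).
# Coefficients are approximate, from one published calibration.

A = 0.0592e6   # per mil * K^2 (approximate)
B = -0.02      # per mil (approximate)

def temperature_from_delta47(delta47: float) -> float:
    """Formation temperature in degrees Celsius implied by Delta47 (per mil)."""
    t_kelvin = math.sqrt(A / (delta47 - B))
    return t_kelvin - 273.15

# A measured Delta47 near 0.68 per mil implies a formation temperature
# in the vicinity of 18 degrees C:
print(temperature_from_delta47(0.68))
```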

If this conclusion holds up, it will be the first time that we have good evidence the surface of Mars was once considerably warmer than it is now.

Wet and Mild: Caltech Researchers Take the Temperature of Mars’s Past

Researchers at the California Institute of Technology (Caltech) have directly determined the surface temperature of early Mars for the first time, providing evidence that’s consistent with a warmer and wetter Martian past.

By analyzing carbonate minerals in a four-billion-year-old meteorite that originated near the surface of Mars, the scientists determined that the minerals formed at about 18 degrees Celsius (64 degrees Fahrenheit). “The thing that’s really cool is that 18 degrees is not particularly cold nor particularly hot,” says Woody Fischer, assistant professor of geobiology and coauthor of the paper.

October 14, 2011

Researchers demonstrate that iPS stem cells may be used for gene therapy

Gene therapy sounds good, in principle, as a means of treating diseases that result from genetic defects. But there have been at least two major practical problems in making use of gene therapy in the clinic. First, it’s very important for the safety of the procedure to make changes only to the defective gene (or even just the critical part of the gene) and no other portion of the DNA. Second, there needs to be an effective way to deliver the therapy, in whatever form it takes, to exactly the right tissues in the body that are affected by the defective gene.

Solving these two problems simultaneously is especially difficult. There are a variety of techniques for modifying DNA in specific ways, using a number of specialized enzymes. But these techniques are actually usable only in a test tube. Simply setting the enzymes loose in a patient’s body doesn’t work. The usual way to get around this is by preparing the appropriate modified DNA segments and incorporating them into “vectors”, such as some sort of virus, and introducing the vector into a patient’s body, in the hope that it will reach the right tissues to deliver the DNA, without causing any other problems. That hasn’t worked very well, so far, despite a large number of attempts.

What about taking some cells from the patient, modifying their DNA in vitro and then putting them back in the body? The problem there is being able to produce enough cells with good DNA, either before or after reintroduction to the patient’s body, to make a significant difference. Most human cells just don’t reproduce very prolifically outside the body, or even inside for that matter.

At this point you should be thinking, “stem cells!” They are specialized to reproduce quickly when needed. Up until now, there have been practical problems here, too. Adult stem cells capable of differentiating into the specific type of cell needed in a given tissue may be difficult or impossible to find in useful quantities. And with embryonic stem cells, even setting other problems aside, there’s the problem of avoiding rejection by the patient’s immune system.

The potential solution: induced pluripotent stem cells (iPSCs), made from any convenient cell type of the actual patient. It’s only been five years since iPSCs were first produced. Various practical problems have arisen along the way since then. Some have been mostly overcome; some haven’t. But progress seems to be occurring steadily – including very recently. The application to gene therapy involves making appropriate corrections to the DNA, producing a sufficient number of differentiated cells of the required type from the “fixed” iPSCs, and finally reintroducing them into the patient.

Alpha 1-antitrypsin deficiency is a genetic disorder caused by a point mutation that results in inadequate production of the alpha 1-antitrypsin (A1AT) enzyme in liver cells. The disease affects functioning of the lungs, as well as the liver. Research just published has shown that gene therapy to correct the mutation, applied to iPSCs, is successful at treating the condition in a mouse model.

Researchers demonstrate that iPS stem cells may be used for gene therapy

Researchers from the University of Cambridge, directed by Ludovic Vallier and David Lomas, and from the Sanger Institute, coordinated by Allan Bradley, began by sampling patients’ skin cells, which were then cultured in vitro and “dedifferentiated” to give them the properties of pluripotent stem cells: this is the “iPS cells” stage. Through genetic engineering, the scientists were then able to correct the mutation responsible for the disease. They then engaged the now “healthy” stem cells in the maturation process, leading them to differentiate into liver cells.

Scientists from the Institut Pasteur and Inserm, led by Hélène Strick-Marchand in the mixed Institut Pasteur/Inserm Innate Immunity unit (directed by James Di Santo), then tested the new human hepatic cells thus produced on an animal model afflicted with liver failure. Their research showed that the cells were entirely functional, suited to integration in existing tissue, and that they may contribute to liver regeneration in the mice treated.

Further reading:

Liver-disease mutation corrected in human stem cells

Spell-Checked Stem Cells Show Promise Against Liver Disease

Targeted gene correction of α1-antitrypsin deficiency in induced pluripotent stem cells

October 14, 2011

Seeking superior stem cells

The first research reporting the production of induced pluripotent stem cells (iPSCs) was announced five years ago. Since then, the field has made significant progress, encouraging high hopes that iPSCs are so similar to embryonic stem cells (ESCs) that they could eventually replace ESCs in research and therapeutic applications.

However, as with almost any new technology, there have been various problems along the way. These include incomplete reproduction of all important characteristics of ESCs, alterations of cell DNA that might cause cancer, and susceptibility of iPSCs to rejection by the immune system even of the donor of the reprogrammed cells.

Further progress depends on being able to deal with these problems. In addition, depending on the method used to produce iPSCs, the process may be too slow and inefficient for practical use. The latest research demonstrates what appears to be an effective technique to significantly improve speed and efficiency of reprogramming.

Seeking superior stem cells – Wellcome Trust Sanger Institute

Researchers from the Wellcome Trust Sanger Institute have today (10/10/2011) announced a new technique to reprogramme human cells, such as skin cells, into stem cells. Their process increases the efficiency of cell reprogramming by one hundred-fold and generates cells of a higher quality at a faster rate.

Until now cells have been reprogrammed using four specific regulatory proteins. By adding two further regulatory factors, Liu and co-workers brought about a dramatic improvement in the efficiency of reprogramming and the robustness of stem cell development. The new streamlined process produces cells that can grow more easily.

October 14, 2011

Suspects in the quenching of star formation exonerated

Active galactic nuclei (AGN) – produced by matter swept violently into the vortex around a supermassive black hole that may have billions of times as much mass as our Sun – can put on some of the most spectacular fireworks in the universe, over periods as long as 100 million years.

Astrophysicists have sometimes speculated that AGN may be so energetically active that they diminish or even extinguish formation of new stars in the host galaxy.

Recently published research, making use of the PRIMUS faint galaxy spectroscopic redshift survey, suggests that the speculations are wrong, and that AGN can be found even in galaxies in which very active star formation is occurring. The research gives some answers to the larger question of what special characteristics, if any, the host galaxies of AGN may have.

Suspects in the quenching of star formation exonerated

Because astronomers had seen these objects primarily in the oldest, most massive galaxies that glow with the red light of aging stars, many thought active galactic nuclei might help to bring an end to the formation of new stars, though the evidence was always circumstantial.

That idea has now been overturned by a new survey of the sky that found active galactic nuclei in all kinds and sizes of galaxies, including young, blue, star-making factories.

“The misconception was simply due to observational biases in the data,” said Alison Coil, assistant professor of physics at the University of California, San Diego and an author of the new report, which will be published in The Astrophysical Journal.

“Before this study, people found active galactic nuclei predominantly at the centers of the most massive galaxies, which are also the oldest and are making no new stars,” said James Aird, a postdoc at the University of California, San Diego’s Center for Astrophysics and Space Sciences, who led the study.
