Saturday, 11 October 2014
On 19:19 by Asveth Sreiram No comments
For the first time, robotic prostheses controlled via implanted neuromuscular interfaces have become a clinical reality. A novel osseointegrated (bone-anchored) implant system gives patients new opportunities in their daily life and professional activities.
In January 2013 a Swedish arm amputee was the first person in the world to receive a prosthesis with a direct connection to bone, nerves and muscles. An article about this achievement and its long-term stability will now be published in the Science Translational Medicine journal.
"We have used osseointegration to create a long-term stable fusion between man and machine, where we have integrated them at different levels. The artificial arm is directly attached to the skeleton, thus providing mechanical stability. Then the human's biological control system, that is nerves and muscles, is also interfaced to the machine's control system via neuromuscular electrodes. This creates an intimate union between the body and the machine; between biology and mechatronics."
The direct skeletal attachment is created by what is known as osseointegration, a technology in limb prostheses pioneered by associate professor Rickard Brånemark and his colleagues at Sahlgrenska University Hospital. Rickard Brånemark led the surgical implantation and collaborated closely with Max Ortiz Catalan and Professor Bo Håkansson at Chalmers University of Technology on this project.
The patient's arm was amputated over ten years ago. Before the surgery, his prosthesis was controlled via electrodes placed over the skin. Robotic prostheses can be very advanced, but such a control system makes them unreliable and limits their functionality, and patients commonly reject them as a result.
Now, the patient has been given a control system that is directly connected to his own. He has a physically challenging job as a truck driver in northern Sweden, and since the surgery he has experienced that he can cope with all the situations he faces; everything from clamping his trailer load and operating machinery, to unpacking eggs and tying his children's skates, regardless of the environmental conditions (read more about the benefits of the new technology below).
The patient is also one of the first in the world to take part in an effort to achieve long-term sensation via the prosthesis. Because the implant is a bidirectional interface, it can also be used to send signals in the opposite direction -- from the prosthetic arm to the brain. This is the researchers' next step, to clinically implement their findings on sensory feedback.
"Reliable communication between the prosthesis and the body has been the missing link for the clinical implementation of neural control and sensory feedback, and this is now in place," says Max Ortiz Catalan. "So far we have shown that the patient has a long-term stable ability to perceive touch in different locations in the missing hand. Intuitive sensory feedback and control are crucial for interacting with the environment, for example to reliably hold an object despite disturbances or uncertainty. Today, no patient walks around with a prosthesis that provides such information, but we are working towards changing that in the very short term."
The researchers plan to treat more patients with the novel technology later this year.
"We see this technology as an important step towards more natural control of artificial limbs," says Max Ortiz Catalan. "It is the missing link for allowing sophisticated neural interfaces to control sophisticated prostheses. So far, this has only been possible in short experiments within controlled environments."
More about: How the technology works
The new technology is based on the OPRA treatment (osseointegrated prosthesis for the rehabilitation of amputees), in which a titanium implant is surgically inserted into the bone and becomes fixated to it by a process known as osseointegration (osseo = bone). A percutaneous component (abutment) is then attached to the titanium implant to serve as a metallic bone extension, on which the prosthesis is mounted. Electrodes are implanted in nerves and muscles as the interfaces to the biological control system. These electrodes record signals which are transmitted via the osseointegrated implant to the prosthesis, where they are decoded and translated into motions.
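To make that final decoding step concrete, here is a minimal sketch of how implanted-electrode muscle signals might be turned into motion commands. It is not the Chalmers/Sahlgrenska team's actual algorithm; the channel names, filter settings and threshold below are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=1000.0, cutoff=5.0):
    """Rectify a muscle-signal channel and low-pass filter it to get a smooth
    activation envelope (a common first step in myoelectric control)."""
    b, a = butter(2, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, np.abs(raw))

def decode_motion(envelopes, threshold=0.2):
    """Map two antagonistic muscle envelopes to a hand command.
    'close_ch' and 'open_ch' are hypothetical channel names."""
    close_act = envelopes["close_ch"][-1]
    open_act = envelopes["open_ch"][-1]
    if close_act > threshold and close_act > open_act:
        return "close_hand", close_act   # command plus a proportional speed
    if open_act > threshold:
        return "open_hand", open_act
    return "rest", 0.0

# Example with synthetic signals standing in for implanted-electrode recordings.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
raw_close = 0.5 * np.random.randn(t.size) * (t > 1.0)  # activity burst in second half
raw_open = 0.05 * np.random.randn(t.size)
envs = {"close_ch": emg_envelope(raw_close, fs), "open_ch": emg_envelope(raw_open, fs)}
print(decode_motion(envs))
```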
More about: Benefits of the new technology, compared to socket prostheses
Direct skeletal attachment by osseointegration means:
- Increased range of motion since there are no physical limitations by the socket -- the patient can move the remaining joints freely
- Elimination of sores and pain caused by the constant pressure from the socket
- Stable and easy attachment/detachment
- Increased sensory feedback due to the direct transmission of forces and vibrations to the bone (osseoperception)
- The prosthesis can be worn all day, every day
- No socket adjustments required (there is no socket)
Implanting electrodes in nerves and muscles means that:
- Due to the intimate connection, the patients can control the prosthesis with less effort and more precisely, and can thus handle smaller and more delicate items.
- The close proximity between source and electrode also prevents activity from other muscles from interfering (cross-talk), so that the patient can move the arm to any position and still maintain control of the prosthesis.
- More motor signals can be obtained from muscles and nerves, so that more movements can be intuitively controlled in the prosthesis.
- After the first fitting of the controller, little or no recalibration is required because there is no need to reposition the electrodes on every occasion the prosthesis is worn (as opposed to superficial electrodes).
- Since the electrodes are implanted rather than placed over the skin, control is not affected by environmental conditions (cold and heat) that change the skin state, or by limb motions that displace the skin over the muscles. The control is also resilient to electromagnetic interference (noise from other electric devices or power lines) as the electrodes are shielded by the body itself.
- Electrodes in the nerves can be used to send signals to the brain as sensations coming from the prostheses.
On 19:18 by Asveth Sreiram No comments
Even before he lost his right hand to an industrial accident 4 years ago, Igor Spetic had family open his medicine bottles. Cotton balls give him goose bumps.
Now, blindfolded during an experiment, he feels his arm hairs rise when a researcher brushes the back of his prosthetic hand with a cotton ball.
That's one of several types of sensation Spetic, of Madison, Ohio, can feel with the prosthetic system being developed by Case Western Reserve University and the Louis Stokes Cleveland Veterans Affairs Medical Center.
Spetic was excited just to "feel" again, and quickly received an unexpected benefit. The phantom pain he'd suffered, which he's described as a vice crushing his closed fist, subsided almost completely. A second patient, who had less phantom pain after losing his right hand and much of his forearm in an accident, said his, too, is nearly gone.
Despite their phantom pain, both men said that the first time they were connected to the system and received the electrical stimulation was the first time they had felt their hands since their accidents. In the ensuing months, they began feeling sensations that were familiar and were able to control their prosthetic hands with more -- well -- dexterity.
To watch a video of the research, click here: http://youtu.be/l7jht5vvzR4.
"The sense of touch is one of the ways we interact with objects around us," said Dustin Tyler, an associate professor of biomedical engineering at Case Western Reserve and director of the research. "Our goal is not just to restore function, but to build a reconnection to the world. This is long-lasting, chronic restoration of sensation over multiple points across the hand."
"The work reactivates areas of the brain that produce the sense of touch, said Tyler, who is also associate director of the Advanced Platform Technology Center at the Cleveland VA. "When the hand is lost, the inputs that switched on these areas were lost."
How the system works and the results will be published online in the journal Science Translational Medicine Oct. 8.
"The sense of touch actually gets better," said Keith Vonderhuevel, of Sidney, Ohio, who lost his hand in 2005 and had the system implanted in January 2013. "They change things on the computer to change the sensation.
"One time," he said, "it felt like water running across the back of my hand."
The system, which is limited to the lab at this point, uses electrical stimulation to give the sense of feeling. But there are key differences from other reported efforts.
First, the nerves that used to relay the sense of touch to the brain are stimulated by contact points on cuffs that encircle major nerve bundles in the arm, not by electrodes inserted through the protective nerve membranes.
Surgeons Michael W. Keith, MD, and J. Robert Anderson, MD, from Case Western Reserve School of Medicine and the Cleveland VA implanted three electrode cuffs in Spetic's forearm, enabling him to feel 19 distinct points, and two cuffs in Vonderhuevel's upper arm, enabling him to feel 16 distinct locations.
Second, when they began the study, the sensation Spetic felt when a sensor was touched was a tingle. To provide more natural sensations, the research team has developed algorithms that convert the input from sensors taped to a patient's hand into varying patterns and intensities of electrical signals. The sensors themselves aren't sophisticated enough to discern textures; they detect only pressure.
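As a rough sketch of that conversion (not the actual Case Western algorithm), a pressure reading from a fingertip sensor could be mapped to the intensity and pattern of stimulation pulses along the following lines; the parameter ranges and the pulse shape are illustrative assumptions.

```python
import numpy as np

def pressure_to_stimulation(pressure, p_max=10.0,
                            freq_range=(20.0, 100.0),
                            width_range=(50e-6, 250e-6)):
    """Map a fingertip pressure reading (arbitrary units, 0..p_max) to a
    stimulation pulse frequency (Hz) and pulse width (s).
    Stronger touch -> higher frequency and wider pulses."""
    x = np.clip(pressure / p_max, 0.0, 1.0)
    freq = freq_range[0] + x * (freq_range[1] - freq_range[0])
    width = width_range[0] + x * (width_range[1] - width_range[0])
    return freq, width

def pulse_train(freq, width, duration=0.1, fs=20000.0):
    """Build a simple charge-balanced biphasic pulse train at the requested
    frequency and pulse width (a generic pattern, not the published one)."""
    n = int(duration * fs)
    train = np.zeros(n)
    period = int(fs / freq)
    w = max(1, int(width * fs))
    for start in range(0, n - 2 * w, period):
        train[start:start + w] = 1.0           # cathodic phase
        train[start + w:start + 2 * w] = -1.0  # anodic (balancing) phase
    return train

freq, width = pressure_to_stimulation(pressure=4.0)
print(f"{freq:.0f} Hz pulses, {width * 1e6:.0f} microseconds wide")
train = pulse_train(freq, width)
```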
The different signal patterns, passed through the cuffs, are read as different stimuli by the brain. The scientists continue to fine-tune the patterns, and Spetic and Vonderhuevel appear to be becoming more attuned to them.
Third, the system has worked for 2½ years in Spetic and 1½ years in Vonderhuevel. Other research efforts have reported sensation lasting only about a month, and in some cases the ability to feel began to fade over weeks.
A blindfolded Vonderhuevel has held grapes or cherries in his prosthetic hand -- the signals enabling him to gauge how tightly he's squeezing -- and pulled out the stems.
"When the sensation's on, it's not too hard," he said. "When it's off, you make a lot of grape juice."
Different signal patterns interpreted as sandpaper, a smooth surface and a ridged surface enabled a blindfolded Spetic to discern each as they were applied to his hand. And when researchers touched two different locations with two different textures at the same time, he could discern the type and location of each.
Tyler believes that everyone creates a map of sensations from their life history that enables them to correlate an input to a given sensation.
"I don't presume the stimuli we're giving is hitting the spots on the map exactly, but they're familiar enough that the brain identifies what it is," he said.
Because of Vonderhuevel's and Spetic's continuing progress, Tyler is hopeful the method can lead to a lifetime of use. He is optimistic his team can develop a system a patient could use at home within five years.
In addition to hand prosthetics, Tyler believes the technology can be used to help those using prosthetic legs receive input from the ground and adjust to gravel or uneven surfaces. Beyond that, the neural interfacing and new stimulation techniques may be useful in controlling tremors, deep brain stimulation and more.
On 19:17 by Asveth Sreiram No comments
Just look into the light: not quite, but researchers at the UC Davis Center for Neuroscience and Department of Psychology have used light to erase specific memories in mice, and proved a basic theory of how different parts of the brain work together to retrieve episodic memories.
Optogenetics, pioneered by Karl Deisseroth at Stanford University, is a new technique for manipulating and studying nerve cells using light. The techniques of optogenetics are rapidly becoming the standard method for investigating brain function.
Kazumasa Tanaka, Brian Wiltgen and colleagues at UC Davis applied the technique to test a long-standing idea about memory retrieval. For about 40 years, Wiltgen said, neuroscientists have theorized that retrieving episodic memories -- memories about specific places and events -- involves coordinated activity between the cerebral cortex and the hippocampus, a small structure deep in the brain.
"The theory is that learning involves processing in the cortex, and the hippocampus reproduces this pattern of activity during retrieval, allowing you to re-experience the event," Wiltgen said. If the hippocampus is damaged, patients can lose decades of memories.
But this model has been difficult to test directly, until the arrival of optogenetics.
Wiltgen and Tanaka used mice genetically modified so that when nerve cells are activated, they both fluoresce green and express a protein that allows the cells to be switched off by light. They were therefore able both to follow exactly which nerve cells in the cortex and hippocampus were activated in learning and memory retrieval, and switch them off with light directed through a fiber-optic cable.
They trained the mice by placing them in a cage where they got a mild electric shock. Normally, mice placed in a new environment will nose around and explore. But when placed in a cage where they have previously received a shock, they freeze in place in a "fear response."
Tanaka and Wiltgen first showed that they could label the cells involved in learning and demonstrate that they were reactivated during memory recall. Then they were able to switch off the specific nerve cells in the hippocampus, and show that the mice lost their memories of the unpleasant event. They were also able to show that turning off other cells in the hippocampus did not affect retrieval of that memory, and to follow fibers from the hippocampus to specific cells in the cortex.
"The cortex can't do it alone, it needs input from the hippocampus," Wiltgen said. "This has been a fundamental assumption in our field for a long time and Kazu’s data provides the first direct evidence that it is true."
They could also see how the specific cells in the cortex were connected to the amygdala, a structure in the brain that is involved in emotion and in generating the freezing response.
Co-authors are Aleksandr Pevzner, Anahita B. Hamidi, Yuki Nakazawa and Jalina Graham, all at the Center for Neuroscience. The work was funded by grants from the Whitehall Foundation, McKnight Foundation, Nakajima Foundation and the National Science Foundation.
On 19:16 by Asveth Sreiram No comments
A team of scientists using NASA's Hubble Space Telescope has made the most detailed global map yet of the glow from a planet orbiting another star, revealing secrets of air temperatures and water.
"These measurements have opened the door for a new kind of comparative planetology," said team leader Jacob Bean of the University of Chicago.
"Our observations are the first of their kind in terms of providing a two-dimensional map of the planet's thermal structure that can be used to constrain atmospheric circulation and dynamical models for hot exoplanets," said team member Kevin Stevenson of the University of Chicago.
The Hubble observations show that the planet, called WASP-43b, is no place to call home. It's a world of extremes, where seething winds howl at the speed of sound from a 3,000-degree-Fahrenheit day side that is hot enough to melt steel to a pitch-black night side that sees temperatures plunge below a relatively cool 1,000 degrees Fahrenheit.
Because the planet is a hot ball of predominantly hydrogen gas, it has no surface features, such as oceans or continents, that can be used to track its rotation. Only the severe temperature difference between the day and night sides can be used by a remote observer to mark the passage of a day on this world.
WASP-43b is located 260 light-years away and was discovered in 2011. The planet is too distant to be photographed, but because its orbit is seen nearly edge-on from Earth, astronomers detected it by observing regular dips in the light of its parent star as the planet passes in front of it.
The planet is about the same size as Jupiter, but is nearly twice as massive. The planet is so close to its orange dwarf host star that it completes an orbit in just 19 hours. The planet is also gravitationally locked so that it keeps one hemisphere facing the star, just as our moon keeps one face toward Earth.
The scientists combined two previously separate methods of analyzing exoplanets for the first time to study the atmosphere of WASP-43b. Spectroscopy allowed them to determine the water abundance and temperature structure of the atmosphere, and by following the planet through a full rotation they were also able to measure the water abundances and temperatures at different longitudes.
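As a toy illustration of the phase-mapping idea (not the team's actual analysis pipeline), the brightness seen from Earth at each point in the orbit is a weighted average over the visible hemisphere, so a longitudinal temperature map predicts a phase curve that can then be inverted; the map values below are made-up numbers loosely matching the reported day/night contrast.

```python
import numpy as np

def phase_curve(temp_map_K, phases):
    """Toy thermal phase curve for a tidally locked planet.
    temp_map_K: temperatures (K) on a grid of longitudes, 0 deg = substellar point.
    phases: orbital phases in [0, 1); phase 0.5 = dayside facing the observer.
    Flux ~ T^4, weighted by the cosine of the angle from the sub-observer longitude."""
    lons = np.linspace(0.0, 360.0, temp_map_K.size, endpoint=False)
    curve = []
    for phase in phases:
        sub_obs = (360.0 * phase + 180.0) % 360.0   # longitude currently facing us
        ang = np.radians(lons - sub_obs)
        weight = np.clip(np.cos(ang), 0.0, None)    # only the visible hemisphere counts
        curve.append(np.sum(weight * temp_map_K**4) / np.sum(weight))
    return np.array(curve)

# Hypothetical map: ~1900 K on the dayside, ~800 K on the nightside.
lons = np.linspace(0.0, 360.0, 72, endpoint=False)
temps = 800.0 + 1100.0 * np.clip(np.cos(np.radians(lons)), 0.0, None)
phases = np.linspace(0.0, 1.0, 100)
print(phase_curve(temps, phases)[:5])
```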
Because there's no planet with these tortured conditions in our solar system, characterizing the atmosphere of such a bizarre world provides a unique laboratory for better understanding planet formation and planetary physics. "The planet is so hot that all the water in its atmosphere is vaporized, rather than condensed into icy clouds like on Jupiter," said team member Laura Kreidberg of the University of Chicago.
"Water is thought to play an important role in the formation of giant planets, since comet-like bodies bombard young planets, delivering most of the water and other molecules that we can observe," said Jonathan Fortney, a member of the team from the University of California, Santa Cruz.
However, the water abundances in the giant planets of our solar system are poorly known because water is locked away as ice that has precipitated out of their upper atmospheres. But on "hot Jupiters" -- that is, large planets like Jupiter that have high surface temperatures because they orbit very close to their stars -- water is in a vapor that can be readily traced. Kreidberg also emphasized that the team didn't simply detect water in the atmosphere of WASP-43b, but also precisely measured how much of it there is and how it is distributed with longitude.
In order to understand how giant planets form, astronomers want to know how enriched they are in different elements. The team found that WASP-43b has about the same amount of water as we would expect for an object with the same chemical composition as the Sun. Kreidberg said that this tells something fundamental about how the planet formed.
For the first time astronomers were able to observe three complete rotations of a planet, which occurred during a span of four days. This was essential to making such a precise measurement according to Jean-Michel Désert of the University of Colorado, Boulder.
The team next aims to make water-abundance measurements for different planets to explore their chemical abundances. Hubble's planned successor, the James Webb Space Telescope, will be able to not only measure water abundances, but also the abundances of carbon monoxide, carbon dioxide, ammonia, and methane, depending on the planet's temperature.
The results are presented in two new papers, one published online in Science Express on Oct. 9, and the other published in The Astrophysical Journal Letters on Sept. 12.
Story Source:
The above story is based on materials provided by Space Telescope Science Institute (STScI). Note: Materials may be edited for content and length.
Journal References:
- Kevin B. Stevenson, Jean-Michel Désert, Michael R. Line, Jacob L. Bean, Jonathan J. Fortney, Adam P. Showman, Tiffany Kataria, Laura Kreidberg, Peter R. McCullough, Gregory W. Henry, David Charbonneau, Adam Burrows, Sara Seager, Nikku Madhusudhan, Michael H. Williamson, and Derek Homeier. Thermal structure of an exoplanet atmosphere from phase-resolved emission spectroscopy. Science, 9 October 2014 DOI: 10.1126/science.1256758
- Laura Kreidberg, Jacob L. Bean, Jean-Michel Désert, Michael R. Line, Jonathan J. Fortney, Nikku Madhusudhan, Kevin B. Stevenson, Adam P. Showman, David Charbonneau, Peter R. McCullough, Sara Seager, Adam Burrows, Gregory W. Henry, Michael Williamson, Tiffany Kataria, Derek Homeier. A precise water abundance measurement for the hot Jupiter WASP-43b. The Astrophysical Journal Letters, 2014; 793 (2): L27 DOI: 10.1088/2041-8205/793/2/L27
Monday, 3 February 2014
On 19:43 by Asveth Sreiram No comments
Add yet another threat to the list of problems facing the rapidly disappearing rainforests of Central America: drug trafficking.
In an article in the journal Science, seven researchers who have done work in Central America point to growing evidence that drug trafficking threatens forests in remote areas of Honduras, Guatemala, Nicaragua and nearby countries.
Traffickers are slashing down forests, often within protected areas, to make way for clandestine landing strips and roads to move drugs, and converting forests into agribusinesses to launder their drug profits, the researchers say.
Much of this appears to be a response to U.S.-led anti-trafficking efforts, especially in Mexico, said Kendra McSweeney, lead author of the Science article and an associate professor of geography at The Ohio State University.
"In response to the crackdown in Mexico, drug traffickers began moving south into Central America around 2007 to find new routes through remote areas to move their drugs from South America and get them to the United States," McSweeney said.
"When drug traffickers moved in, they brought ecological devastation with them."
For example, the researchers found that the amount of new deforestation per year more than quadrupled in Honduras between 2007 and 2011 -- the same period when cocaine movements in the country also spiked.
McSweeney is a geographer who has done research in Honduras for more than 20 years, studying how indigenous people interact with their environment. The drug trade is not something she would normally investigate, but it has been impossible to ignore in recent years, she said.
"Starting about 2007, we started seeing rates of deforestation there that we had never seen before. When we asked the local people the reason, they would tell us: "los narcos" (drug traffickers)."
There were other indications of drug trafficking taking place in the area.
"I would get approached by people who wanted to change $20 bills in places where cash is very scarce and dollars are not the normal currency. When that starts happening, you know narcos are there," she said.
When McSweeney talked to other researchers in Central America, they had similar stories.
"The emerging impacts of narco-trafficking were being mentioned among people who worked in Central America, but usually just as a side conversation. We heard the same kinds of things from agricultural specialists, geographers, conservationists. Several of us decided we needed to bring more attention to this issue."
In the Science article, McSweeney and her co-authors say deforestation starts with the clandestine roads and landing strips that traffickers create in the remote forests. The infusion of drug cash into these areas helps embolden resident ranchers, land speculators and timber traffickers to expand their activities, primarily at the expense of the indigenous people who are often key forest defenders.
In addition, the drug traffickers themselves convert forest to agriculture as a way to launder their money. While much of this land conversion occurs within protected areas and is therefore illegal, drug traffickers often use their profits to influence government leaders to look the other way.
McSweeney said more research is needed to examine the links between drug trafficking and conservation issues. But there is already enough evidence to show that U.S. drug policy has a much wider effect than is often realized.
"Drug policies are also conservation policies, whether we realize it or not," McSweeney said.
"U.S.-led militarized interdiction, for example, has succeeded mainly in moving traffickers around, driving them to operate in ever-more remote, biodiverse ecosystems. Reforming drug policies could alleviate some of the pressures on Central America's disappearing forests."
The paper was co-authored by Erik Neilsen and Ophelia Wang of Northern Arizona University; Matthew Taylor of the University of Denver; David Wrathall of United Nations University Institute for Environment and Human Security in Bonn, Germany; Spencer Plumb of the University of Idaho; and Zoe Pearson, a graduate student in geography at Ohio State.
The research was supported in part by the National Geographic Society, Association of American Geographers, Ohio State and Northern Arizona University.
Thursday, 23 January 2014
On 20:50 by Asveth Sreiram No comments
By tracking the fast-produced elements, specifically magnesium in this study, astronomers can determine how rapidly different parts of the Milky Way were formed. The research suggests that stars in the inner regions of the Galactic disc were the first to form, supporting ideas that our Galaxy grew from the inside-out.
Using data from the 8-m VLT in Chile, one of the world's largest telescopes, an international team of astronomers took detailed observations of stars with a wide range of ages and locations in the Galactic disc to accurately determine their 'metallicity': the amount of chemical elements in a star other than hydrogen and helium, the two elements most stars are made from.
Immediately after the Big Bang, the Universe consisted almost entirely of hydrogen and helium, with levels of "contaminant metals" growing over time. Consequently, older stars have fewer elements in their make-up -- so have lower metallicity.
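For readers who want the standard notation (this is general practice in stellar astronomy, not something specific to the Gaia-ESO papers), a star's metallicity and its magnesium enhancement are usually expressed as logarithmic abundance ratios relative to the Sun:

```latex
% Standard bracket notation for stellar abundances (the solar value is 0 by definition)
\begin{align*}
  [\mathrm{Fe}/\mathrm{H}]  &= \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\star}
                              - \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\odot},\\[4pt]
  [\mathrm{Mg}/\mathrm{Fe}] &= \log_{10}\!\left(\frac{N_{\mathrm{Mg}}}{N_{\mathrm{Fe}}}\right)_{\!\star}
                              - \log_{10}\!\left(\frac{N_{\mathrm{Mg}}}{N_{\mathrm{Fe}}}\right)_{\!\odot}.
\end{align*}
```

A star with [Fe/H] = -1, for example, has one-tenth of the Sun's iron-to-hydrogen ratio, while a positive [Mg/Fe] marks the magnesium enhancement discussed in this study.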
"The different chemical elements of which stars -- and we -- are made are created at different rates -- some in massive stars which live fast and die young, and others in sun-like stars with more sedate multi-billion-year lifetimes," said Professor Gerry Gilmore, lead investigator on the Gaia-ESO Project.
Massive stars, which have short lives and die as 'core-collapse supernovae', produce huge amounts of magnesium during their explosive death throes. This catastrophic event can form a neutron star or a black hole, and even trigger the formation of new stars.
The team have shown that older, 'metal-poor' stars inside the Solar Circle -- the orbit of our Sun around the centre of the Milky Way, which takes roughly 250 million years to complete -- are far more likely to have high levels of magnesium. The higher level of the element inside the Solar Circle suggests this area contained more stars that "lived fast and died young" in the past.
The stars that lie in the outer regions of the Galactic disc -- outside the Solar Circle -- are predominantly younger, both 'metal-rich' and 'metal-poor', and have surprisingly low magnesium levels compared to their metallicity.
This discovery signifies important differences in stellar evolution across the Milky Way disc, with very efficient and short star formation timescales occurring inside the Solar Circle; whereas, outside the Sun's orbit, star formation took much longer.
"We have been able to shed new light on the timescale of chemical enrichment across the Milky Way disc, showing that outer regions of the disc take a much longer time to form," said Maria Bergemann from Cambridge's Institute of Astronomy, who led the study.
"This supports theoretical models for the formation of disc galaxies in the context of Cold Dark Matter cosmology, which predict that galaxy discs grow inside-out."
The findings offer new insights into the assembly history of our Galaxy, and are part of the first wave of new observations from the Gaia-ESO survey, the ground-based extension to the Gaia space mission -- launched by the European Space Agency at the end of last year -- and the first large-scale survey conducted on one of the world's largest telescopes: the 8-m VLT in Paranal, Chile.
The study is published online today on the preprint server arXiv (astro-ph), and has been submitted to the journal Astronomy and Astrophysics.
The new research also sheds further light on another much debated "double structure" in the Milky Way's disc -- the so-called 'thin' and 'thick' discs.
"The thin disc hosts spiral arms, young stars, giant molecular clouds -- all objects which are young, at least in the context of the Galaxy," explains Aldo Serenelli from the Institute of Space Sciences (Barcelona), a co-author of the study. "But astronomers have long suspected there is another disc, which is thicker, shorter and older. This thick disc hosts many old stars that have low metallicity."
During the latest research, the team found that:
- Stars in the young, 'thin' disc aged between 0 and 8 billion years all have a similar degree of metallicity, regardless of age in that range, with many of them considered 'metal-rich'.
- There is a "steep decline" in metallicity for stars aged over 9 billion years, typical of the 'thick' disc, with no detectable 'metal-rich' stars found at all over this age.
- But stars of different ages and metallicity can be found in both discs.
Added Gilmore: "This study provides exciting new evidence that the inner parts of the Milky Way's thick disc formed much more rapidly than did the thin disc stars, which dominate near our Solar neighbourhood."
In theory, say astronomers, the thick disc -- first proposed by Gilmore 30 years ago -- could have emerged in a variety of ways, from massive gravitational instabilities to consuming satellite galaxies in its formative years. "The Milky Way has cannibalised many small galaxies during its formation. Now, with the Gaia-ESO Survey, we can study the detailed tracers of these events, essentially dissecting the belly of the beast," said Greg Ruchti, a researcher at Lund Observatory in Sweden, who co-leads the project.
On 20:48 by Asveth Sreiram No comments
Jan. 19, 2014 — Extreme weather events fueled by unusually strong El Niños, such as the 1983 heatwave that led to the Ash Wednesday bushfires in Australia, are likely to double in number as our planet warms.
An international team of scientists from organizations including the ARC Centre of Excellence for Climate System Science (CoECSS), the US National Oceanic and Atmospheric Administration and CSIRO, published their findings in the journal Nature Climate Change.
"El Nino events are a multi-dimensional problem, and only now are we starting to understand better how they respond to global warming," said Dr Santoso. Extreme El Niño events develop differently from standard El Ninos, which first appear in the western Pacific. Extreme El Nino's occur when sea surface temperatures exceeding 28°C develop in the normally cold and dry eastern equatorial Pacific Ocean. This different location for the origin of the temperature
increase causes massive changes in global rainfall patterns.
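To make the detection criterion concrete, here is a minimal sketch that flags years as extreme El Niño events using the threshold described above (eastern equatorial Pacific sea surface temperatures exceeding 28°C); the example data and the averaging choices are illustrative assumptions, not the study's actual method.

```python
import numpy as np

def count_extreme_el_nino(sst_east_pacific_C, years, threshold_C=28.0):
    """Flag years whose seasonal-mean sea surface temperature in the eastern
    equatorial Pacific exceeds the threshold, and return the flagged years.
    sst_east_pacific_C: one seasonal-mean SST (deg C) per year."""
    sst = np.asarray(sst_east_pacific_C)
    return [year for year, temp in zip(years, sst) if temp > threshold_C]

# Made-up seasonal means for illustration; the 1982-83 and 1997-98 style
# events are the ones that push the eastern Pacific above 28 C.
years = [1982, 1983, 1990, 1997, 1998, 2005]
sst = [28.6, 27.1, 26.8, 28.9, 27.3, 26.9]
print(count_extreme_el_nino(sst, years))   # -> [1982, 1997]
```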
"The question of how global warming will change the frequency of extreme El Niño events has challenged scientists for more than 20 years," said co-author Dr Mike McPhaden of US National Oceanic and Atmospheric Administration.
"This research is the first comprehensive examination of the issue to produce robust and convincing results," said Dr McPhaden.
The impacts of extreme El Niño events extend to every continent across the globe.
The 1997-98 event alone caused US$35-45 billion in damage and claimed an estimated 23,000 human lives worldwide.
"During an extreme El Niño event countries in the western Pacific, such as Australia and Indonesia, experienced devastating droughts and wild fires, while catastrophic floods occurred in the eastern equatorial region of Ecuador and northern Peru," said lead author, CSIRO's Dr Wenju Cai
In Australia, the drought and dry conditions induced by the 1982-83 extreme El Niño preconditioned the Ash Wednesday Bushfire in southeast Australia, leading to 75 fatalities.
To achieve their results, the team examined 20 climate models that consistently simulate major rainfall reorganization during extreme El Niño events. They found a substantial increase in the frequency of such events from the present day through the next 100 years as the eastern Pacific Ocean warmed in response to global warming.
"This latest research based on rainfall patterns, suggests that extreme El Niño events are likely to double in frequency as the world warms leading to direct impacts on extreme weather events worldwide."
"For Australia, this could mean summer heat waves, like that recently experienced in the south-east of the country, could get an additional boost if they coincide with extreme El Ninos," said co-author, Professor Matthew England from CoECSS.
Wednesday, 22 January 2014
On 18:02 by Asveth Sreiram No comments
Jan. 16, 2014 — Archaeologists working at the southern Egyptian site of Abydos have discovered the tomb of a previously unknown pharaoh: Woseribre Senebkay -- and the first material proof of a forgotten Abydos Dynasty, ca. 1650-1600 BC. Working in cooperation with Egypt's Supreme Council of Antiquities, a team from the Penn Museum, University of Pennsylvania, discovered king Senebkay's tomb close to a larger royal tomb, recently identified as belonging to a king Sobekhotep (probably Sobekhotep I, ca. 1780 BC) of the 13th Dynasty.
The discovery of pharaoh Senebkay's tomb is the culmination of work that began during the summer of 2013 when the Penn Museum team, led by Dr. Josef Wegner, Egyptian Section Associate Curator of the Penn Museum, discovered a huge 60-ton royal sarcophagus chamber at South Abydos. The sarcophagus chamber, of red quartzite quarried and transported to Abydos from Gebel Ahmar (near modern Cairo), could be dated to the late Middle Kingdom, but its owner remained unidentified. Mysteriously, the sarcophagus had been extracted from its original tomb and reused in a later tomb -- but the original royal owner remained unknown when the summer season ended.
In the last few weeks of excavations, fascinating details of a series of kings' tombs and a lost dynasty at Abydos have emerged. Archaeologists now know that the giant quartzite sarcophagus chamber derives from a royal tomb built originally for a pharaoh Sobekhotep -- probably Sobekhotep I, the first king of Egypt's 13th Dynasty. Fragments of that king's funerary stela were found just recently in front of his huge, badly robbed tomb. A group of later pharaohs (reigning about a century and a half later during Egypt's Second Intermediate Period) were reusing elements from Sobekhotep's tomb for building and equipping their own tombs. One of these kings (whose name is still unknown) had extracted and reused the quartzite sarcophagus chamber. Another king's tomb found just last week is that of the previously unknown pharaoh: Woseribre-Senebkay.
A Lost Pharaoh and a Forgotten Dynasty
The newly discovered tomb of pharaoh Senebkay dates to ca. 1650 BC during Egypt's Second Intermediate Period. The identification was made by Dr. Wegner and Kevin Cahail, Ph.D. student, Department of Near Eastern Languages and Civilizations, University of Pennsylvania. The tomb of Senebkay consists of four chambers with a decorated limestone burial chamber. The burial chamber is painted with images of the goddesses Nut, Nephthys, Selket, and Isis flanking the king's canopic shrine. Other texts name the sons of Horus and record the king's titulary and identify him as the "king of Upper and Lower Egypt, Woseribre, the son of Re, Senebkay."
Senebkay's tomb was badly plundered by ancient tomb robbers who had ripped apart the king's mummy as well as stripped the pharaoh's tomb equipment of its gilded surfaces. Nevertheless, the Penn Museum archaeologists recovered the remains of king Senebkay amidst debris of his fragmentary coffin, funerary mask, and canopic chest. Preliminary work on Senebkay's skeleton by Penn graduate students Paul Verhelst and Matthew Olson (of the Department of Near Eastern Languages and Civilizations) indicates he was a man of moderate height, ca. 1.75 m (5'10"), and died in his mid-to-late 40s.
The discovery provides significant new evidence on the political and social history of Egypt's Second Intermediate Period. The existence of an independent "Abydos Dynasty," contemporary with the 15th (Hyksos) and 16th (Theban) Dynasties, was first hypothesized by Egyptologist K. Ryholt in 1997. The discovery of pharaoh Senebkay now proves the existence of this Abydos dynasty and identifies the location of their royal necropolis at South Abydos in an area anciently called Anubis-Mountain. The kings of the Abydos Dynasty placed their burial ground adjacent to the tombs of earlier Middle Kingdom pharaohs including Senwosret III (Dynasty 12, ca. 1880-1840 BC), and Sobekhotep I (ca. 1780 BC). There is evidence for about 16 royal tombs spanning the period ca. 1650-1600 BC. Senebkay appears to be one of the earliest kings of the "Abydos Dynasty." His name may have appeared in a broken section of the famous Turin King List (a papyrus document dating to the reign of Ramses II, ca. 1200 BC) where two kings with the throne name "Woser...re" are recorded at the head of a group of more than a dozen kings, most of whose names are entirely lost.
The tomb of pharaoh Senebkay is modest in scale. An important discovery was the badly decayed remains of Senebkay's canopic chest. This chest was made of cedar wood that had been reused from the nearby tomb of Sobekhotep I and still bore the name of that earlier king, covered over by gilding. Such reuse of objects from the nearby Sobekhotep tomb by Senebkay, like the reused sarcophagus chamber found during the summer, suggests the limited resources and isolated economic situation of the Abydos Kingdom, which lay in the southern part of Middle Egypt between the larger kingdoms of Thebes (Dynasties 16-17) and the Hyksos (Dynasty 15) in northern Egypt. Unlike those numbered dynasties, the pharaohs of the Abydos Dynasty were forgotten by history and their royal necropolis unknown until this discovery of Senebkay's tomb.
"It's exciting to find not just the tomb of one previously unknown pharaoh, but the necropolis of an entire forgotten dynasty," noted Dr. Wegner. "Continued work in the royal tombs of the Abydos Dynasty promises to shed new light on the political history and society of an important but poorly understood era of Ancient Egypt."
Saturday, 11 January 2014
On 17:41 by Asveth Sreiram No comments
Nicolas B. Cowan and Dorian Abbot's new model challenges the conventional wisdom that super-Earths would be very unlike Earth -- that each would be a waterworld, with its surface completely covered in water. They conclude that most tectonically active super-Earths -- regardless of mass -- store most of their water in the mantle and will have both oceans and exposed continents, enabling a stable climate such as Earth's.
Cowan is a postdoctoral fellow at Northwestern's Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), and Abbot is an assistant professor in geophysical sciences at UChicago.
"Are the surfaces of super-Earths totally dry or covered in water?" Cowan said. "We tackled this question by applying known geophysics to astronomy.
"Super-Earths are expected to have deep oceans that will overflow their basins and inundate the entire surface, but we show this logic to be flawed," he said. "Terrestrial planets have significant amounts of water in their interior. Super-Earths are likely to have shallow oceans to go along with their shallow ocean basins."
In their model, Cowan and Abbot treated the intriguing exoplanets like Earth, which has quite a bit of water in its mantle, the rocky part that makes up most of the volume and mass of the planet. The rock of the mantle contains tiny amounts of water, which quickly adds up because the mantle is so large. And a deep water cycle moves water between oceans and the mantle. (An exoplanet, or extrasolar planet, is a planet outside our solar system.)
Cowan presented the findings at a press conference, "Windows on Other Worlds," held Jan. 7 at the 223rd meeting of the American Astronomical Society (AAS) annual meeting in Washington, D.C.
He also will discuss the research at a scientific session to be held from 2 to 3:30 p.m. EST Wednesday, Jan. 8, at the AAS meeting (Potomac Ballroom D, Gaylord National Resort and Convention Center). The study will be published Jan. 20 in the Astrophysical Journal.
Water is constantly traded back and forth between the ocean and the rocky mantle because of plate tectonics, Cowan and Abbot said. The division of water between ocean and mantle is controlled by seafloor pressure, which is proportional to gravity.
The effects of seafloor pressure and high gravity are two novel factors in their model. As the size of a super-Earth increases, gravity and seafloor pressure also go up.
"We can put 80 times more water on a super-Earth and still have its surface look like Earth," Cowan said. "These massive planets have enormous seafloor pressure, and this force pushes water into the mantle."
It doesn't take that much water to tip a planet into being a waterworld. "If Earth was 1 percent water by mass, we'd all drown, regardless of the deep water cycle," Cowan said. "The surface would be covered in water. Whether or not you have a deep water cycle really matters for planets that are one one-thousandth or one ten-thousandth water."
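As a back-of-the-envelope illustration of both points (not the authors' actual calculation), seafloor pressure scales roughly as gravity times water density times ocean depth, and Earth's present surface oceans are only a tiny fraction of its mass; all numbers below are round illustrative values.

```python
# Rough scaling sketch: seafloor pressure and water budget for Earth versus a
# hypothetical super-Earth. Round illustrative values, not figures from the
# Cowan & Abbot paper.

RHO_WATER = 1000.0     # kg/m^3
G_EARTH = 9.8          # m/s^2
EARTH_MASS = 5.97e24   # kg
OCEAN_MASS = 1.4e21    # kg, Earth's present surface oceans (approx.)
OCEAN_DEPTH = 3700.0   # m, mean depth of Earth's oceans (approx.)

def seafloor_pressure(gravity, depth, rho=RHO_WATER):
    """Hydrostatic pressure at the base of an ocean: P = rho * g * h (Pa)."""
    return rho * gravity * depth

# Earth today: about 36 MPa at the average seafloor.
p_earth = seafloor_pressure(G_EARTH, OCEAN_DEPTH)
print(f"Earth seafloor pressure ~ {p_earth / 1e6:.0f} MPa")

# A hypothetical super-Earth with twice Earth's surface gravity and the same
# ocean depth: the pressure doubles, pushing more water into the mantle.
p_super = seafloor_pressure(2.0 * G_EARTH, OCEAN_DEPTH)
print(f"Super-Earth (2x gravity) seafloor pressure ~ {p_super / 1e6:.0f} MPa")

# The '1 percent water by mass' comparison quoted above:
one_percent_water = 0.01 * EARTH_MASS
print(f"1% of Earth's mass is ~ {one_percent_water / OCEAN_MASS:.0f}x the mass "
      "of today's surface oceans -- more than enough to drown the continents.")
```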
The ability of super-Earths to maintain exposed continents is important for planetary climate. On planets with exposed continents, like Earth, the deep carbon cycle is mediated by surface temperatures, which produces a stabilizing feedback (a thermostat on geological timescales).
"Such a feedback probably can't exist in a waterworld, which means they should have a much smaller habitable zone," Abbot said. "By making super-Earths 80 times more likely to have exposed continents, we've dramatically improved their odds of having an Earth-like climate."
Cowan and Abbot concede that there are two major uncertainties in their model: whether super-Earths have plate tectonics, and how much water Earth stores in its mantle.
"These are the two things we would like to know better to improve our model," Cowan said. "Our model is a shot from the hip, but it's an important step in advancing how we think about super-Earths.
"
"
On 17:38 by Asveth Sreiram No comments
"One-percent accuracy in the scale of the universe is the most precise such measurement ever made," says BOSS's principal investigator, David Schlegel, a member of the Physics Division of the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab). "Twenty years ago astronomers were arguing about estimates that differed by up to fifty percent. Five years ago, we'd refined that uncertainty to five percent; a year ago it was two percent. One-percent accuracy will be the standard for a long time to come."
BOSS is the largest program in the third Sloan Digital Sky Survey (SDSS-III). Since 2009, BOSS has used the Sloan Foundation Telescope at the Apache Point Observatory in New Mexico to record high-precision spectra of well over a million galaxies with redshifts from 0.2 to 0.7, looking back over six billion years into the universe's past. Schlegel says, "We believe the BOSS database includes more redshifts of galaxies than collected by all the other telescopes in the world."
BOSS will continue gathering data until June 2014. However, says Martin White, a member of Berkeley Lab, a professor of physics and astronomy at the University of California at Berkeley, and chair of the BOSS science survey team, "We've done the analysis now because we have 90 percent of BOSS's final data and we're tremendously excited by the results."
Baryon acoustic oscillations (BAO) are the regular clustering of galaxies, whose scale provides a "standard ruler" to measure the evolution of the universe's structure. Accurate measurement dramatically sharpens our knowledge of fundamental cosmological properties, including how dark energy accelerates the expansion of the universe.
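In schematic terms (not drawn from the BOSS papers themselves), a standard ruler of known comoving length constrains distances and the expansion rate as follows, where the BAO scale is set by the sound horizon at the end of the drag epoch, roughly 150 megaparsecs in comoving units:

```latex
% Schematic standard-ruler relations; r_d is the comoving BAO scale (~150 Mpc)
\begin{align*}
  D_M(z) &\approx \frac{r_d}{\Delta\theta}
    && \text{(transverse: the angle the ruler subtends gives the comoving distance)}\\[4pt]
  H(z)   &\approx \frac{c\,\Delta z}{r_d}
    && \text{(line of sight: the redshift span of the ruler gives the expansion rate)}
\end{align*}
```

Measuring the subtended angle and redshift span of the clustering scale at different epochs therefore pins down the expansion history, which is how galaxy clustering becomes a cosmological measuring stick.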
Combined with recent measurements of the cosmic microwave background radiation (CMB) and supernova measurements of the accelerating expansion, the BOSS results suggest that dark energy is a cosmological constant whose strength does not vary in space or time. Although the acceleration is unlikely to signal a flaw in Einstein's General Theory of Relativity, the authors of the BOSS analysis note that "understanding the physical cause of the accelerated expansion remains one of the most interesting problems in modern physics."
Among other cosmic parameters, says White, the BOSS analysis "also provides one of the best-ever determinations of the curvature of space. The answer is, it's not curved much."
Calling a three-dimensional universe "flat" means its shape is well described by the Euclidean geometry familiar from high school: straight lines are parallel and triangles add up to 180 degrees. Extraordinary flatness means the universe experienced relatively prolonged inflation, up to a decillionth of a second or more, immediately after the big bang.
"One of the reasons we care is that a flat universe has implications for whether the universe is infinite," says Schlegel. "That means -- while we can't say with certainty that it will never come to an end -- it's likely the universe extends forever in space and will go on forever in time. Our results are consistent with an infinite universe."
The BOSS analysis is based on SDSS-III's Data Releases 10 and 11 (DR 10 and DR 11) and has been submitted for publication in the Monthly Notices of the Royal Astronomical Society.
Ripples in a sea of galaxies
The BOSS analysis incorporates spectra of 1,277,503 galaxies and covers 8,509 square degrees of the sky visible from the northern hemisphere. This is the largest sample of the universe ever surveyed at this density. When complete, BOSS will have collected high-quality spectra of 1.3 million galaxies, plus 160,000 quasars and thousands of other astronomical objects, covering 10,000 square degrees.
Periodic ripples of density in visible matter ("baryons," for short) pervade the universe like the ripples raindrops leave on the surface of a pond. Regular galaxy clustering is the direct descendant of pressure waves that moved through the hot plasma of the early universe, which was so hot and dense that particles of light (photons) and particles of matter, including protons and electrons, were tightly coupled together. Invisible dark matter was also part of the mix.
By 380,000 years after the big bang, however, the temperature of the expanding mixture had cooled enough for light to escape, suffusing the newly transparent universe with intense radiation, which in the 13.4 billion years since has continued to cool to today's faint but pervasive cosmic microwave background.
Minute variations in the temperature of the CMB record periodicity in the original density ripples, of which the European Space Agency's Planck satellite has made the most recent and most accurate measures. The same periodicity is preserved in the clustering of the BOSS galaxies, a BAO signal which also mirrors the distribution of underlying dark matter.
Regular clustering at different eras, starting with the CMB, establishes the expansion history of the universe. BOSS collaborator Beth Reid of Berkeley Lab translates the two-dimensional sky coordinates of galaxies, plus their redshifts, into 3-D maps of the density of galaxies in space.
"It's from fluctuations in the density of galaxies in the volume we're looking at that we extract the BAO standard ruler," she says. "To compare different regions of the sky on an equal footing, first we have to undo variations from atmospheric effects or other patterns caused by how we observe the sky with our telescope." The results depend crucially on accurate measures of redshifts, which disclose the galaxies' positions in space and time. But galaxies don't move in lock step.
"When galaxies are close together their mutual gravitational attraction pushes them around and interferes with attempts to measure large-scale structure," Schlegel says. "Their peculiar motion makes it hard to write a formula for overall gravitational growth."
However, says Reid, "We have a very good model for what these distortions look like. The galaxy density field shows you where there are concentrations of matter, and the peculiar velocity field points in the direction of the net effect of all the local over- and under-densities."
"The BOSS data are awe-inspiring," says Martin White, "but many other pieces had to be put into place before we could get what we're after out of the data." Complex computer algorithms were essential for reconciling the inherent uncertainties. "We made thousands of model universes in a computer, and then observed them as BOSS would do and ran our analysis on them to answer the questions of 'What if?'"
By gauging how well their algorithms could analyze these model universes, known as "mocks" and based on catalogues of realistic but artificial galaxies, the experienced BOSS team was able to assess and fine-tune the algorithms when they were applied to the real BOSS data.
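The toy Monte Carlo below captures the spirit of that exercise: inject a known "BAO-like" peak into many noisy mock curves, run the same simple estimator on every mock, and check the bias and scatter of the recovered peak position. It is a deliberately simplified illustration; the real BOSS mocks are full three-dimensional galaxy catalogues and the estimator is far more sophisticated.

import numpy as np

rng = np.random.default_rng(42)
true_peak = 105.0                        # assumed "true" peak position (Mpc/h)
r = np.linspace(60.0, 150.0, 90)         # separation bins (Mpc/h)

def model(r, peak, amp=1.0, width=8.0):
    """Toy BAO-like bump: a Gaussian of fixed amplitude and width."""
    return amp * np.exp(-0.5 * ((r - peak) / width) ** 2)

recovered = []
for _ in range(1000):                    # 1000 mock realizations
    mock = model(r, true_peak) + rng.normal(0.0, 0.2, r.size)   # add noise
    trial_peaks = np.linspace(95.0, 115.0, 401)
    chi2 = [np.sum((mock - model(r, p)) ** 2) for p in trial_peaks]
    recovered.append(trial_peaks[int(np.argmin(chi2))])

recovered = np.array(recovered)
print(f"bias    = {recovered.mean() - true_peak:+.3f} Mpc/h")
print(f"scatter = {recovered.std():.3f} Mpc/h")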
The National Energy Research Scientific Computing Center (NERSC), based at Berkeley Lab, was critical to the analysis and the creation of the mocks. Says White, "NERSC set aside resources for us to push analyses through quickly when we were up against deadlines. They provide a virtual meeting place where members of the collaboration from all around the world can come together on a shared platform, with both the data and the computational resources they need to perform their research."
BOSS has now provided the most accurate calibration ever of BAO's standard ruler. The universe's expansion history has been measured with unprecedented accuracy during the very stretch of ancient time, over six billion years in the past, when expansion had stopped slowing and acceleration began. But accurate as they are, the new BOSS results are just the beginning. Greater coverage and better resolution in scale are essential to understanding dark energy itself.
The proposed Dark Energy Spectroscopic Instrument (DESI), based on an international partnership of nearly 50 institutions led by Berkeley Lab, would enable the Mayall Telescope on Kitt Peak in Arizona to map over 20 million galaxies, plus over three million quasars, in 14,000 square degrees of the northern sky. By filling in the missing eons that BOSS can't reach, DESI could sharpen and extend coverage of the expansion history of the universe from the first appearance of the cosmic background radiation to the present day.
In the meantime, BOSS, ahead of schedule for completion in June 2014, continues to be the premier instrument for mapping the universe.
On 17:36 by Asveth Sreiram No comments
The novel battery technology is reported in a paper published in Nature on January 9. Under the OPEN 2012 program, the Harvard team received funding from the U.S. Department of Energy's Advanced Research Projects Agency-Energy (ARPA-E) to develop the innovative grid-scale battery and plans to work with ARPA-E to catalyze further technological and market breakthroughs over the next several years.
The paper reports a metal-free flow battery that relies on the electrochemistry of naturally abundant, inexpensive, small organic (carbon-based) molecules called quinones, which are similar to molecules that store energy in plants and animals.
The mismatch between the availability of intermittent wind or sunshine and the variability of demand is the biggest obstacle to getting a large fraction of our electricity from renewable sources. A cost-effective means of storing large amounts of electrical energy could solve this problem.
The battery was designed, built, and tested in the laboratory of Michael J. Aziz, Gene and Tracy Sykes Professor of Materials and Energy Technologies at the Harvard School of Engineering and Applied Sciences (SEAS). Roy G. Gordon, Thomas Dudley Cabot Professor of Chemistry and Professor of Materials Science, led the work on the synthesis and chemical screening of molecules. Alán Aspuru-Guzik, Professor of Chemistry and Chemical Biology, used his pioneering high-throughput molecular screening methods to calculate the properties of more than 10,000 quinone molecules in search of the best candidates for the battery.
Flow batteries store energy in chemical fluids contained in external tanks -- as with fuel cells -- instead of within the battery container itself. The two main components -- the electrochemical conversion hardware through which the fluids are flowed (which sets the peak power capacity), and the chemical storage tanks (which set the energy capacity) -- may be independently sized. Thus the amount of energy that can be stored is limited only by the size of the tanks. The design permits larger amounts of energy to be stored at lower cost than with traditional batteries.
By contrast, in solid-electrode batteries, such as those commonly found in cars and mobile devices, the power conversion hardware and energy capacity are packaged together in one unit and cannot be decoupled. Consequently they can maintain peak discharge power for less than an hour before being drained, and are therefore ill suited to store intermittent renewables.
"Our studies indicate that one to two days' worth of storage is required for making solar and wind dispatchable through the electrical grid," said Aziz.
To store 50 hours of energy from a 1-megawatt power capacity wind turbine (50 megawatt-hours), for example, a possible solution would be to buy traditional batteries with 50 megawatt-hours of energy storage, but they'd come with 50 megawatts of power capacity. Paying for 50 megawatts of power capacity when only 1 megawatt is necessary makes little economic sense.
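A quick calculation makes the point. The cost figures below are round numbers assumed purely for illustration (they are not from the Nature paper), but they show how decoupling power from energy changes the economics:

# Illustrative sizing for 50 MWh of storage behind a 1 MW turbine. Every cost
# figure here is an assumed round number for illustration only.
energy_needed_kwh = 50_000.0    # 50 MWh
power_needed_kw   = 1_000.0     # 1 MW

# Flow battery: conversion hardware priced per kW, tanks priced per kWh.
flow_power_cost_per_kw   = 500.0    # assumed
flow_energy_cost_per_kwh = 100.0    # assumed
flow_total = (power_needed_kw * flow_power_cost_per_kw
              + energy_needed_kwh * flow_energy_cost_per_kwh)

# Solid-electrode battery: buying 50 MWh of cells drags along ~50 MW of
# power capacity whether it is needed or not.
solid_cost_per_kwh = 300.0          # assumed, power capacity bundled in
solid_total = energy_needed_kwh * solid_cost_per_kwh

print(f"flow battery (1 MW / 50 MWh):   ${flow_total:,.0f}")
print(f"solid-electrode (50 MWh):       ${solid_total:,.0f}")

With these assumed numbers, only the tanks have to grow with the hours of storage, so the flow battery comes out far cheaper than buying power capacity that would never be used.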
For this reason, a growing number of engineers have focused their attention on flow battery technology. But until now, flow batteries have relied on chemicals that are expensive or difficult to maintain, driving up the energy storage costs.
The active components of electrolytes in most flow batteries have been metals. Vanadium is used in the most commercially advanced flow battery technology now in development, but its cost sets a rather high floor on the cost per kilowatt-hour at any scale. Other flow batteries contain precious metal electrocatalysts such as the platinum used in fuel cells.
The new flow battery developed by the Harvard team already performs as well as vanadium flow batteries, with chemicals that are significantly less expensive, and with no precious metal electrocatalyst.
"The whole world of electricity storage has been using metal ions in various charge states but there is a limited number that you can put into solution and use to store energy, and none of them can economically store massive amounts of renewable energy," Gordon said. "With organic molecules, we introduce a vast new set of possibilities. Some of them will be terrible and some will be really good. With these quinones we have the first ones that look really good."
Aspuru-Guzik noted that the project is very well aligned with the White House Materials Genome Initiative. "This project illustrates what the synergy of high-throughput quantum chemistry and experimental insight can do," he said. "In a very quick time period, our team honed in to the right molecule. Computational screening, together with experimentation, can lead to discovery of new materials in many application domains."
Quinones are abundant in crude oil as well as in green plants. The molecule that the Harvard team used in its first quinone-based flow battery is almost identical to one found in rhubarb. The quinones are dissolved in water, which prevents them from catching fire.
To back up a commercial wind turbine, a large storage tank would be needed, possibly located in a below-grade basement, said co-lead author Michael Marshak, a postdoctoral fellow at SEAS and in the Department of Chemistry and Chemical Biology. Or, for a whole field of turbines or a large solar farm, you could imagine a few very large storage tanks.
The same technology could also have applications at the consumer level, Marshak said. "Imagine a device the size of a home heating oil tank sitting in your basement. It would store a day's worth of sunshine from the solar panels on the roof of your house, potentially providing enough to power your household from late afternoon, through the night, into the next morning, without burning any fossil fuels."
"The Harvard team's results published in Nature demonstrate an early, yet important technical achievement that could be critical in furthering the development of grid-scale batteries," said ARPA-E Program Director John Lemmon. "The project team's result is an excellent example of how a small amount of catalytic funding from ARPA-E can help build the foundation to hopefully turn scientific discoveries into low-cost, early-stage energy technologies."
Team leader Aziz said the next steps in the project will be to further test and optimize the system that has been demonstrated on the bench top and bring it toward a commercial scale. "So far, we've seen no sign of degradation after more than 100 cycles, but commercial applications require thousands of cycles," he said. He also expects to achieve significant improvements in the underlying chemistry of the battery system. "I think the chemistry we have right now might be the best that's out there for stationary storage and quite possibly cheap enough to make it in the marketplace," he said. "But we have ideas that could lead to huge improvements."
By the end of the three-year development period, Connecticut-based Sustainable Innovations, LLC, a collaborator on the project, expects to deploy demonstration versions of the organic flow battery contained in a unit the size of a horse trailer. The portable, scaled-up storage system could be hooked up to solar panels on the roof of a commercial building, and electricity from the solar panels could either directly supply the needs of the building or go into storage and come out of storage when there's a need. Sustainable Innovations anticipates playing a key role in the product's commercialization by leveraging its ultra-low cost electrochemical cell design and system architecture already under development for energy storage applications.
"You could theoretically put this on any node on the grid," Aziz said. "If the market price fluctuates enough, you could put a storage device there and buy electricity to store it when the price is low and then sell it back when the price is high. In addition, you might be able to avoid the permitting and gas supply problems of having to build a gas-fired power plant just to meet the occasional needs of a growing peak demand."
This technology could also provide very useful backup for off-grid rooftop solar panels -- an important advantage considering some 20 percent of the world's population does not have access to a power distribution network.
William Hogan, Raymond Plank Professor of Global Energy Policy at Harvard Kennedy School, and one of the world's foremost experts on electricity markets, is helping the team explore the economic drivers for the technology.
Trent M. Molter, President and CEO of Sustainable Innovations, LLC, provides expertise on implementing the Harvard team's technology into commercial electrochemical systems.
"The intermittent renewables storage problem is the biggest barrier to getting most of our power from the sun and the wind," Aziz said. "A safe and economical flow battery could play a huge role in our transition off fossil fuels to renewable electricity. I'm excited that we have a good shot at it."
In addition to Aziz, Marshak, Aspuru-Guzik, and Gordon, the co-lead author of the Nature paper was Brian Huskinson, a graduate student with Aziz; coauthors included research associate Changwon Suh and postdoctoral researcher Süleyman Er in Aspuru-Guzik's group; Michael Gerhardt, a graduate student with Aziz; Cooper Galvin, a Pomona College undergraduate; and Xudong Chen, a postdoctoral fellow in Gordon's group.
This work was supported in part by the U.S. Department of Energy's Advanced Research Projects Agency-Energy (ARPA-E), the Harvard School of Engineering and Applied Sciences, the National Science Foundation (NSF) Extreme Science and Engineering Discovery Environment (OCI-1053575), an NSF Graduate Research Fellowship, and the Fellowships for Young Energy Scientists program of the Foundation for Fundamental Research on Matter, which is part of the Netherlands Organization for Scientific Research (NWO).