Saturday, 31 August 2013

On 10:38 by Asveth Sreiram

Aug. 29, 2013 — By lowering the expression of a single gene, researchers at the National Institutes of Health have extended the average lifespan of a group of mice by about 20 percent -- the equivalent of raising the average human lifespan by 16 years, from 79 to 95. The research team targeted a gene called mTOR, which is involved in metabolism and energy balance, and may be connected with the increased lifespan associated with caloric restriction.

A detailed study of these mice revealed that gene-influenced lifespan extension did not affect every tissue and organ the same way. For example, the mice retained better memory and balance as they aged, but their bones deteriorated more quickly than normal.
This study appears in the Aug. 29 edition of Cell Reports.
"While the high extension in lifespan is noteworthy, this study reinforces an important facet of aging; it is not uniform," said lead researcher Toren Finkel, M.D., Ph.D., at NIH's National Heart, Lung, and Blood Institute (NHLBI). "Rather, similar to circadian rhythms, an animal might have several organ-specific aging clocks that generally work together to govern the aging of the whole organism."
Finkel, who heads the NHLBI's Laboratory of Molecular Biology in the Division of Intramural Research, noted that these results may help guide therapies for aging-related diseases that target specific organs, like Alzheimer's. However, further studies in these mice as well as human cells are needed to identify exactly how aging in these different tissues is connected at the molecular level.
The researchers engineered mice that produce about 25 percent of the normal amount of the mTOR protein, or about the minimum needed for survival. The engineered mTOR mice were a bit smaller than average, but they otherwise appeared normal.
The median lifespan for the mTOR mice was 28.0 months for males and 31.5 months for females, compared to 22.9 months and 26.5 months for normal males and females, respectively. The mTOR mice also had a longer maximal lifespan; seven of the eight longest-lived mice in this study were mTOR mice. This lifespan increase is one of the largest observed in mice so far.
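As a quick check on the arithmetic, the reported medians do work out to roughly 20 percent, and the quoted human analogy follows the same scaling (a back-of-envelope sketch in Python; the lifespans are the article's figures, the rounding is ours):

```python
# Median lifespans reported in the study (months)
mtor = {"male": 28.0, "female": 31.5}
normal = {"male": 22.9, "female": 26.5}

for sex in ("male", "female"):
    gain = (mtor[sex] - normal[sex]) / normal[sex] * 100
    print(f"{sex}: {gain:.1f}% longer median lifespan")
# male: 22.3%, female: 18.9% -- about 20% on average

# The human analogy quoted above: a ~20% gain on a 79-year lifespan
print(f"79 years x 1.20 = {79 * 1.20:.1f} years")  # ~95 years
```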
While the genetically modified mTOR mice aged better overall, they showed only selective improvement in specific organs. They generally outperformed normal mice of equivalent age in maze and balance tests, indicating better retention of memory and coordination. Older mTOR mice also retained more muscle strength and posture. However, mTOR mice had a greater loss in bone volume as they aged, and they were more susceptible to infections in old age, suggesting a loss of immune function.
In addition to the NHLBI, this study was carried out by intramural researchers at the NIH's National Cancer Institute; National Institute of Diabetes and Digestive and Kidney Diseases; and National Institute on Aging.
On 10:37 by Asveth Sreiram

Aug. 29, 2013 — The age at which children learn a second language can have a significant bearing on the structure of their adult brain, according to a new joint study by the Montreal Neurological Institute and Hospital -- The Neuro at McGill University and Oxford University. The majority of people in the world learn to speak more than one language during their lifetime. Many do so with great proficiency, particularly if the languages are learned simultaneously or from early in development.
The study concludes that the pattern of brain development is similar if you learn one or two languages from birth. However, learning a second language later on in childhood, after gaining proficiency in the first (native) language, does in fact modify the brain's structure, specifically the brain's inferior frontal cortex. The left inferior frontal cortex became thicker and the right inferior frontal cortex became thinner. The cortex is a multi-layered mass of neurons that plays a major role in cognitive functions such as thought, language, consciousness and memory.
The study suggests that the task of acquiring a second language after infancy stimulates new neural growth and connections among neurons in ways seen in acquiring complex motor skills such as juggling. The study's authors speculate that the difficulty that some people have in learning a second language later in life could be explained at the structural level.
"The later in childhood that the second language is acquired, the greater are the changes in the inferior frontal cortex," said Dr. Denise Klein, researcher in The Neuro's Cognitive Neuroscience Unit and a lead author on the paper published in the journalBrain and Language. "Our results provide structural evidence that age of acquisition is crucial in laying down the structure for language learning."
Using a software program developed at The Neuro, the study examined MRI scans of 66 bilingual and 22 monolingual men and women living in Montreal. The work was supported by a grant from the Natural Sciences and Engineering Research Council of Canada and from an Oxford McGill Neuroscience Collaboration Pilot project.
On 10:35 by Asveth Sreiram

Aug. 29, 2013 — Your mother was right when she warned you that loud music could damage your hearing, but now scientists have discovered exactly what gets damaged and how. In a research report published in the September 2013 issue of The FASEB Journal, scientists describe exactly what type of damage noise does to the inner ear, and provide insights into a compound that may prevent noise-related damage.

"Noise-induced hearing loss, with accompanying tinnitus and sound hypersensitivity is a common condition which leads to communication problems and social isolation," said Xiaorui Shi, M.D., Ph.D., study author from the Department of Otolaryngology/Head and Neck Surgery at the Oregon Hearing Research Center at Oregon Health and Science University in Portland, Oregon. "The goal of our study is to understand the molecular mechanisms well enough to mitigate damage from exposure to loud sound."
To make this discovery, Shi and colleagues used three groups of 6- to 8-week-old mice: a control group, a group exposed to broadband noise at 120 decibels for three hours a day for two days, and a third group given single-dose injections of pigment epithelium-derived factor (PEDF) prior to noise exposure. PEDF is a protein found in vertebrates that is currently being researched for the treatment of diseases like heart disease and cancer. The cells that secrete PEDF in control animals showed a characteristic branched morphology, with the cells arranging in a self-avoidance pattern that provided good coverage of the capillary wall. The morphology of the same cells in the animals exposed to wide-band noise, however, showed clear differences -- noise exposure caused changes in melanocytes located in the inner ear.
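For a sense of scale, 120 decibels is a very large acoustic dose; the standard decibel relations convert it to intensity and pressure (a minimal sketch using the conventional reference values for air, which are not figures from the study):

```python
L_spl = 120.0   # sound pressure level, dB SPL
I0 = 1e-12      # reference intensity, W/m^2
p0 = 20e-6      # reference pressure, Pa (rms)

intensity = I0 * 10 ** (L_spl / 10)   # dB is a log10 scale of intensity
pressure = p0 * 10 ** (L_spl / 20)    # pressure goes as the square root of intensity

print(f"Intensity: {intensity:.1f} W/m^2")   # 1.0 W/m^2, a million times a 60 dB conversation
print(f"Pressure:  {pressure:.1f} Pa")       # 20 Pa rms
```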
"Hearing loss over time robs people of their quality of life," said Gerald Weissmann, M.D., Editor-in-Chief of The FASEB Journal. "It's easy to say that we should avoid loud noises, but in reality, this is not always possible. Front-line soldiers or first responders do not have time to worry about the long-term effects of loud noise when they are giving their all. If, however, a drug could be developed to minimize the negative effects of loud noises, it would benefit one and all."
On 10:34 by Asveth Sreiram

Aug. 29, 2013 — Scientists have studied bird migration for centuries, but it remains one of nature's great mysteries. How do birds find their way over long distances between breeding and wintering sites? Is their migration route encoded in their genes, or is it learned?
Working with records from a long-term effort to reintroduce critically endangered whooping cranes in the Eastern U.S., a University of Maryland-led research team found evidence that these long-lived birds learn their migration route from older cranes, and get better at it with age.
Whooping crane groups that included a seven-year-old adult deviated 38% less from a migratory straight-line path between their Wisconsin breeding grounds and Florida wintering grounds, the researchers found. One-year-old birds that did not follow older birds veered, on average, 60 miles (97 kilometers) from a straight flight path. When the one-year-old cranes traveled with older birds, the average deviation was less than 40 miles (64 kilometers).
Individual whoopers' ability to stick to the route increased steadily each year up to about age 5, and remained roughly constant from that point on, the researchers found.
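The deviation being measured here is essentially cross-track distance: how far a bird's position lies from the great circle joining the breeding and wintering grounds. Here is a sketch of that calculation in Python, using standard spherical-trigonometry formulas and illustrative coordinates for the Wisconsin and Florida sites (not the study's actual data or method):

```python
import math

R = 6371.0  # mean Earth radius, km

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.atan2(math.sqrt(a), math.sqrt(1 - a))

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in radians."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.atan2(y, x)

def cross_track_km(start, end, point):
    """Perpendicular distance of `point` from the start-to-end great circle."""
    d13 = haversine(*start, *point) / R   # angular distance, start -> point
    t13 = bearing(*start, *point)         # bearing, start -> point
    t12 = bearing(*start, *end)           # bearing, start -> end
    return abs(math.asin(math.sin(d13) * math.sin(t13 - t12))) * R

wisconsin = (44.0, -90.2)   # illustrative: Necedah NWR area, WI
florida = (28.7, -82.6)     # illustrative: Chassahowitzka NWR area, FL
bird_fix = (36.5, -89.5)    # hypothetical satellite fix en route

print(f"{cross_track_km(wisconsin, florida, bird_fix):.0f} km off the straight line")
```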
Many migration studies are done in short-lived species like songbirds, or by comparing a young bird to an older bird, said UMD biologist Thomas Mueller, an expert on animal migration and the study's lead scientist. "Here we could look over the course of the individual animals' lifetimes, and show that learning takes place over many years."
The researchers' findings, to be published August 30 in the journal Science, are based on data from an intensive effort to restore the endangered bird to its native range. The whooping crane (Grus americana) is North America's tallest bird, standing five feet tall, and one of its longest-lived, surviving 30 years or more in the wild. The species was near extinction in the 1940s, with fewer than 25 individuals. Today about 250 wild whoopers summer in Canada and migrate to Texas for the winter.
The Whooping Crane Eastern Partnership, made up of government and non-profit experts, has been working since 2001 to establish a second population in the Eastern U.S., which now numbers more than 100 birds. At Maryland's Patuxent Wildlife Research Center and other captive breeding sites, adult whooping cranes produce chicks and biologists hand-raise them, using special methods designed to prepare the chicks for life in the wild. Each summer in a Wisconsin marsh, experts train a group of captive-raised chicks to follow an ultralight aircraft, using techniques like those portrayed in the 1996 film "Fly Away Home," to lead them on a 1,300-mile journey to their Florida wintering grounds.
Only this first migration is human-assisted; from then on the young birds travel on their own, usually in the company of other whooping cranes. Their movements are monitored daily via satellite transmitters, radio telemetry and on-the-ground observers. The result is a record of the movements of individual birds over several years, all with known parentage and the same upbringing.
"This is a globally unique data set in which we can control for genetics and test for the effect of experience," said UMD Biology Professor William F. Fagan, a co-author of the paper, "and it gives us an indication of just how important this kind of socially learned behavior is."
Using data on all the ultralight-trained birds' spring and fall migrations from 2002 to 2009, the researchers found that neither genetic relatedness nor gender had any effect on the whooping cranes' tendency to stay on the shortest migratory route. They were surprised to find that the migrating groups' size also made no difference.
"Many biologists would have expected to find a strong effect of group size," Fagan said, "with input from more birds' brains leading to improved navigation, but we didn't see that effect."
Only one experienced bird per group was enough to keep the migration on track. The researchers hypothesize that older birds are better at recognizing landmarks and coping with bad weather. Stronger autumn winds may explain why the whoopers tended to stray further from their straight course during fall migration, Mueller said.
The study shows the migration training for captive-born whooping cranes is working, Mueller said. However, the reintroduced whoopers are having trouble breeding in the wild. Based on the migration study's finding, "we need to take into consideration that these birds may also reproduce more successfully as they age," he said.
Given the whooping cranes' recent plunge towards extinction, it wouldn't be surprising if the birds need to re-learn how best to raise their chicks, said Patuxent-based scientist Sarah J. Converse of the U.S. Geological Survey, a co-author of the paper.
"These birds' behaviors have evolved over millennia," Converse said. "Managers here are trying to restore a culture, that is, the knowledge that these birds accumulate over time. We need to give these birds the time and the opportunity to get the breeding right. We might need to be a little bit patient."
On 10:33 by Asveth Sreiram
Aug. 29, 2013 — Nanosatellites are smartphone-sized spacecraft that can perform simple, yet valuable, space missions. Dozens of these little vehicles are now tirelessly orbiting Earth performing valuable functions for NASA, the Department of Defense and even private companies.
Nanosatellites borrow many of their components from terrestrial gadgets: miniaturized cameras, wireless radios and GPS receivers that have been perfected for hand-held devices are also perfect for spacecraft. However, according to Michigan Technological University's L. Brad King, there is at least one technology need that is unique to space: "Even the best smartphones don't have miniaturized rocket engines, so we need to develop them from scratch."
Miniature rockets aren't needed to launch a nanosatellite from Earth. The small vehicles can hitchhike with a regular rocket that is going that way anyway. But because they are hitchhikers, these nanosats don't always get dropped off in their preferred location. Once in space, a nanosatellite might need some type of propulsion to move it from its drop-off point into its desired orbit. This is where the micro rocket engine comes in.
For the last few years, researchers around the world have been trying to build such rockets using microscopic hollow needles to electrically spray thin jets of fluid, which push the spacecraft in the opposite direction. The fluid propellant is a special chemical known as an ionic liquid. A single thruster needle is finer than a human hair, less than one millimeter long and produces a thrust force equivalent to the weight of a few grains of sand. A few hundred of these needles fit in a postage-stamp-size package and produce enough thrust to maneuver a nanosatellite.
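Those thrust figures are easy to put in perspective with a back-of-envelope estimate. In the Python sketch below, every specific value (grain mass, emitter count, satellite mass) is an illustrative assumption rather than a figure from King's team:

```python
g = 9.81                              # m/s^2

grain_mass = 1e-6                     # kg; a small sand grain is roughly a milligram
thrust_per_tip = 3 * grain_mass * g   # "a few grains of sand" -> ~29 micronewtons

n_tips = 300                          # "a few hundred" emitters per postage-stamp array
total_thrust = n_tips * thrust_per_tip

sat_mass = 4.0                        # kg; assumed nanosatellite-class mass
accel = total_thrust / sat_mass       # m/s^2
delta_v_per_hour = accel * 3600       # m/s gained per hour of continuous thrusting

print(f"Per-tip thrust: {thrust_per_tip * 1e6:.0f} uN")       # ~29 uN
print(f"Array thrust:   {total_thrust * 1e3:.1f} mN")         # ~8.8 mN
print(f"Delta-v after one hour: {delta_v_per_hour:.1f} m/s")  # ~7.9 m/s
```

Tiny numbers by launch-vehicle standards, but ample for nudging a hitchhiking nanosat from its drop-off point into its preferred orbit.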
These new electrospray thrusters face some design challenges, however. "Because they are so small and intricate, they are expensive to make, and the needles are fragile," says King, the Ron and Elaine Starr Professor of Mechanical Engineering-Engineering Mechanics. "They are easily destroyed either by a careless bump or an electrical arc when they're running."
To get around the problem, King and his team have developed an elegant strategy: eliminate the expensive and tedious microfabrication required to make the needles by letting Mother Nature take care of the assembly. "We're working with a unique type of liquid called a ferrofluid that naturally forms a stationary pattern of sharp tips in the liquid surface," he says. "Each tip in this self-assembling structure can spray a jet of fluid just like a micro-needle, so we don't actually have to make any needles."
Ferrofluids have been around since the 1960s. They are made of tiny magnetic particles suspended in a solvent; the fluid moves when a magnetic force is applied. King illustrates with a tiny container holding a ferrofluid made of kerosene and iron dust. The fluid lies flat until he puts a magnet beneath it. Then suddenly, the liquid forms a regular series of peaks reminiscent of a mountain range or Bart Simpson's haircut. These peaks remain perfectly stable despite vigorous shaking and even turning the container upside down. It is, nonetheless, completely liquid, as a fingertip touch proves undeniably. When the magnet is removed, the liquid relaxes to a perfectly flat surface.
King's team was trying to make an ionic liquid that behaved like a ferrofluid when they learned about a research team at the University of Sydney that was already making these substances. The Sydney team was using magnetic nanoparticles made by the life-sciences company Sirtex, which are used to treat liver cancer. "They sent us a sample, and we've used it to develop a thruster," King said. "Now we have a nice collaboration going. It's amazing that the same technology used to treat cancer can also function as a micro rocket for spacecraft."
King's first thruster is made of a one-inch block of aluminum containing a small ring of the special fluid. When a magnet is placed beneath the block, the liquid forms a tiny, five-tipped crown. When an electric force is then applied to the ferrofluid crown, liquid jets emerge from each point, producing thrust. "It's fascinating to watch," King says. "The peaks get taller and skinnier, and taller and skinnier, and at some point the rounded tips instantly pop into nano-sharp points and start emitting ions."
The thruster appears to be almost immune to permanent damage. The tips automatically heal themselves and re-grow if they are somehow damaged. King's team has already demonstrated its self-healing properties, albeit inadvertently. "We accidentally turned the voltage up too high, and the tips exploded in a small arc," King says. While this would spell death for a typical thruster, "A completely new crown immediately formed from the remaining ferrofluid and once again resumed thrusting."
Their thruster isn't ready to push a satellite around in orbit just yet. "First we have to really understand what is happening on a microscopic level, and then develop a larger prototype based on what we learn," King said. "We're not quite there yet; we can't build a person out of liquid, like the notorious villain from the Terminator movies. But we're pretty sure we can build a rocket engine."
King has applied for a patent on the new technology. The research is funded by the Air Force Office of Scientific Research.
On 10:32 by Asveth Sreiram

Aug. 29, 2013 — In the search for clean, green sustainable energy sources to meet human needs for generations to come, perhaps no technology matches the ultimate potential of artificial photosynthesis. Bionic leaves that could produce energy-dense fuels from nothing more than sunlight, water and atmosphere-warming carbon dioxide, with no byproducts other than oxygen, represent an ideal alternative to fossil fuels but also pose numerous scientific challenges. A major step toward meeting at least one of these challenges has been achieved by researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) working at the Joint Center for Artificial Photosynthesis (JCAP).
"We've developed a method by which molecular hydrogen-producing catalysts can be interfaced with a semiconductor that absorbs visible light," says Gary Moore, a chemist with Berkeley Lab's Physical Biosciences Division and principal investigator for JCAP. "Our experimental results indicate that the catalyst and the light-absorber are interfaced structurally as well as functionally."
Moore is the corresponding author, along with Junko Yano and Ian Sharp, who also hold joint appointments with Berkeley Lab and JCAP, of a paper describing this research in the Journal of the American Chemical Society (JACS). The article is titled "Photofunctional Construct That Interfaces Molecular Cobalt-Based Catalysts for H2 Production to a Visible-Light-Absorbing Semiconductor." Co-authors are Alexandra Krawicz, Jinhui Yang and Eitan Anzenberg.
Earth receives more energy in one hour's worth of sunlight than all of humanity uses in an entire year. Through the process of photosynthesis, green plants harness solar energy to split molecules of water into oxygen, hydrogen ions (protons) and free electrons. The oxygen is released as waste and the protons and electrons are used to convert carbon dioxide into the carbohydrate sugars that plants use for energy. Scientists aim to mimic the concept but improve upon the actual process.
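That oft-quoted one-hour claim can be checked from first principles. A minimal sketch, assuming the standard solar constant and a rough estimate of global primary energy use in the early 2010s (both are textbook values, not figures from the article):

```python
import math

solar_constant = 1361.0    # W/m^2 at the top of the atmosphere
earth_radius = 6.371e6     # m

# Earth intercepts sunlight over its cross-sectional disk, not its full surface
power_intercepted = solar_constant * math.pi * earth_radius ** 2   # ~1.7e17 W
energy_per_hour = power_intercepted * 3600                         # ~6e20 J

world_annual_use = 5.5e20  # J; approximate global primary energy consumption per year

print(f"Sunlight intercepted in one hour: {energy_per_hour:.2e} J")
print(f"Ratio to annual human energy use: {energy_per_hour / world_annual_use:.1f}x")  # ~1.1x
```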
JCAP, which has a northern branch in Berkeley and a southern branch on the campus of the California Institute of Technology (Caltech), was established in 2010 by DOE as an Energy Innovation Hub. Operated as a partnership between Caltech and Berkeley Lab, JCAP is the largest research program in the United States dedicated to developing an artificial solar-fuel technology. While artificial photosynthesis can be used to generate electricity, fuels can be a more effective means of storing and transporting energy. The goal is an artificial photosynthesis system that's at least 10 times more efficient than natural photosynthesis.
To this end, once photoanodes have used solar energy to split water molecules, JCAP scientists need high-performance semiconductor photocathodes that can use solar energy to catalyze fuel production. In previous efforts to produce hydrogen fuel, catalysts have been immobilized on non-photoactive substrates, an approach that requires the application of an external electrical potential to generate hydrogen. Moore and his colleagues have combined these steps into a single material.
"In coupling the absorption of visible light with the production of hydrogen in one material, we can generate a fuel simply by illuminating our photocathode," Moore says. "No external electrochemical forward biasing is required."
The new JCAP photocathode construct consists of the semiconductor gallium phosphide and a molecular cobalt-containing hydrogen production catalyst from the cobaloxime class of compounds. As an absorber of visible light, gallium phosphide can make use of a greater number of available solar photons than semiconductors that absorb ultraviolet light, which means it is capable of producing significantly higher photocurrents and rates of fuel production. However, gallium phosphide can be notoriously unstable during photoelectrochemical operations.
Moore and his colleagues found that coating the surface of gallium phosphide with a film of the polymer vinylpyridine alleviates the instability problem, and if the vinylpyridine is then chemically treated with the cobaloxime catalyst, hydrogen production is significantly boosted.
"The modular aspect of our method allows independent modification of the light-absorber, linking material and catalyst, which means it can be adapted for use with other catalysts tethered over structured photocathodes as new materials and discoveries emerge," Moore says. "This could allow us, for example, to replace the precious metal catalysts currently used in many solar-fuel generator prototypes with catalysts made from earth-abundant elements."
Despite its promising electronic properties, gallium phosphide features a mid-sized optical band gap which ultimately limits the total fraction of solar photons available for absorption. Moore and his colleagues are now investigating semiconductors that cover a broader range of the solar spectrum, and catalysts that operate faster at lower electrical potentials. They also plan to investigate molecular catalysts for carbon dioxide reduction.
"We look forward to adapting our method to incorporate materials with improved properties for converting sunlight to fuel," Moore says. "We believe our method provides researchers at JCAP and elsewhere with an important tool for developing integrated photocathode materials that can be used in future solar-fuel generators as well as other technologies capable of reducing net carbon dioxide emissions."
On 10:31 by Asveth Sreiram

Aug. 29, 2013 — Bacteria living on Gulf of Mexico beaches were able to 'eat up' the contamination from the Deepwater Horizon oil spill by supplementing their diet with nitrogen, delegates at the Goldschmidt conference will be told today, Friday 30th August.
Professor Joel Kostka will tell geochemists gathered in Florence for the conference that detailed genetic analysis showed some of the bacteria thrived on a diet of oil because they were able to fix nitrogen from the air. The research -- the first to use next generation sequencing technologies to dig into the detail of how the native beach microbes are metabolising the oil over time -- could open the door to much more sophisticated clean up techniques.
"Oil is a natural product, made of decayed plants and animals, and so is similar to the normal food sources for these bacteria." explains Professor Kostka, a microbiologist from Georgia Institute of Technology in Atlanta. "But because oil is low in nutrients such as nitrogen, this can limit how fast the bacteria grow and how quickly they are able to break down the oil. Our analysis showed that some bacteria are able to solve this problem themselves -- by getting their own nitrogen from the air."
Professor Kostka worked with Professor Markus Huettel, a biogeochemist from Florida State University, to take more than 500 samples over two years from Pensacola beach in the Gulf of Mexico, starting when the Deepwater Horizon oil slick first came ashore in June 2010. By analysing every gene of every bacterium in the sample, they were able to see which bacteria were present and how they responded as the conditions on the beach changed. The researchers looked at the prevalence of genes which encode for different types of activity -- such as nitrogen fixing or phosphorus uptake -- to identify exactly how the bacteria were degrading the oil.
"By understanding how the oil is degraded by microbes, which microbes do the work, and the impact of the surrounding environmental conditions, we can develop ways to intervene to support the natural clean-up process," says Professor Kostka. "However, we need to do this in a very measured and targeted way, to avoid long-term, unintended damage to the ecosystem. For example, in the past, nitrogen fertiliser has been sprayed onto contaminated beaches to speed up the work of the bacteria. Our analysis shows that, where bacteria can get this nitrogen naturally, such drastic intervention may not be necessary."
The genetic analysis carried out by Professor Kostka and his colleague Konstantinos Konstantinidis at Georgia Tech can show exactly how the oil-degrading bacteria are working at each part of an affected coastline, making it possible to identify which beaches are most effective at self-cleaning and target mitigation efforts -- such as offshore booms -- at the most vulnerable areas. But not all the bacteria thrived on a diet of oil. Professor Kostka's research showed that some bacteria which play an important role in the ecosystem of the beaches experienced a sharp decline following the contamination in June 2010.
"There's a tendency to focus on the short-term, visible effects of an oil spill on the beach and assume that once the beach looks 'clean' then all is back to normal," he says. "Our analysis shows some of the invisible impact in the loss of these important microbes. We need to be aware of the long-term chronic damage both a spill -- and in some cases our attempts to deal with it -- can cause."
On 10:30 by Asveth Sreiram

Aug. 29, 2013 — It's a fiercely debated question amongst palaeontologists: was the giant 'terror bird', which lived in Europe between 55 and 40 million years ago, really a terrifying predator or just a gentle herbivore?
New research presented at the Goldschmidt conference in Florence today (Thursday 29th August) may finally provide an answer. A team of German researchers has studied fossilised remains of terror birds from a former open-cast brown coal mine in the Geiseltal (Saxony-Anhalt, Germany) and their findings indicate the creature was most likely not a meat eater.
The terror bird -- also known as Gastornis -- was a flightless bird up to two metres in height with an enormous, ferocious beak. Based upon its size and ominous appearance, scientists have long assumed that it was a ruthless carnivore.
"The terror bird was thought to have used its huge beak to grab and break the neck of its prey, which is supported by biomechanical modelling of its bite force," says Dr Thomas Tütken, from the University of Bonn. "It lived after the dinosaurs became extinct and at a time when mammals were at an early stage of evolution and relatively small; thus, the terror bird was though to have been a top predator at that time on land."
Recent research has cast some doubt on its diet, however. Palaeontologists in the United States found footprints believed to belong to the American cousin of Gastornis, and these do not show the imprints of sharp claws, used to grapple prey, that might be expected of a raptor. Also, the bird's sheer size and inability to move fast has led some to believe it couldn't have preyed on early mammals -- though others claim it might have ambushed them. But, without conclusive findings either way, the dietary inclinations of Gastornis remain a mystery.
Dr Tütken and his colleagues Dr Meinolf Hellmund, Dr Stephen Galer and Petra Held have taken a new geochemical approach to determine the diet of Gastornis. By analysing the calcium isotope composition in fossilised bones, they have been able to identify what proportion of a creature's diet was plant or animal and, on that basis, its position in the food chain of the local ecosystem. This depends on the calcium isotopic composition becoming "lighter" as it passes through the food chain. They tested the method first with herbivorous and carnivorous dinosaurs -- including the top predator T. rex -- as well as mammals living today, before applying it to terror bird bones held in the Geiseltal collection at Martin-Luther University in Halle.
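Isotope compositions of this kind are conventionally reported in delta notation: the parts-per-thousand (permil) deviation of a sample's isotope ratio from a standard. A minimal sketch of how a fossil might be placed in a food chain on that basis follows; the reference values and the size of the trophic shift are illustrative assumptions, not the team's data:

```python
def delta_permil(r_sample, r_standard):
    """Delta value in permil: relative deviation of a sample's isotope
    ratio (e.g. 44Ca/40Ca) from a reference standard's ratio."""
    return (r_sample / r_standard - 1) * 1000

r_standard = 0.02152                        # illustrative 44Ca/40Ca reference ratio
sample_ratio = r_standard * (1 - 0.75e-3)   # a sample 0.75 permil "lighter"
sample_delta = delta_permil(sample_ratio, r_standard)   # -> -0.75 permil

# Hypothetical mean delta-44Ca values for an assemblage (permil); bone gets
# isotopically lighter (more negative) with each step up the food chain.
refs = {"plant": -0.2, "herbivore": -0.7, "carnivore": -1.3}

nearest = min(refs, key=lambda k: abs(refs[k] - sample_delta))
print(f"delta = {sample_delta:.2f} permil -> closest to {nearest} values")
```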
Their results show that the calcium isotope compositions of terror bird bones are similar to those of herbivorous mammals and dinosaurs and not carnivorous ones. Before the debate is finally closed, however, the researchers want to cross check their data using other fossil assemblages to be completely sure.
"Tooth enamel preserves original geochemical signatures much better than bone, but since Gastornis didn't have any teeth, we've had to work with their bones to do our calcium isotope assay," explains Dr Tütken. "Because calcium is a major proportion of bone -- around 40% by weight -- its composition is unlikely to have been affected much by fossilisation. However, we want to be absolutely confident in our findings by analysing known herbivores and carnivores using fossilised bone from the same site and the same time period. This will give us an appropriate reference frame for the terror bird values."
On 10:29 by Asveth Sreiram

Aug. 30, 2013 — Tiny hair-like structures (cilia) are found on the surface of most cells. Cilia are responsible for the locomotion of cells (e.g. sperm cells), they process external signals and coordinate the correct arrangement of the inner organs during the development of an organism. For proper assembly and function of cilia, they need to be supplied with the appropriate building blocks.

Scientists at the Max Planck Institute of Biochemistry (MPIB) in Martinsried near Munich, Germany, have now identified the mechanism by which tubulin, the main building block of cilia, is transported within the cilium. "Defects in cilia cause numerous diseases that affect millions of people worldwide," says Sagar Bhogaraju, scientist at the MPI of Biochemistry.
The results, now published in the journal Science, could help to understand and potentially prevent these diseases.
Although cilia fulfill various tasks, they all have a similar structure: They are only five to ten micrometers (0.0005 to 0.001 centimeters) long and are located on the surface of eukaryotic cells. About 600 different ciliary proteins are synthesized inside the cell and then transported into the cilium. Disruption of this transport system, which scientists call intraflagellar transport (IFT), can lead to errors during the assembly of the cilia and thus cause diseases resulting in mental and physical symptoms. Mistakes in ciliary function can, for example, cause a "situs inversus," a condition where the left/right arrangement of the inner organs in the body is reversed.
Even though the importance of the intraflagellar transport (IFT) and the cilium to human health has been known for a long time, a structural and mechanistic understanding of IFT has been missing so far. Scientists from the research group "Intraflagellar Transport" headed by Esben Lorentzen have now succeeded in identifying the transport mechanism of the key protein tubulin. It is the most abundant protein in the cilium and forms its backbone. "We found that the two proteins IFT74 and IFT81 work together to form a tubulin-binding module," says Sagar Bhogaraju. When the researchers disturbed the binding of IFT74 and -81 to tubulin in human cells, it had severe impact on the formation of the cilia. "Our results provide the first glimpse into the assembly of the cilium at the molecular level," says the biochemist.
On 10:28 by Asveth Sreiram

Aug. 29, 2013 — New research in mice reveals why the body is so slow to recover from jet lag and identifies a target for the development of drugs that could help us to adjust faster to changes in time zone.
With funding from the Wellcome Trust and F. Hoffmann La Roche, researchers at the University of Oxford, University of Notre Dame and F. Hoffmann La Roche have identified a mechanism that limits the ability of the body clock to adjust to changes in patterns of light and dark. The team also showed that blocking the activity of the gene responsible in mice helps them recover faster from disturbances in their daily light/dark cycle designed to simulate jet lag.
Nearly all life on Earth has an internal circadian body clock that keeps us ticking on a 24-hour cycle, synchronising a variety of bodily functions such as sleeping and eating with the cycle of light and dark in a solar day. When we travel to a different time zone our body clock eventually adjusts to the local time. However, this can take up to one day for every hour the clock is shifted, resulting in several days of fatigue and discombobulation.
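The rule of thumb quoted here is simple enough to state as code (a trivial sketch; the one-day-per-hour rate is the article's figure, the example trip is ours):

```python
def worst_case_jet_lag_days(time_zones_crossed, days_per_hour=1.0):
    """Re-entrainment time under the 'up to one day per hour shifted' rule."""
    return abs(time_zones_crossed) * days_per_hour

# e.g. London to Tokyo shifts the clock by roughly 9 hours
print(f"Up to {worst_case_jet_lag_days(9):.0f} days to fully adjust")
```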
In mammals, the circadian clock is controlled by an area of the brain called the suprachiasmatic nuclei (SCN) which pulls every cell in the body into the same biological rhythm. It receives information from a specialised system in the eyes, separate from the mechanisms we use to 'see', which senses the time of day by detecting environmental light, synchronising the clock to local time. Until now, little was known about the molecular mechanisms of how light affects activity in the SCN to 'tune' the clock and why it takes so long to adjust when the light cycle changes.
To investigate this, the Oxford University team led by Dr Stuart Peirson and Professor Russell Foster, used mice to examine the patterns of gene expression in the SCN following a pulse of light during the hours of darkness. They identified around 100 genes that were switched on in response to light, revealing a sequence of events that act to retune the circadian clock. Amongst these, they identified one molecule, SIK1, that terminates this response, acting as a brake to limit the effects of light on the clock. When they blocked the activity of SIK1, the mice adjusted faster to changes in light cycle.
Dr Peirson explains: "We've identified a system that actively prevents the body clock from re-adjusting. If you think about it, it makes sense to have a buffering mechanism in place to provide some stability to the clock. The clock needs to be sure that it is getting a reliable signal, and if the signal occurs at the same time over several days it probably has biological relevance. But it is this same buffering mechanism that slows down our ability to adjust to a new time zone and causes jet lag."
Disruptions in the circadian system have been linked to chronic diseases including cancer, diabetes, and heart disease, as well as weakened immunity to infections and impaired cognition. More recently, researchers are uncovering that circadian disturbances are a common feature of several mental illnesses, including schizophrenia and bipolar disorder.
Russell Foster, Director of the recently established Oxford University Sleep and Circadian Neuroscience Institute supported by the Wellcome Trust, said: "We're still several years away from a cure for jet-lag but understanding the mechanisms that generate and regulate our circadian clock gives us targets to develop drugs to help bring our bodies in tune with the solar cycle. Such drugs could potentially have broader therapeutic value for people with mental health issues."
On 10:27 by Asveth Sreiram

Aug. 30, 2013 — Whales have been shown to increase the pigment in their skin in response to sunshine, just as we get a tan.
Research published today in the Nature journal Scientific Reports reveals that not only do some species of whales get darker with sun exposure, incurring DNA damage in their skin just like us, they also accumulate damage to the cells in the skin as they get older.
Experts in the response of skin to UV radiation at Newcastle University were called in after marine biologists in Mexico noticed an increasing number of whales in the area had blistered skin. Analysing samples from three types of whales -- blue, sperm and fin -- they worked together to study the changes in the whale skin after their annual migration to sunnier climes.
Mark Birch-Machin, Professor of Molecular Dermatology at Newcastle University and joint senior author of the paper said: "Whales can be thought of as the UV barometers of the sea. It's important that we study them as they are some of the longest living sea creatures and are sensitive to changes in their environment so they reflect the health of the ocean."
Migrating whales 'tan'
Over three years, the team of marine biologists from Trent University in Canada and universities in La Paz and Querétaro, Mexico, took skin samples from the backs of three species of whales during their annual migration, which takes the whales to the sunnier Gulf of California, along the northwest coast of Mexico, between February and April.
Blue whales, the jumbo-jet-sized giants, have very pale pigmentation. During the migration period the team found a seasonal change, with the pigment in their skin increasing along with mitochondrial DNA damage. This internal damage to the mitochondria, the engines of the cells, is caused by UV exposure and is what we find in sunburned human skin.
Sperm whales, with their distinctive rounded foreheads, have darker pigmentation. They also migrate to the Gulf of California between February and April, but have a different lifestyle: they spend long periods at the surface between feeds and are therefore exposed to more sun and UV.
The scientists found the sperm whales had a different mechanism for protecting themselves from the sun, triggering a stress response in their genes. Newcastle University researcher Amy Bowman added: "We saw for the first time evidence of genotoxic pathways being activated in the cells of the whales -- this is similar to the damage response caused by free radicals in human skin which is our protective mechanism against sun damage."
In contrast, the darkest whales, the deeply pigmented fin whales, were found to be resistant to sun damage showing the lowest prevalence of sunburn lesions in their skin.
Blistered skin
Karina Acevedo-Whitehouse, currently Senior Lecturer at the Universidad Autónoma de Querétaro, Mexico and joint senior author of the paper said: "There has been an increase in the number of reports on blister-type skin lesions in various whale species in areas of high UV radiation. In many cases no infectious microorganism has been found associated with these lesions. It's important that we study the effect of UV radiation on whale skin and the mechanisms that these species use to counteract such damage, both from an evolutionary approach and from a conservation perspective."
To carry out the research, the Newcastle University team had to develop an analysis that allowed three whale genomes to be analysed at the same time -- a difficult task, as the species have very different sequences. This is the first study of whales at the genetic level to link migratory patterns with genetic damage.
"We need to investigate further what is happening," said Professor Birch-Machin, "if we are already seeing blistered skin in the whales caused by UV damage then we want to know whether this could develop into skin cancer and therefore serve as an early warning system.
"These whales occupy the same area year after year, so it is increasingly possible to understand the status of their populations, and what may be going on around them and in the environment. They are a reminder that changing climatic conditions are affecting every creature on the planet."
On 10:25 by Asveth Sreiram

Aug. 30, 2013 — Sea-level rise (SLR) has been isolated as a principal cause of coastal erosion in Hawaii. Differing rates of relative sea-level rise on the islands of Oahu and Maui, Hawaii, remain the best explanation for the difference in island-wide shoreline trends (that is, beach erosion or accretion) after examining other influences on shoreline change, including waves, sediment supply and littoral processes, and anthropogenic changes. Researchers from the University of Hawaii -- Manoa (UHM), School of Ocean and Earth Science and Technology (SOEST) and the State of Hawaii, Department of Land and Natural Resources recently published a paper showing that SLR is a primary factor driving historical shoreline changes in Hawaii and that historical rates of shoreline change are about two orders of magnitude greater than the rate of SLR itself.
The authors of the work point out that knowing that SLR is a primary cause of shoreline change on a regional scale allows managers and other coastal zone decision-makers to target SLR impacts in their research programs and long-term planning. This study is confirmation that future SLR is a major concern for decision-makers charged with managing beaches.
"It is common knowledge among coastal scientists that sea level rise leads to shoreline recession," stated Dr. Brad Romine, coastal geologist with the University of Hawaii Sea Grant College Program. "Shorelines find an equilibrium position that is a balance between sediment availability and rising ocean levels. On an individual beach with adequate sediment availability, beach processes may not reflect the impact of SLR. With this research we confirm the importance of SLR as a primary driver of shoreline change on a regional to island-wide basis."
Globally averaged sea level rose at about 2 mm per year over the past century. Previous studies indicate that the rate of rise is now approximately 3 mm per year and may accelerate over coming decades. The results of the recent publication show that SLR is an important factor in historical shoreline change in Hawaii and will be increasingly important with projected SLR acceleration in this century. "Improved understanding of the influence of SLR on historical shoreline trends will aid in forecasting beach changes with increasing SLR," said Dr. Charles Fletcher, Associate Dean and Professor of Geology and Geophysics at the UHM SOEST.
"The research being conducted by SOEST provides us with an opportunity to anticipate SLR effects on coastal areas, including Hawaii's world famous beaches, coastal communities, and infrastructure. We hope this information will inform long range planning decisions and allow for the development of SLR adaptation plans," said Sam Lemmo, Administrator, Department of Land and natural Resources, Office of Conservation and Coastal Lands.
Results of island-wide historical trends indicate that Maui beaches are significantly more erosional than beaches on Oahu. On Maui, 78% of beaches eroded over the past century with an overall (island-wide) average shoreline change rate of 13 cm of erosion per year, while 52% of Oahu beaches eroded with an overall average shoreline change rate of 3 cm of erosion per year.
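Those island-wide rates also make the paper's scale comparison concrete: shoreline retreat outpaces the sea-level signal itself many times over (a quick check in Python using the rates quoted in this article):

```python
slr_rate = 0.002   # m/yr; ~2 mm per year global-average rise over the past century
erosion = {"Maui": 0.13, "Oahu": 0.03}   # m/yr; island-wide average shoreline change

for island, rate in erosion.items():
    print(f"{island}: shoreline retreats {rate / slr_rate:.0f}x faster than sea level rises")
# Maui: 65x, Oahu: 15x -- approaching the "two orders of magnitude" noted above
```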
The variation in long-term relative SLR rates along the Hawaii archipelago is due, in large part, to variations in island subsidence with distance from actively growing Hawaii Island and/or variations in upper ocean water masses. The islands of Oahu and Maui, Hawaii, with significantly different rates of localized sea-level rise (SLR has been approximately 65% higher rate on Maui) over the past century, provided a natural laboratory to investigate possible relations between historical shoreline changes and SLR.
Island-wide and regional historical shoreline trends were calculated for the islands using shoreline positions measured from aerial photographs and survey charts. Shoreline positions were manually digitized using photogrammetric and geographic information system (GIS) software from aerial photo mosaics and topographic and hydrographic survey charts provided by the National Ocean Service (NOS). Shoreline movement through time was measured using GIS software. Historical shoreline data were optimized to reduce anthropogenic influences (e.g., constructing seawalls or sand mining) on shoreline change measurements. The researchers controlled for influences other than SLR to determine if SLR remains as the best explanation for observed changes. They also utilized a series of consistency checks to determine if results are significant and to eliminate other possible explanations.
On 10:24 by Asveth Sreiram

Aug. 29, 2013 — Physicists have reproduced a pattern resembling the cosmic microwave background radiation in a laboratory simulation of the big bang, using ultracold cesium atoms in a vacuum chamber at the University of Chicago.
"This is the first time an experiment like this has simulated the evolution of structure in the early universe," said Cheng Chin, professor in physics. Chin and his associates reported their feat in the Aug. 1 edition of Science Express, and it will appear soon in the print edition of Science.
Chin pursued the project with lead author Chen-Lung Hung, PhD'11, now at the California Institute of Technology, and Victor Gurarie of the University of Colorado, Boulder. Their goal was to harness ultracold atoms for simulations of the big bang to better understand how structure evolved in the infant universe.
The cosmic microwave background is the echo of the big bang. Extensive measurements of the CMB have come from the orbiting Cosmic Background Explorer in the 1990s, and later from the Wilkinson Microwave Anisotropy Probe and various ground-based observatories, including the UChicago-led South Pole Telescope collaboration. These tools have provided cosmologists with a snapshot of how the universe appeared approximately 380,000 years after the big bang, which marked the beginning of the universe.
It turns out that under certain conditions, a cloud of atoms chilled to a billionth of a degree above absolute zero (-459.67 degrees Fahrenheit) in a vacuum chamber displays phenomena similar to those that unfolded following the big bang, Hung said.
"At this ultracold temperature, atoms get excited collectively. They act as if they are sound waves in air," he said. The dense package of matter and radiation that existed in the very early universe generated similar sound-wave excitations, as revealed by COBE, WMAP and the other experiments.
The synchronized generation of sound waves correlates with cosmologists' speculations about inflation in the early universe. "Inflation set out the initial conditions for the early universe to create similar sound waves in the cosmic fluid formed by matter and radiation," Hung said.
Big bang's rippling echo
The sudden expansion of the universe during its inflationary period created ripples in space-time in the echo of the big bang. One can think of the big bang, in oversimplified terms, as an explosion that generated sound, Chin said. The sound waves began interfering with each other, creating complicated patterns. "That's the origin of complexity we see in the universe," he said.
These excitations are called Sakharov acoustic oscillations, named for Russian physicist Andrei Sakharov, who described the phenomenon in the 1960s. To produce Sakharov oscillations, Chin's team chilled a flat, smooth cloud of 10,000 or so cesium atoms to a billionth of a degree above absolute zero, creating an exotic state of matter known as a two-dimensional atomic superfluid.
Then they initiated a quenching process that controlled the strength of the interaction between the atoms of the cloud. They found that by suddenly making the interactions weaker or stronger, they could generate Sakharov oscillations.
The universe simulated in Chin's laboratory measured no more than 70 microns in diameter, approximately the diameter of a human hair. "It turns out the same kind of physics can happen on vastly different length scales," Chin explained. "That's the power of physics."
The goal is to better understand the cosmic evolution of a baby universe, the one that existed shortly after the big bang. It was much smaller then than it is today, having reached a diameter of only a hundred thousand light years by the time it had left the CMB pattern that cosmologists observe on the sky today.
In the end, what matters is not the absolute size of the simulated or the real universes, but their size ratios to the characteristic length scales governing the physics of Sakharov oscillations. "Here, of course, we are pushing this analogy to the extreme," Chin said.
380,000 years versus 10 milliseconds
"It took the whole universe about 380,000 years to evolve into the CMB spectrum we're looking at now," Chin said. But the physicists were able to reproduce much the same pattern in approximately 10 milliseconds in their experiment. "That suggests why the simulation based on cold atoms can be a powerful tool," Chin said.
None of the Science co-authors are cosmologists, but they consulted several in the process of developing their experiment and interpreting its results. The co-authors especially drew upon the expertise of UChicago's Wayne Hu, John Carlstrom and Michael Turner, and of Stanford University's Chao-Lin Kuo.
Hung noted that Sakharov oscillations serve as an excellent tool for probing the properties of cosmic fluid in the early universe. "We are looking at a two-dimensional superfluid, which itself is a very interesting object. We actually plan to use these Sakharov oscillations to study the property of this two-dimensional superfluid at different initial conditions to get more information."
The research team varied the conditions that prevailed early in the history of the expansion of their simulated universes by quickly changing how strongly their ultracold atoms interacted, generating ripples. "These ripples then propagate and create many fluctuations," Hung said. He and his co-authors then examined the ringing of those fluctuations.
Today's CMB maps show a snapshot of how the universe appeared at a moment in time long ago. "From CMB, we don't really see what happened before that moment, nor do we see what happened after that," Chin said. But, Hung noted, "In our simulation we can actually monitor the entire evolution of the Sakharov oscillations."
Chin and Hung are interested in continuing this experimental direction with ultracold atoms, branching into a variety of other types of physics, including the simulation of galaxy formation or even the dynamics of black holes.
"We can potentially use atoms to simulate and better understand many interesting phenomena in nature," Chin said. "Atoms to us can be anything you want them to be."
On 10:23 by Asveth Sreiram
Aug. 30, 2013 — The origin of cosmic rays in the universe has confounded scientists for decades. But a study by researchers using data from the IceCube Neutrino Observatory at the South Pole reveals new information that may help unravel the longstanding mystery of exactly how and where these "rays" (they are actually high-energy particles) are produced.

Cosmic rays can damage electronics on Earth, as well as human DNA, putting astronauts in space especially at risk.
The research, which draws on data collected by IceTop, the IceCube Observatory's surface array of detectors, is published online in Physical Review D, a leading journal in elementary particle physics.
University of Delaware physicist Bakhtiyar Ruzybayev is the study's corresponding author. UD scientists were the lead group for the construction of IceTop with support from the National Science Foundation and coordination by the project office at the University of Wisconsin, Madison.
The more scientists learn about the energy spectrum and chemical composition of cosmic rays, the closer humanity will come to uncovering where these energetic particles originate.
Cosmic rays are known to reach energies above 100 billion giga-electron volts (10¹¹ GeV). The data reported in this latest paper cover the energy range from 1.6×10⁶ GeV to 10⁹ GeV.
Researchers are particularly interested in identifying cosmic rays in this interval because the transition from cosmic rays produced in the Milky Way Galaxy to "extragalactic" cosmic rays, produced outside our galaxy, is expected to occur in this energy range.
Exploding stars called supernovae are among the sources of cosmic rays here in the Milky Way, while distant objects such as collapsing massive stars and active galactic nuclei far from the Milky Way are believed to produce the highest energy particles in nature.
As Ruzybayev points out, the cosmic-ray energy spectrum does not follow a simple power law between the "knee" around 4 PeV (peta-electron volts) and the "ankle" around 4 EeV (exa-electron volts), as previously thought, but exhibits features like hardening around 20 PeV and steepening around 130 PeV.
"The spectrum steepens at the 'knee,' which is generally interpreted as the beginning of the end of the galactic population. Below the knee, cosmic rays are galactic in origin, while above that energy, particles from more distant regions in our universe become more and more likely," Ruzybayev explained. "These measurements provide new constraints that must be satisfied by any models that try to explain the acceleration and propagation of cosmic rays."
IceTop consists of 81 stations in its final configuration, covering an area of one square kilometer on the South Pole surface above the detectors of IceCube, which are buried over a mile deep in the ice. The analysis presented in this article was performed using data taken from June 2010 to May 2011, when the array consisted of only 73 stations.
The IceCube collaboration includes nearly 250 people from 39 research institutions in 11 countries, including the University of Delaware.
On 10:23 by Asveth Sreiram

Aug. 29, 2013 — In a materials science laboratory at Harvard University, a transparent disk connected to a laptop fills the room with music -- it's the "Morning" prelude from Peer Gynt, played on an ionic speaker.

No ordinary speaker, it consists of a thin sheet of rubber sandwiched between two layers of a saltwater gel, and it's as clear as a window. A high-voltage signal that runs across the surfaces and through the layers forces the rubber to rapidly contract and vibrate, producing sounds that span the entire audible spectrum, 20 hertz to 20 kilohertz.
But this is not an electronic device, and nothing quite like it has been demonstrated before. Published in the August 30 issue of Science, it represents the first demonstration that electrical charges carried by ions, rather than electrons, can be put to meaningful use in fast-moving, high-voltage devices.
"Ionic conductors could replace certain electronic systems; they even offer several advantages," says co-lead author Jeong-Yun Sun, a postdoctoral fellow at the Harvard School of Engineering and Applied Sciences (SEAS).
First, ionic conductors can be stretched to many times their normal area without an increase in resistivity -- a problem common in stretchable electronic devices. Second, they can be transparent, making them well suited for optical applications. Third, the gels used as electrolytes are biocompatible, so it would be relatively easy to incorporate ionic devices -- such as artificial muscles or skin -- into biological systems.
After all, signals carried by charged ions are the electricity of the human body, allowing neurons to share knowledge and spurring the heart to beat. Bioengineers would dearly love to mesh artificial organs and limbs with that system.
"The big vision is soft machines," says co-lead author Christoph Keplinger, who worked on the project as a postdoctoral fellow at Harvard SEAS and in the Department of Chemistry and Chemical Biology. "Engineered ionic systems can achieve a lot of functions that our body has: they can sense, they can conduct a signal, and they can actuate movement. We're really approaching the type of soft machine that biology has to offer."
The audio speaker represents a robust proof of concept for ionic conductors because producing sounds across the entire audible spectrum requires both high voltage (to squeeze hard on the rubber layer) and high-speed actuation (to vibrate quickly) -- two criteria which are important for applications but which would have ruled out the use of ionic conductors in the past.
The traditional constraints are well known: high voltages can set off electrochemical reactions in ionic materials, producing gases and burning up the materials. Ions are also much larger and heavier than electrons, so physically moving them through a circuit is typically slow. The system invented at Harvard overcomes both of these problems, opening up a vast number of potential applications including not just biomedical devices, but also fast-moving robotics and adaptive optics.
"It must seem counterintuitive to many people, that ionic conductors could be used in a system that requires very fast actuation, like our speaker," says Sun. "Yet by exploiting the rubber layer as an insulator, we're able to control the voltage at the interfaces where the gel connects to the electrodes, so we don't have to worry about unwanted chemical reactions. The input signal is an alternating current (AC), and we use the rubber sheet as a capacitor, which blocks the flow of charge carriers through the circuit. As a result, we don't have to continuously move the ions in one direction, which would be slow; we simply redistribute them, which we can do thousands of times per second."
Sun works in a research group led by Zhigang Suo, the Allen E. and Marilyn M. Puckett Professor of Mechanics and Materials at Harvard SEAS. An expert in the mechanical behaviors of materials, Suo is also a Kavli Scholar at the Kavli Institute for Bionano Science & Technology, which is based at SEAS.
Suo teamed up with George M. Whitesides, a prominent chemist who specializes in soft machines, among many other topics. Whitesides is the Woodford L. and Ann A. Flowers University Professor in the Department of Chemistry and Chemical Biology, co-director of the Kavli Institute at Harvard, and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard.
"We'd like to change people's attitudes about where ionics can be used," says Keplinger, who now works in Whitesides' research group. "Our system doesn't need a lot of power, and you can integrate it anywhere you would need a soft, transparent layer that deforms in response to electrical stimuli -- for example, on the screen of a TV, laptop, or smartphone to generate sound or provide localized haptic feedback -- and people are even thinking about smart windows. You could potentially place this speaker on a window and achieve active noise cancellation, with complete silence inside."
Sam Liss, Director of Business Development in Harvard's Office of Technology Development, is working closely with the Suo and Whitesides labs to commercialize the technology. Their plan is to work with companies in a range of product categories, including tablet computing, smartphones, wearable electronics, consumer audio devices, and adaptive optics.
"With wearable computing devices becoming a reality, you could imagine eventually having a pair of glasses that toggles between wide-angle, telephoto, or reading modes based on voice commands or gestures," suggests Liss.
For now, there is much more engineering and chemistry work to be done. The Harvard team chose to make its audio speaker out of very simple materials -- the electrolyte is a polyacrylamide gel swollen with salt water -- but they emphasize that an entire class of ionically conductive materials is available for experimentation. Future work will focus on identifying the best combinations of materials for compatibility, long life, and adhesion between the layers.
In addition to Keplinger, Sun, Whitesides, and Suo, coauthors included Keith Choon Chiang Foo, a former postdoctoral fellow at Harvard SEAS, now at the Institute of High Performance Computing in Singapore; and Philipp Rothemund, a graduate student at Harvard SEAS.
This research was supported by the National Science Foundation through a grant to the Materials Research Science and Engineering Center at Harvard University (DMR-0820484) and by the Army Research Office (W911NF-09-1-0476). It was also enabled in part by the Department of Energy (ER45852) and the Agency for Science, Technology, and Research (A*STAR), Singapore.
On 10:22 by Asveth Sreiram   No comments

Aug. 29, 2013 — UBC astronomers have discovered the first Trojan asteroid sharing the orbit of Uranus, and believe 2011 QF99 is part of a larger-than-expected population of transient objects temporarily trapped by the gravitational pull of the Solar System's giant planets.
Trojans are asteroids that share the orbit of a planet, occupying stable positions known as Lagrangian points. Astronomers considered their presence at Uranus unlikely because the gravitational pull of larger neighbouring planets would destabilize and expel any Uranian Trojans over the age of the Solar System.
To determine how the 60 kilometre-wide ball of rock and ice ended up sharing an orbit with Uranus, the astronomers created a simulation of the Solar System and its co-orbital objects, including Trojans.
"Surprisingly, our model predicts that at any given time three per cent of scattered objects between Jupiter and Neptune should be co-orbitals of Uranus or Neptune," says Mike Alexandersen, lead author of the study to be published tomorrow in the journal Science. This percentage had never before been computed, and is much higher than previous estimates.
Several temporary Trojans and co-orbitals have been discovered in the Solar System during the past decade. QF99 is one of those temporary objects, only recently (within the last few hundred thousand years) ensnared by Uranus and set to escape the planet's gravitational pull in about a million years.
"This tells us something about the current evolution of the Solar System," says Alexandersen. "By studying the process by which Trojans become temporarily captured, one can better understand how objects migrate into the planetary region of the Solar System."
UBC astronomers Brett Gladman, Sarah Greenstreet and colleagues at the National Research Council of Canada and Observatoire de Besancon in France were part of the research team.
On 10:21 by Asveth Sreiram   No comments

Aug. 29, 2013 — Poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life, according to research based at Princeton University. As a result, people of limited means are more likely to make mistakes and bad decisions that may be amplified by -- and perpetuate -- their financial woes.
Published in the journal Science, the study presents a unique perspective regarding the causes of persistent poverty. The researchers suggest that being poor may keep a person from concentrating on the very avenues that would lead them out of poverty. A person's cognitive function is diminished by the constant and all-consuming effort of coping with the immediate effects of having little money, such as scrounging to pay bills and cut costs. Thus, a person is left with fewer "mental resources" to focus on complicated, indirectly related matters such as education, job training and even managing their time.
In a series of experiments, the researchers found that pressing financial concerns had an immediate impact on the ability of low-income individuals to perform on common cognitive and logic tests. On average, a person preoccupied with money problems exhibited a drop in cognitive function similar to a 13-point dip in IQ, or the loss of an entire night's sleep.
But when their concerns were benign, low-income individuals performed competently, at a similar level to people who were well off, said corresponding author Jiaying Zhao, who conducted the study as a doctoral student in the lab of co-author Eldar Shafir, Princeton's William Stewart Tod Professor of Psychology and Public Affairs. Zhao and Shafir worked with Anandi Mani, an associate professor of economics at the University of Warwick in Britain, and Sendhil Mullainathan, a Harvard University economics professor.
"These pressures create a salient concern in the mind and draw mental resources to the problem itself. That means we are unable to focus on other things in life that need our attention," said Zhao, who is now an assistant professor of psychology at the University of British Columbia.
"Previous views of poverty have blamed poverty on personal failings, or an environment that is not conducive to success," she said. "We're arguing that the lack of financial resources itself can lead to impaired cognitive function. The very condition of not having enough can actually be a cause of poverty."
The mental tax that poverty can put on the brain is distinct from stress, Shafir explained. Stress is a person's response to various outside pressures that -- according to studies of arousal and performance -- can actually enhance a person's functioning, he said. In the Science study, Shafir and his colleagues instead describe an immediate rather than chronic preoccupation with limited resources that can be a detriment to unrelated yet still important tasks.
"Stress itself doesn't predict that people can't perform well -- they may do better up to a point," Shafir said. "A person in poverty might be at the high part of the performance curve when it comes to a specific task and, in fact, we show that they do well on the problem at hand. But they don't have leftover bandwidth to devote to other tasks. The poor are often highly effective at focusing on and dealing with pressing problems. It's the other tasks where they perform poorly."
The fallout of neglecting other areas of life may loom larger for a person just scraping by, Shafir said. Late fees tacked on to a forgotten rent payment, a job lost because of poor time-management -- these make a tight money situation worse. And as people get poorer, they tend to make difficult and often costly decisions that further perpetuate their hardship, Shafir said. He and Mullainathan were co-authors on a 2012 Science paper that reported poor people are more likely to engage in behaviors that reinforce the conditions of poverty, such as excessive borrowing.
"They can make the same mistakes, but the outcomes of errors are more dear," Shafir said. "So, if you live in poverty, you're more error prone and errors cost you more dearly -- it's hard to find a way out."
The first set of experiments took place in a New Jersey mall between 2010 and 2011 with roughly 400 subjects chosen at random. Their median annual income was around $70,000 and the lowest income was around $20,000. The researchers created scenarios wherein subjects had to ponder how they would solve financial problems, for example, whether they would handle a sudden car repair by paying in full, borrowing money or putting the repairs off. Participants were assigned either an "easy" or "hard" scenario in which the cost was low or high -- such as $150 or $1,500 for the car repair. While participants pondered these scenarios, they performed common fluid-intelligence and cognition tests.
Subjects were divided into a "poor" group and a "rich" group based on their income. The study showed that when the scenarios were easy -- the financial problems not too severe -- the poor and rich performed equally well on the cognitive tests. But when they thought about the hard scenarios, people at the lower end of the income scale performed significantly worse on both cognitive tests, while the rich participants were unfazed.
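The logic of that comparison is a simple two-by-two interaction contrast. The scores below are invented purely to illustrate the pattern described (a drop confined to the poor/hard cell); they are not the study's data:

```python
# Hypothetical illustration only -- these scores are invented, not the
# paper's measurements. The design crosses income group with scenario
# difficulty; the signature finding is an interaction, with performance
# dropping only for low-income participants given the "hard" scenario.
scores = {
    ("poor", "easy"): 0.52, ("poor", "hard"): 0.38,
    ("rich", "easy"): 0.53, ("rich", "hard"): 0.52,
}

effect_poor = scores[("poor", "hard")] - scores[("poor", "easy")]
effect_rich = scores[("rich", "hard")] - scores[("rich", "easy")]
print(f"hard-vs-easy effect (poor): {effect_poor:+.2f}")  # sizeable drop
print(f"hard-vs-easy effect (rich): {effect_rich:+.2f}")  # roughly zero
print(f"interaction contrast:       {effect_poor - effect_rich:+.2f}")
```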
To better gauge the influence of poverty in natural contexts, between 2010 and 2011 the researchers also tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Because sugarcane harvests occur once a year, these are farmers who find themselves rich after harvest and poor before it. Each farmer was given the same tests before and after the harvest, and performed better on both tests post-harvest compared to pre-harvest.
The cognitive effect of poverty the researchers found relates to the more general influence of "scarcity" on cognition, which is the larger focus of Shafir's research group. Scarcity in this case relates to any deficit -- be it in money, time, social ties or even calories -- that people experience in trying to meet their needs. Scarcity consumes "mental bandwidth" that would otherwise go to other concerns in life, Zhao said.
"These findings fit in with our story of how scarcity captures attention. It consumes your mental bandwidth," Zhao said. "Just asking a poor person to think about hypothetical financial problems reduces mental bandwidth. This is an acute, immediate impact, and has implications for scarcity of resources of any kind."
"We documented similar effects among people who are not otherwise poor, but on whom we imposed scarce resources," Shafir added. "It's not about being a poor person -- it's about living in poverty."
Many types of scarcity are temporary and often discretionary, said Shafir, who is co-author with Mullainathan of the book, "Scarcity: Why Having Too Little Means So Much," to be published in September. For instance, a person pressed for time can reschedule appointments, cancel something or even decide to take on less.
"When you're poor you can't say, 'I've had enough, I'm not going to be poor anymore.' Or, 'Forget it, I just won't give my kids dinner, or pay rent this month.' Poverty imposes a much stronger load that's not optional and in very many cases is long lasting," Shafir said. "It's not a choice you're making -- you're just reduced to few options. This is not something you see with many other types of scarcity."
The researchers suggest that services for the poor should accommodate the dominance that poverty has on a person's time and thinking. Such steps would include simpler aid forms and more guidance in receiving assistance, or training and educational programs structured to be more forgiving of unexpected absences, so that a person who has stumbled can more easily try again.
"You want to design a context that is more scarcity proof," said Shafir, noting that better-off people have access to regular support in their daily lives, be it a computer reminder, a personal assistant, a housecleaner or a babysitter.
"There's very little you can do with time to get more money, but a lot you can do with money to get more time," Shafir said. "The poor, who our research suggests are bound to make more mistakes and pay more dearly for errors, inhabit contexts often not designed to help."
On 10:20 by Asveth Sreiram   No comments

Aug. 29, 2013 — Data from a NASA airborne science mission reveals evidence of a large and previously unknown canyon hidden under a mile of Greenland ice.
The canyon has the characteristics of a winding river channel and is at least 460 miles (750 kilometers) long, making it longer than the Grand Canyon. In some places it is as deep as 2,600 feet (800 meters), comparable in scale to segments of the Grand Canyon. This immense feature is thought to predate the ice sheet that has covered Greenland for the last few million years.
"One might assume that the landscape of the Earth has been fully explored and mapped," said Jonathan Bamber, professor of physical geography at the University of Bristol in the United Kingdom, and lead author of the study. "Our research shows there's still a lot left to discover."
Bamber's team published its findings Thursday in the journal Science.
The scientists used thousands of miles of airborne radar data, collected by NASA and researchers from the United Kingdom and Germany over several decades, to piece together the landscape lying beneath the Greenland ice sheet.
A large portion of this data was collected from 2009 through 2012 by NASA's Operation IceBridge, an airborne science campaign that studies polar ice. One of IceBridge's scientific instruments, the Multichannel Coherent Radar Depth Sounder, can see through vast layers of ice to measure its thickness and the shape of bedrock below.
In their analysis of the radar data, the team discovered a continuous bedrock canyon that extends from almost the center of the island and ends beneath the Petermann Glacier fjord in northern Greenland.
At certain frequencies, radio waves can travel through the ice and bounce off the bedrock underneath. The amount of time the radio waves took to bounce back helped researchers determine the depth of the canyon: the longer the echo took to return, the deeper the bedrock feature.
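In code, that conversion from echo time to ice thickness is a one-liner. The wave speed below is the commonly used figure for radio waves in glacial ice, and the example echo time is hypothetical, not a number from the IceBridge data:

```python
# Illustrative conversion only -- not IceBridge's actual processing code.
V_ICE = 1.68e8   # approximate radio-wave speed in glacial ice, m/s

def bed_depth(two_way_time_s: float) -> float:
    """Depth to bedrock implied by a radar echo's round-trip travel time."""
    return V_ICE * two_way_time_s / 2.0

# A hypothetical echo returning after 19 microseconds implies roughly
# 1,600 meters of ice overhead -- about the mile quoted in the article.
print(f"{bed_depth(19e-6):,.0f} m")
```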
"Two things helped lead to this discovery," said Michael Studinger, IceBridge project scientist at NASA's Goddard Space Flight Center in Greenbelt, Md. "It was the enormous amount of data collected by IceBridge and the work of combining it with other datasets into a Greenland-wide compilation of all existing data that makes this feature appear in front of our eyes."
The researchers believe the canyon plays an important role in transporting subglacial meltwater from the interior of Greenland to the edge of the ice sheet and into the ocean. Evidence suggests that before the ice sheet formed, as much as 4 million years ago, water flowed in the canyon from the interior to the coast, forming a major river system.
"It is quite remarkable that a channel the size of the Grand Canyon is discovered in the 21st century below the Greenland ice sheet," said Studinger. "It shows how little we still know about the bedrock below large continental ice sheets."
The IceBridge campaign will return to Greenland in March 2014 to continue collecting data on land and sea ice in the Arctic using a suite of instruments that includes ice-penetrating radar.
On 10:20 by Asveth Sreiram   No comments

Aug. 29, 2013 — Astronomers using NASA's Chandra X-ray Observatory have taken a major step in explaining why material around the giant black hole at the center of the Milky Way Galaxy is extraordinarily faint in X-rays. This discovery holds important implications for understanding black holes.
New Chandra images of Sagittarius A* (Sgr A*), which is located about 26,000 light-years from Earth, indicate that less than 1 percent of the gas initially within Sgr A*'s gravitational grasp ever reaches the point of no return, also called the event horizon. Instead, much of the gas is ejected before it gets near the event horizon and has a chance to brighten, leading to feeble X-ray emissions.
These new findings are the result of one of the longest observation campaigns ever performed with Chandra. The spacecraft collected five weeks' worth of data on Sgr A* in 2012. The researchers used this observation period to capture unusually detailed and sensitive X-ray images and energy signatures of super-heated gas swirling around Sgr A*, whose mass is about 4 million times that of the sun.
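For a sense of scale, the "point of no return" of a 4-million-solar-mass black hole can be estimated from the Schwarzschild radius, R_s = 2GM/c^2. This back-of-envelope calculation is ours, not the study's:

```python
# Back-of-envelope scale estimate -- not a calculation from the paper.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

m_bh = 4.0e6 * M_SUN        # Sgr A*'s mass, ~4 million suns
r_s = 2 * G * m_bh / C**2   # Schwarzschild radius, m

print(f"event-horizon scale: {r_s:.2e} m (~{r_s / AU:.2f} AU)")
```

The horizon comes out smaller than Mercury's orbit, which underscores how little of the diffuse captured gas ever funnels all the way down to it.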
"We think most large galaxies have a supermassive black hole at their center, but they are too far away for us to study how matter flows near it," said Q. Daniel Wang of the University of Massachusetts in Amherst, who led of a study published Thursday in the journal Science. "Sgr A* is one of very few black holes close enough for us to actually witness this process."
The researchers found that the Chandra data from Sgr A* did not support theoretical models in which the X-rays are emitted from a concentration of smaller stars around the black hole. Instead, the X-ray data show the gas near the black hole likely originates from winds produced by a disk-shaped distribution of young massive stars.
"This new Chandra image is one of the coolest I've ever seen," said co-author Sera Markoff of the University of Amsterdam in the Netherlands. "We're watching Sgr A* capture hot gas ejected by nearby stars, and funnel it in towards its event horizon."
To plunge over the event horizon, material captured by a black hole must lose heat and momentum. The ejection of matter allows this to occur.
"Most of the gas must be thrown out so that a small amount can reach the black hole," said Feng Yuan of Shanghai Astronomical Observatory in China, the study's co-author. "Contrary to what some people think, black holes do not actually devour everything that's pulled towards them. Sgr A* is apparently finding much of its food hard to swallow."
The gas available to Sgr A* is very diffuse and super-hot, so it is hard for the black hole to capture and swallow it. The gluttonous black holes that power quasars and produce huge amounts of radiation have gas reservoirs much cooler and denser than that of Sgr A*.
The event horizon of Sgr A* casts a shadow against the glowing matter surrounding the black hole. This research could aid efforts using radio telescopes to observe and understand the shadow. It also will be useful for understanding the effect orbiting stars and gas clouds may have on matter flowing toward and away from the black hole.
NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations from Cambridge, Mass.
On 10:19 by Asveth Sreiram   No comments

Aug. 29, 2013 — If you're eating better and exercising regularly, but still aren't seeing improvements in your health, there might be a reason: pollution. According to a new research report published in the September issue of The FASEB Journal, what you are eating and doing may not be the problem, but what's in what you are eating could be the culprit.
"This study adds evidences for rethinking the way of addressing risk assessment especially when considering that the human population is widely exposed to low levels of thousands of chemicals, and that the health impact of realistic mixtures of pollutants will have to be tested as well," said Brigitte Le Magueresse-Battistoni, a researcher involved in the work from the French National Institute of Health and Medical Research (INSERM). "Indeed, one pollutant could have a different effect when in mixture with other pollutants. Thus, our study may have strong implications in terms of recommendations for food security. Our data also bring new light to the understanding of the impact of environmental food contaminants in the development of metabolic diseases."
To make this discovery, scientists used two groups of obese mice. Both were fed a high-fat, high-sucrose diet, with one group also receiving a cocktail of pollutants added to its food at a very low dosage. These pollutants were given to the mice throughout life -- from pre-conception to adulthood. Although the researchers did not observe toxicity or excess weight gain in the group that received the cocktail of pollutants, they did see a deterioration of glucose tolerance in females, suggesting a defect in insulin signaling. The results suggest that the mixture of pollutants reduced estrogen activity in the liver by enhancing an enzyme responsible for estrogen elimination. In contrast to females, glucose tolerance was not affected in males exposed to the cocktail of pollutants; however, males did show some changes in the liver related to cholesterol synthesis and transport. This study fuels the concept that pollutants may contribute to the current prevalence of chronic diseases, including metabolic diseases and diabetes.
"This report that confirms something we've known for a long time: pollution is bad for us," said Gerald Weissmann, M.D., Editor-in-Chief of The FASEB Journal. "But, what's equally important, it shows that evaluating food contaminants and pollutants on an individual basis may be too simplistic. We can see that when "safe" levels of contaminants and pollutants act together, they have significant impact on public health."