Tuesday, 26 November 2013

On 21:12 by Asveth Sreiram
Nov. 25, 2013 — When a star explodes as a supernova, it shines brightly for a few weeks or months before fading away. Yet the material blasted outward from the explosion still glows hundreds or thousands of years later, forming a picturesque supernova remnant. What powers such long-lived brilliance?

In the case of Tycho's supernova remnant, astronomers have discovered that a reverse shock wave racing inward at Mach 1000 (1000 times the speed of sound) is heating the remnant and causing it to emit X-ray light.
"We wouldn't be able to study ancient supernova remnants without a reverse shock to light them up," says Hiroya Yamaguchi, who conducted this research at the Harvard-Smithsonian Center for Astrophysics (CfA).
Tycho's supernova was witnessed by astronomer Tycho Brahe in 1572. The appearance of this "new star" stunned those who thought the heavens were constant and unchanging. At its brightest, the supernova rivaled Venus before fading from sight a year later.
Modern astronomers know that the event Tycho and others observed was a Type Ia supernova, caused by the explosion of a white dwarf star. The explosion spewed elements like silicon and iron into space at speeds of more than 11 million miles per hour (5,000 km/s).
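As a quick sanity check on the quoted figures (my addition, not from the article), the ejecta speed of 5,000 km/s does convert to roughly 11 million miles per hour:

```python
# Sanity check: convert the quoted ejecta speed of 5,000 km/s to mph.
KM_PER_MILE = 1.609344  # exact definition of the statute mile

def km_per_s_to_mph(v_km_s: float) -> float:
    """Convert a speed in km/s to miles per hour."""
    return v_km_s / KM_PER_MILE * 3600.0

mph = km_per_s_to_mph(5000.0)
print(f"{mph:.3g} mph")  # about 1.12e7, i.e. just over 11 million mph
```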
When that ejecta rammed into surrounding interstellar gas, it created a shock wave -- the equivalent of a cosmic "sonic boom." That shock wave continues to move outward today at about Mach 300. The interaction also created a violent "backwash" -- a reverse shock wave that speeds inward at Mach 1000.
"It's like the wave of brake lights that marches up a line of traffic after a fender-bender on a busy highway," explains CfA co-author Randall Smith.
The reverse shock wave heats gases inside the supernova remnant and causes them to fluoresce. The process is similar to what lights household fluorescent bulbs, except that the supernova remnant glows in X-rays rather than visible light. The reverse shock wave is what allows us to see supernova remnants and study them, hundreds of years after the supernova occurred.
"Thanks to the reverse shock, Tycho's supernova keeps on giving," says Smith.
The team studied the X-ray spectrum of Tycho's supernova remnant with the Suzaku spacecraft. They found that electrons crossing the reverse shock wave are rapidly heated by a still-uncertain process. Their observations represent the first clear evidence for such efficient, "collisionless" electron heating at the reverse shock of Tycho's supernova remnant.
The team plans to look for evidence of similar reverse shock waves in other young supernova remnants.
On 21:10 by Asveth Sreiram
Nov. 25, 2013 — Archaeologists working in Nepal have uncovered evidence of a structure at the birthplace of the Buddha dating to the sixth century B.C. This is the first archaeological material linking the life of the Buddha -- and thus the first flowering of Buddhism -- to a specific century.

Pioneering excavations within the sacred Maya Devi Temple at Lumbini, Nepal, a UNESCO World Heritage site long identified as the birthplace of the Buddha, uncovered the remains of a previously unknown sixth-century B.C. timber structure under a series of brick temples. Laid out on the same design as those above it, the timber structure contains an open space in the center that links to the nativity story of the Buddha himself.
Until now, the earliest archaeological evidence of Buddhist structures at Lumbini dated no earlier than the third century B.C., the time of the patronage of the Emperor Asoka, who promoted the spread of Buddhism from present-day Afghanistan to Bangladesh.
"Very little is known about the life of the Buddha, except through textual sources and oral tradition," said archaeologist Professor Robin Coningham of Durham University, U.K., who co-led the investigation. Some scholars, he said, have maintained that the Buddha was born in the third century B.C. "We thought 'why not go back to archaeology to try to answer some of the questions about his birth?' Now, for the first time, we have an archaeological sequence at Lumbini that shows a building there as early as the sixth century B.C."
Early Buddhism revealed
The international team of archaeologists, led by Coningham and Kosh Prasad Acharya of the Pashupati Area Development Trust in Nepal, say the discovery contributes to a greater understanding of the early development of Buddhism as well as the spiritual importance of Lumbini. Their peer-reviewed findings are reported in the December 2013 issue of the international journal Antiquity. The research is partly supported by the National Geographic Society.
To determine the dates of the timber shrine and a previously unknown early brick structure above it, fragments of charcoal and grains of sand were tested using a combination of radiocarbon and optically stimulated luminescence techniques. Geoarchaeological research also confirmed the presence of ancient tree roots within the temple's central void.
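Radiocarbon dating of charcoal fragments like these rests on a simple exponential decay law. The sketch below is illustrative only (an uncalibrated estimate with a hypothetical sample fraction, not the team's data):

```python
import math

T_HALF_C14 = 5730.0  # carbon-14 half-life in years (Cambridge value)

def radiocarbon_age(fraction_remaining: float) -> float:
    """Uncalibrated age in years from the fraction of 14C remaining."""
    return -T_HALF_C14 / math.log(2) * math.log(fraction_remaining)

# A hypothetical charcoal sample retaining ~73% of its original 14C
# would date to roughly 2,600 years ago, i.e. around the sixth century B.C.
print(round(radiocarbon_age(0.73)))
```

Real radiocarbon dates are calibrated against tree-ring records, which is why such work is paired with techniques like optically stimulated luminescence.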
"UNESCO is very proud to be associated with this important discovery at one of the most holy places for one of the world's oldest religions," said UNESCO Director-General Irina Bokova, who urged "more archaeological research, intensified conservation work and strengthened site management" to ensure Lumbini's protection.
"These discoveries are very important to better understand the birthplace of the Buddha," said Ram Kumar Shrestha, Nepal's minister of culture, tourism and civil aviation. "The government of Nepal will spare no effort to preserve this significant site."
Buddhist tradition records that Queen Maya Devi, the mother of the Buddha, gave birth to him while holding on to the branch of a tree within the Lumbini Garden, midway between the kingdoms of her husband and parents. Coningham and his colleagues postulate that the open space in the center of the most ancient, timber shrine may have accommodated a tree. Brick temples built later above the timber shrine also were arranged around the central space, which was unroofed.
Four main Buddhist sites
Lumbini is one of the key sites associated with the life of the Buddha; others are Bodh Gaya, where he became a Buddha or enlightened one; Sarnath, where he first preached; and Kusinagara, where he passed away. At his passing at the age of 80, the Buddha is recorded as having recommended that all Buddhists visit "Lumbini." The shrine was still popular in the middle of the first millennium A.D. and was recorded by Chinese pilgrims as having a shrine beside a tree.
The Maya Devi Temple at Lumbini remains a living shrine; the archaeologists worked alongside meditating monks, nuns and pilgrims.
In the scientific paper in Antiquity, the authors write: "The sequence (of archaeological remains) at Lumbini is a microcosm for the development of Buddhism from a localized cult to a global religion."
Lost and overgrown in the jungles of Nepal in the medieval period, ancient Lumbini was rediscovered in 1896 and identified as the birthplace of the Buddha on account of the presence of a third-century B.C. sandstone pillar. The pillar, which still stands, bears an inscription documenting a visit by Emperor Asoka to the site of the Buddha's birth as well as the site's name -- Lumbini.
Despite the rediscovery of the key Buddhist sites, their earliest levels were buried deep or destroyed by later construction, leaving evidence of the very earliest stages of Buddhism inaccessible to archaeological investigation, until now.
Half a billion people around the world are Buddhists, and many hundreds of thousands make a pilgrimage to Lumbini each year. The archaeological investigation there was funded by the government of Japan in partnership with the government of Nepal, under a UNESCO project aimed at strengthening the conservation and management of Lumbini. Along with the National Geographic Society, the research also was supported by Durham University and Stirling University.
Coningham and Acharya were joined on the Antiquity paper by coauthors K.M. Strickland, C.E. Davis, M.J. Manuel, I. A. Simpson, K. Gilliland, J. Tremblay, T.C. Kinnaird and D.C.W. Sanderson.
A documentary on Coningham's exploration of the Buddha's life, "Buried Secrets of the Buddha," will premiere in February internationally on National Geographic Channel.
For a National Geographic news video about the findings, see: http://video.nationalgeographic.com/video/news/history-archaeology-news/buddha-birth-vin/
On 21:10 by Asveth Sreiram
Nov. 24, 2013 — One of the smallest parts of the brain is getting a second look after new research suggests it plays a crucial role in decision making.

A University of British Columbia study published in Nature Neuroscience says the lateral habenula, a region of the brain linked to depression and avoidance behaviours, has been largely misunderstood and may be integral in cost-benefit decisions.
"These findings clarify the brain processes involved in the important decisions that we make on a daily basis, from choosing between job offers to deciding which house or car to buy," says Prof. Stan Floresco of UBC's Dept. of Psychology and Brain Research Centre (BRC). "It also suggests that the scientific community has misunderstood the true functioning of this mysterious, but important, region of the brain."
In the study, scientists trained lab rats to choose between a consistent small reward (one food pellet) or a potentially larger reward (four food pellets) that appeared sporadically. Like humans, the rats tended to choose larger rewards when costs -- in this case, the amount of time they had to wait before receiving food -- were low, and preferred smaller rewards when such costs were higher.
Previous studies had suggested that turning off the lateral habenula would cause rats to choose the larger, riskier reward more often, but that was not the case. Instead, the rats selected either option at random, no longer showing the ability to choose the best option for them.
The findings have important implications for depression treatment. "Deep brain stimulation -- which is thought to inactivate the lateral habenula -- has been reported to improve depressive symptoms in humans," Floresco says. "But our findings suggest these improvements may not be because patients feel happier. They may simply no longer care as much about what is making them feel depressed."
Background
Floresco, who conducted the study with PhD candidate Colin Stopper, says more investigation is needed to understand the complete brain functions involved in cost-benefit decision processes and related behaviour. A greater understanding of decision-making processes is also crucial, they say, because many psychiatric disorders, such as schizophrenia, stimulant abuse and depression, are associated with impairments in these processes.
The lateral habenula is considered one of the evolutionarily oldest regions of the brain, the researchers say.
On 21:09 by Asveth Sreiram
Nov. 24, 2013 — Even if carbon dioxide emissions came to a sudden halt, the carbon dioxide already in Earth's atmosphere could continue to warm our planet for hundreds of years, according to Princeton University-led research published in the journal Nature Climate Change. The study suggests that it might take a lot less carbon than previously thought to reach the global temperature scientists deem unsafe.

The researchers simulated an Earth on which, after 1,800 billion tons of carbon entered the atmosphere, all carbon dioxide emissions suddenly stopped. Scientists commonly use the scenario of emissions screeching to a stop to gauge the heat-trapping staying power of carbon dioxide. Within a millennium of this simulated shutoff, the carbon itself faded steadily, with 40 percent absorbed by Earth's oceans and landmasses within 20 years and 80 percent soaked up by the end of the 1,000 years.
By itself, such a decrease of atmospheric carbon dioxide should lead to cooling. But the heat trapped by the carbon dioxide took a divergent track.
After a century of cooling, the planet warmed by 0.37 degrees Celsius (0.66 Fahrenheit) during the next 400 years as the ocean absorbed less and less heat. While the resulting temperature spike seems slight, a little heat goes a long way here. Earth has warmed by only 0.85 degrees Celsius (1.5 degrees Fahrenheit) since pre-industrial times.
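A note on the paired figures above (my check, not from the article): temperature *differences* convert between Celsius and Fahrenheit by the factor 9/5 alone, with no +32 offset:

```python
# Convert a temperature difference (not an absolute temperature) from C to F.
def delta_c_to_f(dc: float) -> float:
    return dc * 9.0 / 5.0

print(f"{delta_c_to_f(0.37):.2f}")  # ~0.67 F, reported as 0.66 F
print(f"{delta_c_to_f(0.85):.2f}")  # 1.53 F, reported as 1.5 F
```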
The Intergovernmental Panel on Climate Change estimates that global temperatures a mere 2 degrees Celsius (3.6 degrees Fahrenheit) higher than pre-industrial levels would dangerously interfere with the climate system. To avoid that point would mean humans have to keep cumulative carbon dioxide emissions below 1,000 billion tons of carbon, about half of which has already been put into the atmosphere since the dawn of industry.
The lingering warming effect the researchers found, however, suggests that the 2-degree point may be reached with much less carbon, said first author Thomas Frölicher, who conducted the work as a postdoctoral researcher in Princeton's Program in Atmospheric and Oceanic Sciences under co-author Jorge Sarmiento, the George J. Magee Professor of Geoscience and Geological Engineering.
"If our results are correct, the total carbon emissions required to stay below 2 degrees of warming would have to be three-quarters of previous estimates, only 750 billion tons instead of 1,000 billion tons of carbon," said Frölicher, now a researcher at the Swiss Federal Institute of Technology in Zurich. "Thus, limiting the warming to 2 degrees would require keeping future cumulative carbon emissions below 250 billion tons, only half of the already emitted amount of 500 billion tons."
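The budget arithmetic in the quote is easy to verify (figures taken directly from the article):

```python
# Verify the carbon-budget arithmetic quoted above.
previous_budget = 1000.0   # billion tons of carbon, the earlier 2-degree estimate
revised_budget = 0.75 * previous_budget  # three-quarters of previous estimates
already_emitted = 500.0    # billion tons emitted since the dawn of industry

remaining = revised_budget - already_emitted
print(revised_budget, remaining)  # 750.0 250.0
```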
The researchers' work contradicts a scientific consensus that the global temperature would remain constant or decline if emissions were suddenly cut to zero. But previous research did not account for a gradual reduction in the oceans' ability to absorb heat from the atmosphere, particularly in the polar oceans, Frölicher said. Although carbon dioxide steadily dissipates, Frölicher and his co-authors were able to see that the oceans, which remove heat from the atmosphere, gradually take up less of it. Eventually, the residual heat offsets the cooling caused by dwindling amounts of carbon dioxide.
Frölicher and his co-authors showed that the change in ocean heat uptake in the polar regions has a larger effect on global mean temperature than a change in low-latitude oceans, a mechanism known as "ocean-heat uptake efficacy." This mechanism was first explored in a 2010 paper by Frölicher's co-author, Michael Winton, a researcher at the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory (GFDL) on Princeton's Forrestal Campus.
"The regional uptake of heat plays a central role. Previous models have not really represented that very well," Frölicher said.
"Scientists have thought that the temperature stays constant or declines once emissions stop, but now we show that the possibility of a temperature increase cannot be excluded," Frölicher said. "This is illustrative of how difficult it may be to reverse climate change -- we stop the emissions, but still get an increase in the global mean temperature."
On 21:09 by Asveth Sreiram
Nov. 22, 2013 — A new species of carnivorous dinosaur -- one of the three largest ever discovered in North America -- lived alongside and competed with small-bodied tyrannosaurs 98 million years ago. This newly discovered species, Siats meekerorum, (pronounced see-atch) was the apex predator of its time, and kept tyrannosaurs from assuming top predator roles for millions of years.

Named after a cannibalistic man-eating monster from Ute tribal legend, Siats is a species of carcharodontosaur, a group of giant meat-eaters that includes some of the largest predatory dinosaurs ever discovered. The only other carcharodontosaur known from North America is Acrocanthosaurus, which roamed eastern North America more than 10 million years earlier. Siats is only the second carcharodontosaur ever discovered in North America; Acrocanthosaurus, discovered in 1950, was the first.
"It's been 63 years since a predator of this size has been named from North America," says Lindsay Zanno, a North Carolina State University paleontologist with a joint appointment at the North Carolina Museum of Natural Sciences, and lead author of a Nature Communications paper describing the find. "You can't imagine how thrilled we were to see the bones of this behemoth poking out of the hillside."
Zanno and colleague Peter Makovicky, from Chicago's Field Museum of Natural History, discovered the partial skeleton of the new predator in Utah's Cedar Mountain Formation in 2008. The species name acknowledges the Meeker family for its support of early career paleontologists at the Field Museum, including Zanno.
The recovered specimen belonged to an individual that would have been more than 30 feet long and weighed at least four tons. Despite this giant size, the bones are from a juvenile. Zanno and Makovicky theorize that an adult Siats might have reached the size of Acrocanthosaurus, meaning the two species vie for the title of second-largest predator ever discovered in North America. Tyrannosaurus rex, which holds first place, came along 30 million years later and weighed in at more than twice as much.
Although Siats and Acrocanthosaurus are both carcharodontosaurs, they belong to different sub-groups. Siats is a member of Neovenatoridae, a more slender-bodied group of carcharodontosaurs. Neovenatorids have been found in Europe, South America, China, Japan and Australia. However, this is the first time a neovenatorid has ever been found in North America.
Siats terrorized what is now Utah during the Late Cretaceous period (100 million years ago to 66 million years ago). It was previously unknown who the top meat-eater was in North America during this period. "Carcharodontosaurs reigned for much longer in North America than we expected," says Zanno. In fact, Siats fills a gap of more than 30 million years in the fossil record, during which time the top predator role changed hands from carcharodontosaurs in the Early Cretaceous to tyrannosaurs in the Late Cretaceous.
The lack of fossils left paleontologists unsure about when this change happened and if tyrannosaurs outcompeted carcharodontosaurs, or were simply able to assume apex predator roles following carcharodontosaur extinction. It is now clear that Siats' large size would have prevented smaller tyrannosaurs from taking their place atop the food chain.
"The huge size difference certainly suggests that tyrannosaurs were held in check by carcharodontosaurs, and only evolved into enormous apex predators after the carcharodontosaurs disappeared," says Makovicky. Zanno adds, "Contemporary tyrannosaurs would have been no more than a nuisance to Siats, like jackals at a lion kill. It wasn't until carcharodontosaurs bowed out that the stage could be set for the evolution of T. rex."
At the time Siats reigned, the landscape was lush, with abundant vegetation and water supporting a variety of plant-eating dinosaurs, turtles, crocodiles, and giant lungfish. Other predators inhabited this ecosystem, including early tyrannosaurs and several species of other feathered dinosaurs that have yet to be described by the team. "We have made more exciting discoveries including two new species of dinosaur," Makovicky says.
"Stay tuned," adds Zanno. "There are a lot more cool critters where Siats came from."
All fieldwork was conducted under permits through the Bureau of Land Management and funded by the Field Museum. Research was funded by North Carolina State University, the North Carolina Museum of Natural Sciences and the Field Museum.
On 21:08 by Asveth Sreiram
Nov. 21, 2013 — Obesity may alter the way we taste at the most fundamental level: by changing how our tongues react to different foods.

In a Nov. 13 study in the journal PLOS ONE, University at Buffalo biologists report that being severely overweight impaired the ability of mice to detect sweets.
Compared with slimmer counterparts, the plump mice had fewer taste cells that responded to sweet stimuli. What's more, the cells that did respond to sweetness reacted relatively weakly.
The findings peel back a new layer of the mystery of how obesity alters our relationship to food.
"Studies have shown that obesity can lead to alterations in the brain, as well as the nerves that control the peripheral taste system, but no one had ever looked at the cells on the tongue that make contact with food," said lead scientist Kathryn Medler, PhD, UB associate professor of biological sciences.
"What we see is that even at this level -- at the first step in the taste pathway -- the taste receptor cells themselves are affected by obesity," Medler said. "The obese mice have fewer taste cells that respond to sweet stimuli, and they don't respond as well."
The research matters because taste plays an important role in regulating appetite: what we eat, and how much we consume.
How an inability to detect sweetness might encourage weight gain is unclear, but past research has shown that obese people yearn for sweet and savory foods though they may not taste these flavors as well as thinner people.
Medler said it's possible that trouble detecting sweetness may lead obese mice to eat more than their leaner counterparts to get the same payoff.
Learning more about the connection between taste, appetite and obesity is important, she said, because it could lead to new methods for encouraging healthy eating.
"If we understand how these taste cells are affected and how we can get these cells back to normal, it could lead to new treatments," Medler said. "These cells are out on your tongue and are more accessible than cells in other parts of your body, like your brain."
The new PLOS ONE study compared 25 normal mice to 25 of their littermates who were fed a high-fat diet and became obese.
To measure the animals' response to different tastes, the research team looked at a process called calcium signaling. When cells "recognize" a certain taste, there is a temporary increase in the calcium levels inside the cells, and the scientists measured this change.
The results: Taste cells from the obese mice responded more weakly not only to sweetness but, surprisingly, to bitterness as well. Taste cells from both groups of animals reacted similarly to umami, a flavor associated with savory and meaty foods.
Medler's co-authors on the study were former UB graduate student Amanda Maliphol and former UB undergraduate Deborah Garth.
On 21:07 by Asveth Sreiram
Nov. 21, 2013 — Astrophysicists using a telescope embedded in Antarctic ice have succeeded in a quest to detect and record the mysterious phenomena known as cosmic neutrinos -- nearly massless particles that stream to Earth at the speed of light from outside our solar system, striking the surface in a burst of energy that can be as powerful as a baseball pitcher's fastball. Next, they hope to build on the early success of the IceCube Neutrino Observatory to detect the source of these high-energy particles, said Physics Professor Gregory Sullivan, who led the University of Maryland's 12-person team of contributors to the IceCube Collaboration.

"The era of neutrino astronomy has begun," Sullivan said as the IceCube Collaboration announced the observation of 28 very high-energy particle events that constitute the first solid evidence for astrophysical neutrinos from cosmic sources.
By studying the neutrinos that IceCube detects, scientists can learn about the nature of astrophysical phenomena occurring millions, or even billions, of light years from Earth, Sullivan said. "The sources of neutrinos, and the question of what could accelerate these particles, have been a mystery for more than 100 years. Now we have an instrument that can detect astrophysical neutrinos. It's working beautifully, and we expect it to run for another 20 years."
The collaboration's report on the first cosmic neutrino records from the IceCube Neutrino Observatory, collected from instruments embedded in one cubic kilometer of ice at the South Pole, was published Nov. 22 in the journal Science.
"This is the first indication of very high-energy neutrinos coming from outside our solar system," said University of Wisconsin-Madison Physics Professor Francis Halzen, principal investigator of IceCube. "It is gratifying to finally see what we have been looking for. This is the dawn of a new age of astronomy."
"Neutrinos are one of the basic building blocks of our universe," said UMD Physics Associate Professor Kara Hoffman, an IceCube team member. Billions of them pass through our bodies unnoticed every second. These extremely high-energy particles maintain their speed and direction unaffected by magnetic fields. The vast majority of neutrinos originate either in the sun or in Earth's own atmosphere. Far more rare are astrophysical neutrinos, which come from the outer reaches of our galaxy or beyond.
The origin and cause of astrophysical neutrinos are unknown, though gamma ray bursts, active galactic nuclei and black holes are potential sources. Better understanding of these neutrinos is critically important in particle physics, astrophysics and astronomy, and scientists have worked for more than 50 years to design and build a high-energy neutrino detector of this type.
IceCube was designed to accomplish two major scientific goals: measure the flux, or rate, of high-energy neutrinos and try to identify some of their sources. The neutrino observatory was built and is operated by an international collaboration of more than 250 physicists and engineers. UMD physicists have been key collaborators on IceCube since 2002, when its unique design was devised and construction began.
IceCube is made up of 5,160 digital optical modules suspended along 86 strings embedded in ice beneath the South Pole. The National Science Foundation-supported observatory detects neutrinos through the tiny flashes of blue light, called Cherenkov light, produced when neutrinos interact in the ice. Computers at the IceCube laboratory collect near-real-time data from the optical sensors and send information about interesting events north via satellite. The UMD team designed the data collection system and much of IceCube's analytic software. Construction took nearly a decade, and the completed detector began gathering data in May 2011.
"IceCube is a wonderful and unique astrophysical telescope -- it is deployed deep in the Antarctic ice but looks over the entire Universe, detecting neutrinos coming through the Earth from the northern skies, as well as from around the southern skies," said Vladimir Papitashvili of the National Science Foundation (NSF) Division of Polar Programs.
In April 2012 IceCube detected two high-energy events above 1 petaelectronvolt (PeV), nicknamed Bert and Ernie, the first astrophysical neutrinos definitively recorded by a terrestrial detector. After Bert and Ernie were discovered, the IceCube team searched their records from May 2010 to May 2012 for events that fell slightly below the energy level of their original search. They discovered 26 more high-energy events, all at levels of 30 teraelectronvolts (TeV) or higher, indicative of astrophysical neutrinos. Preliminary results of this analysis were presented May 15 at the IceCube Particle Astrophysics Symposium at UW-Madison. The analysis presented in Science reveals a highly statistically significant signal (more than 4 sigma), providing solid evidence that IceCube has successfully detected high-energy extraterrestrial neutrinos, said UMD's Sullivan.
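For context (my addition, not part of the paper): a significance of 4 sigma corresponds to a very small probability that the signal is a background fluctuation. The one-sided tail probability follows from the Gaussian distribution:

```python
import math

def one_sided_p_value(sigma: float) -> float:
    """One-sided Gaussian tail probability for a significance in sigma."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

# A 4-sigma signal: background fluctuation odds of roughly 1 in 30,000
p = one_sided_p_value(4.0)
print(f"{p:.2e}")
```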
Since astrophysical neutrinos move in straight lines unimpeded by outside forces, they can act as pointers to the place in the galaxy where they originated. The 28 events recorded so far are too few to point to any one location, Sullivan said. Over the coming years, the IceCube team will watch, "like waiting for a long exposure photograph," as more measurements fill in a picture that may reveal the point of origin of these intriguing phenomena.
New detection systems for astrophysical neutrinos are also in the works. Hoffman is leading the development of the Askaryan Radio Array, a neutrino telescope that uses radio frequency, which transmits best through very cold ice, to detect the particles. Plans are underway for 37 subsurface clusters of radio antennae.
The IceCube Neutrino Observatory was built under a NSF Major Research Equipment and Facilities Construction grant, with assistance from partner funding agencies around the world. The NSF's Division of Polar Programs and Physics Division continue to support the project with a Maintenance and Operations grant, along with international support from participating institutes and their funding agencies.
UMD contributors to the IceCube collaboration include Sullivan and Hoffman; UMD faculty and staff members Erik Blaufuss, John Felde, Henrike Wissing, Alex Olivas, Donald La Dieu, and Torsten Schmidt; and graduate students Elim Cheung, Robert Hellauer, Ryan Maunu, and Michael Richman.
On 21:07 by Asveth Sreiram
Nov. 21, 2013 — The Y chromosome is a symbol of maleness, present only in males and encoding genes important for male reproduction. But live mouse offspring can be generated with assisted reproduction using germ cells from males with the Y chromosome contribution limited to only two genes: the testis determinant factor Sry and the spermatogonial proliferation factor Eif2s3y.

"Does this mean that the Y chromosome (or most of it) is no longer needed? Yes, given our current technological advances in assisted reproductive technologies," said Monika A. Ward, Associate Professor at the Institute for Biogenesis Research, John A. Burns School of Medicine, University of Hawai'i. At the same time, however, she also emphasized the importance of the Y chromosome for normal, unassisted fertilization and other aspects of male reproduction.
In a new manuscript scheduled for online publication in the journal Science on November 21, 2013, Ward and her UH colleagues describe their effort to identify the minimum Y chromosome contribution required to generate a healthy first-generation mouse, capable of reproducing a second generation on its own without further technological intervention.
For this study, Ward and her colleagues used transgenic male mice with only two Y genes, Sry and Eif2s3y. The mice were considered infertile because of meiotic and postmeiotic arrests -- that is, the germ cells that should normally develop into sperm did not fully mature in these mice -- but the researchers were able to find a few usable cells. Yasuhiro Yamauchi, a post-doctoral scholar on Ward's team, harvested these immature spermatids and used a technique called round spermatid injection (ROSI) to successfully fertilize oocytes in the laboratory. When the developed embryos were transferred to female mouse surrogate mothers, live offspring were obtained.
Because the overall efficiency of ROSI with two Y genes was lower than with regular, fertile mice, the researchers then looked to see whether the addition of other Y genes could improve it. They increased the live offspring rate by about two-fold when Sry was replaced with the sex reversal factor Sxrb, which encodes three additional Y genes. These results demonstrated that Sxrb encodes a gene or genes that enhance the progression of spermatogenesis.
The study's findings are relevant but not directly translatable to human male infertility cases. In the era of assisted reproduction technologies, it is now possible to bypass several steps of normal human fertilization using immotile, non-viable, or immature sperm. At present, ROSI is still considered experimental due to concerns regarding the safety of injecting immature germ cells and other technical difficulties. The researchers hope that the success of ROSI in mouse studies may serve to support this approach as a viable option for overcoming infertility in men in the future.
As for the human Y chromosome, the researchers agree that it's not on its way to oblivion. Its genetic information is important for developing mature sperm and for its function in normal fertilization. The same is true for mice.
"Most of the mouse Y chromosome genes are necessary for normal fertilization," Ward said. "However, when it comes to assisted reproduction, our mouse study proves that the Y chromosome contribution can be brought to a bare minimum. It may be possible to eliminate the mouse Y chromosome altogether if appropriate replacements are made for those two genes.
"
On 21:06 by Asveth Sreiram   No comments
Nov. 21, 2013 — Gamma-ray bursts are violent bursts of gamma radiation associated with exploding massive stars. For the first time ever, researchers from the Niels Bohr Institute, among others, have observed an unusually powerful gamma-ray burst in the relatively nearby universe -- a monster gamma-ray burst. The results are published in the scientific journal Science.

When astronomers observe gamma-ray bursts, they never see the original star itself; at such distances it is far too dim to be seen. But when the star dies, they can see the exploding star as a supernova.
When the star explodes as a supernova, there might be a violent burst of gamma radiation. The burst is very short and is called a gamma-ray burst. Gamma-ray bursts are extremely bright and can be seen across the entire universe, but they cannot be seen by telescopes on Earth, because Earth's atmosphere absorbs the gamma radiation. So in order to see gamma-ray bursts, astronomers use telescopes in space.
The Swift satellite, which was launched in 2004, monitors space and discovers about 100 gamma-ray bursts each year. Gamma-ray bursts are thus quite common occurrences, but in April astronomers spotted something quite unusual.
"We suddenly saw a gamma-ray burst that was extremely bright -- a monster gamma-ray burst. This one of the most powerful gamma-ray bursts we have ever observed with the Swift satellite," explains astrophysicist Daniele Malesani, Dark Cosmology Centre at the Niels Bohr Institute at the University of Copenhagen.
He is affiliated with the research group at NASA's Swift satellite and he explains that as soon as the gamma-ray burst is spotted, the satellite redirects instruments to measure X-rays, ultraviolet radiation and optical light in the visible field. It all happens very quickly, because the gamma-ray burst is over in under a minute. Then they also observe the event from telescopes on Earth.
Afterglow reveals star type
"We follow the so-called afterglow, which usually lasts a few days or for several weeks, from both Swift and from the ground-based telescopes. In this case, the burst was so powerful that we could observe the afterglow for several months. By analysing the light from the afterglow, we can study its spectral composition, which can tell us about the properties of the original star. What we have discovered is that it is a giant star with a mass that is 20-30 times the mass of the Sun, and rapidly rotatng. But its size is only 3-4 times that of the Sun, so it is extremely compact. These kinds of stars are called Wolf-Rayet stars," explains Daniele Malesani.
They have also been able to localise the star in a galaxy in the relatively near universe. The gamma-ray burst exploded when the universe was 9.9 billion years old, and its light has taken 3.75 billion years to reach us on Earth in our galaxy, the Milky Way.
On 21:05 by Asveth Sreiram   No comments
Nov. 21, 2013 — Researchers have identified a genomic variant strongly associated with sensitivity to the sun, brown hair, blue eyes -- and freckles. In the study of Icelanders the researchers uncovered an intricate pathway involving the interspersed DNA sequence, or non-coding region, of a gene that is among a few dozen that are associated with human pigmentation traits.

The study by an international team including researchers from the National Institutes of Health was reported in the Nov. 21, 2013, online edition of the journal Cell.
People whose ancestors came from geographic locations farther from the equator, such as Iceland, more commonly have less pigment in their skin, hair and eyes. People with reduced pigment are more sensitive to the sun, but can more easily draw upon sunlight to generate vitamin D3, a nutrient essential for healthy bones.
The researchers, including scientists from the National Human Genome Research Institute (NHGRI), a part of NIH, analyzed data from a genome-wide association study (GWAS) of 2,230 Icelanders. A GWAS compares hundreds of thousands of common differences across individuals' DNA to see if any of those variants are associated with a known trait.
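To make that comparison concrete, here is a minimal sketch of the statistical test at the heart of a GWAS: checking whether one variant's allele counts differ between people with and without a trait. The 2x2 counts below are invented for illustration and are not data from the Icelandic study.

```python
# Minimal sketch of a single-variant association test.
# The contingency-table counts are invented for illustration only.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (e.g. variant carriers / non-carriers vs. trait / no trait)."""
    n = a + b + c + d
    # Shortcut formula for 2x2 contingency tables
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# carriers with trait, carriers without, non-carriers with, non-carriers without
stat = chi_square_2x2(60, 40, 30, 70)
print(round(stat, 2))  # a large statistic suggests a real association
```

A real GWAS repeats a test like this for hundreds of thousands of variants, with stringent corrections for multiple testing.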
"Genes involved in skin pigmentation also have important roles in human health and disease," said NHGRI Scientific Director Dan Kastner, M.D., Ph.D. "This study explains a complex molecular pathway that may also contribute insights into skin diseases, such as melanoma, which is caused by the interaction of genetic susceptibility with environmental factors."
The GWAS led the researchers to focus on the interferon regulatory factor 4 (IRF4) gene, previously associated with immunity. IRF4 makes a protein that spurs production of interferons, proteins that fight off viruses or harmful bacteria. The researchers noted from genomic databases that the IRF4 gene is expressed at high levels only in lymphocytes, a type of white blood cell important in the immune system, and in melanocytes, specialized skin cells that make the pigment melanin. The new study established an association between the IRF4 gene and the pigmentation trait.
"Genome-wide association studies are uncovering many genomic variants that are associated with human traits and most of them are found in non-protein-coding regions of the genome," said William Pavan, Ph.D., co-author and senior investigator, Genetic Disease Research Branch, NHGRI. "Exploring the biological pathways and molecular mechanisms that involve variants in these under-explored portions of the genome is a challenging part of our work. This is one of a few cases where scientists have been able to associate a variant in a non-coding genomic region with a functional mechanism."
The Icelandic GWAS yielded millions of variants among individuals in the study. The researchers narrowed their study to 16,280 variants located in the region around the IRF4 gene. Next, they used an automated fine-mapping process to explore the set of variants in IRF4 in 95,085 people from Iceland. A silicon chip used in the automated process enables a large number of variants to be included in the analysis.
The data revealed that a variant in a non-coding, enhancer region that regulates the IRF4 gene is associated with the combined trait of sunlight sensitivity, brown hair, blue eyes and freckles. The finding places IRF4 among more than 30 genes now associated with pigmentation, including a gene variant previously found in people with freckles and red hair.
Part of the research team, including the NHGRI co-authors, studied IRF4's role in the pigment-related regulatory pathway. They demonstrated through cell-culture studies and tests in mice and zebrafish that two transcription factors -- proteins that turn genes on or off -- interact in the gene pathway with IRF4, ultimately activating expression of an enzyme called tyrosinase. One of the pathway transcription factors, MITF, is known as the melanocyte master regulator. It activates expression of IRF4, but only in the presence of the TFAP2A transcription factor. A greater expression of tyrosinase yields a higher production of the pigment melanin in melanocytes.
"This non-coding sequence harboring the variant displayed many hallmarks of having a function and being involved in gene regulation within melanocyte populations," said Andy McCallion, Ph.D., a co-author at Johns Hopkins University, Baltimore, and collaborator with the NHGRI group.
The newly discovered variant acts like a dimmer switch. When the switch in the IRF4 enhancer is in the on position, ample pigment is made. Melanin pigment gets transferred from melanocytes to keratinocytes, a type of skin cell near the surface of the skin, and protects the skin from UV radiation in sunlight. If the switch is turned down, as is the case when it contains the discovered variant, the pathway is less effective, resulting in reduced expression of tyrosinase and melanin production. The exact mechanism that generates freckling is not yet known, but Dr. Pavan suggests that epigenetic variation -- a layer of instructions in addition to sequence variation -- may play a role in the freckling trait.
More research is needed to determine the mechanism by which IRF4 is involved in how melanocytes respond to UV damage, which can induce freckling and is linked to melanoma, the type of skin cancer associated with the highest mortality.

Thursday, 21 November 2013

On 16:11 by Asveth Sreiram   No comments
Oct. 30, 2013 — Video gaming causes increases in the size of brain regions responsible for spatial orientation, memory formation and strategic planning, as well as fine motor skills. This has been shown in a new study conducted at the Max Planck Institute for Human Development and Charité University Medicine St. Hedwig-Krankenhaus. The positive effects of video gaming may also prove relevant in therapeutic interventions targeting psychiatric disorders.

In order to investigate how video games affect the brain, scientists in Berlin asked adults to play the video game "Super Mario 64" over a period of two months for 30 minutes a day. A control group did not play video games. Brain volume was quantified using magnetic resonance imaging (MRI). In comparison to the control group, the video gaming group showed increases of grey matter, in which the cell bodies of the nerve cells of the brain are situated. These plasticity effects were observed in the right hippocampus, right prefrontal cortex and the cerebellum. These brain regions are involved in functions such as spatial navigation, memory formation, strategic planning and fine motor skills of the hands. Most interestingly, these changes were more pronounced the stronger the participants' reported desire to play the video game.
"While previous studies have shown differences in brain structure of video gamers, the present study can demonstrate the direct causal link between video gaming and a volumetric brain increase. This proves that specific brain regions can be trained by means of video games," says study leader Simone Kühn, senior scientist at the Center for Lifespan Psychology at the Max Planck Institute for Human Development. Therefore Simone Kühn and her colleagues assume that video games could be therapeutically useful for patients with mental disorders in which brain regions are altered or reduced in size, e.g. schizophrenia, post-traumatic stress disorder or neurodegenerative diseases such as Alzheimer's dementia.
"Many patients will accept video games more readily than other medical interventions," adds the psychiatrist Jürgen Gallinat, co-author of the study at Charité University Medicine St. Hedwig-Krankenhaus. Further studies to investigate the effects of video gaming in patients with mental health issues are planned. A study on the effects of video gaming in the treatment of post-traumatic stress disorder is currently ongoing
.
On 16:10 by Asveth Sreiram   No comments
Nov. 20, 2013 — A computer program called the Never Ending Image Learner (NEIL) is running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them on its own and, as it builds a growing visual database, gathering common sense on a massive scale.
NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.
But NEIL also makes associations between these things to obtain common sense information that people just seem to know without ever saying -- that cars often are found on roads, that buildings tend to be vertical and that ducks look sort of like geese. Based on text references, it might seem that the color associated with sheep is black, but people -- and NEIL -- nevertheless know that sheep typically are white.
"Images are the best way to learn visual properties," said Abhinav Gupta, assistant research professor in Carnegie Mellon's Robotics Institute. "Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well."
A computer cluster has been running the NEIL program since late July and already has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.
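As a rough illustration of the kind of association mining described above, the sketch below counts which labels co-occur across a handful of invented image label sets and keeps the recurring pairs. NEIL's actual pipeline is far more sophisticated; this only shows the co-occurrence idea.

```python
# Toy sketch of mining "common sense" associations from labeled images:
# count which labels appear together and keep the strongest pairs.
# The image label sets are invented for illustration.
from collections import Counter
from itertools import combinations

images = [
    {"car", "road", "building"},
    {"car", "road"},
    {"duck", "water"},
    {"car", "road", "tree"},
    {"duck", "water", "tree"},
]

pair_counts = Counter()
for labels in images:
    for pair in combinations(sorted(labels), 2):
        pair_counts[pair] += 1

# Keep pairs seen in at least two images as candidate associations,
# e.g. "cars often are found on roads"
associations = sorted(p for p, n in pair_counts.items() if n >= 2)
print(associations)
```

Scaled up to millions of images and thousands of labels, counts like these yield statements such as "zebras tend to be found in savannahs."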
The public can now view NEIL's findings at the project website, http://www.neil-kb.com.
The research team, including Xinlei Chen, a Ph.D. student in CMU's Language Technologies Institute, and Abhinav Shrivastava, a Ph.D. student in robotics, will present its findings on Dec. 4 at the IEEE International Conference on Computer Vision in Sydney, Australia.
One motivation for the NEIL project is to create the world's largest visual structured knowledge base, where objects, scenes, actions, attributes and contextual relationships are labeled and catalogued.
"What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta said.
Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast -- Facebook alone holds more than 200 billion images -- that the only hope to analyze it all is to teach computers to do it largely by themselves.
Shrivastava said NEIL can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process. A Google Image search, for instance, might convince NEIL that "pink" is just the name of a singer, rather than a color.
"People don't always know how or what to teach computers," he observed. "But humans are good at telling computers when they are wrong."
People also tell NEIL what categories of objects, scenes, etc., to search and analyze. But sometimes, what NEIL finds can surprise even the researchers. It can be anticipated, for instance, that a search for "apple" might return images of fruit as well as laptop computers. But Gupta and his landlubbing team had no idea that a search for F-18 would identify not only images of a fighter jet, but also of F18-class catamarans.
As its search proceeds, NEIL develops subcategories of objects -- tricycles can be for kids, for adults and can be motorized, or cars come in a variety of brands and models. And it begins to notice associations -- that zebras tend to be found in savannahs, for instance, and that stock trading floors are typically crowded.
NEIL is computationally intensive, the research team noted. The program runs on two clusters of computers that include 200 processing cores.
On 16:09 by Asveth Sreiram   No comments
Nov. 20, 2013 — The Sun is our most promising source of clean and renewable energy. The energy that reaches Earth from the Sun in an hour is almost equivalent to that consumed by humans over a year. Solar cells can tap this massive source of energy by converting light into an electrical current. However, these devices still require significant improvements in efficiency before they can compete with more traditional energy sources.

Xiaogang Liu, Alfred Ling Yoong Tok and their co-workers at the A*STAR Institute of Materials Research and Engineering, the National University of Singapore and Nanyang Technological University, Singapore, have now developed a method for using nanostructures to increase the fraction of incoming light that is absorbed by a light-harvesting material. The method is ideal for use with high-efficiency solar cells.
Solar cells absorb packets of optical energy called photons and then use the photons to generate electrons. The energy of some photons from the Sun, however, is too small to create electrons in this way and so is lost. Liu, Tok and their co-workers circumvented this loss using an effect known as upconversion. In this process, two low-energy photons are combined to produce a single high-energy photon. This energetic photon can then be absorbed by the active region of the solar cell.
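The energy bookkeeping behind upconversion follows the Planck relation E = hc/λ: two photons of twice the wavelength together carry the energy of one shorter-wavelength photon. The quick check below uses 980 nm (the laser wavelength used in the experiment) and 490 nm as an illustrative upconverted wavelength; it is only an arithmetic sketch, not a model of the device.

```python
# Back-of-envelope check of upconversion energetics: two 980 nm photons
# carry the same total energy as one 490 nm photon, since E = h*c/wavelength.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon of the given wavelength (nanometers)."""
    return H * C / (wavelength_nm * 1e-9)

two_low = 2 * photon_energy_joules(980)   # two low-energy photons
one_high = photon_energy_joules(490)      # one high-energy photon
print(abs(two_low - one_high) < 1e-21)    # energies agree to rounding
```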
The researchers' device comprised a titanium oxide frame filled with a regular arrangement of air pores roughly half a micrometer across -- a structure called an inverse opal (see image). Spheres of the upconversion material, which were 30 nanometers in diameter, sat on the surface of these pores. Tiny light-sensitive quantum dots made of crystals of cadmium selenide coated these nanospheres.
The quantum dots efficiently absorbed incoming light, either directly from an external source or as upconverted photons from the nanospheres, and converted it to electrons. This charge then flowed into the titanium oxide frame. "The titanium oxide inverse opal creates a continuous electron-conducting pathway and provides a large interfacial surface area to support the upconversion nanoparticles and the quantum dots," explains Liu.
Liu, Tok and the team tested the device by firing laser light at it with a wavelength of 980 nanometers, which is not normally absorbed by cadmium selenide quantum dots. As expected, they measured a much higher electrical current than in the same experiment performed with a device without the upconversion nanospheres. "We believe that the enhanced energy transfer and light harvesting may afford a highly competitive advantage over conventional silicon solar cells," says Liu.
On 16:08 by Asveth Sreiram   No comments
Nov. 8, 2013 — Throughout our universe, tucked inside galaxies far, far away, giant black holes are pairing up and merging. As the massive bodies dance around each other in close embraces, they send out gravitational waves that ripple space and time themselves, even as the waves pass right through our planet Earth.

Scientists know these waves, predicted by Albert Einstein's theory of relativity, exist but have yet to directly detect one. In the race to catch the waves, one strategy -- called pulsar-timing arrays -- has reached a milestone not through detecting any gravitational waves, but in revealing new information about the frequency and strength of black hole mergers.
"We expect that many gravitational waves are passing through us all the time, and now we have a better idea of the extent of this background activity," said Sarah Burke-Spolaor, co-author of a new Science paper published Oct. 18, which describes research she contributed to while based at NASA's Jet Propulsion Laboratory in Pasadena, Calif. Burke-Spolaor is now at the California Institute of Technology in Pasadena.
Gravitational waves, if detected, would reveal more information about black holes as well as one of the four fundamental forces of nature: gravity.
The team's inability to detect any gravitational waves in the recent search actually has its own benefits, because it reveals new information about supermassive black hole mergers -- their frequency, distance from Earth and masses. One theory of black hole growth -- that mergers alone are responsible for black holes gaining mass -- has now hit the theorists' cutting room floor.
The results come from the Commonwealth Scientific and Industrial Research Organization's (CSIRO) Parkes radio telescope in eastern Australia. The study was jointly led by Ryan Shannon of CSIRO, and Vikram Ravi, of the University of Melbourne and CSIRO.
Pulsar-timing arrays are designed to catch the subtle gravitational waves using telescopes on the ground, and spinning stars called pulsars. Pulsars are the burnt-out cores of exploded stars that send out beams of radio waves like lighthouse beacons. The timing of the pulsars' rotation is so precise that researchers say they are akin to atomic clocks.
When gravitational waves pass through an array of multiple pulsars, 20 in the case of the new study, they set the pulsars bobbing like buoys. Researchers recording the radio waves from the pulsars can then piece together the background hum of waves.
"The gravitational waves cause the space between Earth and pulsars to stretch and squeeze," said Burke-Spolaor.
The new study used the Parkes Pulsar Timing Array, which got its start in the 1990s. According to the research team, the array, at its current sensitivity, will be able to detect a gravitational wave within 10 years.
Researchers at JPL are currently developing a similar precision pulsar-timing capability for NASA's Deep Space Network, a system of large dish antennas located around Earth that tracks and communicates with deep-space spacecraft. During gaps in the network's tracking schedules, the antennas can be used to precisely measure the timing of pulsars' radio waves. Because the Deep Space Network's antennas are distributed around the globe, they can see pulsars across the whole sky, which improves sensitivity to gravitational waves.
"Right now, the focus in the pulsar-timing array communities is to develop more sensitive technologies and to establish long-term monitoring programs of a large ensemble of the pulsars," said Walid Majid, the principal investigator of the Deep Space Network pulsar-timing program at JPL. "All the strategies for detecting gravitational waves, including LIGO [Laser Interferometer Gravitational-Wave Observatory], are complementary, since each technique is sensitive to detection of gravitational waves at very different frequencies. While some might characterize this as a race, in the end, the goal is to detect gravitational waves, which will usher in the beginning of gravitational wave astronomy. That is the real exciting part of this whole endeavor."
The ground-based LIGO observatory is based in Louisiana and Washington. It is a joint project of Caltech and the Massachusetts Institute of Technology, Cambridge, Mass., with funding from the National Science Foundation. The European Space Agency is developing the space-based LISA Pathfinder (Laser Interferometer Space Antenna), a proof-of-concept mission for a future space observatory to detect gravitational waves. LIGO, LISA and pulsar-timing arrays would all detect different frequencies of gravitational waves and thus are sensitive to various types of merger events.
A video about the new Parkes findings from Swinburne University of Technology in Melbourne, Australia, is online at: http://astronomy.swin.edu.au/production/blackhole/.
On 16:06 by Asveth Sreiram   No comments
Nov. 19, 2013 — Ancient viruses from Neanderthals have been found in modern human DNA by researchers at Oxford University and Plymouth University.

The researchers compared genetic data from fossils of Neanderthals and another group of ancient human ancestors called Denisovans to data from modern-day cancer patients. They found evidence of Neanderthal and Denisovan viruses in the modern human DNA, suggesting that the viruses originated in our common ancestors more than half a million years ago.
This latest finding, reported in Current Biology, will enable scientists to further investigate possible links between ancient viruses and modern diseases including HIV and cancer, and was supported by the Wellcome Trust and Medical Research Council (MRC).
Around 8% of human DNA is made up of 'endogenous retroviruses' (ERVs), DNA sequences from viruses which pass from generation to generation. This is part of the 90% of our DNA with no known function, sometimes called 'junk' DNA.
'I wouldn't write it off as "junk" just because we don't know what it does yet,' said Dr Gkikas Magiorkinis, an MRC Fellow at Oxford University's Department of Zoology. 'Under certain circumstances, two "junk" viruses can combine to cause disease -- we've seen this many times in animals already. ERVs have been shown to cause cancer when activated by bacteria in mice with weakened immune systems.'
Dr Magiorkinis and colleagues are now looking to further investigate these ancient viruses, which belong to the HML2 family of viruses, for possible links with cancer and HIV.
'How HIV patients respond to HML2 is related to how fast a patient will progress to AIDS, so there is clearly a connection there,' said Dr Magiorkinis, co-author of the latest study. 'HIV patients are also at much higher risk of developing cancer, for reasons that are poorly-understood. It is possible that some of the risk factors are genetic, and may be shared with HML2. They also become reactivated in cancer and HIV infection, so might prove useful as a therapy target in the future.'
The team are now investigating whether these ancient viruses affect a person's risk of developing diseases such as cancer. Combining evolutionary theory and population genetics with cutting-edge genetic sequencing technology, they will test if these viruses are still active or cause disease in modern humans.
'Using modern DNA sequencing of 300 patients, we should be able to see how widespread these viruses are in the modern population. We would expect viruses with no negative effects to have spread throughout most of the modern population, as there would be no evolutionary pressure against it. If we find that these viruses are less common than expected, this may indicate that the viruses have been inactivated by chance or that they increase mortality, for example through increased cancer risk,' said Dr Robert Belshaw, formerly of Oxford University and now a lecturer at Plymouth University, who led the research.
'Last year, this research wouldn't have been possible. There were some huge technological breakthroughs made this summer, and I expect we'll see even greater advances in 2014. Within the next 5 years, we should be able to say for sure whether these ancient viruses play a role in modern human diseases.'
On 16:05 by Asveth Sreiram   No comments
Nov. 20, 2013 — Results from a DNA study of a young boy's skeletal remains believed to be 24,000 years old could turn the archaeological world upside down -- it's been demonstrated that nearly 30 percent of modern Native Americans' ancestry came from this youngster's gene pool, suggesting First Americans came directly from Siberia, according to a research team that includes a Texas A&M University professor.

Kelly Graf, assistant professor in the Center for the Study of First Americans and Department of Anthropology at Texas A&M, is part of an international team spearheaded by Eske Willerslev and Maanasa Raghaven from the Centre for GeoGenetics at the University of Copenhagen, Denmark and additional researchers from Sweden, Russia, United Kingdom, University of Chicago and University of California-Berkeley. Their work, funded by the Danish National Science Foundation, Lundbeck Foundation, and the National Science Foundation, is published in the current issue of Nature magazine.
Graf and Willerslev conceived the project and traveled to the Hermitage State Museum in St. Petersburg, Russia, where the remains are now housed, to collect samples for ancient DNA analysis. The skeleton was first discovered in the late 1920s near the village of Mal'ta in south-central Siberia, and since then it has been referred to as "the Mal'ta child" because until this DNA study the biological sex of the skeleton was unknown.
"Now we can say with confidence that this individual was a male" says Graf.
Graf helped extract DNA material from the boy's upper arm and "the results surprised all of us quite a bit," she explains.
"It shows he had close genetic ties to today's Native Americans and some western Eurasians, specifically some groups living in central Asia, South Asia, and Europe. Also, he shared close genetic ties with other Ice-Age western Eurasians living in European Russia, Czech Republic and even Germany. We think these Ice-Age people were quite mobile and capable of maintaining a far-reaching gene pool that extended from central Siberia all the way west to central Europe."
Another significant result of the study is that the Mal'ta boy's people were also ancestors of Native Americans, explaining why some early Native American skeletons such as Kennewick Man were interpreted to have some European traits.
"Our study proves that Native Americans ancestors migrated to the Americas from Siberia and not directly from Europe as some have recently suggested," Graf explains.
The boy's DNA is the oldest complete human genome sequenced so far, the study shows. Also found near the boy's remains were flint tools, a beaded necklace and what appears to be pendant-like items, all apparently placed in the burial as grave goods.
The discovery raises new questions about the timing of human entry in Alaska and ultimately North America, a topic hotly debated in First Americans studies.
"Though our results cannot speak directly to this debate, they do indicate Native American ancestors could have been in Beringia -- extreme northeastern Russia and Alaska -- any time after 24,000 years ago and therefore could have colonized Alaska and the Americas much earlier than 14,500 years ago, the age suggested by the archaeological record."
"What we need to do is continue searching for earlier sites and additional clues to piece together this very big puzzle.
"
On 16:04 by Asveth Sreiram   No comments
Nov. 20, 2013 — A Florida State University scientist has uncovered what may be the first recognized example of ancient Martian crust.

The work of Munir Humayun -- a professor in FSU's Department of Earth, Ocean and Atmospheric Science and a researcher at the National High Magnetic Field Laboratory (MagLab) -- is based on an analysis of a 4.4 billion-year-old Martian meteorite that was unearthed by Bedouin tribesmen in the Sahara desert. The rock (NWA 7533) may be the first recognized sample of ancient Martian crust and holds a wealth of information about the origin and age of the Red Planet's crust.
Humayun's groundbreaking discoveries about the crust and what it reveals about the Red Planet's origins will be published in the journal Nature.
In order to detect minute amounts of chemicals in this meteorite, Humayun and his collaborators performed complex analysis on the meteorite using an array of highly sophisticated mass spectrometers in the MagLab's geochemistry department. High concentrations of trace metals such as iridium, an element that indicates meteoritic bombardment, showed that this meteorite came from the elusive cratered area of Mars' southern highlands.
"This cratered terrain has been long thought to hold the keys to Mars' birth and early childhood," Humayun said.
While craters cover more than half of Mars, this is the first meteorite sample to come from this area and the first time researchers have been able to study Mars' early crustal growth.
Using the chemical information found in pieces of soil contained in the meteorite, the researchers were able to calculate the thickness of Mars' crust. Their calculation aligned with estimates from independent spacecraft measurements and confirms that Mars did not experience a giant impact that melted the entire planet in its early history.
Using a powerful microprobe at Curtin University in Perth, Australia, the team dated special crystals within the meteorite -- called zircons -- at an astounding 4.4 billion years old.
"This date is about 100 million years after the first dust condensed in the solar system," Humayun said. "We now know that Mars had a crust within the first 100 million years of the start of planet building, and that Mars' crust formed concurrently with the oldest crusts on Earth and the Moon."
Humayun and his collaborators hypothesize that these trailblazing discoveries are just the tip of the iceberg of what continued research on this unique meteorite will uncover. Further studies may reveal more clues about the impact history of Mars, the nature of Martian zircons and the makeup of the earliest sediments on the Red Planet.
Humayun's international team of collaborators includes curator of meteorites Brigitte Zanda of the National Museum of Natural History (Muséum National d'Histoire Naturelle) in Paris; A. Nemchin, M. Grange and A. Kennedy of Curtin University's Department of Applied Geology in Perth, Australia; and scientists R.H. Hewins, J.P. Lorand, C. Göpel, C. Fieni, S. Pont and D. Deldicque.
On 16:03 by Asveth Sreiram   No comments
Nov. 20, 2013 — Data from computed tomography (CT) scans can be used with three-dimensional (3-D) printers to make accurate copies of fossilized bones, according to new research published online in the journal Radiology.

Fossils are often stored in plaster casts, or jackets, to protect them from damage. Getting information about a fossil typically requires the removal of the plaster and all the sediment surrounding it, which can lead to loss of material or even destruction of the fossil itself.
German researchers studied the feasibility of using CT and 3-D printers to nondestructively separate fossilized bone from its surrounding sediment matrix and produce a 3-D print of the fossilized bone itself.
"The most important benefit of this method is that it is non-destructive, and the risk of harming the fossil is minimal," said study author Ahi Sema Issever, M.D., from the Department of Radiology at Charité Campus Mitte in Berlin. "Also, it is not as time-consuming as conventional preparation."
Dr. Issever and colleagues applied the method to an unidentified fossil from the Museum für Naturkunde, a major natural history museum in Berlin. The fossil and others like it were buried under rubble in the basement of the museum after a World War II bombing raid. Since then, museum staff members have had difficulty sorting and identifying some of the plaster jackets.
Researchers performed CT on the unidentified fossil with a 320-slice multi-detector system. The different attenuation, or absorption of radiation, through the bone compared with the surrounding matrix enabled clear depiction of a fossilized vertebral body.
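The contrast the researchers exploited -- bone attenuating radiation differently from the surrounding matrix -- amounts to separating voxels by their CT values. A minimal thresholding sketch illustrates the idea; the values and the threshold below are invented for illustration and are not taken from the study:

```python
def segment_bone(ct_slice, threshold):
    """Label voxels as bone (1) or matrix (0) by thresholding
    CT attenuation values. Real workflows use calibrated units
    and 3-D volumes; this toy version works on one 2-D slice."""
    return [[1 if value >= threshold else 0 for value in row]
            for row in ct_slice]

# Invented attenuation values: high numbers stand in for dense,
# fossilized bone; low numbers for the sediment matrix.
ct_slice = [[120, 900, 950],
            [110, 880, 130],
            [100, 105, 115]]
mask = segment_bone(ct_slice, threshold=500)
# mask -> [[0, 1, 1], [0, 1, 0], [0, 0, 0]]
```

The resulting mask is what a 3-D printer pipeline would turn into a surface model of the bone alone, leaving the matrix behind.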
After studying the CT scan and comparing it to old excavation drawings, the researchers were able to trace the fossil's origin to the Halberstadt excavation, a major dig from 1910 to 1927 in a clay pit south of Halberstadt, Germany. In addition, the CT study provided valuable information about the condition and integrity of the fossil, showing multiple fractures and destruction of the front rim of the vertebral body.
Furthermore, the CT dataset helped the researchers build an accurate reconstruction of the fossil with selective laser sintering, a technology that uses a high-powered laser to fuse together materials to make a 3-D object.
Dr. Issever noted that the findings come at a time when advances in technology and cheaper availability of 3-D printers are making them more common as a tool for research. Digital models of the objects can be transferred rapidly among researchers, and endless numbers of exact copies may be produced and distributed, greatly advancing scientific exchange, Dr. Issever said. The technology also potentially enables a global interchange of unique fossils with museums, schools and other settings.
"The digital dataset and, ultimately, reproductions of the 3-D print may easily be shared, and other research facilities could thus gain valuable informational access to rare fossils, which otherwise would have been restricted," Dr. Issever said. "Just like Gutenberg's printing press opened the world of books to the public, digital datasets and 3-D prints of fossils may now be distributed more broadly, while protecting the original intact fossil."
On 16:03 by Asveth Sreiram   No comments
Nov. 18, 2013 — A team of Columbia Engineering researchers, led by Mechanical Engineering Professor James Hone and Electrical Engineering Professor Kenneth Shepard, has taken advantage of graphene's special properties -- its mechanical strength and electrical conduction -- and created a nano-mechanical system that can create FM signals, in effect the world's smallest FM radio transmitter.

"This work is significant in that it demonstrates an application of graphene that cannot be achieved using conventional materials," Hone says. "And it's an important first step in advancing wireless signal processing and designing ultrathin, efficient cell phones. Our devices are much smaller than any other sources of radio signals, and can be put on the same chip that's used for data processing."
Graphene, a single atomic layer of carbon, is the strongest material known to man, and also has electrical properties superior to the silicon used to make the chips found in modern electronics. The combination of these properties makes graphene an ideal material for nanoelectromechanical systems (NEMS), which are scaled-down versions of the microelectromechanical systems (MEMS) used widely for sensing of vibration and acceleration. For example, Hone explains, MEMS sensors figure out how your smartphone or tablet is tilted to rotate the screen.
In this new study, the team took advantage of graphene's mechanical 'stretchability' to tune the output frequency of their custom oscillator, creating a nanomechanical version of an electronic component known as a voltage controlled oscillator (VCO). With a VCO, explains Hone, it is easy to generate a frequency-modulated (FM) signal, exactly what is used for FM radio broadcasting. The team built a graphene NEMS whose frequency was about 100 megahertz, which lies right in the middle of the FM radio band (87.7 to 108 MHz). They used low-frequency musical signals (both pure tones and songs from an iPhone) to modulate the 100 MHz carrier signal from the graphene, and then retrieved the musical signals again using an ordinary FM radio receiver.
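The VCO principle described here -- letting the message signal steer the carrier's instantaneous frequency -- can be sketched in a few lines. The sample rate, carrier and deviation below are toy values chosen so the example runs quickly (the actual device operated near 100 MHz), and `fm_modulate` is an illustrative helper, not the team's code:

```python
import math

def fm_modulate(message, f_carrier, f_dev, f_sample):
    """Frequency-modulate `message` onto a carrier: at each sample,
    the instantaneous frequency is f_carrier + f_dev * message[n]."""
    phase = 0.0
    out = []
    for m in message:
        phase += 2 * math.pi * (f_carrier + f_dev * m) / f_sample
        out.append(math.cos(phase))
    return out

f_s = 1_000_000  # sample rate in Hz (toy value)
# A 1 kHz tone standing in for the musical signal.
tone = [math.sin(2 * math.pi * 1000 * n / f_s) for n in range(1000)]
signal = fm_modulate(tone, f_carrier=100_000, f_dev=5_000, f_sample=f_s)
```

An ordinary FM receiver recovers the tone by tracking those frequency swings, which is exactly how the team played back their iPhone songs.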
"This device is by far the smallest system that can create such FM signals," says Hone.
While graphene NEMS will not be used to replace conventional radio transmitters, they have many applications in wireless signal processing. Explains Shepard, "Due to the continuous shrinking of electrical circuits known as 'Moore's Law', today's cell phones have more computing power than systems that used to occupy entire rooms. However, some types of devices, particularly those involved in creating and processing radio-frequency signals, are much harder to miniaturize. These 'off-chip' components take up a lot of space and electrical power. In addition, most of these components cannot be easily tuned in frequency, requiring multiple copies to cover the range of frequencies used for wireless communication."
Graphene NEMS can address both problems: they are very compact and easily integrated with other types of electronics, and their frequency can be tuned over a wide range because of graphene's tremendous mechanical strength.
"There is a long way to go toward actual applications in this area," notes Hone, "but this work is an important first step. We are excited to have demonstrated successfully how this wonder material can be used to achieve a practical technological advancement -- something particularly rewarding to us as engineers."
The Hone and Shepard groups are now working on improving the performance of the graphene oscillators to have lower noise. At the same time, they are also trying to demonstrate integration of graphene NEMS with silicon integrated circuits, making the oscillator design even more compact.
For this study, the team worked with research groups from the School's Departments of Mechanical Engineering, Electrical Engineering, and Physics. This work is supported by a 2012 Qualcomm Innovation Fellowship and the U.S. Air Force, using facilities at the Cornell Nano-Scale Facility and the Center for Engineering and Physical Science Research (CEPSR) Clean Room at Columbia University.
On 16:02 by Asveth Sreiram   No comments
Nov. 18, 2013 — Why do the faces of some primates contain so many different colors -- black, blue, red, orange and white -- that are mixed in all kinds of combinations and often striking patterns while other primate faces are quite plain?
UCLA biologists reported last year on the evolution of the faces of 129 primate species from Central and South America. The same research team now reports on the faces of 139 Old World African and Asian primate species that have been diversifying over some 25 million years.
With these Old World monkeys and apes, the species that are more social have more complex facial patterns, the biologists found. Species that have smaller group sizes tend to have simpler faces with fewer colors, perhaps because the presence of more color patches in the face results in greater potential for facial variation across individuals within species. This variation could aid in identification, which may be a more difficult task in larger groups.
Species that live in the same habitat with other closely related species tend to have more complex facial patterns, suggesting that complex faces may also aid in species recognition, the life scientists found.
"Humans are crazy for Facebook, but our research suggests that primates have been relying on the face to tell friends from competitors for the last 50 million years and that social pressures have guided the evolution of the enormous diversity of faces we see across the group today," said Michael Alfaro, an associate professor of ecology and evolutionary biology in the UCLA College of Letters and Science and senior author of the study.
"Faces are really important to how monkeys and apes can tell one another apart," he said. "We think the color patterns have to do both with the importance of telling individuals of your own species apart from closely related species and for social communication among members of the same species."
Most Old World monkeys and apes are social, and some species, like the mandrills, can live in groups with up to 800 members, said co-author Jessica Lynch Alfaro, an adjunct assistant professor in the UCLA Department of Anthropology and UCLA's Institute for Society and Genetics. At the other extreme are solitary species, like the orangutans. In most orangutan populations, adult males travel and sleep alone, and females are accompanied only by their young, she said. Some primates, like chimpanzees, have "fission-fusion societies," where they break up into small sub-groups and come together occasionally in very large communities. Others, like the hamadryas baboons, have tiered societies with harems, clans, bands and troops, she said.
"Our research suggests increasing group size puts more pressure on the evolution of coloration across different sub-regions of the face," Michael Alfaro said.
This allows members of a species to have "more communication avenues, a greater repertoire of facial vocabulary, which is advantageous if you're interacting with many members of your species," he said.
The research, federally funded by the National Science Foundation and supported through a postdoctoral fellowship from the UCLA Institute for Society and Genetics, was published Nov. 11 in the journal Nature Communications.
Lead study author Sharlene Santana used photographs of primate faces for her analysis and devised a new method to quantify the complex patterns of primate faces. She divided each face into several regions; classified the color of each part of the face, including the hair and skin; and assigned a score based on the total number of different colors across the facial regions. This numerical score is called the "facial complexity" score. The life scientists then studied how the complexity scores of primate faces were related to primates' social systems.
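The scoring scheme as described -- partition the face into regions, classify each region's color, and count the distinct colors -- can be approximated with a short sketch. The region names and colors below are invented for illustration and are not Santana's actual region scheme:

```python
def facial_complexity(region_colors):
    """Facial complexity score: the number of distinct colors
    observed across all facial regions (a simplified reading of
    the scoring method described in the article)."""
    return len(set(region_colors.values()))

# Hypothetical region-to-color classification for a mandrill-like face.
mandrill_like = {"nose": "red", "muzzle": "blue", "crown": "olive",
                 "cheeks": "blue", "beard": "orange"}
print(facial_complexity(mandrill_like))  # 4 distinct colors
```

A plain, uniformly colored face would score 1; a mandrill-style face with several color patches scores higher, which is the axis the team then compared against group size and habitat.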
The habitat where species live presents many potential pressures that could have influenced the evolution of facial coloration. To assess how facial colors are related to physical environments, the researchers analyzed environmental variables such as geographic location, canopy density, rainfall and temperature. They also used statistical methods that took into account the evolutionary history and relationships among the primate groups to better understand the evolution of facial diversity and complexity.
While facial complexity was related to social variables, such as group size and the number of closely related species in the same habitat, facial pigmentation was best explained by ecological and spatial factors. Where a species lives is a good predictor of its degree of facial pigmentation -- how light or dark the face is.
"Our map shows clearly the geographic trend in Africa of primate faces getting darker nearer to the equator and lighter as we move farther away from the equator," Lynch Alfaro said. "This is the same trend we see on an intra-species level for human skin pigmentation around the globe."
Species living in more tropical and more densely forested habitats also tend to have darker, more pigmented faces. But the complexity of facial color patterns is not related to habitat type.
"We found that for African primates, faces tend to be light or dark depending on how open or closed the habitat is and on how much light the habitat receives," Alfaro said. "We also found that no matter where you live, if your species has a large social group, then your face tends to be more complex. It will tend to be darker and more complex if you're in a closed habitat in a large social group, and it will tend to be lighter and more complex if you're in an open habitat with a large social group. Darkness or lightness is explained by geography and habitat type. Facial complexity is better explained by the size of your social group."
In their research on primates from Central and South America published last year, the scientists were surprised to find a different pattern. For these primates, species that lived in larger groups had more plain facial patterns.
"We expected to find similar trends across all primate radiations -- that is, that the faces of highly social species would have more complex patterning," said Santana, who conducted the research as a postdoctoral fellow with the UCLA Department of Ecology and Evolutionary Biology and UCLA's Institute for Society and Genetics and who is now an assistant professor at the University of Washington and curator of mammals at the Burke Museum of Natural History and Culture. "We were surprised by the results in our original study on neotropical (Central and South American) primates."
In the new study, they did find the predicted trends, but they also found differences across primate groups -- differences they said they found intriguing. Are primate groups using their faces differently?
"In the present study, great apes had significantly lower facial complexity compared to monkeys," Lynch Alfaro said. "This may be because apes are using their faces for highly complex facial expressions and these expressions would be obscured by more complex facial color patterns. There may be competing pressures for and against facial pattern complexity in large groups, and different lineages may solve this problem in different ways."
"Our research shows that being more or less social is a key explanation for the facial diversity that we see," Alfaro said. "Ecology is also important, such as camouflage and thermal regulation, but our research suggests that faces have evolved along with the diversity of social behaviors in primates, and that is the big cause of facial diversity."
Alfaro and his colleagues serve as "evolutionary detectives," asking what factors produced the patterns of species richness and diversity of traits.
"When evolutionary biologists see these striking patterns of richness, we want to understand the underlying causes," he said.
Human faces were not part of the analysis, although humans also belong to the clade Catarrhini, which includes Old World monkeys and apes.
Andrew Noonan, a former UCLA undergraduate student who conducted research in Alfaro's laboratory, was also a co-author of this research.