
Monday, November 26, 2018

Why we shouldn't like coffee, but we do


The more sensitive people are to the bitter taste of caffeine, the more coffee they drink, reports a new study. The sensitivity is based on genetics. Bitterness is a natural warning system to protect us from harmful substances, so we really shouldn't like coffee. Scientists say people with a heightened ability to detect coffee's bitterness learn to associate good things with it.

But, it turns out, the more sensitive people are to the bitter taste of caffeine, the more coffee they drink, reports a new study from Northwestern Medicine and QIMR Berghofer Medical Research Institute in Australia. The sensitivity is caused by a genetic variant.
"You'd expect that people who are particularly sensitive to the bitter taste of caffeine would drink less coffee," said Marilyn Cornelis, assistant professor of preventive medicine at Northwestern University Feinberg School of Medicine. "The opposite results of our study suggest coffee consumers acquire a taste or an ability to detect caffeine due to the learned positive reinforcement (i.e. stimulation) elicited by caffeine."
In other words, people who have a heightened ability to taste coffee's bitterness -- and particularly the distinct bitter flavor of caffeine -- learn to associate "good things with it," Cornelis said.
Thus, a bigger tab at Starbucks.
The study will be published Nov. 15 in Scientific Reports.
In this study population, people who were more sensitive to caffeine and were drinking a lot of coffee consumed low amounts of tea. But that could just be because they were too busy drinking coffee, Cornelis noted.
The study also found people sensitive to the bitter flavors of quinine and of PROP, a synthetic taste related to the compounds in cruciferous vegetables, avoided coffee. For alcohol, a higher sensitivity to the bitterness of PROP resulted in lower alcohol consumption, particularly of red wine.
"The findings suggest our perception of bitter tastes, informed by our genetics, contributes to the preference for coffee, tea and alcohol," Cornelis said.
For the study, scientists applied Mendelian randomization, a technique commonly used in disease epidemiology, to test the causal relationship between bitter taste and beverage consumption in more than 400,000 men and women in the United Kingdom. The genetic variants linked to caffeine, quinine and PROP perception were previously identified through genome-wide analysis of solution taste-ratings collected from Australian twins. These genetic variants were then tested for associations with self-reported consumption of coffee, tea and alcohol in the current study.
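To make the logic of that approach concrete, here is a minimal sketch of a Mendelian randomization estimate on simulated data, using a single hypothetical bitterness variant as an instrumental variable; the numbers and variable names are illustrative assumptions, not values from the study.
```python
# Minimal Mendelian randomization sketch on simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

g = rng.binomial(2, 0.3, size=n)      # hypothetical bitterness-perception variant (0, 1 or 2 copies)
u = rng.normal(size=n)                # unobserved confounder (e.g. lifestyle)
perception = 0.5 * g + 0.8 * u + rng.normal(size=n)       # exposure: caffeine bitterness perception
coffee = 0.3 * perception - 0.8 * u + rng.normal(size=n)  # outcome: cups per day, true causal effect 0.3

# Naive regression of outcome on exposure is distorted by the confounder.
naive = np.cov(perception, coffee)[0, 1] / np.var(perception)

# Wald ratio: (variant -> outcome effect) / (variant -> exposure effect).
mr = (np.cov(g, coffee)[0, 1] / np.var(g)) / (np.cov(g, perception)[0, 1] / np.var(g))

print(f"naive estimate: {naive:.2f}   MR estimate: {mr:.2f}   true effect: 0.30")
```
Because the simulated variant is unrelated to the confounder, the ratio estimate recovers the causal effect while the naive regression does not; that independence is the key assumption behind the technique.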
"Taste has been studied for a long time, but we don't know the full mechanics of it," Cornelis said. "Taste is one of the senses. We want to understand it from a biological standpoint."

To predict the future, the brain uses two clocks

One type of anticipatory timing relies on memories from past experiences. The other on rhythm. Both are critical to our ability to navigate and enjoy the world, and scientists have found they are handled in two different parts of the brain.

That moment when you step on the gas pedal a split second before the light changes, or when you tap your toes even before the first piano note of Camila Cabello's "Havana" is struck. That's anticipatory timing.
One type relies on memories from past experiences. The other on rhythm. Both are critical to our ability to navigate and enjoy the world.

New University of California, Berkeley, research shows the neural networks supporting each of these timekeepers are split between two different parts of the brain, depending on the task at hand.

"Whether it's sports, music, speech or even allocating attention, our study suggests that timing is not a unified process, but that there are two distinct ways in which we make temporal predictions and these depend on different parts of the brain," said study lead author Assaf Breska, a postdoctoral researcher in neuroscience at UC Berkeley.


The findings, published online in the journal Proceedings of the National Academy of Sciences, offer a new perspective on how humans calculate when to make a move.

"Together, these brain systems allow us to not just exist in the moment, but to also actively anticipate the future," said study senior author Richard Ivry, a UC Berkeley neuroscientist.

Breska and Ivry studied the anticipatory timing strengths and deficits of people with Parkinson's disease and people with cerebellar degeneration.

They connected rhythmic timing to the basal ganglia, and interval timing -- an internal timer based largely on our memory of prior experiences -- to the cerebellum. Both are primal brain regions associated with movement and cognition.

Moreover, their results suggest that if one of these neural clocks is misfiring, the other could theoretically step in.

"Our study identifies not only the anticipatory contexts in which these neurological patients are impaired, but also the contexts in which they have no difficulty, suggesting we could modify their environments to make it easier for them to interact with the world in face of their symptoms," Breska said.

Non-pharmaceutical fixes for neurological timing deficits could include brain-training computer games and smartphone apps, deep brain stimulation and environmental design modifications, he said.

To arrive at their conclusion, Breska and Ivry compared how well Parkinson's and cerebellar degeneration patients used timing or "temporal" cues to focus their attention.

Both groups viewed sequences of red, white and green squares as they flashed by at varying speeds on a computer screen, and pushed a button the moment they saw the green square. The white squares alerted them that the green square was coming up.

In one sequence, the red, white and green squares followed a steady rhythm, and the cerebellar degeneration patients responded well to these rhythmic cues.

In another, the colored squares followed a more complex pattern, with differing intervals between the red and green squares. This sequence was easier for the Parkinson's patients to follow and succeed at.

"We show that patients with cerebellar degeneration are impaired in using non-rhythmic temporal cues while patients with basal ganglia degeneration associated with Parkinson's disease are impaired in using rhythmic cues," Ivry said.

Ultimately, the results confirm that the brain uses two different mechanisms for anticipatory timing, challenging theories that a single brain system handles all our timing needs, researchers said.

"Our results suggest at least two different ways in which the brain has evolved to anticipate the future," said Breska.

"A rhythm-based system is sensitive to periodic events in the world such as is inherent in speech and music," he added. "And an interval system provides a more general anticipatory ability, sensitive to temporal regularities even in the absence of a rhythmic signal."

Engineers fly first-ever plane with no moving parts

 

Engineers have built and flown the first-ever plane with no moving parts. Instead of propellers or turbines, the light aircraft is powered by an 'ionic wind' -- a silent but mighty flow of ions that is produced aboard the plane, and that generates enough thrust to propel the plane over a sustained, steady flight.

Since the first airplane took flight over 100 years ago, virtually every aircraft in the sky has flown with the help of moving parts such as propellers, turbine blades, and fans, which are powered by the combustion of fossil fuels or by battery packs that produce a persistent, whining buzz.
Now MIT engineers have built and flown the first-ever plane with no moving parts. Instead of propellers or turbines, the light aircraft is powered by an "ionic wind" -- a silent but mighty flow of ions that is produced aboard the plane, and that generates enough thrust to propel the plane over a sustained, steady flight.

Unlike turbine-powered planes, the aircraft does not depend on fossil fuels to fly. And unlike propeller-driven drones, the new design is completely silent.

"This is the first-ever sustained flight of a plane with no moving parts in the propulsion system," says Steven Barrett, associate professor of aeronautics and astronautics at MIT. "This has potentially opened new and unexplored possibilities for aircraft which are quieter, mechanically simpler, and do not emit combustion emissions."

He expects that in the near-term, such ion wind propulsion systems could be used to fly less noisy drones. Further out, he envisions ion propulsion paired with more conventional combustion systems to create more fuel-efficient, hybrid passenger planes and other large aircraft.

Barrett and his team at MIT have published their results in the journal Nature.

Hobby crafts


Barrett says the inspiration for the team's ion plane comes partly from the movie and television series, "Star Trek," which he watched avidly as a kid. He was particularly drawn to the futuristic shuttlecrafts that effortlessly skimmed through the air, with seemingly no moving parts and hardly any noise or exhaust.

"This made me think, in the long-term future, planes shouldn't have propellers and turbines," Barrett says. "They should be more like the shuttles in 'Star Trek,' that have just a blue glow and silently glide."

About nine years ago, Barrett started looking for ways to design a propulsion system for planes with no moving parts. He eventually came upon "ionic wind," also known as electroaerodynamic thrust -- a physical principle that was first identified in the 1920s and describes a wind, or thrust, that can be produced when a current is passed between a thin and a thick electrode. If enough voltage is applied, the air in between the electrodes can produce enough thrust to propel a small aircraft.

For years, electroaerodynamic thrust has mostly been a hobbyist's project, and designs have for the most part been limited to small, desktop "lifters" tethered to large voltage supplies that create just enough wind for a small craft to hover briefly in the air. It was largely assumed that it would be impossible to produce enough ionic wind to propel a larger aircraft over a sustained flight.

"It was a sleepless night in a hotel when I was jet-lagged, and I was thinking about this and started searching for ways it could be done," he recalls. "I did some back-of-the-envelope calculations and found that, yes, it might become a viable propulsion system," Barrett says. "And it turned out it needed many years of work to get from that to a first test flight."

Ions take flight


The team's final design resembles a large, lightweight glider. The aircraft, which weighs about 5 pounds and has a 5-meter wingspan, carries an array of thin wires, which are strung like horizontal fencing along and beneath the front end of the plane's wing. The wires act as positively charged electrodes, while similarly arranged thicker wires, running along the back end of the plane's wing, serve as negative electrodes.

The fuselage of the plane holds a stack of lithium-polymer batteries. Barrett's ion plane team included members of Professor David Perreault's Power Electronics Research Group in the Research Laboratory of Electronics, who designed a power supply that would convert the batteries' output to a sufficiently high voltage to propel the plane. In this way, the batteries supply electricity at 40,000 volts to positively charge the wires via a lightweight power converter.

Once the wires are energized, they act to attract and strip away negatively charged electrons from the surrounding air molecules, like a giant magnet attracting iron filings. The air molecules that are left behind are newly ionized, and are in turn attracted to the negatively charged electrodes at the back of the plane.

As the newly formed cloud of ions flows toward the negatively charged wires, each ion collides millions of times with other air molecules, creating a thrust that propels the aircraft forward.

The team, which also included Lincoln Laboratory staff Thomas Sebastian and Mark Woolston, flew the plane in multiple test flights across the gymnasium in MIT's duPont Athletic Center -- the largest indoor space they could find to perform their experiments. The team flew the plane a distance of 60 meters (the maximum distance within the gym) and found the plane produced enough ionic thrust to sustain flight the entire time. They repeated the flight 10 times, with similar performance.

"This was the simplest possible plane we could design that could prove the concept that an ion plane could fly," Barrett says. "It's still some way away from an aircraft that could perform a useful mission. It needs to be more efficient, fly for longer, and fly outside."

Barrett's team is working on increasing the efficiency of their design, to produce more ionic wind with less voltage. The researchers are also hoping to increase the design's thrust density -- the amount of thrust generated per unit area. Currently, flying the team's lightweight plane requires a large area of electrodes, which essentially makes up the plane's propulsion system. Ideally, Barrett would like to design an aircraft with no visible propulsion system or separate control surfaces such as rudders and elevators.

"It took a long time to get here," Barrett says. "Going from the basic principle to something that actually flies was a long journey of characterizing the physics, then coming up with the design and making it work. Now the possibilities for this kind of propulsion system are viable."

This research was supported, in part, by MIT Lincoln Laboratory Autonomous Systems Line, the Professor Amar G. Bose Research Grant, and the Singapore-MIT Alliance for Research and Technology (SMART). The work was also funded through the Charles Stark Draper and Leonardo career development chairs at MIT.

Wednesday, May 16, 2018

The first wireless flying robotic insect takes off


Engineers have created RoboFly, the first wireless flying robotic insect. RoboFly is slightly heavier than a toothpick and is powered by a laser beam.

Insect-sized flying robots could help with time-consuming tasks like surveying crop growth on large farms or sniffing out gas leaks. These robots soar by fluttering tiny wings because they are too small to use propellers, like those seen on their larger drone cousins. Small size is advantageous: These robots are cheap to make and can easily slip into tight places that are inaccessible to big drones.
But current flying robo-insects are still tethered to the ground. The electronics they need to power and control their wings are too heavy for these miniature robots to carry.
Now, engineers at the University of Washington have for the first time cut the cord and added a brain, allowing their RoboFly to take its first independent flaps. This might be one small flap for a robot, but it's one giant leap for robot-kind. The team will present its findings May 23 at the International Conference on Robotics and Automation in Brisbane, Australia.
RoboFly is slightly heavier than a toothpick and is powered by a laser beam. It uses a tiny onboard circuit that converts the laser energy into enough electricity to operate its wings.
"Before now, the concept of wireless insect-sized flying robots was science fiction. Would we ever be able to make them work without needing a wire?" said co-author Sawyer Fuller, an assistant professor in the UW Department of Mechanical Engineering. "Our new wireless RoboFly shows they're much closer to real life."
The engineering challenge is the flapping. Wing flapping is a power-hungry process, and both the power source and the controller that directs the wings are too big and bulky to ride aboard a tiny robot. So Fuller's previous robo-insect, the RoboBee, had a leash -- it received power and control through wires from the ground.
But a flying robot should be able to operate on its own. Fuller and team decided to use a narrow invisible laser beam to power their robot. They pointed the laser beam at a photovoltaic cell, which is attached above RoboFly and converts the laser light into electricity.
"It was the most efficient way to quickly transmit a lot of power to RoboFly without adding much weight," said co-author Shyam Gollakota, an associate professor in the UW's Paul G. Allen School of Computer Science & Engineering.
Still, the laser alone does not provide enough voltage to move the wings. That's why the team designed a circuit that boosted the seven volts coming out of the photovoltaic cell up to the 240 volts needed for flight.
To give RoboFly control over its own wings, the engineers provided a brain: They added a microcontroller to the same circuit.
"The microcontroller acts like a real fly's brain telling wing muscles when to fire," said co-author Vikram Iyer, a doctoral student in the UW Department of Electrical Engineering. "On RoboFly, it tells the wings things like 'flap hard now' or 'don't flap.'"
Specifically, the controller sends voltage in waves to mimic the fluttering of a real insect's wings.
"It uses pulses to shape the wave," said Johannes James, the lead author and a mechanical engineering doctoral student. "To make the wings flap forward swiftly, it sends a series of pulses in rapid succession and then slows the pulsing down as you get near the top of the wave. And then it does this in reverse to make the wings flap smoothly in the other direction."
For now, RoboFly can only take off and land. Once its photovoltaic cell is out of the direct line of sight of the laser, the robot runs out of power and lands. But the team hopes to soon be able to steer the laser so that RoboFly can hover and fly around.
While RoboFly is currently powered by a laser beam, future versions could use tiny batteries or harvest energy from radio frequency signals, Gollakota said. That way, their power source can be modified for specific tasks.
Future RoboFlies can also look forward to more advanced brains and sensor systems that help the robots navigate and complete tasks on their own, Fuller said.
"I'd really like to make one that finds methane leaks," he said. "You could buy a suitcase full of them, open it up, and they would fly around your building looking for plumes of gas coming out of leaky pipes. If these robots can make it easy to find leaks, they will be much more likely to be patched up, which will reduce greenhouse emissions. This is inspired by real flies, which are really good at flying around looking for smelly things. So we think this is a good application for our RoboFly."

Monday, May 14, 2018

Nouns slow down our speech


Speakers hesitate or make brief pauses filled with sounds like 'uh' or 'uhm' mostly before nouns. Such slowdown effects are far less frequent before verbs, as researchers working together with an international team have now discovered by looking at examples from different languages.

When we speak, we unconsciously pronounce some words more slowly than others, and sometimes we make brief pauses or throw in meaningless sounds like "uhm." Such slowdown effects provide key evidence on how our brains process language. They point to difficulties when planning the utterance of a specific word.
To find out how such slowdown effects work, a team of researchers led by Frank Seifart from the University of Amsterdam and Prof. Balthasar Bickel from UZH analyzed thousands of recordings of spontaneous speech from linguistically and culturally diverse populations from around the world, including the Amazon rainforest, Siberia, the Himalayas, and the Kalahari desert, but also English and Dutch.
Nouns are more difficult to plan
In these recordings the researchers looked at slow-down effects before nouns (like "friend") and verbs (like "come"). They measured the speed of utterance in sounds per second and noted whether speakers made short pauses. "We discovered that in this diverse sample of languages, there is a robust tendency for slow-down effects before nouns as compared to verbs," explain Bickel and Seifart. "The reason is that nouns are more difficult to plan because they're usually only used when they represent new information." Otherwise they are replaced with pronouns (e.g., "she") or omitted, as in the following example: "My friend came back. She (my friend) took a seat" or "My friend came back and took a seat." No such replacement principles apply to verbs -- they are generally used regardless of whether they represent new or old information.
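As a toy illustration of that measurement, the sketch below compares articulation speed (sounds per second) of the word immediately preceding nouns versus verbs in a tiny hand-annotated utterance; the data format and all numbers are invented for illustration only.
```python
# Toy comparison of articulation speed before nouns vs. before verbs.
# Each token: (word, part_of_speech, number_of_sounds, duration_in_seconds); values invented.
utterance = [
    ("my",     "PRON", 2, 0.15),
    ("friend", "NOUN", 5, 0.42),
    ("came",   "VERB", 3, 0.25),
    ("back",   "ADV",  3, 0.22),
    ("and",    "CONJ", 3, 0.18),
    ("took",   "VERB", 3, 0.24),
    ("a",      "DET",  1, 0.16),
    ("seat",   "NOUN", 3, 0.35),
]

def mean_rate_before(pos_tag, tokens):
    """Mean sounds-per-second of the word preceding each token tagged pos_tag."""
    rates = [prev_sounds / prev_dur
             for (_, _, prev_sounds, prev_dur), (_, pos, _, _) in zip(tokens, tokens[1:])
             if pos == pos_tag]
    return sum(rates) / len(rates) if rates else float("nan")

print("speech rate before nouns:", round(mean_rate_before("NOUN", utterance), 1), "sounds/s")
print("speech rate before verbs:", round(mean_rate_before("VERB", utterance), 1), "sounds/s")
```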
Widen the net of languages
This discovery has important implications for our understanding of how the human brain processes language. Future neuroscience research needs to look more systematically at the information value of words used in conversation, and how the brain reacts to differences in these values. Also, future research needs to broaden its data. "We found that English, on which most research is based, displayed the most exceptional behavior in our study," says Bickel. It is thus important to widen the net of languages considered in processing research, including rare, often endangered languages from around the world, to inform our understanding of human language.
The findings also shed new light on long-standing puzzles in linguistics. For example, the findings suggest universal long-term effects on how grammar evolves over time: The slow-down effects before nouns make it more difficult for nouns to develop complex forms through contraction with words that precede them. In German, for example, prefixes are far more common in verbs (ent-kommen, ver-kommen, be-kommen, vor-kommen, etc.) than in nouns.
At a more general level, the study contributes to a deeper understanding of how languages work in their natural environment. Such an understanding becomes increasingly important given the challenges that linguistic communication faces in the digital age, where we communicate more and more with artificial systems -- systems that might not slow down before nouns as humans naturally do.

Eye, hair and skin color from a DNA sample of an unidentified individual


New tool will be used when standard forensic profiling is not helpful

An international team has developed a novel tool to accurately predict eye, hair and skin color from human biological material -- even a small DNA sample -- left, for example, at a crime scene or obtained from archeological remains. This all-in-one pigmentation profile tool provides a physical description of the person in a way that has not previously been possible, by generating all three pigment traits together using a freely available web tool.

An international team, led by scientists from the School of Science at IUPUI and Erasmus MC University Medical Center Rotterdam in the Netherlands, has developed a novel tool to accurately predict eye, hair and skin color from human biological material -- even a small DNA sample -- left, for example, at a crime scene or obtained from archeological remains. This all-in-one pigmentation profile tool provides a physical description of the person in a way that has not previously been possible, by generating all three pigment traits together using a freely available web tool.
The tool is designed to be used when standard forensic DNA profiling is not helpful because no reference DNA exists against which to compare the evidence sample.
The HIrisPlex-S DNA test system is capable of simultaneously predicting eye, hair and skin color phenotypes from DNA. Users, such as law enforcement officials or anthropologists, can enter relevant data using a laboratory DNA analysis tool, and the web tool will predict the pigment profile of the DNA donor.
"We have previously provided law enforcement and anthropologists with DNA tools for eye color and for combined eye and hair color, but skin color has been more difficult," said forensic geneticist Susan Walsh from IUPUI, who co-directed the study. "Importantly, we are directly predicting actual skin color divided into five subtypes -- very pale, pale, intermediate, dark and dark to black -- using DNA markers from the genes that determine an individual's skin coloration. This is not the same as identifying genetic ancestry. You might say it's more similar to specifying a paint color in a hardware store rather than denoting race or ethnicity.
"If anyone asks an eyewitness what they saw, the majority of time they mention hair color and skin color. What we are doing is using genetics to take an objective look at what they saw," Walsh said.
The innovative high-probability and high-accuracy complete pigmentation profile webtool is available online without charge.
The study, "HIrisPlex-S System for Eye, Hair and Skin Colour Prediction from DNA: Introduction and Forensic Developmental Validation," is published in the peer-reviewed journal Forensic Science International: Genetics.
"With our new HIrisPlex-S system, for the first time, forensic geneticists and genetic anthropologists are able to simultaneously generate eye, hair and skin color information from a DNA sample, including DNA of the low quality and quantity often found in forensic casework and anthropological studies," said Manfred Kayser of Erasmus MC, co-leader of the study.

Thursday, May 10, 2018

Discovery of episodic memory replay in rats could lead to better treatments for Alzheimer's disease


Researchers have reported the first evidence that nonhuman animals can mentally replay past events from memory. The discovery could help improve the development of drugs to treat Alzheimer's disease by providing a way to study memory in animals that more closely addresses how memory works in people.

The study, led by IU professor Jonathon Crystal, appears today in the journal Current Biology.
"The reason we're interested in animal memory isn't only to understand animals, but rather to develop new models of memory that match up with the types of memory impaired in human diseases such as Alzheimer's disease," said Crystal, a professor in the IU Bloomington College of Arts and Sciences' Department of Psychological and Brain Sciences and director of the IU Bloomington Program in Neuroscience.
Under the current paradigm, Crystal said most preclinical studies on potential new Alzheimer's drugs examine how these compounds affect spatial memory, one of the easiest types of memory to assess in animals. But spatial memory is not the type of memory whose loss causes the most debilitating effects of Alzheimer's disease.
"If your grandmother is suffering from Alzheimer's, one of the most heartbreaking aspects of the disease is that she can't remember what you told her about what's happening in your life the last time you saw her," said Danielle Panoz-Brown, an IU Ph.D. student who is the first author on the study. "We're interested in episodic memory -- and episodic memory replay -- because it declines in Alzheimer's disease, and in aging in general."
Episodic memory is the ability to remember specific events. For example, if a person loses their car keys, they might try to recall every single step -- or "episode" -- in their trip from the car to their current location. The ability to replay these events in order is known as "episodic memory replay." People wouldn't be able to make sense of most scenarios if they couldn't remember the order in which they occurred, Crystal said.
To assess animals' ability to replay past events from memory, Crystal's lab spent nearly a year working with 13 rats, which they trained to memorize a list of up to 12 different odors. The rats were placed inside an "arena" with different odors and rewarded when they identified the second-to-last odor or fourth-to-last odor in the list.
The team changed the number of odors in the list prior to each test to confirm the odors were identified based upon their position in the list, not by scent alone, proving the animals were relying on their ability to recall the whole list in order. Arenas with different patterns were used to communicate to the rats which of the two options was sought.
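A toy sketch of that trial structure follows: a random odor list is presented, the arena cue selects the second-to-last or fourth-to-last position, and a simulated subject is scored over many trials. The odor names and list lengths are invented, and the 87 percent accuracy used for the simulated subject simply echoes the reported result below.
```python
# Toy sketch of the odor-list task: a cue selects the second-to-last or
# fourth-to-last odor as the target, and a simulated subject answering
# correctly 87% of the time is scored over many trials. Illustrative only.
import random

ODORS = ["banana", "clove", "mint", "cedar", "anise", "rose",
         "lemon", "coffee", "vanilla", "pine", "basil", "ginger"]

def run_trial(rng, list_length, cue, p_correct=0.87):
    sequence = rng.sample(ODORS, list_length)
    target = sequence[-2] if cue == "second_to_last" else sequence[-4]
    if rng.random() < p_correct:
        choice = target
    else:
        choice = rng.choice([o for o in sequence if o != target])
    return choice == target

rng = random.Random(1)
trials = [run_trial(rng, rng.randint(5, 12), rng.choice(["second_to_last", "fourth_to_last"]))
          for _ in range(1000)]
print(f"simulated accuracy over {len(trials)} trials: {sum(trials) / len(trials):.0%}")
```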
After their training, Crystal said, the animals successfully completed their task about 87 percent of the time across all trials. The results are strong evidence the animals were employing episodic memory replay.
Additional experiments confirmed the rats' memories were long-lasting and resistant to "interference" from other memories, both hallmarks of episodic memory. They also ran tests that temporarily suppressed activity in the hippocampus -- the site of episodic memory -- to confirm the rats were using this part of their brain to perform their tasks.
Crystal said the need to find reliable ways to test episodic memory replay in rats is urgent since new genetic tools are enabling scientists to create rats with neurological conditions similar to Alzheimer's disease. Until recently, only mice were available with the genetic modifications needed to study the effect of new drugs on these symptoms.
"We're really trying push the boundaries of animal models of memory to something that's increasingly similar to how these memories work in people," he said. "If we want to eliminate Alzheimer's disease, we really need to make sure we're trying to protect the right type of memory."

Genetic clues reveal origins of the killer fungus behind the 'amphibian plague'


New research has revealed a deadly disease that threatens the survival of the world's frogs originated from East Asia, and global trade was almost certainly responsible for the disease's spread.

The frog chytrid fungus (Batrachochytrium dendrobatidis) has long been identified as a cause of the decline and extinction of species of amphibians across several continents since the 1970s.
It has spread around the world but until now it has remained unclear where killer strains of the pathogen first emerged.
An international team of researchers led by Imperial College London, including four scientists from the One Health Research Group at James Cook University, traced the ancestor of the pathogen to a single strain in East Asia.
Their findings support the idea that rather than dating back thousands of years, as previously thought, the range of the disease expanded greatly between 50 and 120 years ago, coinciding with the rapid global expansion of intercontinental trade.
According to the researchers, human movement of amphibians -- such as through the pet trade -- has directly contributed to spreading the pathogen around the world.
JCU's Dr Lee Skerratt, one of the authors of the paper, said the findings highlight the importance of global biosecurity measures.
"Australia has strict rules and regulations surrounding biosecurity and this finding confirms why regulations are so important," Dr Skerratt said.
"We hope this news will push policy change in countries with less strict biosecurity measures."
The team also uncovered additional strains of the fungus that could cause further species decline, highlighting the importance of strict biosecurity policies.
"If more strains are allowed to spread we could see additional extinctions," Dr Skerratt said.
"Countries need to act now to improve regulations before these additional strains spread."
Chytrid fungus causes a disease called chytridiomycosis that leads to heart failure, and is responsible for the decline or extinction of hundreds of species of frogs.
The paper, Recent Asian origin of chytrid fungi causing global amphibian declines, was published in Science today.
These findings come on the 20th anniversary of Dr Lee Berger's discovery during her PhD that the chytrid fungus is the cause of global amphibian species decline.
Dr Berger led the Australian contribution and was an Australian Research Council Future Fellow and Postdoctoral Fellow at James Cook University from 2004 to 2016.

Tuesday, May 8, 2018

Large predators once hunted to near-extinction are showing up in unexpected places


Sightings of alligators and other large predators in places where conventional wisdom says they 'shouldn't be' have increased in recent years, in large part because local populations, once hunted to near-extinction, are rebounding. A new article finds that far from being outliers, these sightings signify the return of highly adaptable predators to prime hunting grounds they occupied long ago -- a trend that opens new opportunities for future conservation.

Alligators on the beach. Killer whales in rivers. Mountain lions miles from the nearest mountain.
In recent years, sightings of large predators in places where conventional wisdom says they "shouldn't be" have increased, in large part because local populations, once hunted to near-extinction, are rebounding -- thanks to conservation.
Many observers have hypothesized that as these populations recover the predators are expanding their ranges and colonizing new habitats in search of food.
A Duke University-led paper published today in the journal Current Biology suggests otherwise.
It finds that, rather than venturing into new and alien habitats for the first time, alligators, sea otters and many other large predators -- marine and terrestrial species alike -- are re-colonizing ecosystems that used to be prime hunting grounds for them before humans decimated their populations and well before scientists started studying them.
"We can no longer chock up a large alligator on a beach or coral reef as an aberrant sighting," said Brian Silliman, Rachel Carson Associate Professor of Marine Conservation Biology at Duke's Nicholas School of the Environment. "It's not an outlier or short-term blip. It's the old norm, the way it used to be before we pushed these species onto their last legs in hard-to-reach refuges. Now, they are returning."
By synthesizing data from recent scientific studies and government reports, Silliman and his colleagues found that alligators, sea otters, river otters, gray whales, gray wolves, mountain lions, orangutans and bald eagles, among other large predators, may now be as abundant or more abundant in "novel" habitats than in traditional ones.
Their successful return to ecosystems and climatic zones long considered off-limits or too stressful for them upends one of the most widely held paradigms of large animal ecology, Silliman said.
"The assumption, widely reinforced in both the scientific and popular media, is that these animals live where they live because they are habitat specialists. Alligators love swamps; sea otters do best in saltwater kelp forests; orangutans need undisturbed forests; marine mammals prefer polar waters. But this is based on studies and observations made while these populations were in sharp decline. Now that they are rebounding, they're surprising us by demonstrating how adaptable and cosmopolitan they really are," Silliman said.
For instance, marine species such as stingrays, sharks, shrimp, horseshoe crabs and manatees now make up 90 percent of alligators' diet when they're in seagrass or mangrove ecosystems, showing that gators adapt very well to life in a saltwater habitat.
The unanticipated adaptability of these returning species presents exciting new conservation opportunities, Silliman stressed.
"It tells us these species can thrive in a much greater variety of habitats. Sea otters, for instance, can adapt and thrive if we introduce them into estuaries that don't have kelp forests. So even if kelp forests disappear because of climate change, the otters won't," he said. "Maybe they can even live in rivers. We will find out soon enough."
As top predators return, the habitats they re-occupy also see benefits, he said. For instance, introducing sea otters to estuarine seagrass beds helps protect the beds from being smothered by epiphytic algae that feed on excess nutrient runoff from inland farms and cities. The otters do this by eating Dungeness crabs, which otherwise eat too many algae-grazing sea slugs that form the bed's front line of defense.
"It would cost tens of millions of dollars to protect these beds by re-constructing upstream watersheds with proper nutrient buffers," Silliman said, "but sea otters are achieving a similar result on their own, at little or no cost to taxpayers."

25 years of fossil collecting yields clearest picture of extinct 12-foot aquatic predator


More than two decades of exploration at a Pennsylvania fossil site have given paleontologists their best idea of how a giant, prehistoric predator would have looked and behaved.

After 25 years of collecting fossils at a Pennsylvania site, scientists at the Academy of Natural Sciences of Drexel University now have a much better picture of an ancient, extinct 12-foot fish and the world in which it lived.
Although Hyneria lindae was initially described in 1968, it was done without a lot of fossil material to go on. But since the mid-1990s, dedicated volunteers, students, and paleontologists digging at the Red Hill site in northern Pennsylvania's Clinton County have turned up more -- and better quality -- fossils of the fish's skeleton that have led to new insights.
Academy researchers Ted Daeschler, PhD, and Jason Downs, PhD, who specialize in the Devonian time period (a time before dinosaurs and even land animals) when Hyneria lived, have been able to reconstruct that the predator had a blunt, wide snout, reached 10-12 feet in length, had small eyes and featured a sensory system that allowed it to hunt prey by feeling pressure waves around it.
"Dr. Keith Thomson, the man who first described Hyneria in 1968, did not have enough fossil material to reconstruct the anatomy that we have now been able to document with more extensive collections," explained Daeschler, curator of Vertebrate Zoology at the Academy, as well as a professor in Drexel's College of Arts and Sciences.
Originally, pieces of the fish were collected in the 1950s. Thomson described and officially named Hyneria lindae in 1968, but he had just a few pieces of a crushed skull and some scales to work with.
The new discoveries that Daeschler and Downs (who is an assistant professor at Delaware Valley University) wrote about in the Journal of Vertebrate Paleontology were made possible by years of collecting that turned up, "well-preserved, well-prepared three-dimensional material of almost all of the [bony] parts of the skeleton," according to Downs.
No single complete skeleton exists of this giant, but enough is there to show that Hyneria would have truly been a monster to the other animals in the subtropical streams of the Devonian Period, roughly 365 million years ago. An apex predator, Hyneria had a mouth bristling with two-inch fangs. For reference, that's bigger than the teeth of most modern great white sharks.
Due to its sheer size, weaponry, and sensory abilities, Hyneria may have preyed upon anything from ancient placoderms (armored fish), to acanthodians (related to sharks) and sarcopterygians (lobe-finned fish, the group Hyneria belongs to) -- including early tetrapods (limbed vertebrates) that are also found at the site.
Since the streams Hyneria lived in were likely murky and not conducive to hunting by eyesight, sensory canals allowed it to detect fish swimming near it and attack them.
"We discovered that the skull roof elements have openings on their surfaces that connect up, forming a network of tubes that would function like the sensory line system in some modern aquatic vertebrates," Daeschler said. "Similarly, we found a network of connected pores on the parts of the scales that would be exposed on the body of Hyneria."
All of the new information gleaned about Hyneria is doubly valuable because it provides more information about the ecosystem -- and time period -- it lived in. The Devonian was a pivotal time in vertebrate evolution, especially since some of Hyneria's fellow lobe-finned fish developed specialized fins that would take them onto land and eventually give rise to all limbed vertebrates including reptiles, amphibians and mammals.
"Hyneria lived in a time and place that is of incredible interest to those of us studying the vertebrate fin-to-limb transition," Downs commented. "Each study like this one contributes more to our understanding of these ecosystems and what may have played a part in the successful transition to land."

Wednesday, May 2, 2018

Scientists find the first bird beak, right under their noses

Researchers have pieced together the three-dimensional skull of an iconic, toothed bird that represents a pivotal moment in the transition from dinosaurs to modern-day birds.

 Ichthyornis dispar holds a key position in the evolutionary trail that leads from dinosaurian species to today's avians. It lived nearly 100 million years ago in North America, looked something like a toothy seabird, and drew the attention of such famous naturalists as Yale's O.C. Marsh (who first named and described it) and Charles Darwin.
Yet despite the existence of partial specimens of Ichthyornis dispar, there has been no significant new skull material beyond the fragmentary remains first found in the 1870s. Now, a Yale-led team reports on new specimens with three-dimensional cranial remains -- including one example of a complete skull and two previously overlooked cranial elements that were part of the original specimen at Yale -- that reveal new details about one of the most striking transformations in evolutionary history.
"Right under our noses this whole time was an amazing, transitional bird," said Yale paleontologist Bhart-Anjan Bhullar, principal investigator of a study published in the journal Nature. "It has a modern-looking brain along with a remarkably dinosaurian jaw muscle configuration."
Perhaps most interesting of all, Bhullar said, is that Ichthyornis dispar shows us what the bird beak looked like as it first appeared in nature.
"The first beak was a horn-covered pincer tip at the end of the jaw," said Bhullar, who is an assistant professor and assistant curator in geology and geophysics. "The remainder of the jaw was filled with teeth. At its origin, the beak was a precision grasping mechanism that served as a surrogate hand as the hands transformed into wings."
The research team conducted its analysis using CT-scan technology, combined with specimens from the Yale Peabody Museum of Natural History; the Sternberg Museum of Natural History in Fort Hays, Kan.; the Alabama Museum of Natural History; the University of Kansas Biodiversity Institute; and the Black Hills Institute of Geological Research.
Co-lead authors of the new study are Daniel Field of the Milner Centre for Evolution at the University of Bath and Michael Hanson of Yale. Co-authors are David Burnham of the University of Kansas, Laura Wilson and Kristopher Super of Fort Hays State University, Dana Ehret of the Alabama Museum of Natural History, and Jun Ebersole of the McWane Science Center.
"The fossil record provides our only direct evidence of the evolutionary transformations that have given rise to modern forms," said Field. "This extraordinary new specimen reveals the surprisingly late retention of dinosaur-like features in the skull of Ichthyornis -- one of the closest-known relatives of modern birds from the Age of Reptiles."
The researchers said their findings offer new insight into how modern birds' skulls eventually formed. Along with its transitional beak, Ichthyornis dispar had a brain similar to modern birds but a temporal region of the skull that was strikingly like that of a dinosaur -- indicating that during the evolution of birds, the brain transformed first while the remainder of the skull remained more primitive and dinosaur-like.
"Ichthyornis would have looked very similar to today's seabirds, probably very much like a gull or tern," said Hanson. "The teeth probably would not have been visible unless the mouth was open but covered with some sort of lip-like, extra-oral tissue."
In recent years Bhullar's lab has produced a large body of research on various aspects of vertebrate skulls, often zeroing in on the origins of the avian beak. "Each new discovery has reinforced our previous conclusions. The skull of Ichthyornis even substantiates our molecular finding that the beak and palate are patterned by the same genes," Bhullar said. "The story of the evolution of birds, the most species-rich group of vertebrates on land, is one of the most important in all of history. It is, after all, still the age of dinosaurs."

Thursday, March 1, 2018

Hidden secret of immortality enzyme telomerase

 

Can we stay young forever, or even recapture lost youth?

Research has recently uncovered a crucial step in the telomerase enzyme catalytic cycle. This catalytic cycle determines the ability of the human telomerase enzyme to synthesize DNA.

Research from the laboratory of Professor Julian Chen in the School of Molecular Sciences at Arizona State University recently uncovered a crucial step in the telomerase enzyme catalytic cycle. This catalytic cycle determines the ability of the human telomerase enzyme to synthesize DNA "repeats" (specific DNA segments of six nucleotides) onto chromosome ends, and so afford immortality to cells. Understanding the underlying mechanism of telomerase action offers new avenues toward effective anti-aging therapeutics.
Typical human cells are mortal and cannot forever renew themselves. As demonstrated by Leonard Hayflick a half-century ago, human cells have a limited replicative lifespan, with older cells reaching this limit sooner than younger cells. This "Hayflick limit" of cellular lifespan is directly related to the number of unique DNA repeats found at the ends of the genetic material-bearing chromosomes. These DNA repeats are part of the protective capping structures, termed "telomeres," which safeguard the ends of chromosomes from unwanted and unwarranted DNA rearrangements that destabilize the genome.
Each time the cell divides, the telomeric DNA shrinks and will eventually fail to secure the chromosome ends. This continuous reduction of telomere length functions as a "molecular clock" that counts down to the end of cell growth. The diminished ability for cells to grow is strongly associated with the aging process, with the reduced cell population directly contributing to weakness, illness, and organ failure.
The fountain of youth at molecular level
Counteracting the telomere shrinking process is the enzyme, telomerase, that uniquely holds the key to delaying or even reversing the cellular aging process. Telomerase offsets cellular aging by lengthening the telomeres, adding back lost DNA repeats to add time onto the molecular clock countdown, effectively extending the lifespan of the cell. Telomerase lengthens telomeres by repeatedly synthesizing very short DNA repeats of six nucleotides -- the building blocks of DNA -- with the sequence "GGTTAG" onto the chromosome ends from an RNA template located within the enzyme itself. However, the activity of the telomerase enzyme is insufficient to completely restore the lost telomeric DNA repeats, nor to stop cellular aging.
The gradual shrinking of telomeres negatively affects the replicative capacity of human adult stem cells, the cells that restore damaged tissues and/or replenish aging organs in our bodies. The activity of telomerase in adult stem cells merely slows down the countdown of the molecular clock and does not completely immortalize these cells. Therefore, adult stem cells become exhausted in aged individuals due to telomere length shortening that results in increased healing times and organ tissue degradation from inadequate cell populations.
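A toy version of this "molecular clock" can make the arithmetic concrete: each division trims the telomere, telomerase (when active) adds back six-base GGTTAG repeats, and division stops below a critical length. All lengths and rates below are illustrative assumptions, not measured values.
```python
# Toy "molecular clock": telomere shortening per division, partially offset by
# telomerase adding 6-base GGTTAG repeats. Numbers are illustrative assumptions.
REPEAT = "GGTTAG"

def divisions_until_senescence(start_bp=10_000, loss_per_division_bp=80,
                               repeats_added_per_division=0, critical_bp=3_000):
    telomere, divisions = start_bp, 0
    while telomere > critical_bp and divisions < 1_000:  # cap in case telomerase fully offsets loss
        telomere -= loss_per_division_bp
        telomere += repeats_added_per_division * len(REPEAT)
        divisions += 1
    return divisions

print("no telomerase:     ", divisions_until_senescence(), "divisions")
print("partial telomerase:", divisions_until_senescence(repeats_added_per_division=8), "divisions")
```
In this toy model, partial telomerase activity slows the countdown but does not stop it, mirroring the point made above about adult stem cells.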
Tapping the full potential of telomerase
Understanding the regulation and limitation of the telomerase enzyme holds the promise of reversing telomere shortening and cellular aging with the potential to extend human lifespan and improve the health and wellness of elderly individuals. Research from the laboratory of Chen and his colleagues, Yinnan Chen, Joshua Podlevsky and Dhenugen Logeswaran, recently uncovered a crucial step in the telomerase catalytic cycle that limits the ability of telomerase to synthesize telomeric DNA repeats onto chromosome ends.
"Telomerase has a built-in braking system to ensure precise synthesis of correct telomeric DNA repeats. This safe-guarding brake, however, also limits the overall activity of the telomerase enzyme," said Professor Chen. "Finding a way to properly release the brakes on the telomerase enzyme has the potential to restore the lost telomere length of adult stem cells and to even reverse cellular aging itself."
This intrinsic brake of telomerase refers to a pause signal, encoded within the RNA template of telomerase itself, for the enzyme to stop DNA synthesis at the end of the sequence 'GGTTAG'. When telomerase restarts DNA synthesis for the next DNA repeat, this pause signal is still active and limits DNA synthesis. Moreover, the revelation of the braking system finally solves the decades-old mystery of why a single, specific nucleotide stimulates telomerase activity. By specifically targeting the pause signal that prevents restarting DNA repeat synthesis, telomerase enzymatic function can be supercharged to better stave off telomere length reduction, with the potential to rejuvenate aging human adult stem cells.
Human diseases that include dyskeratosis congenita, aplastic anemia, and idiopathic pulmonary fibrosis have been genetically linked to mutations that negatively affect telomerase activity and/or accelerate the loss of telomere length. This accelerated telomere shortening closely resembles premature aging with increased organ deterioration and a shortened patient lifespan from critically insufficient cell populations. Increasing telomerase activity is the seemingly most promising means of treating these diseases.
While increased telomerase activity could bring youth to aging cells and cure premature aging-like diseases, too much of a good thing can be damaging for the individual. Just as youthful stem cells use telomerase to offset telomere length loss, cancer cells employ telomerase to maintain their aberrant and destructive growth. Augmenting and regulating telomerase function will have to be performed with precision, walking a narrow line between cell rejuvenation and a heightened risk for cancer development.
Distinct from human stem cells, somatic cells constitute the vast majority of the cells in the human body and lack telomerase activity. The telomerase deficiency of human somatic cells reduces the risk of cancer development, as telomerase fuels uncontrolled cancer cell growth. Therefore, drugs that increase telomerase activity indiscriminately in all cell types are not desired. Toward the goal of precisely augmenting telomerase activity selectively within adult stem cells, this discovery reveals the crucial step in telomerase catalytic cycle as an important new drug target. Small molecule drugs can be screened or designed to increase telomerase activity exclusively within stem cells for disease treatment as well as anti-aging therapies without increasing the risk of cancer.

Soil cannot halt climate change

 
Long-term field experiments, dating back as far as 1843, demonstrate that modern carbon emissions cannot be locked in the ground to halt global warming

Unique soils data from long-term experiments, stretching back to the middle of the nineteenth century, confirm the practical implausibility of burying carbon in the ground to halt climate change. The idea of using crops to collect more atmospheric carbon and locking it into soil organic matter to offset fossil fuel emissions was launched at COP21, the 21st annual Conference of Parties to review the United Nations Framework Convention on Climate Change in Paris in 2015.

Unique soils data from long-term experiments, stretching back to the middle of the nineteenth century, confirm the practical implausibility of burying carbon in the ground to halt climate change, an option once heralded as a breakthrough.
The findings come from an analysis of the rates of change of carbon in soil by scientists at Rothamsted Research where samples have been collected from fields since 1843. They are published today in Global Change Biology.
The idea of using crops to collect more atmospheric carbon and locking it into soil's organic matter to offset fossil fuel emissions was launched at COP21, the 21st annual Conference of Parties to review the United Nations Framework Convention on Climate Change in Paris in 2015.
The aim was to increase carbon sequestration by "four parts per 1000 (4P1000)" per year for 20 years. "The initiative was generally welcomed as laudable," says David Powlson, a soils specialist and Lawes Trust Senior Fellow at Rothamsted.
"Any contribution to climate change mitigation is to be welcomed and, perhaps more significantly, any increases in soil organic carbon will improve the quality and functioning of soil," he adds. "The initiative has been adopted by many governments, including the UK."
But there have been serious criticisms of the initiative. Many scientists argue that this rate of soil carbon sequestration is unrealistic over large areas of the planet, notes Powlson: "Also, increases in soil carbon do not continue indefinitely: they move towards a new equilibrium value and then cease."
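For scale, the arithmetic behind the target is simple: a 0.4 percent increase per year, compounded over 20 years, comes to roughly an 8 percent rise in the soil carbon stock. The sketch below works this through for an assumed topsoil stock of 50 tonnes of carbon per hectare, a round number chosen for illustration rather than a figure from the Rothamsted experiments.
```python
# Arithmetic behind the "4 per 1000" target: 0.4% growth in soil organic carbon
# per year for 20 years. The 50 t C/ha starting stock is an assumed round number.
rate = 4 / 1000
years = 20
stock_t_per_ha = 50.0

relative_gain = (1 + rate) ** years - 1
gained_c = stock_t_per_ha * relative_gain
gained_co2 = gained_c * 44 / 12          # convert tonnes of C to tonnes of CO2

print(f"relative increase after {years} years: {relative_gain * 100:.1f}%")
print(f"carbon gained: {gained_c:.1f} t C/ha  (~{gained_co2:.1f} t CO2/ha)")
```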
The Rothamsted scientists used data from 16 experiments on three different soil types, giving over 110 treatment comparisons. "The results showed that the '4 per 1000' rate of increase in soil carbon can be achieved in some cases but usually only with extreme measures that would mainly be impractical or unacceptable," says Paul Poulton, lead author and an emeritus soils specialist.
"For example, large annual applications of animal manure led to increases in soil carbon that continued over many years but the amounts of manure required far exceeded acceptable limits under EU regulations and would cause massive nitrate pollution," notes Poulton.
Removing land from agriculture led to large rates of soil carbon increase in the Rothamsted experiments but doing this over large areas would be highly damaging to global food security, record the researchers.
Similarly, they add, returning crop residues to soil was effective at increasing carbon sequestration but, in some countries, this is already done so cannot be regarded as a totally new practice.
"For example, in the UK about 50% of cereal straw is currently returned to soil and much of the remainder is used for animal feed or bedding, at least some of which is later returned to soil as manure," says Poulton. "In many other countries, however, crop residues are often used as a source of fuel for cooking."
Moving from continuous arable cropping to a long-term rotation of arable crops interspersed with pasture led to significant soil carbon increases, but only where there was at least 3 years of pasture in every 5 or 6 years, record the researchers.
"Although there can be environmental benefits from such a system, most farmers find that it is uneconomic under present circumstances," says Powlson. "To make this change on a large scale would require policy decisions regarding changes to subsidy and farm support. Such a change would also have impacts on total food production."
The authors of this study conclude that promoting the "4 per 1000" initiative as a major contribution to climate change mitigation is unrealistic and potentially misleading.
They suggest that a more logical rationale for promoting practices that increase soil organic carbon is the urgent need to preserve and improve the functioning of soils, both for sustainable food security and wider ecosystem services.
For climate change mitigation through changes in agricultural practices, they point out that measures to decrease emission of nitrous oxide, a greenhouse gas almost 300 times more powerful than carbon dioxide, may be more effective.

 

The moon formed inside a vaporized Earth synestia

 

A new explanation for the Moon's origin has it forming inside the Earth when our planet was a seething, spinning cloud of vaporized rock, called a synestia. The new model resolves several problems in lunar formation.

A new explanation for the Moon's origin has it forming inside the Earth when our planet was a seething, spinning cloud of vaporized rock, called a synestia. The new model, led by researchers at the University of California, Davis and Harvard University, resolves several problems in lunar formation and is published Feb. 28 in the Journal of Geophysical Research: Planets.
"The new work explains features of the Moon that are hard to resolve with current ideas," said Sarah Stewart, professor of Earth and Planetary Sciences at UC Davis. "The Moon is chemically almost the same as the Earth, but with some differences," she said. "This is the first model that can match the pattern of the Moon's composition."
Current models of lunar formation suggest that the Moon formed as a result of a glancing blow between the early Earth and a Mars-size body, commonly called Theia. According to the model, the collision between Earth and Theia threw molten rock and metal into orbit, which coalesced to form the Moon.
The new theory relies instead on a synestia, a new type of planetary object proposed by Stewart and Simon Lock, graduate student at Harvard and visiting student at UC Davis, in 2017. A synestia forms when a collision between planet-sized objects results in a rapidly spinning mass of molten and vaporized rock with part of the body in orbit around itself. The whole object puffs out into a giant donut of vaporized rock.
Synestias likely don't last long -- perhaps only hundreds of years. They shrink rapidly as they radiate heat, causing rock vapor to condense into liquid, finally collapsing into a molten planet.
"Our model starts with a collision that forms a synestia," Lock said. "The Moon forms inside the vaporized Earth at temperatures of four to six thousand degrees Fahrenheit and pressures of tens of atmospheres."
An advantage of the new model, Lock said, is that there are multiple ways to form a suitable synestia -- it doesn't have to rely on a collision with the right sized object happening in exactly the right way.
Once the Earth-synestia formed, chunks of molten rock injected into orbit during the impact formed the seed for the Moon. Vaporized silicate rock condensed at the surface of the synestia and rained onto the proto-Moon, while the Earth-synestia itself gradually shrank. Eventually, the Moon would have emerged from the clouds of the synestia trailing its own atmosphere of rock vapor. The Moon inherited its composition from the Earth, but because it formed at high temperatures it lost the easily vaporized elements, explaining the Moon's distinct composition.
Additional authors on the paper are Michail Petaev and Stein Jacobsen at Harvard University, Zoe Leinhardt and Mia Mace at the University of Bristol, England and Matija Cuk, SETI Institute, Mountain View, Calif. The work was supported by grants from NASA, the U.S. Department of Energy and the UK's Natural Environment Research Council.

Friday, February 23, 2018

Surprising new study redraws family tree of domesticated and 'wild' horses

 

New research overturns a long-held assumption that Przewalski's horses, native to the Eurasian steppes, are the last wild horse species on Earth.

 Research published in Science today overturns a long-held assumption that Przewalski's horses, native to the Eurasian steppes, are the last wild horse species on Earth. Instead, phylogenetic analysis shows Przewalski's horses are feral, descended from the earliest-known instance of horse domestication by the Botai people of northern Kazakhstan some 5,500 years ago.
Further, the new paper finds that modern domesticated horses didn't descend from the Botai horses, an assumption previously held by many scientists.
"This was a big surprise," said co-author Sandra Olsen, curator-in-charge of the archaeology division of the Biodiversity Institute and Natural History Museum at the University of Kansas, who led archaeological work at known Botai villages. "I was confident soon after we started excavating Botai sites in 1993 that we had found the earliest domesticated horses. We went about trying to prove it, but based on DNA results Botai horses didn't give rise to today's modern domesticated horses -- they gave rise to the Przewalski's horse."
The findings signify there are no longer true "wild" horses left, only feral horses that descend from horses once domesticated by humans, including Przewalski's horses and mustangs that descend from horses brought to North America by the Spanish.
"This means there are no living wild horses on Earth -- that's the sad part," said Olsen. "There are a lot of equine biologists who have been studying Przewalskis, and this will be a big shock to them. They thought they were studying the last wild horses. It's not a real loss of biodiversity -- but in our minds, it is. We thought there was one last wild species, and we're only just now aware that all wild horses went extinct."
Many of the horse bones and teeth Olsen excavated at two Botai sites in Kazakhstan, called Botai and Krasnyi Yar, were used in the phylogenetic analysis. The international team of researchers behind the paper sequenced the genomes of 20 horses from the Botai and 22 horses from across Eurasia that spanned the last 5,500 years. They compared these ancient horse genomes with already published genomes of 18 ancient and 28 modern horses.
"Phylogenetic reconstruction confirmed that domestic horses do not form a single monophyletic group as expected if descending from Botai," the authors wrote. "Earliest herded horses were the ancestors of feral Przewalski's horses but not of modern domesticates."


