

Friday, December 23, 2016

Deadly sleeping sickness set to be eliminated in six years

Gambian sleeping sickness -- a deadly parasitic disease spread by tsetse flies -- could be eliminated in six years in key regions in the Democratic Republic of Congo (DRC), according to new research by the University of Warwick.

Kat Rock and Matt Keeling at the School of Life Sciences, with colleagues in DRC and the Liverpool School of Tropical Medicine, have calculated the impact of different intervention strategies on the population dynamics of tsetse flies and humans -- establishing which strategies show the most promise to control and eliminate the disease.
They found that a two-pronged approach -- integrating active screening and vector control -- could substantially speed up the elimination of Gambian sleeping sickness in high burden areas of DRC.
Without changing current strategy, elimination might not happen until the 22nd century.
The researchers, who work as part of the Neglected Tropical Disease Modelling Consortium, used complex mathematical models to compare the efficacy of six key strategies and twelve variations within two areas of Kwilu province (within former Bandundu province), DRC.
Previous work by the same group indicates that high-risk people are often missed from active screening. The new model concludes that improved active screening -- making sure that all people are screened equally, regardless of risk factor -- may allow elimination as a public health problem between 2023 and 2031.
If vector control strategies -- using "tsetse targets" coated with insecticide to attract and kill flies -- are added, this elimination goal is likely to be achieved within four years when coupled with any screening approach.
If DRC adopts any of the new strategies with vector control, transmission would probably be broken within six years of launching the new program in these areas -- and over 6000 new infections could be averted between 2017 and 2030.
Strategies that rely only on self-reporting of illness and screening of low-risk individuals are unlikely to eliminate sleeping sickness transmission by 2030, and would delay elimination until the next century.
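The consortium's models are far more detailed than this, but the basic logic -- project incidence forward under a strategy-specific annual decline rate and read off the elimination year -- can be sketched with purely illustrative numbers (the reduction rates below are assumptions, not the Warwick model's fitted values):

```python
# Toy projection of sleeping sickness incidence under different strategies.
# Annual reduction rates are illustrative assumptions, not fitted values.

def years_to_elimination(initial_cases, annual_reduction, threshold=1.0):
    """Years until annual cases fall below `threshold`, given a constant
    proportional reduction each year."""
    cases, years = float(initial_cases), 0
    while cases >= threshold:
        cases *= (1.0 - annual_reduction)
        years += 1
    return years

# Hypothetical high-burden health zone reporting 500 cases per year.
screening_only = years_to_elimination(500, 0.10)       # slow decline
with_vector_control = years_to_elimination(500, 0.65)  # tsetse targets added

print(screening_only, with_vector_control)  # 59 6
```

Under these assumed rates, adding vector control collapses the timeline from decades to about six years, which is the qualitative contrast the study reports.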
Gambian sleeping sickness, or Gambian human African trypanosomiasis, is caused by a parasite called Trypanosoma brucei gambiense, carried by tsetse flies in Central and West Africa. Without treatment, the disease usually results in death.
In recent years, programmes have performed intense active and passive screening to decrease disease incidence. A few areas have also combined these medical interventions with vector control. But some high prevalence regions of DRC have not achieved the reductions in disease seen in other parts of Africa. Other strategies for elimination will also include reinforced passive screening, new diagnostic tools and improved drugs.
In 2012, the World Health Organization set two public health goals for the control of Gambian sleeping sickness, a parasitic disease spread by the tsetse fly. The first is to eliminate the disease as a public health problem, with fewer than 2000 cases by 2020; the second is to achieve zero transmission around the globe by 2030.
Kat Rock comments:
"We found that vector control has great potential to reduce transmission and, even if it is less effective at reducing tsetse numbers than in other regions, the full elimination goal could still be achieved by 2030.
"We recommend that control programmes use a combined medical and vector control strategy to help combat sleeping sickness."

Thursday, December 22, 2016

Global climate target could net additional six million tons of fish annually


If countries abide by the Paris Agreement global warming target of 1.5 degrees Celsius, potential fish catches could increase by six million metric tons per year, according to a new study.

The researchers also found that some oceans are more sensitive to changes in temperature and will have substantially larger gains from achieving the Paris Agreement.
"The benefits for vulnerable tropical areas are a strong reason why 1.5 C is an important target to meet," said lead author William Cheung, director of science at the Nippon Foundation-Nereus Program and associate professor at UBC's Institute for the Oceans and Fisheries.
"Countries in these sensitive regions are highly dependent on fisheries for food and livelihood, but all countries will be impacted as the seafood supply chain is now highly globalized. Everyone would benefit from meeting the Paris Agreement."
The authors compared the Paris Agreement 1.5 C warming scenario to the currently pledged 3.5 C by using computer models to simulate changes in global fisheries and quantify losses or gains. They found that for every degree Celsius decrease in global warming, potential fish catches could increase by more than three million metric tons per year. Previous UBC research shows that today's global fish catch is roughly 109 million metric tons.
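The headline figure is internally consistent and easy to check: roughly three million metric tons of potential catch per degree of warming avoided, applied to the two-degree gap between the scenarios:

```python
# Back-of-the-envelope check of the reported figures: catches rise by
# roughly 3 million metric tonnes per degree Celsius of warming avoided.
GAIN_PER_DEGREE_MT = 3.0   # million metric tonnes per deg C (reported)
CURRENT_CATCH_MT = 109.0   # million metric tonnes per year (reported)

degrees_avoided = 3.5 - 1.5  # currently pledged warming vs. Paris target
extra_catch = degrees_avoided * GAIN_PER_DEGREE_MT

print(extra_catch)                            # 6.0 million metric tonnes
print(100 * extra_catch / CURRENT_CATCH_MT)   # ~5.5% of today's catch
```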
"Changes in ocean conditions that affect fish stocks, such as temperature and oxygen concentration, are strongly related to atmospheric warming and carbon emissions," said author Thomas Frölicher, principal investigator at the Nippon Foundation-Nereus Program and senior scientist at ETH Zürich. "For every metric ton of carbon dioxide emitted into the atmosphere, the maximum catch potential decreases by a significant amount."
Climate change is expected to force fish to migrate towards cooler waters. The amount and species of fish caught in different parts of the world will impact local fishers and make fisheries management more difficult.
The findings suggest that the Indo-Pacific area would see a 40 per cent increase in fisheries catches at 1.5 C warming versus 3.5 C. Meanwhile the Arctic region would have a greater influx of fish under the 3.5 C scenario but would also lose more sea ice and face pressure to expand fisheries.
The authors hope these results will provide further incentives for countries and the private sector to substantially increase their commitments and actions to reduce greenhouse gas emissions.
"If one of the largest carbon dioxide emitting countries gets out of the Paris Agreement, the efforts of the others will be clearly reduced," says author Gabriel Reygondeau, Nippon Foundation-Nereus Program senior fellow at UBC. "It's not a question of how much we can benefit from the Paris Agreement, but how much we don't want to lose."

People who care for others live longer


Older people who help and support others live longer, a new study has concluded. The findings show that this kind of caregiving can have a positive effect on the mortality of the carers.

Older people who help and support others live longer. These are the findings of a study published in the journal Evolution and Human Behavior, conducted by researchers from the University of Basel, Edith Cowan University, the University of Western Australia, the Humboldt University of Berlin, and the Max Planck Institute for Human Development in Berlin.
Older people who help and support others are also doing themselves a favor. An international research team has found that grandparents who care for their grandchildren on average live longer than grandparents who do not. The researchers conducted survival analyses of over 500 people aged between 70 and 103 years, drawing on data from the Berlin Aging Study collected between 1990 and 2009.
In contrast to most previous studies on the topic, the researchers deliberately did not include grandparents who were primary or custodial caregivers. Instead, they compared grandparents who provided occasional childcare with grandparents who did not, as well as with older adults who did not have children or grandchildren but who provided care for others in their social network.
Emotional support
The results of their analyses show that this kind of caregiving can have a positive effect on the mortality of the carers. Half of the grandparents who took care of their grandchildren were still alive about ten years after the first interview in 1990. The same applied to participants who did not have grandchildren, but who supported their children -- for example, by helping with housework. In contrast, about half of those who did not help others died within five years.
The researchers were also able to show that this positive effect of caregiving on mortality was not limited to help and caregiving within the family. The data analysis showed that childless older adults who provided others with emotional support, for example, also benefited. Half of these helpers lived for another seven years, whereas non-helpers on average lived for only another four years.
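As an illustration of the kind of comparison behind these figures (the study itself applied formal survival analyses to Berlin Aging Study data), here is a toy median-survival calculation on made-up numbers chosen to echo the reported ten-versus-five-year contrast:

```python
# Illustrative comparison of survival in two synthetic groups. The data are
# invented for illustration only; the study used survival analyses of over
# 500 Berlin Aging Study participants.

def median_survival(years_survived):
    """Median of observed survival times (no censoring, for simplicity)."""
    s = sorted(years_survived)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

caregivers  = [6, 8, 9, 10, 11, 12, 14]  # synthetic years survived
non_helpers = [2, 3, 4, 5, 6, 7, 9]

print(median_survival(caregivers), median_survival(non_helpers))  # 10 5
```

Real analyses of this kind must also handle censoring (participants still alive at the end of the study), which is why methods like Kaplan-Meier estimation are used rather than a plain median.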
Overly intense involvement causes stress
"But helping shouldn't be misunderstood as a panacea for a longer life," says Ralph Hertwig, Director of the Center for Adaptive Rationality at the Max Planck Institute for Human Development. "A moderate level of caregiving involvement does seem to have positive effects on health. But previous studies have shown that more intense involvement causes stress, which has negative effects on physical and mental health," says Hertwig. As it is not customary for grandparents in Germany and Switzerland to take custodial care of their grandchildren, primary and custodial caregivers were not included in the analyses.
The researchers think that prosocial behavior was originally rooted in the family. "It seems plausible that the development of parents' and grandparents' prosocial behavior toward their kin left its imprint on the human body in terms of a neural and hormonal system that subsequently laid the foundation for the evolution of cooperation and altruistic behavior towards non-kin," says first author Sonja Hilbrand, doctoral student in the Department of Psychology at the University of Basel.

Scientists build bacteria-powered battery on single sheet of paper


Researchers have created a bacteria-powered battery on a single sheet of paper that can power disposable electronics. The manufacturing technique reduces fabrication time and cost, and the design could revolutionize the use of bio-batteries as a power source in remote, dangerous and resource-limited areas.

Instead of ordering batteries by the pack, we might get them by the ream in the future. Researchers at Binghamton University, State University of New York have created a bacteria-powered battery on a single sheet of paper that can power disposable electronics. The manufacturing technique reduces fabrication time and cost, and the design could revolutionize the use of bio-batteries as a power source in remote, dangerous and resource-limited areas.
"Papertronics have recently emerged as a simple and low-cost way to power disposable point-of-care diagnostic sensors," said Assistant Professor Seokheun "Sean" Choi, who is in the Electrical and Computer Engineering Department within the Thomas J. Watson School of Engineering and Applied Science. He is also the director of the Bioelectronics and Microsystems Lab at Binghamton.
"Stand-alone and self-sustained, paper-based, point-of-care devices are essential to providing effective and life-saving treatments in resource-limited settings," said Choi.
On one half of a piece of chromatography paper, Choi and PhD candidate Yang Gao, a co-author of the paper, placed a ribbon of silver nitrate underneath a thin layer of wax to create a cathode. The pair then made a reservoir out of a conductive polymer on the other half of the paper, which acted as the anode. Once the paper is properly folded and a few drops of bacteria-filled liquid are added, the microbes' cellular respiration powers the battery.
"The device requires layers to include components, such as the anode, cathode and PEM (proton exchange membrane)," said Choi. "[The final battery] demands manual assembly, and there are potential issues such as misalignment of paper layers and vertical discontinuity between layers, which ultimately decrease power generation."
Different folding and stacking methods can significantly improve power and current outputs. Scientists were able to generate 31.51 microwatts at 125.53 microamps with six batteries in three parallel series and 44.85 microwatts at 105.89 microamps in a 6x6 configuration.
It would take millions of paper batteries to power a common 40-watt light bulb, but on the battlefield or in a disaster situation, usability and portability are paramount. Plus, there is enough power to run biosensors that monitor glucose levels in diabetes patients, detect pathogens in the body or perform other life-saving functions.
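The reported figures can be sanity-checked with the basic relation P = V x I; the voltages below are implied by the article's numbers, and the bulb count follows from dividing 40 W by a single configuration's output:

```python
# Sanity-check arithmetic on the reported battery figures.
# Power = voltage x current, so each configuration's operating voltage is
# implied by the published microwatt/microamp pairs.

def volts(power_w, current_a):
    return power_w / current_a

v_series_parallel = volts(31.51e-6, 125.53e-6)  # six cells, three parallel series
v_6x6             = volts(44.85e-6, 105.89e-6)  # 6x6 configuration

bulbs_needed = 40.0 / 44.85e-6  # 6x6 units to match a 40 W bulb

print(round(v_series_parallel, 3), round(v_6x6, 3), int(bulbs_needed))
```

The implied voltages are about 0.25 V and 0.42 V, and matching a 40 W bulb would take on the order of a million of the 6x6 units -- tiny outputs individually, but ample for the low-power biosensors the article describes.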
"Among many flexible and integrative paper-based batteries with a large upside, paper-based microbial fuel cell technology is arguably the most underdeveloped," said Choi. "We are excited about this because microorganisms can harvest electrical power from any type of biodegradable source, like wastewater, that is readily available. I believe this type of paper biobattery can be a future power source for papertronics."
The innovation is the latest step in paper battery development by Choi. His team developed its first paper prototype in 2015, which was a foldable battery that looked much like a matchbook. Earlier this year they unveiled a design that was inspired by a ninja throwing star.

Wednesday, December 21, 2016

Impact of climate change on microbial biodiversity


The scientists discovered that climate change affects biodiversity most strongly in the most natural environments, as well as in the most nutrient-enriched ones. This means that these two extremes are the most susceptible to future changes in temperature.

The results have just been published in the journal Nature Communications.
We still know fairly little about the specific impacts of climate change and human activity, such as nutrient enrichment of waterways, on broad geographical scales. Researchers from the Department of Geosciences and Geography at the University of Helsinki, the Finnish Environment Institute, and the Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences have studied hundreds of microcosms in mountainous regions, exploiting the natural temperature gradients of the study areas while modifying nutrient-enrichment levels in field tests.
The results indicate that the bacteria in high-elevation tropical areas are similar to those in, for example, arctic areas. Changes in temperature and aquatic nutrient enrichment cause significant alterations in the microcosms, and as enrichment increases, biodiversity declines, says Associate Professor Janne Soininen.
Species adapted to austere conditions in danger
Experiments in mountainous regions indicated that differentiating between the effects of temperature variations and aquatic nutrient enrichment can help us understand the possible effects of climate change in different environments. The typically austere, i.e. nutrient-poor, waters in the north, for example, are extremely susceptible to temperature variations, and as the climate warms, species adapted to the cold will decline. The one piece of good news is that biodiversity may improve at first as the climate warms, as species that thrive in warmer areas increase, before declining again once temperatures continue to rise.
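That rise-then-fall pattern is a unimodal (hump-shaped) response, which can be sketched as a simple curve; the coefficients below are illustrative only, not values from the study:

```python
# Toy hump-shaped (unimodal) diversity response to warming: richness rises
# at first as warm-adapted species arrive, then falls as cold-adapted
# species are lost faster than they are replaced. Coefficients are
# illustrative, not fitted to the study's data.

def richness(warming_deg_c):
    # quadratic with an assumed peak at +2 deg C of warming
    return 100 + 20 * warming_deg_c - 5 * warming_deg_c ** 2

print([richness(t) for t in range(6)])  # [100, 115, 120, 115, 100, 75]
```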
Another significant finding in this research was that, like plants and animals, different species of bacteria clearly live at different levels of elevation, and the bacteria in high mountain areas in the tropics are similar to the bacteria in arctic areas, due to the similar cold climate.

Music in the brain: The first imaging genetic study linking dopaminergic genes to music


Sounds, such as music and noise, are capable of reliably affecting individuals' moods and emotions, possibly by regulating brain dopamine, a neurotransmitter strongly involved in emotional behavior and mood regulation.

However, the relationship of sound environments with mood and emotions is highly variable across individuals. A putative source of variability is genetic background.
In this regard, a new imaging genetics study directed by Professor Elvira Brattico from Aarhus University and conducted in two Italian hospitals in collaboration with the University of Helsinki (Finland) has provided the first evidence that the effects of music and noise on affective behavior and brain physiology are associated with genetically determined dopamine functionality.
In particular, this study, published in the journal Neuroscience, revealed that a functional variation in the dopamine D2 receptor gene (DRD2 rs1076560) modulates the impact of music, as opposed to noise, on mood states and emotion-related prefrontal and striatal brain activity, evidencing a differential susceptibility to the affect-modulatory effects of music and noise between the GG and GT genotypes.
In more detail, the results showed mood improvement after music exposure in GG subjects and mood deterioration after noise exposure in GT subjects. Moreover, the music environment, as opposed to noise, decreased the striatal activity of GT subjects as well as the prefrontal activity of GG subjects while they processed emotional faces.
These results are novel in identifying a biological source of variability in the impact of sound environments on emotional responses. The first author of the study, Tiziana Quarto, a Ph.D. student at the University of Helsinki under the supervision of Prof. Brattico, further comments:
"Our approach allowed the observation of the link between genes and phenotypes via a true biological path that goes from functional genetic variations (for which the effects on molecular function are known) to brain physiology subtending behavior. The use of this approach is especially important when the investigated behavior is complex and very variable across subjects, because this means that many biological factors are involved."
"This study represents the first use of the imaging genetics approach in the field of music and sounds in general. We are really excited about our results because they suggest that even a non-pharmacological intervention such as music might regulate mood and emotional responses at both the behavioral and neuronal level," says Professor Elvira Brattico.
"More importantly, these findings encourage the search for personalized music-based interventions for the treatment of brain disorders associated with aberrant dopaminergic neurotransmission as well as abnormal mood and emotion-related brain activity."

A fertilizer dearth foiled animal evolution for eons?


For three billion years or more, the evolution of the first animal life on Earth was ready to happen, practically waiting in the wings. But the breathable oxygen it required wasn't there, and a lack of simple nutrients may have been to blame.

Then came a fierce planetary metamorphosis. Roughly 800 million years ago, in the late Proterozoic Eon, phosphorus, a chemical element essential to all life, began to accumulate in shallow ocean zones near coastlines widely considered to be the birthplace of animals and other complex organisms, according to a new study by geoscientists from the Georgia Institute of Technology and Yale University.
Along with phosphorus accumulation came a global chemical chain reaction, which included other nutrients, that powered organisms to pump oxygen into the atmosphere and oceans. Shortly after that transition, waves of climate extremes swept the globe, freezing it over twice for tens of millions of years each time, a highly regarded theory holds. The elevated availability of nutrients and bolstered oxygen also likely fueled evolution's greatest lunge forward.
After billions of years, during which life consisted almost entirely of single-celled organisms, animals evolved. At first, they were extremely simple, resembling today's sponges or jellyfish, but Earth was on its way from being, for eons, a planet less than hospitable to complex life to becoming one bursting with it.
Earth's true genesis

In the last few hundred million years, biodiversity has blossomed, leading to dense jungles and grasslands echoing with animal calls, and waters writhing with every shape of fin and color of scale. And almost every stage of development has left its mark on the fossil record.
The researchers are careful not to imply that phosphorus necessarily caused the chain reaction, but in sedimentary rock taken from coastal areas, the nutrient has marked the spot where that burst of life and climate change took off. "The timing is definitely conspicuous," said Chris Reinhard, an assistant professor in Georgia Tech's School of Earth and Atmospheric Sciences.
Reinhard and Noah Planavsky, a geochemist from Yale University, who headed up the research together, have mined records of sedimentary rock that formed in ancient coastal zones, going down layer by layer to 3.5 billion years ago, to compute how the cycle of the essential fertilizer phosphorus evolved and how it appeared to play a big part in a veritable genesis.
They noticed a remarkable congruency as they moved upward through the layers of shale into the time period where animal life began, in the late Proterozoic Eon.
"The most basic change was from very limited phosphorus availability to much higher phosphorus availability in surface waters of the ocean," Reinhard said. "And the transition seemed to occur right around the time that there were very large changes in ocean-atmosphere oxygen levels and just before the emergence of animals."
Phosphorus at the beach
Reinhard and Planavsky, together with an international team, have proposed that a scavenging of nutrients in an anoxic (nearly O2-free) world stunted photosynthetic organisms that otherwise had been poised for at least two billion years to make stockpiles of oxygen. Then that balanced system was upset and oceanic phosphorus made its way to coastal waters.
The scientists published their findings in the journal Nature. Their research was funded by the National Science Foundation, the NASA Astrobiology Institute, the Sloan Foundation and the Japan Society for the Promotion of Science.
The work provides a new view into what factors allowed life to reshape Earth's atmosphere. It helps lay a foundation that scientists can apply to make predictions about what would allow life to alter exoplanets' atmospheres, and may inspire deeper studies, here on Earth, of how oceanic-atmospheric chemistry drives climate instability and influences the rise and fall of life through the ages.
Cyanobacteria, the mother of O2
Complex living things, including animals, usually have an immense metabolism and require ample O2 to drive it. The evolution of animals is unthinkable without it.
The path to understanding how a nutrient dearth would starve out breathable oxygen production leads back to a very special kind of bacteria called cyanobacteria, the mother of oxygen on Earth.
"The only reason we have a well-oxygenated planet we can live on is because of oxygenic photosynthesis," Planavsky said. "O2 is the waste product of photosynthesizing cells, like cyanobacteria, combining CO2 and water to build sugars."
And oxygenic photosynthesis is an evolutionary singularity, meaning it evolved only once in Earth's history -- in cyanobacteria.
Some other biological phenomena evolved repeatedly in dozens or hundreds of unrelated incidents across the ages, such as the transition from single-celled organisms to rudimentary multicellular organisms. But scientists are confident that oxygenic photosynthesis evolved only this one time in Earth's history, only in cyanobacteria, and all plants and other beings on Earth that photosynthesize coopted the development.
The iron anchor
Cyanobacteria are credited with filling Earth's atmosphere with O2, and they've been around for 2.5 billion years or more.
That raises the question: What took so long? Basic nutrients that fed the bacteria weren't readily available, the scientists hypothesize. The phosphorus, which Planavsky and Reinhard specifically tracked, was in the ocean for billions of years, too, but it was tied up in the wrong places.
For eons, the mineral iron, which once saturated oceans, likely bonded with phosphorus, and sank it down to dark ocean depths, far away from those shallows -- also called continental margins -- where cyanobacteria would have needed it to thrive and make oxygen. Even today, iron is used to treat waters polluted with fertilizer to remove phosphorus by sinking it as deep sediment.
The researchers also used a geochemical model to show how a global system with high iron concentration and low phosphorus availability combined with low nitrogen availability in ocean shallows could perpetuate itself in a low-oxygen world.
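A minimal sketch of that feedback, with purely illustrative parameters rather than the model's actual ones: surface phosphorus settles where input balances iron scavenging plus export, so a high-iron ocean holds bioavailable phosphorus low.

```python
# Minimal sketch of the proposed feedback: dissolved iron scavenges
# phosphorus out of surface waters, so high-iron oceans sustain low
# bioavailable P (and hence low oxygen production). All parameters are
# illustrative stand-ins, not values from the published model.

def surface_phosphorus(p_input, fe_conc, k_scavenge, k_export=0.1):
    """Steady-state P where input balances Fe scavenging plus export:
       p_input = (k_scavenge * fe_conc + k_export) * P"""
    return p_input / (k_scavenge * fe_conc + k_export)

high_iron_ocean = surface_phosphorus(p_input=1.0, fe_conc=10.0, k_scavenge=0.5)
low_iron_ocean  = surface_phosphorus(p_input=1.0, fe_conc=0.1,  k_scavenge=0.5)

print(high_iron_ocean, low_iron_ocean)  # ~0.2 vs ~6.7 (arbitrary units)
```

In this toy version, dropping the iron concentration raises steady-state surface phosphorus more than thirtyfold -- the kind of self-perpetuating low-nutrient, low-oxygen state the researchers describe, and its release.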
"It looks to have been such a stable planetary system," Reinhard said. "But it's obviously not the planet we live on now, so the question is, how did we transition from this low-oxygen state to where we are now?"
What ultimately caused that change is a question for future research.
Phosphorus starting pistol
But something did change about 800 million years ago, and cyanobacteria and other minute organisms in continental margin ecosystems got more phosphorus, the backbone of DNA and RNA, and a main actor in cell metabolism. The bacteria became more active, reproduced more quickly, ate lots more phosphorus and made loads more O2.
"Phosphorus is not only essential for life," Planavsky said. "What's implicit in all this is: It can control the amount of life on our planet."
When the newly multiplied bacteria died, they fell to the floor of those ocean shallows, stacking up layer by layer to decay and enrich the mud with phosphorus. The mud eventually compressed to stone.
"As the biomass increased in phosphorus content, more of it landed in layers of sedimentary rock," Reinhard said. "To scientists, that shale is the pages of the sea floor's history book."
Scientists have thumbed through them for decades, compiling data. Planavsky and Reinhard analyzed some 15,000 rock records for their study.
"The first compilation we had of this was only 600 samples," Planavsky said. Reinhard added, "But you could already see it then. The phosphorus jolt was as clear as day. And as the database grew in size, the phenomenon became more entrenched."
That first signal of phosphorus in Earth's coastal shallows pops up in the shale record like a shot from a starting pistol in the race for abundant life.

Sunday, December 18, 2016

Research team sets new mark for 'deep learning'

Neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine have taken inspiration from the human brain in creating a new "deep learning" method that enables computers to learn about the visual world largely on their own, much as human babies do.

In tests, the group's "deep rendering mixture model" largely taught itself how to distinguish handwritten digits using a standard dataset of 10,000 digits written by federal employees and high school students. In results presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, the researchers described how they trained their algorithm by giving it just 10 correct examples of each handwritten digit between zero and nine and then presenting it with several thousand more examples that it used to further teach itself. In tests, the algorithm was more accurate at correctly distinguishing handwritten digits than almost all previous algorithms that were trained with thousands of correct examples of each digit.
"In deep-learning parlance, our system uses a method known as semisupervised learning," said lead researcher Ankit Patel, an assistant professor with joint appointments in neuroscience at Baylor and electrical and computer engineering at Rice. "The most successful efforts in this area have used a different technique called supervised learning, where the machine is trained with thousands of examples: This is a one. This is a two.
"Humans don't learn that way," Patel said. "When babies learn to see during their first year, they get very little input about what things are. Parents may label a few things: 'Bottle. Chair. Momma.' But the baby can't even understand spoken words at that point. It's learning mostly unsupervised via some interaction with the world."
Patel said he and graduate student Tan Nguyen, a co-author on the new study, set out to design a semisupervised learning system for visual data that didn't require much "hand-holding" in the form of training examples. For instance, neural networks that use supervised learning would typically be given hundreds or even thousands of training examples of handwritten digits before they would be tested on the database of 10,000 handwritten digits in the Modified National Institute of Standards and Technology (MNIST) database.
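The self-teaching loop at the heart of semisupervised learning can be sketched with a toy stand-in: fit on the few labeled examples, pseudo-label the unlabeled pool, then refit. The nearest-centroid classifier and one-dimensional data below are illustrative simplifications, not the deep rendering mixture model itself.

```python
# Sketch of semisupervised "self-training": fit on a few labeled examples,
# assign pseudo-labels to the unlabeled pool, and refit. A toy
# nearest-centroid classifier stands in for the actual deep model.

def centroid(points):
    return sum(points) / len(points)

def self_train(labeled, unlabeled, rounds=3):
    """labeled: {class_name: [values]}; unlabeled: [values]."""
    centroids = {c: centroid(v) for c, v in labeled.items()}
    for _ in range(rounds):
        # pseudo-label: assign each unlabeled point to its nearest centroid
        buckets = {c: list(v) for c, v in labeled.items()}
        for x in unlabeled:
            best = min(centroids, key=lambda c: abs(x - centroids[c]))
            buckets[best].append(x)
        # refit on true labels plus pseudo-labels
        centroids = {c: centroid(v) for c, v in buckets.items()}
    return centroids

# Two "digit classes" on a line: a few labels, several unlabeled points.
labeled = {"zero": [0.9, 1.1], "one": [8.9, 9.1]}
unlabeled = [0.5, 1.5, 2.0, 8.0, 9.5, 10.0]
print(self_train(labeled, unlabeled))
```

The class centroids end up shaped mostly by the unlabeled data, which is the point: a handful of labels anchors the classes, and the unlabeled examples do the rest of the teaching.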
The semisupervised Rice-Baylor algorithm is a "convolutional neural network," a piece of software made up of layers of artificial neurons whose design was inspired by biological neurons. These artificial neurons, or processing units, are organized in layers, and the first layer scans an image and does simple tasks like searching for edges and color changes. The second layer examines the output from the first layer and searches for more complex patterns. Mathematically, this nested method of looking for patterns within patterns within patterns is referred to as a nonlinear process.
"It's essentially a very simple visual cortex," Patel said of the convolutional neural net. "You give it an image, and each layer processes the image a little bit more and understands it in a deeper way, and by the last layer, you've got a really deep and abstract understanding of the image. Every self-driving car right now has convolutional neural nets in it because they are currently the best for vision."
Like human brains, neural networks start out as blank slates and become fully formed as they interact with the world. For example, each processing unit in a convolutional net starts out the same and becomes specialized over time as it is exposed to visual stimuli.
"Edges are very important," Nguyen said. "Many of the lower layer neurons tend to become edge detectors. They're looking for patterns that are both very common and very important for visual interpretation, and each one trains itself to look for a specific pattern, like a 45-degree edge or a 30-degree red-to-blue transition.
"When they detect their particular pattern, they become excited and pass that on to the next layer up, which looks for patterns in their patterns, and so on," he said. "The number of times you do a nonlinear transformation is essentially the depth of the network, and depth governs power. The deeper a network is, the more stuff it's able to disentangle. At the deeper layers, units are looking for very abstract things like eyeballs or vertical grating patterns or a school bus."
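A single hand-written "edge detector" unit of the kind Nguyen describes can be sketched as a small convolution. Real networks learn such kernels from data rather than being given them; this hard-coded vertical-edge kernel only illustrates the operation:

```python
# One convolutional "edge detector" unit: slide a small kernel over an
# image and sum the elementwise products at each position. Lower-layer
# neurons in trained convnets tend to learn kernels like this one.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

image = [[0, 0, 1, 1],   # dark left half, bright right half
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
vertical_edge = [[-1, 1],  # responds where brightness jumps left-to-right
                 [-1, 1]]
print(conv2d(image, vertical_edge))  # [[0, 2, 0], [0, 2, 0]]
```

The output is zero everywhere except at the dark-to-bright boundary, where the unit "becomes excited" -- exactly the behavior passed up to the next layer in the stacked architecture described above.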
Nguyen began working with Patel in January as the latter began his tenure-track academic career at Rice and Baylor. Patel had already spent more than a decade studying and applying machine learning in jobs ranging from high-volume commodities trading to strategic missile defense, and he'd just wrapped up a four-year postdoctoral stint in the lab of Rice's Richard Baraniuk, another co-author on the new study. In late 2015, Baraniuk, Patel and Nguyen published the first theoretical framework that could both derive the exact structure of convolutional neural networks and provide principled solutions to alleviate some of their limitations.
Baraniuk said a solid theoretical understanding is vital for designing convolutional nets that go beyond today's state-of-the-art.
"Understanding video images is a great example," Baraniuk said. "If I am looking at a video, frame by frame by frame, and I want to understand all the objects and how they're moving and so on, that is a huge challenge. Imagine how long it would take to label every object in every frame of a video. No one has time for that. And in order for a machine to understand what it's seeing in a video, it has to understand what objects are, the concept of three-dimensional space and a whole bunch of other really complicated stuff. We humans learn those things on our own and take them for granted, but they are totally missing in today's artificial neural networks."
Patel said the theory of artificial neural networks, which was refined in the NIPS paper, could ultimately help neuroscientists better understand the workings of the human brain.
"There seem to be some similarities about how the visual cortex represents the world and how convolutional nets represent the world, but they also differ greatly," Patel said. "What the brain is doing may be related, but it's still very different. And the key thing we know about the brain is that it mostly learns unsupervised.
"What I and my neuroscientist colleagues are trying to figure out is, What is the semisupervised learning algorithm that's being implemented by the neural circuits in the visual cortex? and How is that related to our theory of deep learning?" he said. "Can we use our theory to help elucidate what the brain is doing? Because the way the brain is doing it is far superior to any neural network that we've designed."

Thursday, December 15, 2016

Underwater volcano's eruption captured in exquisite detail by seafloor observatory


Seismic data from the 2015 eruption of Axial Volcano, an underwater volcano about 300 miles off the Oregon coast, has provided the clearest look at the inner workings of a volcano where two ocean plates are moving apart.

The cracking, bulging and shaking from the eruption of a mile-high volcano where two tectonic plates separate has been captured in more detail than ever before. A University of Washington study published this week shows how the volcano behaved during its spring 2015 eruption, revealing new clues about the behavior of volcanoes where two ocean plates are moving apart.

"The new network allowed us to see in incredible detail where the faults are, and which were active during the eruption," said lead author William Wilcock, a UW professor of oceanography. The new paper in Science is one of three studies published together that provide the first formal analyses of the seismic vibrations, seafloor movements and rock created during an April 2015 eruption off the Oregon coast. "We have a new understanding of the behavior of caldera dynamics that can be applied to other volcanoes all over the world."
The studies are based on data collected by the Cabled Array, a National Science Foundation-funded project that brings electrical power and internet to the seafloor. The observatory, completed just months before the eruption, provides new tools for studying one of the key test sites for understanding Earth's volcanism.
"Axial volcano has had at least three eruptions, that we know of, over the past 20 years," said Rick Murray, director of the NSF's Division of Ocean Sciences, which also funded the research. "Instruments used by Ocean Observatories Initiative scientists are giving us new opportunities to understand the inner workings of this volcano, and of the mechanisms that trigger volcanic eruptions in many environments.
"The information will help us predict the behavior of active volcanoes around the globe," Murray said.
It's a little-known fact that most of Earth's volcanism takes place underwater. Axial Volcano rises 0.7 miles above the seafloor, some 300 miles off the Pacific Northwest coast, and its peak lies about 0.85 miles below the ocean's surface. Just as on land, we learn about ocean volcanoes by studying vibrations to see what is happening deep inside as plates separate and magma rushes up to form new crust.
The submarine location has some advantages. Typical ocean crust is just 4 miles (6 km) thick, roughly five times thinner than the crust that lies below land-based volcanoes. The magma chamber is not buried as deeply, and the hard rock of ocean crust generates crisper seismic images.
"One of the advantages we have with seafloor volcanoes is we really know very well where the magma chamber is," Wilcock said.
"The challenge in the oceans has always been to get good observations of the eruption itself."
All that changed when the Cabled Array was installed and instruments were turned on. Analysis of vibrations leading up to and during the event shows an increasing number of small earthquakes, up to thousands a day, in the months before the eruption. The vibrations also show strong tidal triggering, with six times as many earthquakes during low tides as high tides as the volcano approached its eruption.
Once lava emerged, movement began along a newly formed crack, or dike, that sloped downward and outward inside the 2-mile-wide by 5-mile-long caldera.
"There has been a longstanding debate among volcanologists about the orientation of ring faults beneath calderas: Do they slope toward or away from the center of the caldera?" Wilcock said. "We were able to detect small earthquakes and locate them very accurately, and see that they were active while the volcano was inflating."
The two previous eruptions sent lava south of the volcano's rectangular crater. This eruption produced lava to the north. The seismic analysis shows that before the eruption, the movement was on the outward-dipping ring fault. Then a new crack or dike formed, initially along the same outward-dipping fault below the eastern wall of the caldera. The outward-sloping fault has been predicted by so-called "sandbox models," but these are the most detailed observations to confirm that they happen in nature. That crack moved southward along this plane until it hit the northern limit of the previous 2011 eruption.
"In areas that have recently erupted, the stress has been relieved," Wilcock said. "So the crack stopped going south and then it started going north." Seismic evidence shows the crack went north along the eastern edge of the caldera, then lava pierced the crust's surface and erupted inside and then outside the caldera's northeastern edge.
The dike, or crack, then stepped to the west and followed a line north of the caldera to about 9 miles (15 km) north of the volcano, with thousands of small explosions on the way.
"At the northern end there were two big eruptions and those lasted nearly a month, based on when the explosions were happening and when the magma chamber was deflating," Wilcock said.
The activity continued throughout May, then lava stopped flowing and the seismic vibrations shut off. Within a month, the earthquakes had dropped to just 20 per day.
The volcano has not yet started to produce more earthquakes as it gradually rebuilds toward another eruption, which typically happens every decade or so. The observatory centered on Axial Volcano is designed to operate for at least 25 years. "The Cabled Array offers new opportunities to study volcanism and really learn how these systems work," Wilcock said. "This is just the beginning."

Wednesday, December 14, 2016

New study doubles the estimate of bird species in the world


New research led by the American Museum of Natural History suggests that there are about 18,000 bird species in the world -- nearly twice as many as previously thought. The work focuses on "hidden" avian diversity -- birds that look similar to one another, or were thought to interbreed, but are actually different species. Recently published in the journal PLOS ONE, the study has serious implications for conservation practices.

"We are proposing a major change to how we count diversity," said Joel Cracraft, an author of the study and a curator in the American Museum of Natural History's Department of Ornithology. "This new number says that we haven't been counting and conserving species in the ways we want."
Birds are traditionally thought of as a well-studied group, with more than 95 percent of their global species diversity estimated to have been described. Most checklists used by bird watchers as well as by scientists say that there are roughly 9,000 to 10,000 species of birds. But those numbers are based on what's known as the "biological species concept," which defines species in terms of what animals can breed together.
"It's really an outdated point of view, and it's a concept that is hardly used in taxonomy outside of birds," said lead author George Barrowclough, an associate curator in the Museum's Department of Ornithology.
For the new work, Cracraft, Barrowclough, and their colleagues at the University of Nebraska, Lincoln, and the University of Washington examined a random sample of 200 bird species through the lens of morphology -- the study of physical characteristics such as plumage pattern and color, which can be used to highlight birds with separate evolutionary histories. This method turned up, on average, nearly two different species for each of the 200 birds studied. This suggests that bird biodiversity is severely underestimated, and is likely closer to 18,000 species worldwide.
The researchers also surveyed existing genetic studies of birds, which revealed that there could be upwards of 20,000 species. But because the birds in this body of work were not selected randomly -- and, in fact, many were likely chosen for study because they were already thought to have interesting genetic variation -- this could be an overestimate. The authors argue that future taxonomy efforts in ornithology should be based on both methods.
"It was not our intent to propose new names for each of the more than 600 new species we identified in the research sample," Cracraft said. "However, our study provides a glimpse of what a future taxonomy should encompass."
Increasing the number of species has implications for preserving biodiversity and other conservation efforts.
"We have decided societally that the target for conservation is the species," said Robert Zink, a co-author of the study and a biologist at the University of Nebraska, Lincoln. "So it follows then that we really need to be clear about what a species is, how many there are, and where they're found."

Teen use of any illicit drug other than marijuana at new low, same true for alcohol


Teenagers' use of drugs, alcohol and tobacco declined significantly in 2016 at rates that are at their lowest since the 1990s, a new national study showed.

But University of Michigan researchers cautioned that while these developments are "trending in the right direction," marijuana use still remains high for 12th-graders.
The results derive from the annual Monitoring the Future study, now in its 42nd year. About 45,000 students in some 380 public and private secondary schools have been surveyed each year in this national study, designed and conducted by research scientists at U-M's Institute for Social Research and funded by the National Institute on Drug Abuse. Students in grades 8, 10 and 12 are surveyed.
Overall, the proportion of secondary school students in the country who used any illicit drug in the prior year fell significantly between 2015 and 2016. The decline in the use of narcotic drugs is of particular importance, the researchers say. This year's improvements were particularly concentrated among 8th- and 10th-graders.
Considerably fewer teens reported using any illicit drug other than marijuana in the prior 12 months -- 5 percent, 10 percent and 14 percent in grades 8, 10 and 12, respectively -- than at any time since 1991. These rates reflect a decline of about one percentage point in each grade in 2016, but a much larger decline over the longer term.
In fact, the overall percentage of teens using any of the illicit drugs other than marijuana has been in a gradual, long-term decline since the last half of the 1990s, when their peak rates reached 13 percent, 18 percent and 21 percent, respectively.
Marijuana, the most widely used of the illicit drugs, dropped sharply in use among 8th-graders in 2016, to 9.4 percent, or about one in every 11 indicating any use in the prior 12 months. Use declined among 10th-graders as well, though not by a statistically significant amount, to 24 percent, or about one in every four 10th-graders.
The annual prevalence of marijuana use (referring to the percentage using any marijuana in the prior 12 months) has been declining gradually among 8th-graders since 2010, and more sharply among 10th-graders since 2013. Among 12th-graders, however, the prevalence of marijuana use is higher (36 percent) and has held steady since 2011. These periods of declining use (or in the case of 12th-graders, stabilization) followed several years of increasing use by each of these age groups.
Daily or near-daily use of marijuana -- defined as use on 20 or more occasions in the previous 30 days -- also declined this year among the younger teens (significantly so among 8th-graders, to 0.7 percent, and to 2.5 percent among 10th-graders). However, there was no change among 12th-graders in daily use, which remains quite high at 6 percent, or roughly one in every 17 12th-graders -- about where it has been since 2010.
Prescription amphetamines and other stimulants used without medical direction have constituted the second-most widely used class of illicit drugs among teens. Their use has fallen considerably, however. In 2016, 3.5 percent, 6.1 percent and 6.7 percent of 8th-, 10th- and 12th-graders, respectively, say they have used any in the prior 12 months -- down from recent peak levels of 9 percent, 12 percent and 11 percent, respectively, reached during the last half of the 1990s.
Prescription narcotic drugs have presented a serious problem for the country in recent years, with increasing numbers of overdose deaths and emergencies resulting from their use. Fortunately, the use of these drugs outside of medical supervision has been in decline, at least among high school seniors -- the only ones for whom narcotics use is reported. In 2004, a high proportion of 12th-graders -- 9.5 percent, or nearly one in 10 -- indicated using a prescription narcotic in the prior 12 months, but today that percentage is down by half to 4.8 percent.
"That's still a lot of young people using these dangerous drugs without medical supervision, but the trending is in the right direction," said Lloyd Johnston, the study's principal investigator. "Fewer are risking overdosing as teenagers, and hopefully more will remain abstainers as they pass into their twenties, thereby reducing the number who become casualties in those high-risk years."
Users of narcotic drugs without medical supervision were asked where they get the drugs they use. About four in every 10 of the past-year users indicated that they got them "from a prescription I had."
"That suggests that physicians and dentists may want to consider reducing the number of doses they routinely prescribe when giving these drugs to their patients, and in particular to teenagers," Johnston said.
Heroin is another narcotic drug of obvious importance. There is no evidence in the study that the use of heroin has risen as the use of prescription narcotics has fallen -- at least not in this population of adolescents still in school, who represent over 90 percent of their respective age groups.
In fact, heroin use among secondary school students also has declined substantially since recent peak levels reached in the late 1990s. Among 8th-graders, the annual prevalence of heroin use declined from 1.6 percent in 1996 to 0.3 percent in 2016. And among 12th-graders, the decline was from 1.5 percent in 2000 to 0.3 percent in 2016.
"So, among secondary school students, at least, there is no evidence of heroin coming to substitute for prescription narcotic drugs -- a dynamic that apparently has occurred in other populations," Johnston said. "Certainly there will be individual cases where that happens, but overall the use of heroin and prescription narcotics both have declined appreciably and largely in parallel among secondary school students."
The ecstasy epidemic, which peaked around 2001, was a substantial one for teens and young adults, Johnston said. Ecstasy is a form of MDMA (methylenedioxy-methamphetamine), as is the much newer form on the scene, "Molly."
"The use of MDMA has generally been declining among teens since about 2010 or 2011, and it continued to decrease significantly in 2016 in all three grades even with the inclusion of Molly in the question in more recent years," Johnston said.
MDMA's annual prevalence now stands at about 1 percent, 2 percent and 3 percent in grades 8, 10 and 12, respectively.
Synthetic marijuana (often sold over the counter as "K-2" or "Spice") continued its rapid decline in use among teens since its use was first measured in 2011. Among 12th-graders, for example, annual prevalence has fallen by more than two-thirds, from 11.4 percent in 2011 to 3.5 percent in 2016. Twelfth-graders have been showing an increased appreciation of the dangers associated with these drugs. It also seems likely that fewer students have access to these synthetic drugs, as many states and communities have outlawed their sale by retail outlets.
Bath salts constitute another class of synthetic drugs sold over the counter. Their annual prevalence has remained quite low -- at 1.3 percent or less in all grades -- since they were first included in the study in 2012. One of the very few statistically significant increases in use of a drug this year was for 8th-graders' use of bath salts (which are synthetic stimulants), but their annual prevalence is still only 0.9 percent with no evidence of a progressive increase.
A number of other illicit drugs have shown declining use, as well. Among them are cocaine, crack, sedatives and inhalants (the declining prevalence rates for these drugs may be seen in the tables and figures associated with this release).
Alcohol
The use of alcohol by adolescents is even more prevalent than the use of marijuana, but it, too, is trending downward in 2016, continuing a longer-term decline. For all three grades, both annual and monthly prevalence of alcohol use are at historic lows over the life of the study. Both measures continued to decline in all three grades in 2016.
Of even greater importance, measures of heavy alcohol use are also down considerably, including self-reports of having been drunk in the previous 30 days and of binge drinking in the prior two weeks (defined as having five or more drinks in a row on at least one occasion).
Binge drinking has fallen by half or more at each grade level since peak rates were reached at the end of the 1990s. Today, the proportions who binge drink are 3 percent, 10 percent and 16 percent in grades 8, 10 and 12, respectively.
"Since 2005, 12th-graders have also been asked about what we call 'extreme binge drinking,' defined as having 10 or more drinks in a row or even 15 or more, on at least one occasion in the prior two weeks," Johnston said. "Fortunately, the prevalence of this particularly dangerous behavior has been declining as well."
In 2016, 4.4 percent of 12th-graders reported drinking at the level of 10 or more drinks in a row, down by about two-thirds from 13 percent in 2006.
Rates of daily drinking among teens have also fallen considerably over the same intervals. Flavored alcoholic beverages and alcoholic beverages containing caffeine have both declined appreciably in use since each was first measured -- again, particularly among the younger teens, where significant declines in annual prevalence continued into 2016.
Tobacco
Declines in cigarette smoking and certain other forms of tobacco use also occurred among teens in 2016, continuing an important and now long-term trend in the use of cigarettes.

Tuesday, December 13, 2016

New robot has a human touch


Most robots achieve grasping and tactile sensing through motorized means, which can be excessively bulky and rigid. Scientists have now devised a way for a soft robot to feel its surroundings internally, in much the same way humans do. Stretchable optical waveguides act as curvature, elongation and force sensors in a soft robotic hand.

A group led by Robert Shepherd, assistant professor of mechanical and aerospace engineering and principal investigator of Organic Robotics Lab, has published a paper describing how stretchable optical waveguides act as curvature, elongation and force sensors in a soft robotic hand.
Doctoral student Huichan Zhao is lead author of "Optoelectronically Innervated Soft Prosthetic Hand via Stretchable Optical Waveguides," which is featured in the debut edition of Science Robotics.
"Most robots today have sensors on the outside of the body that detect things from the surface," Zhao said. "Our sensors are integrated within the body, so they can actually detect forces being transmitted through the thickness of the robot, a lot like we and all organisms do when we feel pain, for example."
Optical waveguides have been in use since the early 1970s for numerous sensing functions, including tactile, position and acoustic. Fabrication was originally a complicated process, but the advent over the last 20 years of soft lithography and 3-D printing has led to development of elastomeric sensors that are easily produced and incorporated into a soft robotic application.
Shepherd's group employed a four-step soft lithography process to produce the core (through which light propagates), and the cladding (outer surface of the waveguide), which also houses the LED (light-emitting diode) and the photodiode.
The more the prosthetic hand deforms, the more light is lost through the core. That variable loss of light, as detected by the photodiode, is what allows the prosthesis to "sense" its surroundings.
"If no light was lost when we bend the prosthesis, we wouldn't get any information about the state of the sensor," Shepherd said. "The amount of loss is dependent on how it's bent."
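Shepherd's loss-versus-bend relationship implies a simple readout: calibrate how much light the photodiode loses at known bends, then invert that curve at run time. Here is a minimal sketch with entirely hypothetical calibration numbers (a real device would be calibrated empirically against its own waveguides):

```python
import numpy as np

# Hypothetical calibration for one finger's waveguide: optical power loss
# (dB) measured at known bending angles (degrees). Loss grows monotonically
# with bend, which is what makes the mapping invertible.
calib_angle_deg = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
calib_loss_db = np.array([0.0, 0.9, 2.1, 3.6, 5.4])

def estimate_bend(photodiode_loss_db):
    """Invert the monotone loss-vs-bend calibration by interpolation."""
    return float(np.interp(photodiode_loss_db, calib_loss_db, calib_angle_deg))
```

Because the mapping is monotone, each photodiode reading corresponds to a single curvature estimate -- the sense in which variable light loss lets the prosthesis "feel" its own shape.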
The group used its optoelectronic prosthesis to perform a variety of tasks, including grasping and probing for both shape and texture. Most notably, the hand was able to scan three tomatoes and determine, by softness, which was the ripest.

Why we walk on our heels instead of our toes: Longer virtual limbs


Walking heel-to-toe gives humans the mechanical advantage of longer 'virtual limbs'

James Webber took up barefoot running 12 years ago. He needed to find a new passion after deciding his planned career in computer-aided drafting wasn't a good fit. Eventually, his shoeless feet led him to the University of Arizona, where he enrolled as a doctoral student in the School of Anthropology.

Webber was interested in studying the mechanics of running, but as the saying goes, one must learn to walk before one can run, and that -- so to speak -- is what Webber has been doing in his research.
His most recent study on walking, published in the Journal of Experimental Biology, specifically explores why humans walk with a heel-to-toe stride, while many other animals -- such as dogs and cats -- get around on the balls of their feet.
It was an especially interesting question from Webber's perspective, because those who do barefoot running, or "natural running," land on the middle or balls of their feet instead of the heels when they run -- a stride that would feel unnatural when walking.
Indeed, we humans are pretty set in our ways with how we walk, but our heel-first style could be considered somewhat curious.
"Humans are very efficient walkers, and a key component of being an efficient walker in all kinds of mammals is having long legs," Webber said. "Cats and dogs are up on the balls of their feet, with their heel elevated up in the air, so they've adapted to have a longer leg, but humans have done something different. We've dropped our heels down on the ground, which physically makes our legs shorter than they could be if we were up on our toes, and this was a conundrum to us (scientists)."
Webber's study, however, offers an explanation for why our heel-strike stride works so well, and it still comes down to limb length: Heel-first walking creates longer "virtual legs," he says.
We Move Like a Human Pendulum
When humans walk, Webber says, they move like an inverted swinging pendulum, with the body essentially pivoting above the point where the foot meets the ground below. As we take a step, the center of pressure slides across the length of the foot, from heel to toe, with the true pivot point for the inverted pendulum occurring midfoot and effectively several centimeters below the ground. This, in essence, extends the length of our "virtual legs" below the ground, making them longer than our true physical legs.
As Webber explains: "Humans land on their heel and push off on their toes. You land at one point, and then you push off from another point eight to 10 inches away from where you started. If you connect those points to make a pivot point, it happens underneath the ground, basically, and you end up with a new kind of limb length that you can understand. Mechanically, it's like we have a much longer leg than you would expect."
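Webber's connect-the-contact-points argument can be turned into a toy geometric model (illustrative only; this is not the study's actual analysis, and the numbers below are made up for the example). Extend the leg's line of action through the heel at landing and through the toe at push-off: the two lines meet at a virtual pivot below the ground, and the distance from that pivot to the hip exceeds the anatomical leg length.

```python
import math

def virtual_leg_extension(contact_span_m, leg_angle_deg):
    """
    Toy inverted-pendulum geometry. At heel strike the leg line passes
    through the heel; at push-off it passes through the toe,
    contact_span_m ahead. Extended below ground, the two lines meet at
    a virtual pivot; the along-leg distance from contact point to pivot
    -- the amount by which the 'virtual leg' exceeds the real one --
    is (d/2) / sin(theta).
    """
    theta = math.radians(leg_angle_deg)
    return (contact_span_m / 2.0) / math.sin(theta)

def pivot_depth(contact_span_m, leg_angle_deg):
    """Depth of the virtual pivot below the ground: (d/2) / tan(theta)."""
    theta = math.radians(leg_angle_deg)
    return (contact_span_m / 2.0) / math.tan(theta)

# Illustrative inputs: a 23 cm heel-to-toe contact span (about 9 inches,
# per the article) and a 25-degree leg sweep from vertical.
extension = virtual_leg_extension(0.23, 25.0)
depth = pivot_depth(0.23, 25.0)
```

Note that as the contact span shrinks to zero -- a toe-first, point-like contact -- the extension vanishes, consistent with toe-first walkers losing the virtual-limb advantage.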
Webber and his adviser and co-author, UA anthropologist David Raichlen, came to this conclusion after monitoring study participants on a treadmill in the University's Evolutionary Biomechanics Lab. They looked at the differences between those asked to walk normally and those asked to walk toe-first. They found that toe-first walkers moved more slowly and had to work 10 percent harder than those walking with a conventional stride, and that conventional walkers' limbs were, in essence, 15 centimeters longer than toe-first walkers'.
"The extra 'virtual limb' length is longer than if we had just had them stand on their toes, so it seems humans have found a novel way of increasing our limb length and becoming more efficient walkers than just standing on our toes," Webber said. "It still all comes down to limb length, but there's more to it than how far our hip is from the ground. Our feet play an important role, and that's often something that's been overlooked."
When the researchers sped up the treadmill to look at the transition from walking to running, they also found that toe-first walkers switched to running at lower speeds than regular walkers, further showing that toe-first walking is less efficient for humans.
Ancient Human Ancestors Had Extra-Long Feet
It's no wonder humans are so set in our ways when it comes to walking heel-first -- we've been doing it for a long time. Scientists know from footprints found preserved in volcanic ash in Laetoli, Tanzania, that ancient hominins practiced heel-to-toe walking as early as 3.6 million years ago.
Our feet have changed over the years, however. Early bipeds (animals that walk on two feet) appear to have had rigid feet that were proportionally much longer than ours today -- about 70 percent of the length of their femur, compared with 54 percent in modern humans. This likely helped them to be very fast and efficient walkers. While modern humans held on to the heel-first style of walking, Webber suggests our toes and feet may have gotten shorter, proportionally, as we became better runners in order to pursue prey.
"When you're running, if you have a really long foot and you need to push off really hard way out at the end of your foot, that adds a lot of torque and bending," Webber said. "So the idea is that as we shifted into running activities, our feet started to shrink because maybe it wasn't as important to be super-fast walkers. Maybe it became important to be really good runners."
