What can we learn about Mercury’s surface during the eclipse?

On the morning of August 21, a pair of jets will take off from NASA’s Johnson Space Center in Houston to chase the shadow of the moon. They will climb to 15 kilometers in the stratosphere and fly in the path of the total solar eclipse over Missouri, Illinois and Tennessee at 750 kilometers per hour.

But some of the instruments the jets carry won’t be looking at the sun, or even at Earth. They’ll be focused on a different celestial body: Mercury. In the handful of minutes that the planes zip along in darkness, the instruments could collect enough data to answer this Mercury mystery: What is the innermost planet’s surface made of?
Because it’s so close to the sun, Mercury is tough to study from Earth. It’s difficult to observe close up, too. Extreme heat and radiation threaten to fry any spacecraft that gets too close. And the sun’s brightness can swamp a hardy spacecraft’s efforts to send signals back to Earth.

NASA’s Messenger spacecraft orbited Mercury from 2011 to 2015 and revealed a battered, scarred landscape made of different material than the rest of the terrestrial planets (SN: 11/19/11, p. 17).
But Messenger only scratched the surface, so to speak. It analyzed the planet’s composition with an instrument called a reflectance spectrometer, which collects light and splits it into its component wavelengths to identify the elements that reflected it.
Messenger took measurements of reflected light from Mercury’s surface at wavelengths shorter than 1 micrometer, which revealed, among other things, that Mercury contains a surprising amount of sulfur and potassium (SN: 7/16/11, p. 12). Those wavelengths come only from the top few micrometers of Mercury. What lies below is still unknown.

To dig a few centimeters deeper into Mercury’s surface, solar physicist Amir Caspi and planetary scientist Constantine Tsang of the Southwest Research Institute in Boulder, Colo., and colleagues will use an infrared camera, specially built by Alabama-based Southern Research, that detects wavelengths between 3 and 5 micrometers.

Copies of the instrument will fly on the two NASA WB-57 research jets, whose altitude and speed will give the observers two advantages: less atmospheric interference and more time in the path of the eclipse. Chasing the moon’s shadow will let the planes stay in totality — the region where the sun’s bright disk is completely blocked by the moon — for a combined 400 seconds (6.67 minutes). That’s nearly three times longer than they would get by staying in one spot.
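The arithmetic behind that factor isn’t spelled out in the article, but a rough sketch shows how chasing the shadow stretches totality. The shadow’s ground speed (roughly 2,400 kilometers per hour over the Midwest) and the fixed-point totality duration (about 140 seconds) used below are assumptions, not figures from the story.

```python
# Back-of-the-envelope sketch of how chasing the moon's shadow stretches totality.
# Assumed values (not from the article): the shadow's ground speed and the
# totality duration at a fixed spot on the ground.
shadow_speed_kmh = 2400.0        # assumed ground speed of the moon's shadow
jet_speed_kmh = 750.0            # jet speed, from the article
fixed_spot_totality_s = 140.0    # assumed totality at one ground location

# Flying along the path slows the shadow's passage over the jet by the ratio
# of the shadow's speed to the shadow-minus-jet relative speed.
stretch = shadow_speed_kmh / (shadow_speed_kmh - jet_speed_kmh)
per_jet_totality_s = fixed_spot_totality_s * stretch
combined_s = 2 * per_jet_totality_s   # two jets fly the path

print(f"Per jet: {per_jet_totality_s:.0f} s, combined: {combined_s:.0f} s")
# With these assumptions: about 204 s per jet and 407 s combined, close to the
# 400 seconds quoted and nearly three times a single spot's totality.
```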
Mercury’s dayside surface is 425° Celsius, and it actually emits light at 4.1 micrometers — right in the middle of the range of Caspi’s instrument. As any given spot on Mercury rotates away from the sun, its temperature drops as low as −179° C. Measuring how quickly the planet loses heat can help researchers figure out what the subsurface material is made of and how densely it’s packed. Loose sand gives up its heat readily, while densely packed rock holds heat in longer.
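That 4.1-micrometer figure is consistent with simple blackbody emission. As a sanity check (the article doesn’t show this calculation), Wien’s displacement law puts the emission peak of a 425° C surface right in the camera’s band:

```python
# Rough check with Wien's displacement law; this calculation is our own
# illustration, not part of the reported observations.
WIEN_CONSTANT_UM_K = 2898.0      # Wien's displacement constant, micrometer-kelvins

dayside_temp_k = 425.0 + 273.15  # Mercury's dayside temperature in kelvins
peak_wavelength_um = WIEN_CONSTANT_UM_K / dayside_temp_k

print(f"Peak thermal emission near {peak_wavelength_um:.1f} micrometers")
# Prints about 4.2 micrometers, squarely inside the camera's 3-5 micrometer range.
```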

“This is something that has never been done before,” Caspi says. “We’re going to try to make the first thermal image heat map of the surface of Mercury.”

Unfortunately for Caspi, only two people can fly on each jet: the pilot and someone to run the instrument. Caspi will remain on the ground in Houston, out of the path of totality. “So I will get to watch the eclipse on TV,” Caspi says.

Eclipses show wrong physics can give right results

Every few years, for a handful of minutes or so, science shines while the sun goes dark.

A total eclipse of the sun is, for those who witness it, something like a religious experience. For those who understand it, it is symbolic of science’s triumph over mythology as a way to understand the heavens.

In ancient Greece, the pioneer philosophers realized that eclipses illustrate how fantastic phenomena do not require phantasmagoric explanations. An eclipse was not magic or illusion; it happened naturally when one celestial body got in the way of another. In the fourth century B.C., Aristotle argued that lunar eclipses provided strong evidence that the Earth was itself a sphere (not flat, as some earlier philosophers had believed). As the eclipsed moon darkened, the edge of the advancing shadow was a curved line, demonstrating the curvature of the Earth, whose surface intervened between the sun and moon.

Oft-repeated legend proclaims that the first famous Greek natural philosopher, Thales of Miletus, even predicted a solar eclipse that occurred in Turkey in 585 B.C. But the only account of that prediction comes from the historian Herodotus, writing more than a century later. He claimed that during a fierce battle “day suddenly became night,” just as Thales had forecast would happen sometime during that year.

There was an eclipse in 585 B.C., but it’s unlikely that Thales could have predicted it. He might have known that the moon blocks the sun in an eclipse. But no mathematical methods then available would have allowed him to say when — except, perhaps, a lucky coincidence based on the possibility that solar eclipses occurred at some regular cycle after lunar eclipses. Yet even that seems unlikely, a new analysis posted online last month finds.

“Some scholars … have flatly denied the prediction, while others have struggled to find a numerical cycle by means of which the prediction could have been carried out,” writes astronomer Miguel Querejeta. Many such cycles have already been ruled out, he notes. And his assessment of two other cycles concludes “that none of those conjectures can be regarded as serious explanations of the problematic prediction of Thales: in addition to requiring the existence of long and precise eclipse records … both cycles that have been examined overlook a number of eclipses which match the visibility criteria and, consequently, the patterns suggested seem to disappear.”

It’s true that the ancient Babylonians worked out methods for predicting lunar eclipses based on patterns in the intervals between them. And the famous Greek Antikythera mechanism from the second century B.C. seems to have used such cycle data to predict some eclipses.

Ancient Greek astronomers, such as Hipparchus (c. 190–120 B.C.), studied eclipses and the geometrical relationships of the Earth, moon and sun that made them possible. Understanding those relationships well enough to make reasonably accurate predictions became possible, though, only with the elaborate mathematical description of the cosmos developed (drawing on Hipparchus’ work) by Claudius Ptolemy. In the second century A.D., he worked out the math for explaining the movements of heavenly bodies, assuming the Earth sat motionless in the center of the universe.

His system specified the basic requirements for a solar eclipse: It must be the time of the new moon — when the moon and sun are on the same side of the Earth — and the moon must also be at or near a point where its orbit crosses the ecliptic, the plane of the sun’s apparent path through the sky. (The moon orbits the Earth at a slight angle, crossing the plane of the ecliptic twice a month.) Only precise calculations of the movements of the sun and moon in their orbits could make it possible to predict the dates for eclipsing alignments.
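Purely as an illustration of those two conditions (the tolerances below are rough, hypothetical cutoffs, not Ptolemy’s values or modern limits):

```python
# Illustrative check of the two geometric requirements for a solar eclipse
# described above. The tolerances are rough, hypothetical cutoffs.
def solar_eclipse_possible(moon_phase_deg: float,
                           moon_ecliptic_latitude_deg: float) -> bool:
    """moon_phase_deg: 0 at new moon, 180 at full moon.
    moon_ecliptic_latitude_deg: the moon's angle above or below the ecliptic."""
    near_new_moon = abs(moon_phase_deg) < 10.0              # assumed tolerance
    near_ecliptic = abs(moon_ecliptic_latitude_deg) < 1.5   # assumed tolerance
    return near_new_moon and near_ecliptic

# Most new moons pass above or below the sun's disk, so no eclipse:
print(solar_eclipse_possible(0.0, 4.0))   # False
# Only a new moon near a node of its tilted orbit lines up with the sun:
print(solar_eclipse_possible(0.0, 0.3))   # True
```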

Predicting when an eclipse will occur is not quite the same as forecasting exactly where it will occur. To be accurate, eclipse predictions need to take subtle gravitational interactions into account. Maps showing precise paths of totality (such as for the Great American Eclipse of 2017) became possible only with Isaac Newton’s 17th century law of gravity (and the further development of mathematical tools to exploit it). Nevertheless, Ptolemy had developed a system that, in principle, showed how to anticipate when eclipses would happen. Curiously, though, this success was based on a seriously wrong blueprint for the architecture of the cosmos.

As Copernicus persuasively demonstrated in the 16th century, the Earth orbits the sun, not vice versa. Ptolemy’s geometry may have been sound, but his physics was backwards. While demonstrating that mathematics is essential to describing nature and predicting physical phenomena, he inadvertently showed that math can be successful without being right.

It’s wrong to blame him for that, though. In ancient times math and science were separate enterprises (science was then “natural philosophy”). Astronomy was regarded as math, not philosophy. An astronomer’s goal was to “save the phenomena” — to describe nature correctly with math that corresponded with observations, but not to seek the underlying physical causes of those observations. Ptolemy’s mathematical treatise, the Almagest, was about math, not physics.

One of the great accomplishments of Copernicus was to merge the math with the physical reality of his system. He argued that the sun occupied the center of the cosmos, and that the Earth was a planet, like the others that had previously been supposed to orbit it. Copernicus worked out the math for a sun-centered planetary system. It was a simpler system than Ptolemy’s. And it was just as good for predicting eclipses.

As it turned out, though, even Copernicus didn’t have it quite right. He insisted that planetary orbits were circular (modified by secondary circles, the epicycles). In fact, the orbits are ellipses. It’s a recurring story in science that mathematically successful theories sometimes are just approximately correct because they are based on faulty understanding of the underlying physics. Even Newton’s law of gravity turned out to be just a good mathematical explanation; the absolute space and invariable flow of time he believed in just aren’t an accurate representation of the universe we live in. It took Einstein to see that and develop the view of gravity as the curvature of spacetime induced by the presence of mass.
Of course, proving Einstein right required the careful measurement by Arthur Eddington and colleagues of starlight bending near the sun during a solar eclipse in 1919. It’s a good thing they knew when and where to go to see it.

Map reveals the invisible universe of dark matter

Scientists have created the largest map of dark matter yet, part of a slew of new measurements that help pin down the universe’s dark contents. Covering about a thirtieth of the sky, the map charts the density of both normal matter — the stuff that’s visible — and dark matter, an unidentified but far more abundant substance that pervades the cosmos.

Matter of both types is gravitationally attracted to other matter. That coupling organizes the universe into relatively empty regions of space surrounded by dense cosmic neighborhoods.
Researchers from the Dark Energy Survey used the Victor Blanco telescope in Chile to survey 26 million galaxies in a section of the southern sky for subtle distortions caused by the gravitational heft of both dark and normal matter. Scientists unveiled the new results August 3 at Fermilab in Batavia, Ill., during a meeting of the American Physical Society.

Dark matter has a stealthy companion of its own: dark energy, an unseen force that is driving the universe to expand at an increasing clip. According to the new inventory, the universe is about 21 percent dark matter and 5 percent ordinary matter. The remainder, 74 percent, is dark energy.

The new measurements differ slightly from previous estimates based on the cosmic microwave background, light that dates back to 380,000 years after the Big Bang (SN: 3/21/15, p. 7). But the figures are consistent when measurement errors are taken into account, the researchers say.
“The fact that it’s really close, we think is pretty remarkable,” says cosmologist Josh Frieman of Fermilab, who directs the Dark Energy Survey. But if the estimates don’t continue to align as the survey collects more data, something might be missing in cosmologists’ theories of the universe.

Muscle pain in people on statins may have a genetic link

A new genetics study adds fuel to the debate about muscle aches that have been reported by many people taking popular cholesterol-lowering drugs called statins.

About 60 percent of people of European descent carry a genetic variant that may make them more susceptible to muscle aches in general. But counterintuitively, these people had a lower risk of muscle pain when they took statins compared with placebos, researchers report August 29 in the European Heart Journal.
Millions of people take statins to lower cholesterol and fend off the hardening of arteries. But up to 78 percent of patients stop taking the medicine. One common reason for ceasing the drugs’ use is side effects, especially muscle pain, says John Guyton, a clinical lipidologist at Duke University School of Medicine.

It has been unclear, however, whether statins are to blame for the pain. In one study, 43 percent of patients who had muscle aches while taking at least one type of statin were also pained by other types of statin (SN: 5/13/17, p. 22). But 37 percent of muscle-ache sufferers in that study had pain not related to statin use. Other clinical trials have found no difference in muscle aches between people taking statins and those not taking the drugs.

The new study hints that genetic factors, especially ones involved in the immune system’s maintenance and repair of muscles, may affect people’s reactions to statins. “This is a major advance in our understanding about myalgia,” or muscle pain, says Guyton, who was not involved in the study.

People with two copies of the common form of the gene LILRB5 tend to have higher-than-usual blood levels of two proteins released by injured muscles, creatine phosphokinase and lactate dehydrogenase. Higher levels of those proteins may predispose people to more aches and pains. In an examination of data from several studies involving white Europeans, people with dual copies of the common variant were nearly twice as likely to have achy muscles while taking statins as people with a less common variant, Moneeza Siddiqui of the University of Dundee School of Medicine in Scotland and colleagues discovered.

But when researchers examined who had pain when taking statins versus placebos, those with two copies of the common variant seemed to be protected from getting statin-associated muscle pain. Why is not clear.
People with double copies of the common form of the gene who experience muscle pain may stop taking statins because they erroneously think the drugs are causing the pain, study coauthor Colin Palmer of the University of Dundee said in a news release.

The less common version of the gene is linked to reduced levels of the muscle-damage proteins, and should protect against myalgia. Yet people with this version of the gene were the ones more likely to develop muscle pain specifically linked to taking statins during the trials.

The finding suggests that when people with the less common variant develop muscle pain while taking statins, the effect really is from the drugs, the researchers say.

But researchers still don’t know the nitty-gritty details of how the genetic variants promote or protect against myalgia while on statins. Neither version of the gene guarantees that a patient will develop side effects — or that they won’t. The team proposes further clinical trials to unravel interactions between the gene and the drugs.

More study is needed before doctors can add the gene to the list of tests patients get, Guyton says. “I don’t think we’re ready to put this genetic screen into clinical practice at all,” he says. For now, “it’s much easier just to give the patient the statin” and see what happens.

Dark matter remains elusive

Patience is a virtue in the hunt for dark matter. Experiment after experiment has come up empty in the search — and the newest crop is no exception.

Observations hint at the presence of an unknown kind of matter sprinkled throughout the cosmos. Several experiments are focused on the search for one likely dark matter candidate: weakly interacting massive particles, or WIMPs (SN: 11/12/16, p. 14). But those particles have yet to be spotted.

Recent results, posted at arXiv.org, continue the trend. The PandaX-II experiment, based in China, found no hint of the particles, scientists reported August 23. The XENON1T experiment in Italy also came up WIMPless, according to a May 18 paper. Scientists with the DEAP-3600 experiment in Sudbury, Canada, reported their first results on July 25. Signs of dark matter? Nada. And the SuperCDMS experiment in the Soudan mine in Minnesota likewise found no hints of WIMPs, scientists reported August 29.

Another experiment, PICO-60, also located in Sudbury, reported its contribution to the smorgasbord of negative results June 23 in Physical Review Letters.

Scientists haven’t given up hope. Researchers are building ever-larger detectors, retooling their experiments and continuing to expand the search beyond WIMPs.

How hurricanes and other devastating disasters spur scientific research

Every day, it seems like there’s a new natural disaster in the headlines. Hurricane Harvey inundates Texas. Hurricane Irma plows through the Caribbean and the U.S. south, and Jose is hot on its heels. A deadly 8.1-magnitude earthquake rocks Mexico. Wildfires blanket the western United States in choking smoke.

While gripping tales of loss and heroism rightly fill the news, another story quietly unfolds. Hurricanes, droughts, oil spills, wildfires and other disasters are natural labs. Data quickly gathered in the midst of such chaos, as well as for years afterward, can lead to discoveries that ultimately make rescue, recovery and resilience to future crises possible.

So when disaster strikes, science surges, says human ecologist Gary Machlis of Clemson University in South Carolina. He has studied and written about the science done during crises and was part of the U.S. Department of the Interior’s Strategic Sciences Group, which helps government officials respond to disasters.

The science done during Hurricane Harvey is an example. Not long after the heavy rains stopped, crews of researchers from the U.S. Geological Survey fanned across Texas, dropping sensors into streams. The instruments measure how swiftly the water is flowing and determine the severity of the flooding in different regions affected by the hurricane. Knowing where the flooding is worst can help the Federal Emergency Management Agency and other government groups direct funds to areas with the most extreme damage.
In the days leading up to Irma’s U.S. landfall, scientists from the same agency also went to the Florida, Georgia and South Carolina coasts to fasten storm-tide sensors to pier pylons and other structures. The sensors measure the depth and duration of the surge in seawater generated by the change in pressure and winds from the storm. This data will help determine damage from the surge and improve models of flooding in the future, which could help provide a better picture of where future storm waters will go and who needs to be evacuated ahead of hurricanes.

Even as Irma struck Florida, civil engineer Forrest Masters of the University of Florida in Gainesville, his students and collaborators traveled to the southern part of the state to study the intensity and variation of the hurricane’s winds. As winds blew and rain pelted, the team raised minitowers decked with instruments designed to measure ground-level gusts and turbulence. With this data, the researchers will compare winds in coastal areas, near buildings and around other structures, measurements that can help government agencies assess storm-related damage. The team will also take the data back to the Natural Hazards Engineering Research Infrastructure labs at the University of Florida to study building materials and identify those most resistant to extreme winds.
“Scientists want to use their expertise to help society in whatever way they can during a disaster,” says biologist Teresa Stoepler, who was a member of the Strategic Sciences Group when she worked at USGS.

As a former science & technology policy fellow with the American Association for the Advancement of Science, Stoepler studied the science that resulted from the 2010 Deepwater Horizon oil spill. This devastating explosion of an oil rig spewed 210 million gallons of petroleum into the Gulf of Mexico. It also opened the door for scientific research. Biologists, chemists, psychologists and a range of other scientists wanted to study the environmental, economic and mental health consequences of the disaster; local scientists wanted to study the effects of the spill on their communities; and leaders at the local and federal government needed guidance on how to respond. There was a need to coordinate all of that effort.

That’s where the Strategic Sciences Group came in. The group, officially organized in 2012, brought together researchers from federal, academic and nongovernmental organizations. The goal was to use data collected from the spill to map out possible long-term environmental and economic consequences of the disaster, identify where research still needed to be done and decide how to allocate money for response and recovery efforts.

Not long after its formation, the group had another disaster to respond to: Superstorm Sandy devastated the U.S. East Coast, even pushing floodwaters into the heart of New York City. Scientific collaborations allowed researchers and policy makers to get a better sense of whether wetlands, sea walls or other types of infrastructure would be best to invest in to prevent future devastation. The work also gave clues as to what types of measurements, such as the height of floodwaters, should be made in the future — say, during storms like Harvey and Irma — to speed recovery efforts afterward.

Moving forward, we’re likely to see this kind of collaboration coming into play time and again. No doubt, more natural disasters loom. And other groups are getting into crisis science. For instance, Stanford University, with its Science Action Network, aims to drive interdisciplinary research during disasters and encourage communication across the many groups responding to those disasters. And the Disaster Research Response program at the National Institutes of Health provides a framework for coordinating research on the medical and public health aspects of disasters and public health emergencies.

Surges in science will stretch from plunging into the chaos of a crisis to gather in-the-moment data to monitoring years of aftermath. Retrospective studies of the data collected a year, three years or even five years after a disaster could reveal where there are gaps in the science and how those gaps can be filled during future events.

The more data collected, the more discoveries made and lessons learned, the more likely we’ll be ready to face the next disaster.

In a first, human embryos edited to explore gene function

For the first time, researchers have disabled a gene in human embryos to learn about its function.

Using molecular scissors called CRISPR/Cas9, researchers made crippling cuts in the OCT4 gene, Kathy Niakan and colleagues report September 20 in Nature. The edits revealed a surprising role for the gene in the development of the placenta.

Researchers commonly delete and disable genes in mice, fruit flies, yeast and other laboratory critters to investigate the genes’ normal roles, but have never done this before in human embryos. Last year, government regulators in the United Kingdom gave permission for Niakan, a developmental biologist at the Francis Crick Institute in London, and colleagues to perform gene editing on human embryos left over from in vitro fertilization treatments (SN Online: 2/1/16). The researchers spent nearly a year optimizing techniques in mouse embryos and human stem cells before conducting human embryo experiments, Niakan says.
This groundbreaking work allows researchers to directly study genes involved in human development, says developmental biologist Dieter Egli of Columbia University. “This is unheard of. It’s not something that has been possible,” he says. “What we know about human development is largely inferred from studies of mice, frogs and other model organisms.”

Other researchers have used CRISPR/Cas9 to repair mutated genes in human embryos (SN: 4/15/17, p. 16; SN: 9/2/17, p. 6). The eventual aim of that research is to prevent genetic diseases, but it has led to concerns that the technology could be abused to produce “designer babies” who are better looking, smarter and more athletic than they otherwise would be.

“There’s nothing irresponsible about the research in this case,” says stem cell researcher Paul Knoepfler of the University of California, Davis, School of Medicine. The researchers focused on basic questions about how one gene affects human embryo development. Such studies may one day lead to better fertility treatments, but the more immediate goal is to gain better insights into human biology.

Niakan’s group focused on a gene called OCT4 (also known as POU5F1), a master regulator of gene activity, which is important in mouse embryo development. This gene is also known to help human embryonic stem cells stay flexible enough to become any type of body cell, a property known as pluripotency. Scientists use OCT4 protein to reprogram adult cells into embryonic-like cells, an indication that it is involved in early development (SN: 11/24/07, p. 323). But researchers didn’t know precisely how the OCT4 gene operates during human development. Niakan already had clues that it works at slightly different times in human embryos than it does in mice (SN: 10/3/15, p. 13).

In the experiment, human embryos lacking OCT4 had difficulty reaching the blastocyst stage: Only 19 percent of edited embryos formed blastocysts, while 47 percent of unedited embryos did. Blastocysts are balls of about 200 cells that form about five or six days after fertilization. The ball’s outer layer of cells gives rise to the placenta. Inside the blastocyst, one type of embryonic cell will become the yolk sac. Another kind, about 20 cells known as epiblast progenitor cells, will give rise to all the cells in the body.
Niakan and colleagues predicted from earlier work with mice and human embryonic stem cells that the protein OCT4 would be necessary for the epiblast cells to develop correctly. As predicted, “knocking out” the OCT4 gene disrupted epiblasts’ development. What the researchers didn’t expect is that OCT4 also affects the development of the placenta precursor cells on the outside of the blastocyst.

“That’s not predicted anywhere in the literature,” Niakan says. “We’ll be spending quite a lot of time on this in the future to uncover exactly what this role might be.”

KC Huang probes basic questions of bacterial life

Physicists often ponder small things, but probably not the ones on Kerwyn Casey “KC” Huang’s mind. He wants to know what it’s like to be a bacterium.

“My motivating questions are about understanding the physical challenges bacterial cells face,” he says. Bacteria are the dominant life-forms on Earth. They affect the health of plants and animals, including humans, for good and bad. Yet scientists know very little about the rules the microbes live by. Even questions as basic as how bacteria determine their shape are still up in the air, says Huang, of Stanford University.

Huang, 38, is out to change that. He and colleagues have determined what gives cholera bacteria their curved shape and whether it matters (a polymer protein, and it does matter; the curve makes it easier for cholera to cause disease), how different wavelengths of light affect movement of photosynthetic bacteria (red and green wavelengths encourage movement; blue light stops the microbes in their tracks), how bacteria coordinate cell division machinery and how photosynthetic bacteria’s growth changes in light and dark.

All four of these findings and more were published in just the first three months of this year.
Huang also looks for ways to use tools and techniques his team develops to solve problems unrelated to bacteria. Computer programs that measure changes in bacterial cell shape can also track cells in plant roots and in developing zebrafish embryos. He’s even helped determine how a protein’s activity and stability contribute to a human genetic disease.

A physicist by training, Huang delves into biology, biochemistry, microbial ecology, genetics, engineering, computer science and more, partnering with a variety of scientists from across those fields. He’s even teamed up with his statistician sister. He’s an “all-in-one scientist,” says longtime collaborator Ned Wingreen, a biophysicist at Princeton University.

When Huang started his lab at Stanford in 2008, after getting his Ph.D. at MIT and spending time at Princeton as a postdoctoral fellow, his background was purely theoretical. He designed and ran the computer simulations and then his collaborators carried out the experiments. But soon, he wanted to do hands-on research too, to learn why cells are the way they are.
Such a leap “is not trivial,” says Christine Jacobs-Wagner, a microbiologist at Yale University who also studies bacterial cell shape. But Huang is “a really, really good experimentalist,” she says.

Jacobs-Wagner was particularly impressed with a “brilliant microfluidics experiment” Huang did to test a well-established truism about how bacteria grow. Researchers used to think that turgor pressure — water pressure inside a cell that pushes the outer membrane against the cell wall — controlled bacterial growth, just as it does in plants. But abolishing turgor pressure didn’t change E. coli’s growth rate, Huang and colleagues reported in 2014 in Proceedings of the National Academy of Sciences. “This result blew my mind away,” Jacobs-Wagner says. The finding “crumbled the foundation” of what scientists thought about bacterial growth.

“He uses clever experiments to challenge old paradigms,” Jacobs-Wagner says. “More than once he has come up with a new trick to address a tough question.”
Sometimes Huang’s tricks require breaking things. Zemer Gitai, a microbiologist at Princeton, remembers talking with Huang and Wingreen about a question that microbiologists were stuck on: How are molecules oriented in bacterial cell walls? Researchers knew that the walls are made of rigid sugar strands connected by flexible proteins, like a chain link fence held together by rubber bands. What they didn’t know was whether the rubber bands circled the bacteria like the hoops on a wine barrel, ran in stripes down the length of the cell or stuck out like hairs.

If bacteria were put under pressure, the cells would crack along the weak rubber band–like links, Huang and Wingreen reasoned. If the cells split like hot dogs on a grill, it would mean the links ran the length of the cells. If they opened like a Slinky, it would suggest a wine-barrel configuration. The researchers reported the results — opened like a Slinky — in 2008. Another group, using improved microscope techniques, got the same result.

Huang teamed up with other researchers to do microfluidics experiments, growing bacteria in tiny chambers and tracking individual cells to learn how photosynthetic bacteria grow in light and dark.

But in nature, bacteria don’t live alone. So Huang has also worked with Stanford colleague Justin Sonnenburg to answer a basic question: “Where and when are bacteria in the gut growing? No one knows,” Huang says. “How can we not know that? It’s totally fundamental.” Without that information, it’s impossible to know, for example, how antibiotics affect the microbial community in the intestines, he says.

Stripping fiber from a mouse’s diet not only changes the mix of microbes in the gut, it alters where in the intestines the microbes grow, the researchers discovered. Bacteria deprived of fiber’s complex sugars began to munch on the protective mucus lining the intestines, bumping against the intestinal lining and sparking inflammation, Huang, Sonnenburg and colleagues reported in Cell Host & Microbe in 2015.

Huang’s breadth of research — from deciphering the nanoscale twists of proteins to mapping whole microbial communities — is sure to lead to many more discoveries. “He’s capable of making contributions to any field,” Jacobs-Wagner says, “or any research question that he’s interested in.”

In many places around the world, obesity in kids is on the rise

Over the last 40 years, the number of kids and teens with obesity has skyrocketed worldwide. In 1975, an estimated 5 million girls and 6 million boys were obese. By 2016, those numbers had risen to an estimated 50 million girls and 74 million boys, according to a report published online October 10 in the Lancet. While the increase in childhood obesity has slowed or leveled off in many high-income countries, it continues to grow in other parts of the world, especially in Asia.

Using the body mass index (weight in kilograms divided by the square of height in meters) of more than 30 million 5- to 19-year-olds, researchers tracked trends from 1975 to 2016 in five weight categories: moderate to severe underweight, mild underweight, healthy weight, overweight and obesity. The researchers defined obesity as having a BMI around 19 or higher for a 5-year-old up to around 30 or higher for a 19-year-old.
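For concreteness, here is a minimal sketch of the BMI calculation, with an age-dependent obesity cutoff linearly interpolated between the two endpoints given above; the interpolation is purely illustrative, since the study relied on detailed growth-reference curves rather than a straight line.

```python
# Minimal sketch: BMI plus an illustrative age-dependent obesity cutoff.
# The linear interpolation between ages 5 and 19 is a simplification; the
# study used detailed growth-reference curves, not a straight line.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def obesity_cutoff(age_years: float) -> float:
    # Roughly 19 at age 5 rising to roughly 30 at age 19 (see text above).
    return 19.0 + (30.0 - 19.0) * (age_years - 5.0) / (19.0 - 5.0)

child_bmi = bmi(weight_kg=55.0, height_m=1.45)   # hypothetical 10-year-old
print(f"BMI {child_bmi:.1f}, cutoff {obesity_cutoff(10):.1f}")
# BMI 26.2 against a cutoff near 22.9, so this hypothetical child would count as obese.
```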

Globally, more kids and teens — an estimated 117 million boys and 75 million girls — were moderately or severely underweight in 2016 than were obese. But the total number of obese children is expected to overtake the moderately or severely underweight total by 2022, the researchers say.

The globalization of poor diet and inactivity is part of the problem, says William Dietz, a pediatrician at George Washington University in Washington, D.C., who wrote a commentary that accompanies the study. Processed foods and sugary drinks have become widely available around the world. And urbanization, which also increased in the last four decades, tends to reduce physical activity, Dietz says.

While obesity rates for kids and teens have largely leveled off in most wealthy countries, those numbers continue to increase for adults. The findings in children are consistent with evidence showing a drop in the consumption of fast food among children and adults in the United States over the last decade, Dietz says. “Children are going to be much more susceptible to changes in caloric intake than adults.”

The physics of mosquito takeoffs shows why you don’t feel a thing

Discovering an itchy welt is often a sign you have been duped by one of Earth’s sneakiest creatures — the mosquito.

Scientists have puzzled over how the insects, often laden with two or three times their weight in blood, manage to flee undetected. At least one species of mosquito — Anopheles coluzzii — does so by relying more on lift from its wings than push from its legs to generate the force needed to take off from a host’s skin, researchers report October 18 in the Journal of Experimental Biology.
The mosquitoes’ undetectable departure, which lets them avoid being smacked by an annoyed host, may be part of the reason A. coluzzii so effectively spreads malaria, a parasitic disease that kills hundreds of thousands of people each year.

Researchers knew that mosquito flight is unlike that of other flies (SN Online: 3/29/17). The new study provides “fascinating insight into life immediately after the bite, as the bloodsuckers make their escape,” says Richard Bomphrey, a biomechanist at the Royal Veterinary College of the University of London, who was not involved in the research.

To capture mosquito departures, Sofia Chang of the Animal Flight Laboratory at the University of California, Berkeley, and her colleagues set up a flight arena for mosquitoes. Using three high-speed video cameras, the researchers created computer reconstructions of the mosquitoes’ takeoff mechanisms to compare with those of fruit flies.

Mosquitoes are as fast as fruit flies while flying away but use only about a quarter of the leg force that fruit flies typically use to push off, Chang and her colleagues found. And 61 percent of a mosquito’s takeoff power comes from its wings. As a result, the mosquitoes do not generate enough force on a mammal’s skin to be detected.

Unlike fruit flies’ short legs, mosquitoes’ long legs extend the insects’ push-off time. That lets mosquitoes spread out already-minimal leg force over a longer time frame to reach similar takeoff speeds as fruit flies, the researchers found. This slow and steady mechanism is the same regardless of whether the bloodsuckers sense danger or are leaving of their own accord, and whether they are full of blood or have yet to get a meal. While in flight, though, a belly full of blood slowed the mosquitoes down by about 18 percent.

Chang next wants to determine whether mosquitoes land as gently as they depart. “If they are so stealthy when they leave, they must be stealthy as they land, too.”