Giant armored dinosaur may have cloaked itself in camouflage

Sometimes body armor just isn’t enough. A car-sized dinosaur covered in bony plates may have sported camo, too, researchers report online August 3 in Current Biology. That could mean the Cretaceous-period herbivore was a target for predators that relied on sight more than smell to find prey.

The dinosaur, dubbed Borealopelta markmitchelli, has already made headlines for being one of the best preserved armored dinosaurs ever unearthed. It was entombed on its back some 110 million years ago under layers of fine marine sediments that buried the animal very quickly — ideal preservation conditions, says study coauthor Caleb Brown, a paleontologist at the Royal Tyrrell Museum of Palaeontology in Drumheller, Canada. The fossil, found in Alberta in 2011, captured not only large amounts of skin and soft tissue but also the animal’s three-dimensional shape.
“Most of the other armored dinosaurs are described based on the skeleton. In this case, we can’t see the skeleton because all the skin is still there,” Brown says.

That skin contains clues to the dinosaur’s appearance, including its coloration. “We’re just beginning to realize how important color is, and we’re beginning to have the methods to detect color” in fossils, says Martin Sander, a paleontologist at Bonn University in Germany who wasn’t part of the study.

But despite ample tissue, the researchers didn’t find any melanosomes, cellular structures that often preserve evidence of pigment in fossilized remains. Instead, Brown and colleagues turned to less direct evidence: molecules that appear when pigments break down. The researchers found about a dozen types of those molecules, including substantial amounts of benzothiazole, a by-product of the reddish pigment pheomelanin. That might mean the dinosaur was reddish-brown.
The distribution of pigment by-products also gives clues about the dinosaur’s appearance. B. markmitchelli had a thin film of pigment-hinting organic molecules on its back, but that layer disappeared on the belly. That pattern is reminiscent of countershading, when an animal is darker on its back than its underside, Brown says. Countershading is a simple form of camouflage that helps animals blend in with the ground when seen from above or with the sky when seen from below.
This is not the first time countershading has been proposed for a dinosaur (SN: 11/26/16, p. 24). But finding the camouflage on such a large herbivore is somewhat surprising, Brown says. Modern plant eaters that don similar camouflage tend to be smaller and at greater risk of becoming someone’s dinner. B. markmitchelli’s skin patterning suggests that at least some top Cretaceous predators might have relied more on eyesight than today’s top carnivores, which often favor smell when hunting, Brown says.

Some experts, however, want stronger evidence for the coloration claims. Molecules like benzothiazole can come from melanin, but they can also come from a number of other sources, such as oils, says Johan Lindgren, a paleontologist at Lund University in Sweden. “What this paper nicely highlights is how little we actually know about the preservation of soft tissues in animal remains. There’s definitely something there — the question is, what are those [molecules], and where do they come from?”

Sander does buy the evidence for the reddish tint, but it might not be the full story, he says. The dino could have displayed other colors that didn’t linger in the fossil record. But the countershading findings “point out the importance of vision” for dinosaurs, he says. Sharp-eyed predators might have made camouflage a perk for herbivores — even ones built like tanks.

Infant ape’s tiny skull could have a big impact on ape evolution

A 13-million-year-old infant’s skull, discovered in Africa in 2014, comes from a new species of ape that may not be far removed from the common ancestor of living apes and humans.

The tiny find, about the size of a lemon, is one of the most complete skulls known of any extinct ape that inhabited Africa, Asia or Europe between 25 million and 5 million years ago, researchers report in the Aug. 10 Nature. The fossil provides the most detailed look to date at a member of a line of African primates that are now candidates for central players in the evolution of present-day apes and humans.
Most fossils from more than 40 known extinct ape species amount to no more than jaw fragments or a few isolated teeth. A local fossil hunter spotted the nearly complete skull in rock layers located near Kenya’s Lake Turkana. Members of a team led by paleoanthropologist Isaiah Nengo estimated the fossil’s age by assessing radioactive forms of the element argon in surrounding rock, which decay at a known rate.
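
As a rough illustration of how such radiometric ages are computed (a generic textbook sketch, not necessarily the exact procedure used for this site), the Python snippet below applies the standard potassium-argon age equation, using widely published decay constants for potassium-40, the isotope whose decay produces the argon-40 that accumulates in volcanic rock. The measured ratio plugged in is hypothetical, chosen only so the answer lands near 13 million years.

```python
import math

# Widely published constants for potassium-40 (standard textbook values,
# not figures from the study itself).
LAMBDA_TOTAL = 5.543e-10   # total decay constant of 40K, per year
LAMBDA_EC = 0.581e-10      # branch producing 40Ar (electron capture), per year

def k_ar_age(ar40_over_k40):
    """Return age in years from the ratio of radiogenic argon-40
    to remaining potassium-40 measured in a rock sample."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_over_k40) / LAMBDA_TOTAL

# Hypothetical ratio picked so the age comes out near 13 million years,
# the figure reported for the rock around the skull.
print(f"{k_ar_age(7.6e-4) / 1e6:.1f} million years")
```

In practice, argon-based dating of volcanic layers usually relies on the argon-argon variant, which compares argon isotopes after neutron irradiation rather than measuring potassium directly, but the underlying decay arithmetic is the same.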

Comparisons with other African ape fossils indicate that the infant’s skull belongs to a new species that the researchers named Nyanzapithecus alesi. Other species in this genus, previously known mainly from jaws and teeth, date to as early as 25 million years ago.

“This skull comes from an ancient group of apes that existed in Africa for over 10 million years and was close to the evolutionary origin of living apes and humans,” says Nengo, of Stony Brook University in New York and De Anza College in Cupertino, Calif.

He and colleagues looked inside the skull using a powerful type of 3-D X-ray imaging. This technique revealed microscopic enamel layers that had formed daily from birth in developing adult teeth that had yet to erupt. A count of those layers indicates that the ape was 16 months old when it died.

Based on a presumably rapid growth rate, the scientists calculated that the ancient ape would have weighed about 11.3 kilograms as an adult. Its adult brain volume would have been almost three times larger than that of known African monkeys from the same time, the researchers estimate.
N. alesi’s tiny mouth and nose, along with several other facial characteristics, make it look much like small-bodied apes called gibbons. Faces resembling gibbons evolved independently in several extinct monkeys, apes and their relatives, the researchers say. The same probably held for N. alesi, making it an unlikely direct ancestor of living gibbons, they conclude.
No lower-body bones turned up with the new find. Even so, it’s possible to tell that N. alesi did not behave as present-day gibbons do. In gibbons, structures in the inner ear called semicircular canals, which coordinate balance, are large relative to body size. That allows the apes to swing acrobatically from one tree branch to another. N. alesi’s small semicircular canals indicate that it moved cautiously in trees, Nengo says.

Several of the infant skull’s features, including those downsized semicircular canals, connect it to a poorly understood, 7-million- to 8-million-year-old ape called Oreopithecus. Fossils of that primate, discovered in Italy, suggest it walked upright with a slow, shuffling gait. If an evolutionary relationship existed with the older N. alesi, the first members of the Oreopithecus genus probably originated in Africa, Nengo proposes.

Without any lower-body bones for N. alesi, it’s too early to rule out the possibility that Nyanzapithecus gave rise to modern gibbons and perhaps Oreopithecus as well, says paleontologist David Alba of the Catalan Institute of Paleontology Miquel Crusafont in Barcelona. Gibbon ancestors are thought to have diverged from precursors of living great apes and humans between 20 million and 15 million years ago, Alba says.

Despite the age and unprecedented completeness of the new ape skull, no reported tooth or skull features clearly place N. alesi close to the origins of living apes and humans, says paleoanthropologist David Begun of the University of Toronto.

Further studies of casts of the inner braincase, which show impressions from surface features of the brain, may help clarify N. alesi’s position in ape evolution, Nengo says. Insights are also expected from back, forearm and finger fossils of two or three ancient apes, possibly also from N. alesi, found near the skull site in 2015. Those specimens also date to around 13 million years ago.

What can we learn about Mercury’s surface during the eclipse?

On the morning of August 21, a pair of jets will take off from NASA’s Johnson Space Center in Houston to chase the shadow of the moon. They will climb to 15 kilometers in the stratosphere and fly in the path of the total solar eclipse over Missouri, Illinois and Tennessee at 750 kilometers per hour.

But some of the instruments the jets carry won’t be looking at the sun, or even at Earth. They’ll be focused on a different celestial body: Mercury. In the handful of minutes that the planes zip along in darkness, the instruments could collect enough data to answer this Mercury mystery: What is the innermost planet’s surface made of?
Because it’s so close to the sun, Mercury is tough to study from Earth. It’s difficult to observe close up, too. Extreme heat and radiation threaten to fry any spacecraft that gets too close. And the sun’s brightness can swamp a hardy spacecraft’s efforts to send signals back to Earth.

NASA’s Messenger spacecraft orbited Mercury from 2011 to 2015 and revealed a battered, scarred landscape made of different material than the rest of the terrestrial planets (SN: 11/19/11, p. 17).
But Messenger only scratched the surface, so to speak. It analyzed the planet’s composition with an instrument called a reflectance spectrometer, which collects light and then splits that light into its component wavelengths to figure out which elements the light was reflected from.
Messenger took measurements of reflected light from Mercury’s surface at wavelengths shorter than 1 micrometer, which revealed, among other things, that Mercury contains a surprising amount of sulfur and potassium (SN: 7/16/11, p. 12). Those wavelengths come only from the top few micrometers of Mercury. What lies below is still unknown.

To dig a few centimeters deeper into Mercury’s surface, solar physicist Amir Caspi and planetary scientist Constantine Tsang of the Southwest Research Institute in Boulder, Colo., and colleagues will use an infrared camera, specially built by Alabama-based Southern Research, that detects wavelengths between 3 and 5 micrometers.

Copies of the instrument will fly on the two NASA WB-57 research jets, whose altitude and speed will give the observers two advantages: less atmospheric interference and more time in the path of the eclipse. Chasing the moon’s shadow will let the planes stay in totality — the region where the sun’s bright disk is completely blocked by the moon — for a combined 400 seconds (6.67 minutes). That’s nearly three times longer than they would get by staying in one spot.
Mercury’s dayside surface is 425° Celsius, and it actually emits light at 4.1 micrometers — right in the middle of the range of Caspi’s instrument. As any given spot on Mercury rotates away from the sun, its temperature drops as low as -179° C. Measuring how quickly the planet loses heat can help researchers figure out what the subsurface material is made of and how densely it’s packed. Looser sand will give up its heat more readily, while more close-packed rock will hold heat in longer.
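
That cooling logic can be made concrete with a toy model. The sketch below is purely illustrative, not the team's analysis: it treats the surface as a thin layer radiating its stored heat to space and compares two sets of assumed material properties standing in for loose regolith and denser rock. All the numbers except the 425° C dayside temperature are made up for the example.

```python
# Toy cooling model: a thin surface layer radiating to space after sunset.
# Illustrative only; property values are assumptions, not Mercury measurements.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9       # assumed surface emissivity

def cool(temp_k, density, specific_heat, depth_m, hours, dt=10.0):
    """Euler-integrate radiative cooling of a surface layer and
    return its temperature (kelvin) after `hours` of darkness."""
    heat_capacity = density * specific_heat * depth_m   # J per m^2 per K
    t = 0.0
    while t < hours * 3600.0:
        flux = EMISSIVITY * SIGMA * temp_k ** 4          # W per m^2 radiated away
        temp_k -= flux * dt / heat_capacity
        t += dt
    return temp_k

start = 425.0 + 273.15   # dayside temperature from the article, in kelvin
# The 24-hour interval is an arbitrary illustration, not Mercury's actual night.
# Loose, fluffy material stores less accessible heat than compacted rock,
# so it cools faster; that is the contrast the airborne maps aim to detect.
print("loose sand :", cool(start, density=1300, specific_heat=700, depth_m=0.05, hours=24))
print("packed rock:", cool(start, density=2800, specific_heat=800, depth_m=0.05, hours=24))
```

In practice, planetary scientists usually frame this as measuring thermal inertia, a property combining density, heat capacity and grain-to-grain conduction, but the qualitative point is the one in the article: loose material cools faster.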

“This is something that has never been done before,” Caspi says. “We’re going to try to make the first thermal image heat map of the surface of Mercury.”

Unfortunately for Caspi, only two people can fly on each jet: the pilot and someone to run the instrument. Caspi will remain on the ground in Houston, out of the path of totality. “So I will get to watch the eclipse on TV,” Caspi says.

Eclipses show wrong physics can give right results

Every few years, for a handful of minutes or so, science shines while the sun goes dark.

A total eclipse of the sun is, for those who witness it, something like a religious experience. For those who understand it, it is symbolic of science’s triumph over mythology as a way to understand the heavens.

In ancient Greece, the pioneer philosophers realized that eclipses illustrate how fantastic phenomena do not require phantasmagoric explanation. An eclipse was not magic or illusion; it happened naturally when one celestial body got in the way of another one. In the fourth century B.C., Aristotle argued that lunar eclipses provided strong evidence that the Earth was itself a sphere (not flat, as some primitive philosophers had believed). As the eclipsed moon darkened, the edge of the advancing shadow was a curved line, demonstrating the curvature of the Earth, whose body was intervening between the sun and the moon.

Oft-repeated legend proclaims that the first famous Greek natural philosopher, Thales of Miletus, even predicted a solar eclipse that occurred in Turkey in 585 B.C. But the only account of that prediction comes from the historian Herodotus, writing more than a century later. He claimed that during a fierce battle “day suddenly became night,” just as Thales had forecast would happen sometime during that year.

There was an eclipse in 585 B.C., but it’s unlikely that Thales could have predicted it. He might have known that the moon blocks the sun in an eclipse. But no mathematical methods then available would have allowed him to say when — except, perhaps, a lucky coincidence based on the possibility that solar eclipses occurred at some regular cycle after lunar eclipses. Yet even that seems unlikely, a new analysis posted online last month finds.

“Some scholars … have flatly denied the prediction, while others have struggled to find a numerical cycle by means of which the prediction could have been carried out,” writes astronomer Miguel Querejeta. Many such cycles have already been ruled out, he notes. And his assessment of two other cycles concludes “that none of those conjectures can be regarded as serious explanations of the problematic prediction of Thales: in addition to requiring the existence of long and precise eclipse records … both cycles that have been examined overlook a number of eclipses which match the visibility criteria and, consequently, the patterns suggested seem to disappear.”

It’s true that the ancient Babylonians worked out methods for predicting lunar eclipses based on patterns in the intervals between them. And the famous Greek Antikythera mechanism from the second century B.C. seems to have used such cycle data to predict some eclipses.

Ancient Greek astronomers, such as Hipparchus (c. 190–120 B.C.), studied eclipses and the geometrical relationships of the Earth, moon and sun that made them possible. Understanding those relationships well enough to make reasonably accurate predictions became possible, though, only with the elaborate mathematical description of the cosmos developed (drawing on Hipparchus’ work) by Claudius Ptolemy. In the second century A.D., he worked out the math for explaining the movements of heavenly bodies, assuming the Earth sat motionless in the center of the universe.

His system specified the basic requirements for a solar eclipse: It must be the time of the new moon — when moon and sun are on the same side of the Earth — and the moon must be near one of the points where its orbit crosses the ecliptic, the plane of the sun’s apparent path through the sky. (The moon orbits the Earth at a slight angle, crossing the plane of the ecliptic twice a month.) Only precise calculations of the movements of the sun and moon in their orbits could make it possible to predict the dates for eclipsing alignments.
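
A rough numerical sketch shows why those two conditions line up only occasionally. The Python snippet below is a back-of-the-envelope illustration (not Ptolemy's method, and far too crude for real prediction): it steps through successive new moons using the well-known mean lengths of the synodic and draconic months and flags the new moons that fall close to a node of the moon's orbit, where a solar eclipse becomes geometrically possible. The cutoff of a day and a half is an arbitrary illustrative threshold.

```python
# Back-of-the-envelope eclipse-season sketch using two well-known mean periods.
# Illustrative only: real prediction needs far more than these averages.
SYNODIC = 29.530589    # days between successive new moons
DRACONIC = 27.212221   # days between successive passages of the same node

HALF_NODE_CYCLE = DRACONIC / 2   # the moon crosses a node twice per draconic month
LIMIT = 1.5                      # arbitrary illustrative cutoff, in days

# Assume day 0 is a new moon sitting exactly on a node, then ask how far
# from the nearest node crossing each later new moon falls.
for n in range(1, 50):
    t = n * SYNODIC
    r = t % HALF_NODE_CYCLE
    days_from_node = min(r, HALF_NODE_CYCLE - r)
    if days_from_node < LIMIT:   # close enough to a node for an eclipse to be possible
        print(f"new moon #{n:2d}, day {t:7.1f}: {days_from_node:.2f} days from a node")
```

Run as written, the hits cluster into eclipse seasons roughly every five to six lunations, about half a year apart; turning that pattern into dated, located predictions took the detailed orbital tables Ptolemy assembled.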

Predicting when an eclipse will occur is not quite the same as forecasting exactly where it will occur. To be accurate, eclipse predictions need to take subtle gravitational interactions into account. Maps showing precisely accurate paths of totality (such as for the Great American Eclipse of 2017) became possible only with Isaac Newton’s 17th century law of gravity (and the further development of mathematical tools to exploit it). Nevertheless Ptolemy had developed a system that, in principle, showed how to anticipate when eclipses would happen. Curiously, though, this success was based on a seriously wrong blueprint for the architecture of the cosmos.

As Copernicus persuasively demonstrated in the 16th century, the Earth orbits the sun, not vice versa. Ptolemy’s geometry may have been sound, but his physics was backwards. While demonstrating that mathematics is essential to describing nature and predicting physical phenomena, he inadvertently showed that math can be successful without being right.

It’s wrong to blame him for that, though. In ancient times math and science were separate enterprises (science was then “natural philosophy”). Astronomy was regarded as math, not philosophy. An astronomer’s goal was to “save the phenomena” — to describe nature correctly with math that corresponded with observations, but not to seek the underlying physical causes of those observations. Ptolemy’s mathematical treatise, the Almagest, was about math, not physics.

One of the great accomplishments of Copernicus was to merge the math with the physical reality of his system. He argued that the sun occupied the center of the cosmos, and that the Earth was a planet like the others, which had previously been supposed to orbit the Earth. Copernicus worked out the math for a sun-centered planetary system. It was a simpler system than Ptolemy’s. And it was just as good for predicting eclipses.

As it turned out, though, even Copernicus didn’t have it quite right. He insisted that planetary orbits were circular (modified by secondary circles, the epicycles). In fact, the orbits are ellipses. It’s a recurring story in science that mathematically successful theories sometimes are just approximately correct because they are based on faulty understanding of the underlying physics. Even Newton’s law of gravity turned out to be just a good mathematical explanation; the absolute space and invariable flow of time he believed in just aren’t an accurate representation of the universe we live in. It took Einstein to see that and develop the view of gravity as the curvature of spacetime induced by the presence of mass.
Of course, proving Einstein right required the careful measurement by Arthur Eddington and colleagues of starlight bending near the sun during a solar eclipse in 1919. It’s a good thing they knew when and where to go to see it.
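
The figure Eddington’s team was testing is easy to reproduce. General relativity predicts that a ray of starlight grazing the sun’s edge is bent by 4GM/(rc²), about 1.75 arcseconds, twice what a naive Newtonian calculation gives. The sketch below is only a sanity check of that predicted number using standard physical constants, not a model of the 1919 measurement.

```python
import math

# Standard physical constants (rounded CODATA-style values)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

# General-relativistic deflection for a ray grazing the solar limb
deflection_rad = 4 * G * M_SUN / (R_SUN * C ** 2)
arcsec = math.degrees(deflection_rad) * 3600

print(f"predicted deflection: {arcsec:.2f} arcseconds")   # about 1.75
```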

Map reveals the invisible universe of dark matter

Scientists have created the largest map of dark matter yet, part of a slew of new measurements that help pin down the universe’s dark contents. Covering about a thirtieth of the sky, the map charts the density of both normal matter — the stuff that’s visible — and dark matter, an unidentified but far more abundant substance that pervades the cosmos.

Matter of both types is gravitationally attracted to other matter. That coupling organizes the universe into relatively empty regions of space surrounded by dense cosmic neighborhoods.
Researchers from the Dark Energy Survey used the Victor Blanco telescope in Chile to survey 26 million galaxies in a section of the southern sky for subtle distortions caused by the gravitational heft of both dark and normal matter. Scientists unveiled the new results August 3 at Fermilab in Batavia, Ill., during a meeting of the American Physical Society.

Dark matter is also accompanied by a stealthy companion, dark energy, an unseen force that is driving the universe to expand at an increasing clip. According to the new inventory, the universe is about 21 percent dark matter and 5 percent ordinary matter. The remainder, 74 percent, is dark energy.

The new measurements differ slightly from previous estimates based on the cosmic microwave background, light that dates back to 380,000 years after the Big Bang (SN: 3/21/15, p. 7). But the figures are consistent when measurement errors are taken into account, the researchers say.
“The fact that it’s really close, we think is pretty remarkable,” says cosmologist Josh Frieman of Fermilab, who directs the Dark Energy Survey. But if the estimates don’t continue to align as the survey collects more data, something might be missing in cosmologists’ theories of the universe.
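
Saying two estimates are consistent when measurement errors are taken into account has a standard quantitative reading: their difference is small compared with their combined uncertainty. The snippet below illustrates that check; the central values echo the rounded fractions quoted above, but the error bars are invented for the example, so it says nothing about the survey’s actual statistics.

```python
import math

def tension(value_a, sigma_a, value_b, sigma_b):
    """Difference between two independent estimates, expressed in units
    of their combined standard error (the 'number of sigma')."""
    return abs(value_a - value_b) / math.sqrt(sigma_a ** 2 + sigma_b ** 2)

# Dark matter fraction: 0.21 from the new survey (rounded figure above)
# versus a commonly quoted cosmic microwave background value near 0.26.
# The uncertainties here are hypothetical, for illustration only.
print(f"{tension(0.21, 0.03, 0.26, 0.01):.1f} sigma apart")
# A result well under about 2 sigma is conventionally described as consistent.
```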

Muscle pain in people on statins may have a genetic link

A new genetics study adds fuel to the debate about muscle aches that have been reported by many people taking popular cholesterol-lowering drugs called statins.

About 60 percent of people of European descent carry a genetic variant that may make them more susceptible to muscle aches in general. But counterintuitively, these people had a lower risk of muscle pain when they took statins compared with placebos, researchers report August 29 in the European Heart Journal.
Millions of people take statins to lower cholesterol and fend off the hardening of arteries. But up to 78 percent of patients stop taking the medicine. One common reason for ceasing the drugs’ use is side effects, especially muscle pain, says John Guyton, a clinical lipidologist at Duke University School of Medicine.

It has been unclear, however, whether statins are to blame for the pain. In one study, 43 percent of patients who had muscle aches while taking at least one type of statin were also pained by other types of statin (SN: 5/13/17, p. 22). But 37 percent of muscle-ache sufferers in that study had pain not related to statin use. Other clinical trials have found no difference in muscle aches between people taking statins and those not taking the drugs.

The new study hints that genetic factors, especially ones involved in the immune system’s maintenance and repair of muscles, may affect people’s reactions to statins. “This is a major advance in our understanding about myalgia,” or muscle pain, says Guyton, who was not involved in the study.

People with two copies of the common form of the gene LILRB5 tend to have higher-than-usual blood levels of two proteins released by injured muscles, creatine phosphokinase and lactate dehydrogenase. Higher levels of those proteins may predispose people to more aches and pains. In an examination of data from several studies involving white Europeans, people with dual copies of the common variant were nearly twice as likely to have achy muscles while taking statins as people with a less common variant, Moneeza Siddiqui of the University of Dundee School of Medicine in Scotland and colleagues discovered.

But when researchers examined who had pain when taking statins versus placebos, those with two copies of the common variant seemed to be protected from getting statin-associated muscle pain. Why is not clear.
People with double copies of the common form of the gene who experience muscle pain may stop taking statins because they erroneously think the drugs are causing the pain, study coauthor Colin Palmer of the University of Dundee said in a news release.

The less common version of the gene is linked to reduced levels of the muscle-damage proteins, and should protect against myalgia. Yet people with this version of the gene were the ones more likely to develop muscle pain specifically linked to taking statins during the trials.

The finding suggests that when people with the less common variant develop muscle pain while taking statins, the effect really is from the drugs, the researchers say.

But researchers still don’t know the nitty-gritty details of how the genetic variants promote or protect against myalgia while on statins. Neither version of the gene guarantees that a patient will develop side effects — or that they won’t. The team proposes further clinical trials to unravel interactions between the gene and the drugs.

More study is needed before doctors can add the gene to the list of tests patients get, Guyton says. “I don’t think we’re ready to put this genetic screen into clinical practice at all,” he says. For now, “it’s much easier just to give the patient the statin” and see what happens.

Dark matter remains elusive

Patience is a virtue in the hunt for dark matter. Experiment after experiment has come up empty in the search — and the newest crop is no exception.

Observations hint at the presence of an unknown kind of matter sprinkled throughout the cosmos. Several experiments are focused on the search for one likely dark matter candidate: weakly interacting massive particles, or WIMPs (SN: 11/12/16, p. 14). But those particles have yet to be spotted.

Recent results, posted at arXiv.org, continue the trend. The PandaX-II experiment, based in China, found no hint of the particles, scientists reported August 23. The XENON1T experiment in Italy also came up WIMPless according to a May 18 paper. Scientists with the DEAP-3600 experiment in Sudbury, Canada, reported their first results on July 25. Signs of dark matter? Nada. And the SuperCDMS experiment in the Soudan mine in Minnesota likewise found no hints of WIMPs, scientists reported August 29.

Another experiment, PICO-60, also located in Sudbury, reported its contribution to the smorgasbord of negative results June 23 in Physical Review Letters.

Scientists haven’t given up hope. Researchers are building ever-larger detectors, retooling their experiments and continuing to expand the search beyond WIMPs.

How hurricanes and other devastating disasters spur scientific research

Every day, it seems like there’s a new natural disaster in the headlines. Hurricane Harvey inundates Texas. Hurricane Irma plows through the Caribbean and the U.S. south, and Jose is hot on its heels. A deadly 8.1-magnitude earthquake rocks Mexico. Wildfires blanket the western United States in choking smoke.

While gripping tales of loss and heroism rightly fill the news, another story quietly unfolds. Hurricanes, droughts, oil spills, wildfires and other disasters are natural labs. Data quickly gathered in the midst of such chaos, as well as for years afterward, can lead to discoveries that ultimately make rescue, recovery and resilience to future crises possible.

So when disaster strikes, science surges, says human ecologist Gary Machlis of Clemson University in South Carolina. He has studied and written about the science done during crises and was part of the U.S. Department of the Interior’s Strategic Sciences Group, which helps government officials respond to disasters.

The science done during Hurricane Harvey is an example. Not long after the heavy rains stopped, crews of researchers from the U.S. Geological Survey fanned across Texas, dropping sensors into streams. The instruments measure how swiftly the water is flowing and determine the severity of the flooding in different regions affected by the hurricane. Knowing where the flooding is the worst can help the Federal Emergency Management Agency and other government groups direct funds to areas with the most extreme damage.
In the days leading up to Irma’s U.S. landfall, scientists from the same agency also went to the Florida, Georgia and South Carolina coasts to fasten storm-tide sensors to pier pylons and other structures. The sensors measure the depth and duration of the surge in seawater generated by the change in pressure and winds from the storm. This data will help determine damage from the surge and improve models of flooding in the future, which could help provide a better picture of where future storm waters will go and who needs to be evacuated ahead of hurricanes.

Even as Irma struck Florida, civil engineer Forrest Masters of the University of Florida in Gainesville, his students and collaborators traveled to the southern part of the state to study the intensity and variation in the hurricane’s winds. As winds blew and rain pelted, the team raised minitowers decked with instruments designed to measure ground-level gusts and turbulence. With this data, the researchers will compare winds in coastal areas, near buildings and around other structures, measurements that can help government agencies assess storm-related damage. The team will also take the data back to the Natural Hazards Engineering Research Infrastructure labs at the University of Florida to study building materials and identify those most resistant to extreme winds.
“Scientists want to use their expertise to help society in whatever way they can during a disaster,” says biologist Teresa Stoepler, who was a member of the Strategic Sciences Group when she worked at USGS.

As a former science & technology policy fellow with the American Association for the Advancement of Science, Stoepler studied the science that resulted from the 2010 Deepwater Horizon oil spill. This devastating explosion of an oil rig spewed 210 million gallons of petroleum into the Gulf of Mexico. It also opened the door for scientific research. Biologists, chemists, psychologists and a range of other scientists wanted to study the environmental, economic and mental health consequences of the disaster; local scientists wanted to study the effects of the spill on their communities; and leaders in local and federal government needed guidance on how to respond. There was a need to coordinate all of that effort.

That’s where the Strategic Sciences Group came in. The group, officially organized in 2012, brought together researchers from federal, academic and nongovernmental organizations. The goal was to use data collected from the spill to map out possible long-term environmental and economic consequences of the disaster, determine where research still needed to be done and determine how to allocate money for response and recovery efforts.

Not long after its formation, the group had another disaster to respond to: Superstorm Sandy devastated the U.S. East Coast, even pushing floodwaters into the heart of New York City. Scientific collaborations allowed researchers and policy makers to get a better sense of whether wetlands, sea walls or other types of infrastructure would be best to invest in to prevent future devastation. The work also gave clues as to what types of measurements, such as the height of floodwaters, should be made in the future — say, during storms like Harvey and Irma — to speed recovery efforts afterward.

Moving forward, we’re likely to see this kind of collaboration coming into play time and again. No doubt, more natural disasters loom. And other groups are getting into crisis science. For instance, Stanford University, with its Science Action Network, aims to drive interdisciplinary research during disasters and encourage communication across the many groups responding to those disasters. And the Disaster Research Response program at the National Institutes of Health provides a framework for coordinating research on the medical and public health aspects of disasters and public health emergencies.

Surges in science will stretch from plunging into the chaos of a crisis for in-the-moment data to monitoring years of aftermath. Retrospective studies of the data collected a year, three years or even five years after a disaster could reveal where there are gaps in the science and how those can be filled in during future events.

The more data collected, the more discoveries made and lessons learned, the more likely we’ll be ready to face the next disaster.

In a first, human embryos edited to explore gene function

For the first time, researchers have disabled a gene in human embryos to learn about its function.

Using molecular scissors called CRISPR/Cas9, Kathy Niakan and colleagues made crippling cuts in the OCT4 gene, the team reports September 20 in Nature. The edits revealed a surprising role for the gene in the development of the placenta.

Researchers commonly delete and disable genes in mice, fruit flies, yeast and other laboratory critters to investigate the genes’ normal roles, but had never before done so in human embryos. Last year, government regulators in the United Kingdom gave permission for Niakan, a developmental biologist at the Francis Crick Institute in London, and colleagues to perform gene editing on human embryos left over from in vitro fertilization treatments (SN Online: 2/1/16). The researchers spent nearly a year optimizing techniques in mouse embryos and human stem cells before conducting human embryo experiments, Niakan says.
This groundbreaking research lets scientists directly study genes involved in human development, says developmental biologist Dieter Egli of Columbia University. “This is unheard of. It’s not something that has been possible,” he says. “What we know about human development is largely inferred from studies of mice, frogs and other model organisms.”

Other researchers have used CRISPR/Cas9 to repair mutated genes in human embryos (SN: 4/15/17, p. 16; SN: 9/2/17, p. 6). The eventual aim of that research is to prevent genetic diseases, but it has led to concerns that the technology could be abused to produce “designer babies” who are better looking, smarter and more athletic than they otherwise would be.

“There’s nothing irresponsible about the research in this case,” says stem cell researcher Paul Knoepfler of the University of California, Davis, School of Medicine. The researchers focused on basic questions about how one gene affects human embryo development. Such studies may one day lead to better fertility treatments, but the more immediate goal is to gain better insights into human biology.

Niakan’s group focused on a gene called OCT4 (also known as POU5F1), a master regulator of gene activity, which is important in mouse embryo development. This gene is also known to help human embryonic stem cells stay flexible enough to become any type of body cell, a property known as pluripotency. Scientists use OCT4 protein to reprogram adult cells into embryonic-like cells, an indication that it is involved in early development (SN: 11/24/07, p. 323). But researchers didn’t know precisely how the OCT4 gene operates during human development. Niakan already had clues that it works at slightly different times in human embryos than it does in mice (SN: 10/3/15, p. 13).

In the experiment, human embryos lacking OCT4 had difficulty reaching the blastocyst stage: Only 19 percent of edited embryos formed blastocysts, while 47 percent of unedited embryos did. Blastocysts are balls of about 200 cells that form about five or six days after fertilization. The ball’s outer layer of cells gives rise to the placenta. Inside the blastocyst, one type of embryonic stem cells will become the yolk sac. Another kind, about 20 cells known as epiblast progenitor cells, will give rise to all the cells in the body.
Niakan and colleagues predicted from earlier work with mice and human embryonic stem cells that the protein OCT4 would be necessary for the epiblast cells to develop correctly. As predicted, “knocking out” the OCT4 gene disrupted epiblasts’ development. What the researchers didn’t expect is that OCT4 also affects the development of the placenta precursor cells on the outside of the blastocyst.

“That’s not predicted anywhere in the literature,” Niakan says. “We’ll be spending quite a lot of time on this in the future to uncover exactly what this role might be.”

A mutation may explain the sudden rise in birth defects from Zika

A single genetic mutation made the Zika virus far more dangerous by enhancing its ability to kill nerve cells in developing brains, a new study suggests.

The small change — which tweaks just one amino acid in a protein that helps Zika exit cells — may cause microcephaly, researchers report September 28 in Science. The mutation arose around May 2013, shortly before a Zika outbreak in French Polynesia, the researchers calculate.

Zika virus was discovered decades ago but wasn’t associated with microcephaly — a birth defect characterized by a small head and brain — until the 2015–2016 outbreak in Brazil. Women who had contracted the virus while pregnant started giving birth to babies with the condition at higher-than-usual rates (SN: 4/2/16, p. 26).
Researchers weren’t sure why microcephaly suddenly became a complication of Zika infections, says Pei-Yong Shi, a virologist at the University of Texas Medical Branch at Galveston. Maybe the virus did cause microcephaly before, scientists suggested, but at such low rates that no one noticed. Or people in South America might be more vulnerable to the virus. Perhaps their immune systems don’t know how to fight it, they have a genetic susceptibility or prior infections with dengue made Zika worse (SN: 4/29/17, p. 14). But Shi and colleagues in China thought the problem might be linked to changes in the virus itself.
The researchers compared a strain of Zika isolated from a patient in Cambodia in 2010 with three Zika strains collected from patients who contracted the virus in Venezuela, Samoa and Martinique during the epidemic of 2015–2016. The team found seven differences between the Cambodian virus and the three epidemic strains.

Researchers engineered seven versions of the Cambodian virus, each with one of the epidemic strains’ mutations, and injected the viruses into fetal mouse brains. The virus carrying one of these mutations, dubbed S139N, killed brain cells in fetal mice and destroyed human brain cells grown in lab dishes more aggressively than the Cambodian strain from 2010 did, the researchers found.
“That’s pretty convincing evidence that it at least plays some role in what we’re seeing now,” says Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases.

The mutation changes an amino acid in a Zika protein called prM. That protein helps the virus mature within infected cells and get out of the cells to infect others. Shi and colleagues don’t yet know why tweaking the protein makes the virus kill brain cells more readily.

The alteration in that protein probably isn’t the entire reason epidemic strains cause microcephaly, Shi says. The Cambodian strain also led to the death of a few brain cells, but perhaps not enough to cause microcephaly. “We believe there are other changes in the virus that collectively enhance its virulence,” he says. In May in Nature, Shi and colleagues described a different mutation that allows the virus to infect mosquitoes more effectively.

Brain cells from different people vary in their susceptibility to Zika infections, says infectious disease researcher Scott Weaver, also at the University of Texas Medical Branch but not involved in the study. He says more work on human cells and in nonhuman primates is needed to confirm whether this mutation is really the culprit in microcephaly.