Coral reefs won’t be out of hot water anytime soon. A global bleaching event that began in June 2014 is the longest on record and now covers a larger area than ever before. What’s worse, it shows no signs of ending.
Global warming exacerbated by the latest El Niño is to blame, National Oceanic and Atmospheric Administration scientists reported Monday at the 13th International Coral Reef Symposium in Honolulu. Since 1979, periodic mass bleachings covering hundreds of kilometers have only lasted for “a year or so,” said NOAA Coral Reef Watch Coordinator Mark Eakin. But this one has dragged on for two years, threatening more than 40 percent of reefs globally, and more than 70 percent in the United States.
When corals are stressed by heat, they reject the colorful algae living inside them and turn a ghostly white. Those algae are a major source of food, so reefs can die if conditions don’t improve.
NOAA scientists aren’t sure what will end this episode. It could extend into 2017, and more frequent events are possible in the future, the scientists said. “Climate models suggest that most coral reefs may be seeing bleaching every other year by mid-century,” Eakin added. “How much worse that gets will depend on how we deal with global warming.”
Not too long ago, the internet was stationary. Most often, we’d browse the Web from a desktop computer in our living room or office. If we were feeling really adventurous, maybe we’d cart our laptop to a coffee shop. Looking back, those days seem quaint.
Today, the internet moves through our lives with us. We hunt Pokémon as we shuffle down the sidewalk. We text at red lights. We tweet from the bathroom. We sleep with a smartphone within arm’s reach, using the device as both lullaby and alarm clock. Sometimes we put our phones down while we eat, but usually faceup, just in case something important happens. Our iPhones, Androids and other smartphones have led us to effortlessly adjust our behavior. Portable technology has overhauled our driving habits, our dating styles and even our posture. Despite the occasional headlines claiming that digital technology is rotting our brains, not to mention what it’s doing to our children, we’ve welcomed this alluring life partner with open arms and swiping thumbs.
Scientists suspect that these near-constant interactions with digital technology influence our brains. Small studies are turning up hints that our devices may change how we remember, how we navigate and how we create happiness — or not. Somewhat limited, occasionally contradictory findings illustrate how science has struggled to pin down this slippery, fast-moving phenomenon. Laboratory studies hint that technology, and its constant interruptions, may change our thinking strategies. Like our husbands and wives, our devices have become “memory partners,” allowing us to dump information there and forget about it — an off-loading that comes with benefits and drawbacks. Navigational strategies may be shifting in the GPS era, a change that might be reflected in how the brain maps its place in the world. Constant interactions with technology may even raise anxiety in certain settings.
Yet one large study that asked people about their digital lives suggests that moderate use of digital technology has no ill effects on mental well-being.
The question of how technology helps and hinders our thinking is incredibly hard to answer. Both lab and observational studies have drawbacks. The artificial confines of lab experiments lead to very limited sets of observations, insights that may not apply to real life, says experimental psychologist Andrew Przybylski of the University of Oxford. “This is a lot like drawing conclusions about the effects of baseball on players’ brains after observing three swings in the batting cage.”
Observational studies of behavior in the real world, on the other hand, turn up associations, not causes. It’s hard to pull out real effects from within life’s messiness. The goal, some scientists say, is to design studies that bring the rigors of the lab to the complexities of real life, and then to use the resulting insights to guide our behavior. But that’s a big goal, and one that scientists may never reach.
Evolutionary neurobiologist Leah Krubitzer is comfortable with this scientific ambiguity. She doesn’t put a positive or negative value on today’s digital landscape. Neither good nor bad, it just is what it is: the latest iteration on the continuum of changing environments, says Krubitzer, of the University of California, Davis.
“I can tell you for sure that technology is changing our brains,” she says. It’s just that so far, no one knows what those changes mean.
Of course, nearly everything changes the brain. Musical training reshapes parts of the brain. Learning the convoluted streets of London swells a mapmaking structure in the brains of cabbies. Even getting a good night’s sleep changes the brain. Every aspect of our environment can influence brain and behaviors. In some ways, digital technology is no different. Yet some scientists suspect that there might be something particularly pernicious about digital technology’s grip on the brain.
“We are information-seeking creatures,” says neuroscientist Adam Gazzaley of the University of California, San Francisco. “We are driven to it in very powerful ways.” Today’s digital tools give us unprecedented exposure to information that doesn’t wait for you to seek it out; it seeks you out, he says. That pull is nearly irresistible.
Despite the many unanswered questions about whether our digital devices are influencing our brains and behaviors, and whether for good or evil, technology is galloping ahead. “We should have been asking ourselves [these sorts of questions] in the ’70s or ’80s,” Krubitzer says. “It’s too late now. We’re kind of closing the barn doors after the horses got out.”

Attention grabber

One way in which today’s digital technology is distinct from earlier advances (like landline telephones) is the sheer amount of time people spend with it. In just a decade, smartphones have saturated the market, enabling instant internet access to an estimated 2 billion people around the world. In one small study reported in 2015, 23 adults, ages 18 to 33, spent an average of five hours a day on their phones, broken up into 85 distinct daily sessions. When asked how many times they thought they used their phones, participants underestimated by half.
In a different study, Larry Rosen, a psychologist at California State University, Dominguez Hills, used an app to monitor how often college students unlocked their phones. The students checked their phones an average of 60 times a day, each session lasting about three to four minutes for a total of 220 minutes a day. That’s a lot of interruption, Rosen says. Smartphones are “literally omnipresent 24-7, and as such, it’s almost like an appendage,” he says. And often, we are compelled to look at this new, alluring rectangular limb instead of what’s around us. “This device is really powerful,” Rosen says. “It’s really influencing our behavior. It’s changed the way we see the world.”
Technology does that. Printing presses, electricity, televisions and telephones all shifted people’s habits drastically, Przybylski says. He proposes that the furor over digital technology melting brains and crippling social lives is just the latest incarnation of the age-old fear of change. “You have to ask yourself, ‘Is there something magical about the power of an LCD screen?’ ” Przybylski says.
Yet some researchers suspect that there is something particularly compelling about this advance. “It just feels different. Computers and the internet and the cloud are embedded in our lives,” says psychologist Benjamin Storm of the University of California, Santa Cruz. “The scope of the amount of information we have at our fingertips is beyond anything we’ve ever experienced. The temptation to become really reliant on it seems to be greater.”
Memory outsourcing

Our digital reliance may encourage even more reliance, at least for memory, Storm’s work suggests. Sixty college undergraduates were given a mix of trivia questions — some easy, some hard. Half of the students had to answer the questions on their own; the other half were told to use the internet. Later, the students were given an easier set of questions, such as “What is the center of a hurricane called?” This time, the students were told they could use the internet if they wanted.
People who had used the internet initially were more likely to rely on internet help for the second, easy set of questions, Storm and colleagues reported online last July in Memory. “People who had gotten used to using the internet continued to do so, even though they knew the answer,” Storm says. This kind of overreliance may signal a change in how people use their memory. “No longer do we just rely on what we know,” he says.

That work builds on results published in a 2011 paper in Science. A series of experiments showed that people who expected to have access to the internet later made less effort to remember things. In this way, the internet has taken the place formerly filled by spouses who remember birthdays, grandparents who remember recipes and coworkers who remember the correct paperwork codes — officially known as “transactive memory partners.”

“We are becoming symbiotic with our computer tools,” Betsy Sparrow, then at Columbia University, and colleagues wrote in 2011. “The experience of losing our internet connection becomes more and more like losing a friend. We must remain plugged in to know what Google knows.”
That digital crutch isn’t necessarily a bad thing, Storm points out. Human memory is notoriously squishy, susceptible to false memories and outright forgetting. The internet, though imperfect, can be a resource of good information. And it’s not clear, he says, whether our memories are truly worse, or whether we perform at the same level, but just reach the answer in a different way.
“Some people think memory is absolutely declining as a result of us using technology,” he says. “Others disagree. Based on the current data, though, I don’t think we can really make strong conclusions one way or the other.”
The potential downsides of this memory outsourcing are nebulous, Storm says. It’s possible that digital reliance influences — and perhaps even weakens — other parts of our thinking. “Does it change the way we learn? Does it change the way we start to put information together, to build our own stories, to generate new ideas?” Storm asks. “There could be consequences that we’re not necessarily aware of yet.”
Research by Gazzaley and others has documented effects of interruptions and multitasking, which are hard to avoid with incessant news alerts, status updates and Instagrams waiting in our pockets. Siphoning attention can cause trouble for a long list of thinking skills, including short- and long-term memory, attention, perception and reaction time. Those findings, however, come from experiments in labs that ask a person to toggle between two tasks while undergoing a brain scan, for instance. Similar effects have not been as obvious for people going about their daily lives, Gazzaley says. But he is convinced that constant interruptions — the dings and buzzes, our own restless need to check our phones — are influencing our ability to think.
Making maps

Consequences of technology are starting to show up for another cognitive task — navigating, particularly while driving. Instead of checking a map and planning a route before a trip, people can now rely on their smartphones to do the work for them. Anecdotal news stories describe people who obeyed the tinny GPS voice that instructed them to drive into a lake or through barricades at the entrance of a partially demolished bridge. Our navigational skills may be at risk as we shift to neurologically easier ways to find our way, says cognitive neuroscientist Véronique Bohbot of McGill University in Montreal.
Historically, getting to the right destination required a person to have the lay of the land, a mental map of the terrain. That strategy takes more work than one that’s called a “response strategy,” the type of navigating that starts with an electronic voice command. “You just know the response — turn right, turn left, go straight. That’s all you know,” Bohbot says. “You’re on autopilot.” A response strategy is easier, but it leaves people with less knowledge. People who walked through a town in Japan with human guides did a better job later navigating the same route than people who had walked with GPS as a companion, researchers have found.
Scientists are looking for signs that video games, which often expose people to lots of response-heavy situations, influence how people get around. In a small study, Bohbot and colleagues found that people who average 18 hours a week playing action video games such as Call of Duty navigated differently than people who don’t play the games. When tested on a virtual maze, players of action video games were more likely to use the simpler response learning strategy to make their way through, Bohbot and colleagues reported in 2015 in Proceedings of the Royal Society B.
That easier type of response navigation depends on the caudate nucleus, a brain area thought to be involved in habit formation and addiction. In contrast, nerve cells in the brain’s hippocampus help create mental maps of the world and assist in the more complex navigation. Some results suggest that people who use the response method have bigger caudate nuclei, and more brain activity there. Conversely, people who use spatial strategies that require a mental map have larger, busier hippocampi.
Those results on video game players are preliminary and show an association within a group that may share potentially confounding similarities. Yet it’s possible that getting into a habit of mental laxity may change the way people navigate. Digital technology isn’t itself to blame, Bohbot says. “It’s not the technology that’s necessarily good or bad for our brain. It’s how we use the technology,” she says. “We have a tendency to use it in the way that seems to be easiest for us. We’re not making the effort.”
Parts of the brain, including those used to navigate, have many jobs. Changing one aspect of brain function with one type of behavior might have implications for other aspects of life. A small study by Bohbot showed that people who navigate by relying on the addiction-related caudate nucleus smoke more cigarettes, drink more alcohol and are more likely to use marijuana than people who rely on the hippocampus. What to make of that association is still very much up in the air.
Sweating the smartphone

Other researchers are trying to tackle questions of how technology affects our psychological outlooks. Rosen and colleagues have turned up clues that digital devices have become a new source of anxiety for people. In diabolical experiments, Cal State’s Rosen takes college students’ phones away, under the ruse that the devices are interfering with laboratory measurements of stress, such as heart rate and sweating. The phones are left on, but placed out of reach of the students, who are reading a passage. Then, the researchers start texting the students, who are forced to listen to the dings without being able to see the messages or respond. Measurements of anxiety spike, Rosen has found, and reading comprehension dwindles.
Other experiments have found that heavy technology users last about 10 minutes without their phones before showing signs of anxiety.
Fundamentally, an interruption in smartphone access is no different from the interruptions of the days before smartphones, when the landline rang as you were walking into the house with bags full of groceries and you missed the call. Both situations can raise anxiety over a connection missed. But Rosen suspects that our dependence on digital technology causes these situations to occur much more often.
“The technology is magnificent,” he says. “Having said that, I think that this constant bombardment of needing to check in, needing to be connected, this feeling of ‘I can’t be disconnected, I can’t cut the tether for five minutes,’ that’s going to have a long-term effect.”
The question of whether digital technology is good or bad for people is nearly impossible to answer, but a survey of 120,000 British 15-year-olds (99.5 percent reported using technology daily) takes a stab at it. Oxford’s Przybylski and Netta Weinstein at Cardiff University in Wales have turned up hints that moderate use of digital technology — TV, computers, video games and smartphones — correlates with good mental health, measured by questions that asked about happiness, life satisfaction and social activity.
When the researchers plotted technology use against mental well-being, an umbrella-shaped curve emerged, highlighting what the researchers call the “Goldilocks spot” of technology use — not too little and not too much.
“We found that you’ve got to do a lot of texting before it hurts,” Przybylski says. For smartphone use, the shift from benign to potentially harmful came after about two hours of use on weekdays, mathematical analyses revealed. Weekday recreational computer use had a longer limit: four hours and 17 minutes, the researchers wrote in the February Psychological Science. For even the heaviest users, the relationship between technology use and poorer mental health wasn’t all that strong. For scale, the potential negative effects of all that screen time were less than a third the size of the positive effects of eating breakfast, Przybylski and Weinstein found.
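The “Goldilocks spot” can be pictured with a toy inverted-U model. The formula and its numbers below are invented for illustration — the study fit real survey data, not this curve — but it shows why moderate use sits at the peak while both abstinence and heavy use score lower.

```python
def toy_wellbeing(hours, peak=2.0):
    """Invented inverted-U curve: well-being rises toward a peak at
    moderate use, then declines with heavier use. Illustrative only."""
    return 10 - (hours - peak) ** 2

# In this toy model, moderate use beats both no use and heavy use
scores = {h: toy_wellbeing(h) for h in (0, 2, 6)}
print(scores)  # {0: 6.0, 2: 10.0, 6: -6.0}
```

The real curve was estimated from questionnaire responses, and its peak differed by activity (about two hours for weekday smartphone use, longer for computers); the quadratic here just captures the umbrella shape.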
Even if a relationship is found between technology use and poorer mental health, scientists still wouldn’t know why, Przybylski says. Perhaps the effect comes from displacing something, such as exercise or socializing, and not the technology itself.
We may never know just how our digital toys shape our brains. Technology is constantly changing, and fast. Our brains are responding and adapting to it.
“The human neocortex basically re-creates itself over successive generations,” Krubitzer says. It’s a given that people raised in a digital environment are going to have brains that reflect that environment. “We went from using stones to crack nuts to texting on a daily basis,” she says. “Clearly the brain has changed.”
It’s possible that those changes are a good thing, perhaps better preparing children to succeed in a fast-paced digital world. Or maybe we will come to discover that when we no longer make the effort to memorize our best friend’s phone number, something important is quietly slipping away.
Pluto is a planet. It always has been, and it always will be, says Will Grundy of Lowell Observatory in Flagstaff, Arizona. Now he just has to convince the world of that.
For centuries, the word planet meant “wanderer” and included the sun, the moon, Mercury, Venus, Mars, Jupiter and Saturn. Eventually the moon and sun were dropped from the definition, but Pluto was included, after its discovery in 1930. That idea of a planet as a rocky or gaseous body that orbited the sun stuck, all the way up until 2006. Then, the International Astronomical Union narrowed the definition, describing a planet as any round object that orbits the sun and has moved any pesky neighbors out of its way, either by consuming them or flinging them off into space. Pluto failed to meet the last criterion (SN: 9/2/06, p. 149), so it was demoted to a dwarf planet.
Almost overnight, the solar system was down to eight planets. “The public took notice,” Grundy says. It latched onto the IAU’s definition — perhaps a bit prematurely. The definition has flaws, he and other planetary scientists argue. First, it discounts the thousands of exotic worlds that orbit other stars and also rogue ones with no star to call home (SN: 4/4/15, p. 22).
Second, it requires that a planet cut a clear path around the sun. But no planet does that; Earth, Mars, Jupiter and Neptune share their paths with asteroids, and objects crisscross planets’ paths all the time.
The third flaw is related to the second. Objects farther from the sun need to be pretty bulky to cut a clear path. You could have a rock the size of Earth in the Kuiper Belt and it wouldn’t have the heft required to gobble down or eject objects from its path. So, it couldn’t be considered a planet.
Grundy and colleagues (all members of NASA’s New Horizons mission to Pluto) laid out these arguments against the IAU definition of a planet March 21 at the Lunar and Planetary Science Conference in The Woodlands, Texas. A more suitable definition of a planet, says Grundy, is simpler: It’s any round object in space that is smaller than a star. By that definition, Pluto is a planet. So is the asteroid-belt object Ceres. So is Earth’s moon. “There’d be about 110 known planets in our solar system,” Grundy says, and plenty of exoplanets and rogue worlds would fit the bill as well.
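The two competing definitions reduce to a pair of simple predicates, sketched below. The boolean inputs are stand-ins for what are really involved astronomical judgments, and the function names are invented for this sketch.

```python
def iau_planet(is_round, orbits_sun, cleared_neighborhood):
    # 2006 IAU definition: all three criteria must hold
    return is_round and orbits_sun and cleared_neighborhood

def geophysical_planet(is_round, is_star):
    # Grundy's proposed definition: any round object smaller than a star
    return is_round and not is_star

# Pluto: round and orbiting the sun, but sharing its zone with
# other Kuiper Belt objects
print(iau_planet(True, True, False))    # False: demoted in 2006
print(geophysical_planet(True, False))  # True: a planet again
```

Under the second predicate, round moons and Ceres pass too, which is how the count reaches roughly 110 planets in the solar system.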
The reason for the tweak is to keep the focus on the features — the physics, the geology, the atmosphere — of the world itself, rather than worry about what’s going on around it, he says.
The New Horizons mission has shown that Pluto is an interesting world with active geology, an intricate atmosphere and other features associated with planets in the solar system. It makes no sense to write Pluto off because it doesn’t fit one criterion. Grundy seems convinced the public could easily readopt the small world as a planet, though he admits astronomers might be a tougher sell.
“People have been using the word correctly all along,” Grundy says. He suggests we stick with the original definition. That’s his plan.
It is the dazzling star of the biotech world: a powerful new tool that can deftly and precisely alter the structure of DNA. It promises cures for diseases, sturdier crops, malaria-resistant mosquitoes and more. Frenzy over the technique — known as CRISPR/Cas9 — is in full swing. Every week, new CRISPR findings are unfurled in scientific journals. In the courts, universities fight over patents. The media report on the breakthroughs as well as the ethics of this game changer almost daily.
But there is a less sequins-and-glitter side to CRISPR that’s just as alluring to anyone thirsty to understand the natural world. The biology behind CRISPR technology comes from a battle that has been raging for eons, out of sight and yet all around us (and on us, and in us).
The CRISPR editing tool has its origins in microbes — bacteria and archaea that live in obscene numbers everywhere from undersea vents to the snot in the human nose. For billions of years, these single-celled organisms have been at odds with the viruses — known as phages — that attack them, invaders so plentiful that a single drop of seawater can hold 10 million. And natural CRISPR systems (there are many) play a big part in this tussle. They act as gatekeepers, essentially cataloging viruses that get into cells. If a virus shows up again, the cell — and its offspring — can recognize and destroy it. Studying this system will teach biologists much about ecology, disease and the overall workings of life on Earth.
But moving from the simple, textbook story into real life is messy. In the few years since the defensive function of CRISPR systems was first appreciated, microbiologists have busied themselves collecting samples, conducting experiments and crunching reams of DNA data to try to understand what the systems do. From that has come much elegant physiology, a mass of complexity, surprises aplenty — and more than a little mystery.

Spoiled yogurt

The biology is complicated, and its basic nuts and bolts took some figuring out. There are two parts to CRISPR/Cas systems: the CRISPR bit and the Cas bit. The CRISPR bit — or “clustered regularly interspaced short palindromic repeats” — was stumbled on in the late 1980s and 1990s. Scientists then slowly pieced the story together by studying microbes that thrive in animals’ guts and in salt marshes, that cause the plague and that are used to make delicious yogurt and cheese.
None of the scientists knew what they were dealing with at first. They saw stretches of DNA with a characteristic pattern: short lengths of repeated sequence separated by other DNA sequences now known as spacers. Each spacer was unique. Because the roster of spacers could differ from one cell to the next in a given microbe species, an early realization was that these differences could be useful for forensic “typing” — investigators could tell whether food poisoning cases were linked, or if someone had stolen a company’s yogurt starter culture. But curious findings piled up. Some of those spacers, it turned out, matched the DNA of phages. In a flurry of reports in 2005, scientists showed, to name one example, that strains of the lactic acid bacterium Streptococcus thermophilus contained spacers that matched genetic material of phages known to infect Streptococcus. And the more spacers a strain had, the more resistant it was to attack by phages.
This began to look a lot like learned or adaptive immunity, akin to our own antibody system: After exposure to a specific threat, your immune system remembers and you are thereafter resistant to that threat. In a classic experiment published in Science in 2007, researchers at the food company Danisco showed it was so. They could see new spacers added when a phage infected a culture of S. thermophilus. Afterward, the bacterium was immune to the phage. They could artificially engineer a phage spacer into the CRISPR DNA and see resistance emerge; when they took the spacer away, immunity was lost.
This was handy intel for an industry that could find whole vats of yogurt-making bacteria wiped out by phage infestations. It was an exciting time scientifically and commercially, says Rodolphe Barrangou of North Carolina State University in Raleigh, who did a lot of the Danisco work. “It was not just discovering a cool system, but also uncovering a powerful phage-resistance technology for the dairy industry,” he says.
The second part of the CRISPR/Cas system is the Cas bit: a set of genes located near the cluster of CRISPR spacers. The DNA sequences of these genes strongly suggested that they carried instructions for proteins that interact with DNA or RNA in some fashion — sticking to it, cutting it, copying it, unraveling it. When researchers inactivated one Cas gene or another, they saw immunity falter. Clearly, the two bits of the system — CRISPR and Cas — were a team.

It took many more experiments to get to today’s basic model of how CRISPR/Cas systems fight phages — and not just phages. Other types of foreign DNA can get into microbes, including circular rings called plasmids that shuttle from cell to cell and DNA pieces called transposable elements, which jump around within genomes. CRISPRs can fend off these intruders, as well as keep a microbe’s genome in tidy order.
The process works like this: A virus injects its genetic material into the cell. Sensing this danger, the cell selects a little strip of that genetic material and adds it to the spacers in the CRISPR cluster. This step, known as immunization or adaptation, creates a list of encounters a cell has had with viruses, plasmids or other foreign bits of DNA over time — neatly lined up in reverse chronological order, newest to oldest.
Older spacers eventually get shed, but a CRISPR cluster can grow to be long — the record holder to date is 587 spacers in Haliangium ochraceum, a salt-loving microbe isolated from a piece of seaweed. “It’s like looking at the last 600 shots you had in your arm,” says Barrangou. “Think about that.”
New spacer in place, the microbe is now immunized. Later comes targeting. If that same phage enters the cell again, it’s recognized. The cell has made RNA copies of the relevant spacer, which bind to the matching spot on the genome of the invading phage. That “guide RNA” leads Cas proteins to target and snip the phage DNA, defanging the intruder.

Researchers now know there is a confetti storm of different CRISPR systems, and the list continues to grow. Some are simple — such as the CRISPR/Cas9 system that’s been adapted for gene editing in more complex creatures (SN: 4/15/17, p. 16) — and some are elaborate, with many protein workhorses deployed to get the job done.
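The immunize-then-target loop described above can be sketched as a toy simulation. This illustrates only the logic, not the biochemistry — real systems involve PAM sequences, Cas protein complexes and RNA intermediates — and every name below is invented for the sketch.

```python
import random

class Microbe:
    """Toy model of CRISPR adaptive immunity (illustrative only)."""

    def __init__(self, spacer_len=8):
        self.spacers = []            # newest first, like a real CRISPR array
        self.spacer_len = spacer_len

    def immunize(self, phage_genome):
        """Adaptation: store a short strip of the invader's sequence."""
        start = random.randrange(len(phage_genome) - self.spacer_len)
        spacer = phage_genome[start:start + self.spacer_len]
        self.spacers.insert(0, spacer)   # reverse chronological order

    def recognizes(self, phage_genome):
        """Targeting: a stored spacer guides Cas to matching phage DNA."""
        return any(s in phage_genome for s in self.spacers)

cell = Microbe()
phage = "ATGCCGTAGGCTAACGTTAGCCGA"

print(cell.recognizes(phage))   # False: never seen this phage before
cell.immunize(phage)            # first infection adds a spacer
print(cell.recognizes(phage))   # True: reinfection is recognized
```

The `spacers.insert(0, ...)` line mirrors the newest-to-oldest ordering of real CRISPR arrays, so the list reads like the "600 shots in your arm" Barrangou describes.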
Those who are sleuthing the evolution of CRISPR systems are deciphering a complex story. The part of the CRISPR toolbox involved in immunity (adding spacers after phages inject their genetic material) seems to have originated from a specific type of transposable element called a casposon. But the part responsible for targeting has multiple origins — in some cases, it’s another type of transposable element. In others, it’s a mystery.
The downsides

Given the power of CRISPR systems to ward off foes, one might think every respectable microbe out there in the soils, vents, lakes, guts and nostrils of this planet would have one. Not so.
Numbers are far from certain, partly because science hasn’t come close to identifying all the world’s microbes, let alone probing them all for CRISPRs. But the scads of microbial genetic data accrued so far throw up interesting trends.
Tallies suggest that CRISPR systems are far more prevalent in known archaea than in known bacteria — such systems exist in roughly 90 percent of archaea and about 35 percent of bacteria, says Eugene Koonin, a computational evolutionary biologist at the National Institutes of Health in Bethesda, Md. Archaea and bacteria, though both small and single-celled, are on opposite sides of the tree of life.
Perhaps more significantly, Koonin says, almost all the known microbes that live in superhot environments have CRISPRs. His group’s math models suggest that CRISPR systems are most useful when microbes encounter a big enough variety of viruses to make adaptive memory worth having. But if there’s too much variety, and viruses are changing very fast, CRISPRs don’t really help — because you’d never see the same virus again. The superhot ecosystems, he says, seem to have a stable amount of phage diversity that’s not too high or low.
And CRISPR systems have downsides. Just as people can develop autoimmune reactions against their own bodies, bacteria and archaea can accidentally make CRISPR spacers from bits of their own DNA — and risk chewing up their own genetic material. Researchers have seen this happen. “No immunity comes without a cost,” says Rotem Sorek, a microbial genomicist at the Weizmann Institute of Science in Rehovot, Israel.
But mistakes are rare, and Sorek and his colleagues recently figured out why in the microbe they study. The researchers reported in Nature in 2015 that CRISPR spacers are created from linear bits of DNA — and phage DNA is linear when it enters cells. The bacterial chromosome is protected because of its circular form. Should it break and become linear for a spell, such as when it’s being replicated, it contains signals that ward off the Cas proteins.
There are other negatives to CRISPR systems. It’s not always a bonus to keep out phages and other invaders, which can sometimes bring in useful things. Escherichia coli O157:H7, of food poisoning fame, can make humans sick because of toxin genes it harbors that were brought in by a phage, to name just one of myriad examples. Even CRISPR systems themselves are spread around the microbial kingdom via phages, plasmids or transposable elements.
For microbes that lack CRISPR systems, there are many other ways to repel foreign DNA — as much as 10 percent of a microbial genome may be devoted to hawkish warfare, and new defense systems are still being uncovered.
Countermeasures

The war between bacteria and phages is two-sided, of course. Just as a microbe wants to keep doors shut to protect its genetic integrity and escape destruction, the phage wants in.
And so the phage fights back against CRISPRs. It genetically morphs into forms that CRISPRs no longer recognize. Or it designs bespoke artillery. Microbiologist Joe Bondy-Denomy, now at the University of California, San Francisco, happened upon such customized weapons as a grad student in the lab of molecular microbiologist Alan Davidson at the University of Toronto. The team knew that the bacterium Pseudomonas aeruginosa, which lives in soil and water and can cause dangerous infections, has a vigorous CRISPR system. Yet some phages didn’t seem fazed by it.
That’s because those phages have small proteins that will bind to and interfere with this or that part of the CRISPR machinery, such as the Cas enzyme that cuts phage DNA. The binding disables the CRISPR system, the researchers reported in 2015 in Nature. Bondy-Denomy and others have since found anti-CRISPR genes in other phages and other kinds of interloping DNA. The genes are so common, Davidson says, that he wonders how many CRISPR systems are truly active.
In an especially bizarre twist, microbiologist Kimberley Seed of the University of California, Berkeley, found a phage that carries its own CRISPR system and uses it to fight back against the cholera bacterium it invades, she and colleagues reported in 2013 in Nature. It chops up a segment of bacterial DNA that normally inhibits phage infection.
Of course, in this never-ending scuffle one would expect the microbes to again fight back against the phages. “It’s something I often get asked: ‘Great, the anti-CRISPRs are there, so where are the anti-anti-CRISPRs?’ ” Bondy-Denomy says. Nobody has found such things yet.
Evolution drivers

It’s one thing to study CRISPR systems in well-controlled lab settings, or in just one type of microbe. It’s another to understand what all the various CRISPRs do to shape the ecosystem of a bubbling hot spring, human gut, diseased lung or cholera-tainted river. Estimates of CRISPR abundance could drop as more sampling is done, especially of dark horse microbes that researchers know little about.
In a 2016 report in Nature Communications, for example, geomicrobiologist Jill Banfield of UC Berkeley and colleagues detected 1,724 microbes in Colorado groundwater that had been treated to boost the abundance of types that are difficult to isolate. CRISPR systems were much rarer in this sample than in databases of better-known microbes.
Tallying CRISPRs is just the start, of course. Microbial communities — including those inside our own guts, where there are plenty of CRISPR systems and phages — are dynamic, not frozen. How do CRISPRs shape the evolution of phages and microbes in the wild? Banfield’s and Barrangou’s labs teamed up to watch as S. thermophilus and phages incubated together in a milk medium for hundreds of days. The team saw bacterial numbers fall as phages invaded; then bacteria acquired spacers against the phage and rallied — and phage numbers fell in turn. Then new phage populations sprang up, immune to S. thermophilus defenses because of genetic changes. In this way, the researchers reported in 2016 in mBio, CRISPRs are “one of the fundamental drivers of phage evolution.”
CRISPR systems can be picked up, dropped, then picked up again by bacteria and archaea over time, perhaps as conditions and needs change. The bacterium Vibrio cholerae is an example of this dynamism, as Seed and colleagues reported in 2015 in the Journal of Bacteriology. The older, classical strains of this medical blight harbored CRISPRs, but these strains went largely extinct in the wild in the 1960s. Strains that cause cholera today do not have CRISPRs.
Nobody knows why, Seed says. But scientists stress that it is a mischaracterization to paint the relationship between microbes and phages, plasmids and transposable elements as a simplistic war. Phages don’t always wreak havoc; they can slip their genomes quietly into the bacterial chromosome and coexist benignly, getting copied along with the host DNA. Phages, plasmids and transposable elements can confer new, useful traits — sometimes even essential ones. Indeed, such movement of DNA across species and strains is at the heart of how bacteria and archaea evolve.
So it’s about finding balance. “If you incorporate too much foreign DNA, you cannot maintain a species,” says Luciano Marraffini, a molecular microbiologist at the Rockefeller University in New York City whose work first showed that DNA-cutting was key to CRISPR systems. But you do need to let some DNA in, and it’s likely that some CRISPR systems permit this: The system he studies in Staphylococcus epidermidis, for example, only goes after phages that are in their cell-killing, or lytic, state, he and colleagues reported in 2014 in Nature.

Beyond defense

One thing is very clear about CRISPR systems: They are perplexing in many ways. For a start, the spacers in a microbe should reflect its own, individual story of the phages it has encountered. So you’d think there would be local pedigrees, that a bacterium sampled in France would have a different spacer cluster from a bacterium sampled in Argentina. This is not what researchers always see.
Take the nasty P. aeruginosa. Rachel Whitaker, a microbial population biologist at the University of Illinois at Urbana-Champaign, studies Pseudomonas samples collected from people with cystic fibrosis, whose lungs develop chronic infections. She’s found no sign that two patients living close to each other carry more-similar P. aeruginosa CRISPRs than two patients thousands of miles apart. Yet surely one would expect nearby CRISPRs to be closer matches, because the Pseudomonas would have encountered similar phages. “It’s very weird,” Whitaker says.
Others have seen the same thing in heat-loving bacteria sampled from very distant bubbling hot springs. It’s as if scientists don’t truly understand how bacteria spread around the world — there could be a strong effect of far-flung passage by air or wind, says Konstantin Severinov, who studies CRISPR systems at Rutgers University in New Brunswick, N.J.

Another weirdness is the differing vigor of CRISPR systems. Some are very active. Molecular biologist Devaki Bhaya of the Carnegie Institution for Science’s plant biology department at Stanford University sees clear signs that spacers are frequently added and dropped in the cyanobacteria of Yellowstone’s hot springs, for example. But other systems are sluggish, and E. coli, that classic workhorse of genetics research, has a respectable-looking CRISPR system — that is switched off.
It may have been off for a long time. Some 42,000 years ago, a baby woolly mammoth died in what is now northwestern Siberia. The remains, found in 2007, were so well-preserved that the intestines were intact and E. coli DNA could be extracted.
In research published in Molecular Ecology in January, Severinov’s team found surprising similarities between the spacers in the mammoth-derived E. coli CRISPR cluster and those in modern-day E. coli. “There was no turnover in all that time,” Severinov marvels. If the CRISPR system isn’t active, why does E. coli bother to keep it?
That quandary leads neatly to what some researchers refer to as an intellectually “scandalous situation.”
In some cases, the genetic sequence of spacers nicely matches phage DNA. But overall, only a fraction (around 1 to 2 percent) of the spacers scientists know about have been matched to a virus or a plasmid. In E. coli, the spacers don’t match common, classic phages known to infect the bacterium. “Is it the case that there is a huge, unknown amount of viral dark matter in the world?” says Koonin — or are phages evolving superfast? “Or is it something completely different?”
Faced with this conundrum, some researchers strongly suspect — and have evidence — that CRISPR systems may do more than defend; they may have other jobs. Communication, perhaps. Or turning genes on and off.
But some microbes’ CRISPR sequences do make sense, especially among the most recently added spacers, and other spacers may be clues to phages still undiscovered. So even as they scratch their heads about many things CRISPR, scientists are also excited by the stories CRISPR clusters can tell about the viruses and other bits of DNA that bacteria and archaea encounter and that they choose, for whatever reason, to note for the record. What do microbes pay attention to? What do they ignore?
CRISPRs offer a bright new window on such questions and, indeed, already are unearthing novel phages and facts about who infects whom in the microscopic world.
“We can catalog everything that’s out there. But we don’t really know what matters,” says Bondy-Denomy. “CRISPRs can help us understand.”
Immigrants, they get the job done — eventually. Among dwarf mongooses, it takes newcomers a while to settle into a pack. But once these immigrants become established residents, everyone in the pack profits, researchers from the University of Bristol in England report online December 4 in Current Biology.
Dwarf mongooses (Helogale parvula) live in groups of around 10, with a pecking order. The alphas — a top male and female — get breeding priority, while the others help with such group activities as babysitting and guard duty. But the road to the top of the social hierarchy is linear and sometimes crowded. So some individuals skip out on the group they were born into to find one with fewer members of their sex with which to compete —“effectively ‘skipping the queue,’” says ecologist Julie Kern.

Kern and her colleague Andrew Radford tracked mongoose immigration among nine packs at Sorabi Rock Lodge Reserve in Limpopo, South Africa. The researchers focused on guard duty, in which sentinels watch for predators and warn foragers digging for food.
Dwarf mongoose packs gain about one member a year. Among pack animals, higher group numbers are thought to come with the benefit of better access to shared social information like the approach of prowling predators. But upon arrival, new individuals are less likely to pitch in and serve as sentinels, Kern and Radford found. One possible reason: Immigrants lose weight during their transition from one pack to another and may not have the energy required for guard duty. Pack residents don’t exactly put out a welcome mat for strangers, either. On the rare occasions when newcomers take a guard shift, residents tend to ignore their warning calls. Newbies may be seen as less reliable guards, or packs may have signature alarm calls that immigrants must learn. But after five months, these immigrants have come far. “Given time to recuperate following dispersal and a period of integration,” Kern says, “they contribute equally to their new group.”
Science came out of the lab and touched people’s lives in some awe-inspiring and alarming ways in 2017. Science enthusiasts gathered to celebrate a total solar eclipse, but also to march on behalf of evidence-based policy making. Meanwhile, deadly natural disasters revealed the strengths and limitations of science. Here’s a closer look at some of the top science events of the year.
Great American Eclipse On August 21, many Americans witnessed their first total solar eclipse, dubbed the “Great American Eclipse.” Its path of totality stretched across the United States, passing through 14 states — with other states seeing a partial eclipse. This was the first total solar eclipse visible from the mainland United States since 1979, and the first to pass from coast to coast since 1918 (SN: 8/20/16, p. 14). As people donned protective glasses to watch, scientists used telescopes, spectrometers, radio receivers and even cameras aboard balloons and research jets in hopes of answering lingering questions about the sun, Earth’s atmosphere and the solar system. One of the biggest: Why is the solar atmosphere so much hotter than the sun’s surface (SN Online: 8/20/17)? Data collected during the event may soon provide new insights.
March for Science On April 22, Earth Day, more than 1 million people in over 600 cities around the world marched to defend science’s role in society. Called the first-ever March for Science, the main event was in Washington, D.C. Featured speakers included Denis Hayes, coordinator of the first Earth Day in 1970, and science advocate Bill Nye (SN Online: 4/22/17). Attendees advocated for government funding for scientific research and acceptance of the scientific evidence on climate change.
The march came on the heels of the Trump administration’s first budget proposal, released in March, which called for cutting federal science spending in fiscal year 2018 (SN: 4/15/17, p. 15). Some scientists worried that being involved with the march painted science in a partisan light, but others said science has always been political since scientists are people with their own values and opinions (SN Online: 4/19/17).
Climate deal announcement On June 1, President Donald Trump announced that the United States would pull out of the Paris climate accord (SN Online: 6/1/17) — an agreement the United States and nearly 200 other countries signed in 2015 pledging to curb greenhouse gas emissions to combat global warming. With the announcement, Trump made good on one of his campaign promises. He said during a news conference that the agreement “is less about the climate and more about other countries gaining a financial advantage over the United States.”
Nicaragua and Syria signed on to the agreement in late 2017. A withdrawal would leave the United States as the only United Nations–recognized country to reject the global pact. President Trump left the door open for the United States to stay in the climate deal under revised terms. A U.S. climate assessment released in November by 13 federal agencies said it is “extremely likely” that humans are driving warming on Earth (SN Online: 11/3/17). Whether that report — the final version of which is due to be released in 2018 — will have an impact on U.S. involvement in the global accord remains to be seen.
North Korea nuclear test On September 3, North Korea reported testing a hydrogen bomb, its sixth confirmed nuclear detonation, within a mountain at Punggye-ri. That test, along with the launch of intercontinental ballistic missiles this year, increased hostilities between North Korea and other nations, raising fears of nuclear war. As a result of these tests, the United Nations Security Council passed a resolution strengthening sanctions against North Korea to discourage the country from more nuclear testing.
As the international community waits to see what’s next, scientists continue to study the seismic waves that result from underground explosions in North Korea. These studies can help reveal the location, depth and strength of a blast (SN: 8/5/17, p. 18).
Natural disasters The 2017 Atlantic hurricane season saw hurricanes Harvey, Irma and Maria devastate areas of Texas, Florida and the Caribbean. More than 200 people died from these three massive storms, and preliminary estimates of damage are as high as hundreds of billions of dollars. The National Oceanic and Atmospheric Administration had predicted that the 2017 season could be extreme, thanks to above-normal sea surface temperatures. The storms offered scientists an opportunity to test new technologies that might save lives by improving forecasting (SN Online: 9/21/17) and by determining the severity of flooding in affected regions (SN Online: 9/12/17).
In addition to these deadly storms, two major earthquakes rocked Mexico in September, killing more than 400 people. More than 500 died when a magnitude 7.3 earthquake shook Iran and Iraq in November. And wildfires raged across the western United States in late summer and fall. In California, fires spread quickly thanks to record summer heat and high winds. At least 40 people died and many more were hospitalized in California’s October fires. Rising global temperatures and worsening droughts are making wildfire seasons worldwide last longer on average than in the past, researchers have found (SN Online: 7/15/15).
Not so long ago, the lives of sea turtles were largely a mystery. From the time that hatchlings left the beaches where they were born to waddle into the ocean until females returned to lay their eggs, no one really knew where the turtles went or what they did.
Then researchers started attaching satellite trackers to young turtles. And that’s when scientists discovered that the turtles aren’t just passive ocean drifters; they actively swim at least some of the time. Now scientists have used tracking technology to get some clues about where South Atlantic loggerhead turtles go. And it turns out that those turtles are traveling to some unexpected places.
Katherine Mansfield, a marine scientist and turtle biologist at the University of Central Florida in Orlando, and colleagues put 19 solar-powered satellite tags on young (less than a year old), lab-reared loggerhead sea turtles. The turtles were then let loose into the ocean off the coast of Brazil at various times during the hatching season, between November 2011 and April 2012.
The tags get applied to the turtles in several steps. Turtle shells are made of keratin, like your fingernails, and this flakes off and changes shape as a turtle grows. Mansfield’s team had figured out, thanks to a handy tip from a manicurist, that a base layer of manicure acrylic deals with the flaking. Then strips of neoprene, along with aquarium silicone, attach the tag to the shell. With all that prep, the tag can stay on for months. The tags transmit while a turtle is at the water’s surface. A loss of signal indicates that either the tag has fallen off and sunk, “or something ate the turtle,” Mansfield says.

The trackers revealed that not all Brazilian loggerhead sea turtles stay in the South Atlantic. Turtles released in the early- to mid-hatching season stay in southern waters. But then the off-coast currents change direction, which brings later-season turtles north, across the equator. Their trajectories could take them as far as the Caribbean, the Gulf of Mexico or even farther north, which would explain genetic evidence of mixing between southern and northern loggerhead populations. And it may help to make the species, which is endangered, more resilient in the face of environmental and human threats, the researchers conclude December 6 in the Proceedings of the Royal Society B.
But, Mansfield cautions, “these are just a handful of satellite tracks for a handful of turtles off the coast of Brazil.” She and other scientists “are just starting to build a story” about what happens to these turtles out in the ocean. “There’s still so much we don’t know,” she says.
Mansfield hopes the tracking data will help researchers figure out where the young turtles can be found out in the open ocean so scientists can catch, tag and track wild turtles. And there’s a need for even tinier tags that can be attached to newly hatched turtles to see exactly where they go and how many actually survive those first vulnerable weeks and months at sea. Eventually, Mansfield would like to have enough data to make comparisons between sea turtle species.
“The more we’re tracking, the more we’re studying them, we’re starting to realize [the turtles] behave differently than we’ve historically assumed,” Mansfield says.
The moon might have formed from the filling during Earth’s jelly doughnut phase.
Around 4.5 billion years ago, something hit Earth, and the moon appeared shortly after. A new simulation of how the moon formed suggests it took shape in the midst of a hot cloud of rotating rock and vapor, which (in theory) forms when big planetary objects smash into each other at high speeds and energies. Planetary scientists Simon Lock of Harvard University and Sarah Stewart of the University of California, Davis proposed this doughnut-shaped planetary blob in 2017 and dubbed it a synestia (SN: 8/5/17, p. 5). Radiation at the surface of this swirling cloud of vaporized, mixed-together planet matter sent rocky rain inward toward bigger debris. The gooey seed of the moon grew from fragments in this hot, high-pressure environment, with a bit of iron solidifying into the lunar core. Some elements, such as potassium and sodium, remained aloft in vapor, accounting for their scarcity in moon rocks today.
After a few hundred years, the synestia shrank and cooled. Eventually, a nearly full-grown moon emerged from the cloud and condensed. While Earth ended up with most of the synestia material, the moon spent enough time in the doughnut filling to gain similar ingredients, Lock, Stewart and colleagues write February 28 in Journal of Geophysical Research: Planets. The simulation shakes up the prevailing explanation for the moon’s birth: A Mars-sized protoplanet called Theia collided with Earth, and the moon formed from distinct rubble pieces. If that’s true, moon rocks should have very different chemical compositions than Earth’s. But they don’t.
Other recent studies have wrestled with why rocks from the moon and Earth are so alike (SN: 4/15/17, p. 18). Having a synestia in the mix shifts the focus from the nature of the collision to what happened in its aftermath, potentially resolving the conundrum.
On the hormonal roller coaster of life, the ups and downs of childbirth are the Tower of Power. For nine long months, a woman’s body and brain absorb a slow upwelling of hormones, notably progesterone and estrogen. The ovaries and placenta produce these two chemicals in a gradual but relentless rise to support the developing fetus.
With the birth of a baby, and the immediate expulsion of the placenta, hormone levels plummet. No other physiological change comes close to this kind of free fall in both speed and intensity. For most women, the brain and body make a smooth landing, but more than 1 in 10 women in the United States may have trouble coping with the sudden crash. Those new mothers are left feeling depressed, isolated or anxious at a time society expects them to be deliriously happy. This has always been so. Mental struggles following childbirth have been recognized for as long as doctors have documented the experience of pregnancy. Hippocrates described a woman’s restlessness and insomnia after giving birth. In the 19th century, some doctors declared that mothers were suffering from “insanity of pregnancy” or “insanity of lactation.” Women were sent to mental hospitals.
Modern medicine recognizes psychiatric suffering in new mothers as an illness like any other, but the condition, known as postpartum depression, still bears stigma. Both depression and anxiety are thought to be woefully underdiagnosed in new mothers, given that many women are afraid to admit that a new baby is anything less than a bundle of joy. It’s not the feeling they expected when they were expecting.
Treatment — when offered — most commonly involves some combination of antidepressant medication, hormone therapy, counseling and exercise. Still, a significant number of mothers find these options wanting. Untreated, postpartum depression can last for years, interfering with a mother’s ability to connect with and care for her baby.
Although postpartum depression entered official medical literature in the 1950s, decades have passed with few new options and little research. Even as brain imaging has become a common tool for looking at the innermost workings of the mind, its use to study postpartum depression has been sparse. A 2017 review in Trends in Neurosciences found only 17 human brain imaging studies of postpartum depression completed through 2016. For comparison, more than four times as many have been conducted on a problem called “internet gaming disorder” — an unofficial diagnosis acknowledged only five years ago. Now, however, more researchers are turning their attention to this long-neglected women’s health issue, peering into the brains of women to search for the root causes of the depression. At the same time, animal studies exploring the biochemistry of the postpartum brain are uncovering changes in neural circuitry and areas in need of repair.
And for the first time, researchers are testing an experimental drug designed specifically for postpartum depression. Early results have surprised even the scientists.
Women’s health experts hope that these recent developments signal a new era of research to help new moms who are hurting.
“I get this question all the time: Isn’t it just depression during the postpartum period? My answer is no,” says neuroscientist Benedetta Leuner of Ohio State University. “It’s occurring in the context of dramatic hormonal changes, and that has to be impacting the brain in a unique way. It occurs when you have an infant to care for. There’s no other time in a woman’s life when the stakes are quite as high.”
Brain drain

Even though progesterone and estrogen changes create hormonal whiplash, pregnancy wouldn’t be possible without them. Progesterone, largely coming from the ovaries, helps orchestrate a woman’s monthly menstrual cycle. The hormone’s primary job is to help thicken the lining of the uterus so it will warmly welcome a fertilized egg. In months when conception doesn’t happen, progesterone levels fall and the uterine lining disintegrates. If a woman becomes pregnant, the fertilized egg implants in the uterine wall and progesterone production is eventually taken over by the placenta, which acts like an extra endocrine organ.
Like progesterone, estrogen is a normal part of the menstrual cycle that kicks into overdrive after conception. In addition to its usual duties in the female body, estrogen helps encourage the growth of the uterus and fetal development, particularly the formation of the hormone-producing endocrine system.
These surges in estrogen and progesterone, along with other physiological changes, are meant to support the fetus. But the hormones, or chemicals made from them, cross into the mother’s brain, which must constantly adapt. When it doesn’t, signs of trouble can appear even before childbirth, although they are often missed. Despite the name “postpartum,” about half of women who become ill are silently distressed in the later months of pregnancy.
Decades ago, controversy churned over whether postpartum depression was a consequence of fluctuating hormones alone or something else, says neuroscientist Joseph Lonstein of Michigan State University in East Lansing. He studies the neurochemistry of maternal caregiving and postpartum anxiety. Lonstein says many early studies measured hormone levels in women’s blood and tried to determine whether natural fluctuations were associated with the risk of postpartum depression. Those studies found “no clear correlations with [women’s] hormones and their susceptibility to symptoms,” he says. “While the hormone changes are certainly thought to be involved, not all women are equally susceptible. The question then became, what is it about their brains that makes particular women more susceptible?”

Seeking answers, researchers have examined rodent brains and placed women into brain scanners to measure the women’s responses to pictures or videos of babies smiling, babbling or crying. Though hormones likely underlie the condition, many investigations have led to the amygdalae. These two almond-shaped clumps of nerve cells deep in the brain are sometimes referred to as the emotional thermostat for their role in the processing of emotions, particularly fear.
The amygdalae are entangled with many structures that help make mothers feel like mothering, says neuroscientist Alison Fleming of the University of Toronto Mississauga. The amygdalae connect to the striatum, which is involved in experiencing reward, and to the hippocampus, a key player in memory and the body’s stress response. And more: They are wired to the hypothalamus, the interface between the brain and the endocrine system (when you are afraid, the endocrine system produces adrenaline and other chemicals that get your heart racing and palms sweating). The amygdalae are also connected to the prefrontal cortex and insula, involved in decision making, motivation and other functions intertwined with maternal instinct.
Fleming and colleagues have recently moved from studies in postpartum rodents to human mothers. In one investigation, reported in 2012 in Social Neuroscience, women were asked to look at pictures of smiling infants while in a functional MRI, which images brain activity. In mothers who were not depressed, the researchers found a higher amygdala response, more positive feelings and lower stress when women saw their own babies compared with unfamiliar infants.
But an unexpected pattern emerged in mothers with postpartum depression, as the researchers reported in 2016 in Social Neuroscience. While both depressed and not-depressed mothers showed elevated amygdala activity when viewing their own babies, the depressed mothers also showed heightened responses to happy, unknown babies, suggesting reactions to the women’s own children were blunted and not unique. This finding may mean that depressed women had less inclination to emotionally attach to their babies.
Mothers with postpartum depression also showed weaker connectivity between the amygdalae and the insula. Mothers with weaker connectivity in this area had greater symptoms of depression and anxiety. Women with stronger connectivity were more responsive to their newborns.
While there’s still no way to definitely know that the amygdalae are responding to postpartum chemical changes, “it’s very likely,” Lonstein says, pointing out that the amygdalae are influenced by the body’s reaction to hormones in other emotional settings.
Maternal rewards

While important, the amygdalae are just part of the puzzle that seems to underlie postpartum depression. Among others is the nucleus accumbens, famous for its role in the brain’s reward system and in addiction, largely driven by the yin and yang of the neurotransmitters dopamine and serotonin. In studies, mothers who watched films of their infants (as opposed to watching unknown infants) experienced increased production of feel-good dopamine. The women also had a strengthening of the connection between the nucleus accumbens, the amygdalae and other structures, researchers from Harvard Medical School and their collaborators reported in February 2017 in Proceedings of the National Academy of Sciences.
That’s not entirely surprising given that rodent mothers find interacting with their newborn pups as neurologically rewarding as addictive drugs, says Ohio State’s Leuner. Rodent mothers that are separated from their offspring “will press a bar 100 times an hour to get to a pup. They will step across electrified grids to get to their pups. They’ve even been shown in some studies to choose the pups over cocaine.” Mothers find their offspring “highly, highly rewarding,” she says.
When there are postpartum glitches in the brain’s reward system, women may find their babies less satisfying, which could increase the risk for impaired mothering. Writing in 2014 in the European Journal of Neuroscience, Leuner and colleagues reported that in rats with symptoms of postpartum depression (induced by stress during pregnancy, a major risk factor for postpartum depression in women), nerve cells in the nucleus accumbens atrophied and showed fewer protrusions called dendritic spines — suggesting weaker connections to surrounding nerve cells compared with healthy rats. This is in contrast to other forms of depression, which show an increase in dendritic spines. Unpublished follow-up experiments conducted by Leuner’s team also point to a role for oxytocin, a hormone that spikes with the birth of a baby as estrogen and progesterone fall. Sometimes called the “cuddle chemical,” oxytocin is known for its role in maternal bonding (SN Online: 4/16/15). Leuner hypothesizes that maternal depression is associated with deficits in oxytocin receptors that enable the hormone to have its effects as part of the brain’s reward system.
If correct, the idea may help explain why oxytocin treatment failed women in some studies of postpartum depression. The hormone may simply not have the same potency in some women whose brains are short on receptors the chemical can latch on to. The next step is to test whether reversing the oxytocin receptor deficits in rodents’ brains relieves symptoms.
Leuner and other scientists emphasize that the oxytocin story is complex. In 2017, in a study reported in Depression & Anxiety, women without a history of depression who received oxytocin — which is often given to promote contractions or stem bleeding after delivery — had a 32 percent higher likelihood of developing postpartum depression than women who did not receive the hormone. In more than 46,000 births, 5 percent of women who did not receive the hormone were diagnosed with depression, compared with 7 percent who did.
“This was the opposite of what we predicted,” says Kristina Deligiannidis, a neuroscientist and perinatal psychiatrist at the Feinstein Institute for Medical Research in Manhasset, N.Y. After all, oxytocin is supposed to enhance brain circuits involved in mothering. “We had a whole group of statisticians reanalyze the data because we didn’t believe it,” she says. While the explanation is unknown, one theory is that perhaps the women who needed synthetic oxytocin during labor weren’t making enough on their own — and that could be why they are more prone to depression after childbirth.
But postpartum depression can’t be pinned to any single substance or brain malfunction — it doesn’t reside in one tidy nest of brain cells, or any one chemical process gone haywire. Maternal behavior is based on complex neurological circuitry. “Multiple parts of the brain are involved in any single function,” Deligiannidis says. “Just to have this conversation, I’m activating several different parts of my brain.” When any kind of depression occurs, she says, multiple regions of the brain are suffering from a communication breakdown.
Looking further, Deligiannidis has also examined the role of certain steroids synthesized from progesterone and other hormones and known to affect maternal brain circuitry. In a 2016 study in Psychoneuroendocrinology involving 32 new mothers at risk for postpartum depression and 24 healthy mothers, Deligiannidis and colleagues reported that concentrations of some steroids that affect the brain, also called neurosteroids, were higher in women at risk for developing depression (because of their past history or symptoms), compared with women who were not. The higher levels suggest a system out of balance — the brain is making too much of one neurosteroid and not enough of another, called allopregnanolone, which is thought to protect against postpartum depression and is being tested as a treatment.

Beyond mom

Postpartum depression doesn’t weigh down just mom. Research suggests it might have negative effects on her offspring that can last for years. Risks include:

Newborns: higher levels of cortisol and other stress hormones; more time fussing and crying; more “indeterminate sleep,” hovering between deep and active sleep.

Infants and children: increased risk of developmental problems; slower growth; lower cognitive function; elevated cortisol levels.

Adolescents: higher risk of depression.

Treating pregnancy withdrawal

Tufts University neuroscientist Jamie Maguire, based in Boston, got interested in neurosteroids during her postgraduate studies in the lab of Istvan Mody at UCLA. Maguire and Mody reported in 2008 in Neuron that during pregnancy, the hippocampus has fewer receptors for neurosteroids, presumably to protect the brain from the massive levels of progesterone and estrogen circulating at that time. When progesterone drops after birth, the receptors repopulate.
But in mice genetically engineered to lack those receptors, something else happened: The animals were less interested in tending to their offspring, failing to make nests for them.
“We started investigating. Why are these animals having these abnormal postpartum behaviors?” Maguire recalls. Was an inability to recover these receptors making some women susceptible? Interestingly, similar receptors are responsible for the mood-altering and addictive effects of some antianxiety drugs, suggesting that the sudden progesterone drop after childbirth could be leaving some women with a kind of withdrawal effect.
Further experiments demonstrated that giving the mice a progesterone-derived neurosteroid — producing levels close to what the mice had in pregnancy — alleviated the symptoms.
Today, Maguire is on the scientific advisory board of Boston area–based Sage Therapeutics, which is testing a formulation of allopregnanolone called brexanolone. An early clinical trial, with results published last July in The Lancet, assessed whether brexanolone would alleviate symptoms in women with severe postpartum depression. The study involved 21 women randomly assigned to receive a 60-hour infusion of the drug or a placebo within six months after delivery.
At the end of treatment, the women who received the drug reported a 21-point reduction on a standard scale of depression symptoms, compared with about 9 points for the women on a placebo. “These women got better in about a day,” says Deligiannidis, who is on the study’s research team. “The results were astonishing.”
In November, Sage Therapeutics announced the results of two larger studies, although neither has been published. Combined, the trials involved 226 women with severe or moderate postpartum depression. Women in both trials showed similar improvements that lasted for the month they were followed. The company has announced plans to request approval from the U.S. Food and Drug Administration to market brexanolone in the United States. This is an important first step, researchers say, toward better treatments.
“We are just touching on one small piece of a bigger puzzle,” says Jodi Pawluski, a neuroscientist at the Université de Rennes 1 in France who coauthored the 2017 review in Trends in Neurosciences. She was surprised at the dearth of research, given how common postpartum depression is. “This is not the end, it’s the beginning.”