Long-lasting mental health isn’t normal

Abnormal is the new normal in mental health.

A small, poorly understood segment of the population stays mentally healthy from age 11 to 38, a new study of New Zealanders finds. Everyone else encounters either temporary or long-lasting mental disorders.

Only 171 of 988 participants, or 17 percent, experienced no anxiety disorders, depression or other mental ailments from late childhood to middle age, researchers report in the February Journal of Abnormal Psychology. Of the rest, half experienced a transient mental disorder, typically just a single bout of depression, anxiety or substance abuse by middle age.
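The reported proportions hold up to quick arithmetic; a sketch (the variable names are ours, not the study's):

```python
# Sanity check of the cohort proportions reported above.
total = 988
enduring = 171   # no diagnosis from age 11 to 38
lasting = 408    # one or more disorders lasting years

pct_enduring = round(100 * enduring / total)
pct_lasting = round(100 * lasting / total)
transient = total - enduring - lasting  # roughly half the cohort

print(pct_enduring, pct_lasting, transient)  # 17 41 409
```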

“For many, an episode of mental disorder is like influenza, bronchitis, kidney stones, a broken bone or other highly prevalent conditions,” says study coauthor Jonathan Schaefer, a psychologist at Duke University. “Sufferers experience impaired functioning, many seek medical care, but most recover.”

The remaining 408 individuals (41 percent) experienced one or more mental disorders that lasted several years or more. Their diagnoses included more severe conditions such as bipolar and psychotic disorders.

Researchers analyzed data for individuals born between April 1972 and March 1973 in Dunedin, New Zealand. Each participant’s general health and behavior were assessed 13 times from birth to age 38. Eight mental health assessments occurred from age 11 to 38.

Surprisingly, those who experienced lasting mental health did not display several characteristics previously linked to a lower likelihood of developing mental disorders. Those attributes include growing up in unusually affluent families, enjoying especially sound physical health and scoring exceptionally high on intelligence tests.

Instead, mentally healthy participants tended to possess advantageous personality traits starting in childhood, Schaefer and colleagues found. These participants rarely expressed strongly negative emotions, had lots of friends and displayed superior self-control. Kiwis with rock-solid mental health also had fewer first- and second-degree relatives with mental disorders compared with their peers.

As adults, participants with enduring mental health reported, on average, more education, better jobs, higher-quality relationships and more satisfaction with their lives than their peers did. But lasting mental health doesn’t guarantee an exceptional sense of well-being, Schaefer says. Nearly one-quarter of never-diagnosed individuals scored below the entire sample’s average score for life satisfaction.

Less surprising was the 83 percent overall prevalence rate for mental disorders. That coincides with recent estimates from four other long-term projects. In those investigations — two in the United States, one in Switzerland and another in New Zealand — between 61 percent and 85 percent of participants developed mental disorders over 12- to 30-year spans.

Comparably high rates of emotional disorders were reported in 1962 for randomly selected Manhattan residents. Many researchers doubted those findings, which relied on a diagnostic system less strict than the three successive versions of psychiatry’s diagnostic manual used to evaluate the New Zealand participants as they got older, says psychiatric epidemiologist William Eaton of Johns Hopkins Bloomberg School of Public Health. But the Manhattan study appears to have been on the right track, Eaton says.

Increased awareness that most people will eventually develop a mental disorder (SN: 10/10/09, p. 5), at least briefly, can reduce stigma attached to these conditions (SN Online: 10/13/16), he suspects.

Psychiatric epidemiologist Ronald Kessler suspects the numbers of people experiencing a mental disorder may be even higher than reported. Many participants deemed to have enduring mental health likely developed brief mental disorders that got overlooked, such as a couple of weeks of serious depression after a romantic breakup, says Kessler of Harvard Medical School, who directs U.S. surveys of mental disorders. Rather than focusing on rare cases of lasting mental health, “the more interesting thing is to compare people with persistent mental illness to those with temporary disorders,” he says.

How hydras know where to regrow their heads

Hydras, petite pond polyps known for their seemingly eternal youth, exemplify the art of bouncing back (SN: 7/23/16, p. 26). The animals’ cellular scaffolding, or cytoskeleton, can regrow from a slice of tissue that’s just 2 percent of the original hydra’s full body size. Researchers thought that molecular signals told cells where and how to rebuild, but new evidence suggests there are other forces at play.

Physicist Anton Livshits and colleagues at the Technion-Israel Institute of Technology in Haifa genetically engineered Hydra vulgaris specimens so that stretchy fibers of the protein actin, which form the cytoskeleton, lit up under a microscope. Then, the team sliced and diced to look for mechanical patterns in the regeneration process.

Actin fibers in pieces of hydra exert mechanical force that lines up new cells and guides the growth of the animal’s head and tentacles, the researchers found. Turning off motor proteins that move actin stopped regeneration, and physically manipulating actin fiber alignment resulted in hydras with multiple heads. Providing hydras with further structural stability encouraged tissue slices to grow normally. Both mechanical and molecular forces may mold hydras in regeneration, the researchers report in the Feb. 7 Cell Reports.

When researchers anchored rings of hydra tissue to a wire (right), they found that the added mechanical stability made a hydra grow normally along one body axis, and thus grow one head. Without this stability, the actin scaffolding was more disrupted and the animal grew two heads (left).

Cold plasma puts the chill on norovirus

WASHINGTON — A nasty stomach virus that can linger on fruits and veggies may have met its match in cold plasma.

In experiments, the ionized gas, created by filtering room-temperature air through an electric field, virtually eliminated norovirus from lettuce, researchers reported February 7 at the American Society for Microbiology Biothreats meeting.

Norovirus is the leading cause of foodborne illness in the United States, infecting more than 20 million people every year. Sterilizing food with heat is one way to kill the virus, but that approach doesn’t work for fresh produce. Cold plasma could be a way to sterilize fruits and vegetables without damaging them, said Hamada Aboubakr, a food microbiologist at the University of Minnesota in St. Paul.

Aboubakr and colleagues used a cold plasma device to blast contaminated romaine lettuce leaves and stainless steel surfaces. After five minutes, the plasma wiped out about 99 percent of norovirus particles.

The researchers are testing the device on other foodborne viruses such as hepatitis A, which sickened more than 140 people last year after they ate contaminated strawberries. Unpublished experiments have shown that cold plasma also can destroy drug-resistant bacteria on chicken breasts and leafy greens. Aboubakr hopes to adapt the technology for use in restaurants, on cruise ships and in the produce aisles of grocery stores.

Seagrasses boost ecosystem health by fighting bad bacteria

BOSTON — For a lawn that helps the environment — and doesn’t need to be mowed — look to the ocean. Meadows of underwater seagrass plants might lower levels of harmful bacteria in nearby ocean waters, researchers reported February 16 during a news conference at the annual meeting of the American Association for the Advancement of Science. That could make the whole ecosystem — from corals to fish to humans — healthier.

Not truly grasses, seagrasses are flowering plants with long, narrow leaves. They grow in shallow ocean water, spreading into vast underwater lawns. Seagrasses are “a marine powerhouse, almost equal to the rainforest. They’re one of the largest stores of carbon in the ocean,” says study coauthor Joleah Lamb, an ecologist at Cornell University. “But they don’t get a lot of attention.”

It’s no secret that seagrasses improve water quality, says James Fourqurean, a biologist at Florida International University in Miami who wasn’t involved in the research, which appears in the Feb. 17 Science. The plants are great at removing excess nitrogen and phosphorus from coastal waters. But now, it seems, they might take away harmful bacteria, too.

A few years ago, Lamb’s colleagues became ill with amoebic dysentery while studying coral reefs in Indonesia, an archipelagic nation that straddles the Indian and Pacific oceans. When a city or village on one of the country’s thousands of islands dumps raw sewage into the ocean, shoreline bacteria populations can spike to dangerous levels.

Water sampled close to the shores of four small and densely populated Indonesian islands had 10 times the U.S. Environmental Protection Agency’s recommended exposure limit of Enterococcus bacteria, which can cause illness in humans and often signals the presence of other pathogens. But water collected from offshore tidal flats and coral reefs with seagrass beds had lower levels of the bacteria compared with similar sites without the plants less than 20 meters away. The water had lower levels of numerous bacterial species that can make fish and marine invertebrates sick, too. And field surveys of more than 8,000 coral heads showed that those growing adjacent to or within seagrass beds had fewer diseases than those growing farther away.

It’s unclear how far from seagrass beds this cleaner water extends, but the benefits can ripple through the entire ecosystem, Lamb said at the news conference. Healthier corals help protect the islands from erosion. And fish less contaminated with bacteria make a better source of food for people.

Lamb is planning follow-up studies to figure out exactly how the seagrasses clean the water. Like a shag carpet, seagrasses trap small particulates drifting through the ocean and prevent them from flowing on. The plants might ensnare bacteria in the same way, building up biofilms on their blades. Or, she suggests, the leaves could be giving off antimicrobial compounds that directly kill the bacteria.

The findings are one more reason to conserve seagrasses, study coauthor Jeroen van de Water, an ecologist at the Scientific Center of Monaco, said at the news conference. Worldwide, seagrass beds are declining by 7 percent each year, thanks to pollution and habitat loss. And while restoration efforts are underway in some areas, “it’s better to stop what we’re doing to the meadows than to try to replant them,” Lamb added. “Seagrasses are quite particular in the depth they want to be at and the environment they want to have. It’s hard to start doing restoration projects if the environment isn’t exactly what the seagrass prefers.”

Coconut crab pinches like a lion, eats like a dumpster diver

A big coconut crab snaps its outsized left claw as hard as a lion can bite, new measurements suggest. So what does a land crab the size of a small house cat do with all that pinch power?

For starters, it protests having its claw-force measured, says Shin-ichiro Oka of the Okinawa Churashima Foundation in Motobu, Japan. “The coconut crab is very shy,” he says. It doesn’t attack people unprovoked. But wrangling 29 wild Birgus latro crabs on Okinawa and getting them to grip a measurement probe inspired much snapping at scientists. Oka’s hand got pinched twice (no broken bones). “Although it was just a few minutes,” he says, “I felt eternal hell.”

The strongest claw grip the researchers measured squeezed with a force of about 1,765 newtons, comparable to a toe being crushed under the full weight of a refrigerator. For comparison, a lion’s canines bite with 1,315 newtons and some of its molars can crunch with 2,024 newtons, a 2007 study calculated. Because grip strength increases with body size, crabs bigger than those measured in the study might surpass the bite force of most land predators, Oka and colleagues proposed last year in PLOS ONE.
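To put those newton figures in everyday terms, dividing force by standard gravity gives the mass whose resting weight would exert the same force; a rough conversion, not the authors' comparison:

```python
# Express pinch and bite forces as the equivalent resting mass.
g = 9.81  # standard gravity, m/s^2

crab_kg = round(1765 / g)    # strongest measured crab pinch
canine_kg = round(1315 / g)  # lion canines
molar_kg = round(2024 / g)   # lion molars

print(crab_kg, canine_kg, molar_kg)  # 180 134 206
```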

Coconut crabs, however, start life about as scary as a soggy grain of rice. Fertilized eggs hatch in seawater and bob around planktonlike in the western Pacific and Indian oceans. The crabs eventually return to land, where they spend most of their long lives, up to 50 (or maybe 100) years, as landlubbers that will drown if forced back into water for too long. Yet females have to risk the ocean’s edge each time they lay the next generation of eggs.

Both moms and dads grow a powerful left claw, handy for dismembering whatever the omnivorous scavengers find: roadkill and other dead stuff, innards of palm trees and nuts. The crabs can break open coconuts, but the job “takes hours,” says Jakob Krieger of the University of Greifswald in Germany. Cracking open a red crab, however, takes seconds.

Coconut crabs not only scavenge red crabs but also hunt them on Christmas Island in the Indian Ocean, Krieger says. Only the strictest vegetarian would ignore the 44 million or so red crabs scuttling around, and even small coconut crabs get a taste. Krieger watched an underpowered coconut crab grab hold of and wrestle its prey. The red crab abandoned its trapped limb and fled. But the little coconut crab scored a crab-leg dinner.

The stories of supernova 1987A, as told by Science News

The planning for our supernova special issue began months ago. In one early meeting, astronomy writer Christopher Crockett lit up as he told the story of the night supernova 1987A was discovered. The account has all the ingredients of a blockbuster. There’s a struggle (with an observatory door), the element of surprise (an unexpected burst on a photographic plate), disbelief (by our protagonist and a collaborator), a scramble (to figure out how to report the discovery of the supernova), and an action scene that seems impossibly quaint: A driver races to the nearest town 100 kilometers away to send a telegram and alert the world that 166,000 light-years away, a star has exploded.

That story opens Crockett’s feature article commemorating the 30th anniversary of the supernova’s discovery. And we’ve brought it to life in video form as well. Ian Shelton, the telescope operator who spotted 1987A on a three-hour exposure he took of the Large Magellanic Cloud, was kind enough to consult with us for the video. He stars in it in clay form, and his voice makes a few guest appearances too.
Our anniversary coverage of 1987A offers a great summary of the importance of the discoveries that came from the stellar explosion. But Science News has been telling the story of SN 1987A for years. In fact, we began telling its story just days after the discovery. News of the explosion reached the International Astronomical Union on a Tuesday; on Wednesday, the day we went to press, Science News editors slipped a mention of it into that week’s issue. The following week’s issue carried a full story. Dozens more followed. We’ve pulled many of those together in the timeline below, which includes links to PDFs of the original magazine articles. Happy reading!

Hydrogen volcanoes might boost planets’ potential for life

Volcanoes that belch hydrogen could bump up the number of potentially habitable planets in the universe.

Ramses Ramirez and Lisa Kaltenegger, both of Cornell University, modeled the atmospheres of planets blemished with hydrogen-spewing volcanoes. These gaseous eruptions could warm planets and ultimately widen a star’s habitable zone, the region where liquid water can exist on a planet’s surface, by about 30 to 60 percent, researchers report in the March 1 Astrophysical Journal Letters. That would be like extending the outer edge of the sun’s habitable zone from about 254 million kilometers — just beyond Mars’ orbit — to 359 million kilometers, or roughly to the asteroid belt between Mars and Jupiter.
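The outer-edge distances quoted above imply an extension near the middle of the reported range; a back-of-the-envelope check, not the authors' calculation:

```python
# Fractional extension of the habitable zone's outer edge
# implied by the distances quoted above.
outer_now = 254e6  # km, just beyond Mars' orbit
outer_h2 = 359e6   # km, roughly the asteroid belt

extension = (outer_h2 - outer_now) / outer_now
print(f"{extension:.0%}")  # 41%, within the 30 to 60 percent range
```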

Exoplanets that astronomers had previously thought too cold to support life might, in fact, be ripe for habitability if they have hydrogen volcanoes, the researchers say. One example is TRAPPIST-1h, the farthest-out planet identified in an odd system of seven Earth-sized planets 39 light-years from Earth (SN Online: 2/22/17). That world is thought to be icy like Jupiter’s moon Europa.

Adding planets to a star’s habitable zone means more exotic worlds could be targets in the search for signatures of life beyond our solar system. Astronomers plan to search for these signatures with the James Webb Space Telescope, slated to launch in 2018, and later with the European Extremely Large Telescope, scheduled to begin operations in 2024.

Brain training turns recall rookies into memory masters

Just six weeks of training can turn average people into memory masters.

Boosting these prodigious mnemonic skills came with overhauls in brain activity, resulting in brains that behaved more like those of experts who win World Memory Championships, scientists report March 8 in Neuron.

The findings are notable because they show just how remarkably adaptable the human brain is, says neuroscientist Craig Stark of the University of California, Irvine. “The brain is plastic,” he says. “Through use, it changes.”
It’s not yet clear how long the changes in the newly trained brains last, but the memory gains persisted for four months.

In an initial matchup, a group of 17 memory experts, people who place high in World Memory Championships, throttled a group of people with average memories. Twenty minutes after seeing a list of 72 words, the experts remembered an average of 70.8 words; the nonexperts recalled, on average, only 39.9 words.

In subsequent matchups, some nonexperts got varying levels of help. Fifty-one novices were split into three groups. A third of these people spent six weeks learning the method of loci, a memorization strategy used by ancient Greek and Roman orators. To use the technique, a person must imagine an elaborate mental scene, such as a palace or a familiar walking path, and populate it with memorable items. New information can then be placed onto this scaffold, offering a way to quickly “see” long lists of items.
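Loosely, the technique amounts to binding an ordered route of familiar loci to the items to be remembered, then walking the route to recall them in order; a minimal sketch (the loci and items here are hypothetical, not from the study):

```python
# Method of loci as a data structure: bind each item to a landmark
# along a well-known route, then "walk" the route to recall them.
loci = ["front door", "hallway mirror", "kitchen table", "back porch"]
items = ["eggs", "milk", "bread", "coffee"]

# Encoding: pair each landmark with a vivid image of one item there.
memory_palace = dict(zip(loci, items))

# Recall: walk the route in its fixed order and read off the items.
recalled = [memory_palace[locus] for locus in loci]
print(recalled)  # ['eggs', 'milk', 'bread', 'coffee']
```

The fixed route supplies the ordering for free, which is why practitioners can reel off long lists without losing their place.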

Other participants spent six weeks training to improve short-term memory, performing a tricky task that required people to simultaneously keep track of a series of locations they saw and numbers they heard. The rest of the participants had no training at all.

After the training, the people who learned the method of loci performed nearly as well as the memory experts. But the rest didn’t show such improvement. Study coauthor Martin Dresler, a neuroscientist at the Radboud University Medical Center in the Netherlands, knew that the method of loci works quite well; he wasn’t surprised to see those memory scores spike. To him, the more interesting changes happened in the trained people’s brains.

Before and after training, nonexperts underwent scans that pinpointed brain areas that were active at the same time, an indication that these brain areas work together closely. Dresler and colleagues looked at 2,485 connections in brain networks important for memory and visual and spatial thinking. Training in the method of loci seemed to reconfigure many of those connections, making some of the connections stronger and others weaker. The overall effect of training was to make brains “look like those of the world’s best memorizers,” Dresler says. The results suggest that large-scale changes across the brain, as opposed to changes in individual areas, drive the increased memory capacity.

These new memory skills were still obvious four months after training ended, particularly for the people whose brain behavior became more similar to that of the memory experts. The researchers didn’t scan participants’ brains four months out, so they don’t know whether the brain retains its reshaped connections. No such brain changes or big increases in memory skills were seen in the other groups.

Memorization techniques have been criticized as interesting tricks that have little use in real life. But “that’s not the case,” Dresler says. Boris Konrad, a coauthor of the study also at Radboud, is a memory master who trained in the method of loci. The technique “really helped him get much better grades” in physics and other complex studies, Dresler says.

Improvements in mnemonic memory, like other types of cognitive training, might not improve a broader range of thinking skills. The current study can’t answer bigger questions about whether brain training has more general benefits.

Smartphones may be changing the way we think

Not too long ago, the internet was stationary. Most often, we’d browse the Web from a desktop computer in our living room or office. If we were feeling really adventurous, maybe we’d cart our laptop to a coffee shop. Looking back, those days seem quaint.

Today, the internet moves through our lives with us. We hunt Pokémon as we shuffle down the sidewalk. We text at red lights. We tweet from the bathroom. We sleep with a smartphone within arm’s reach, using the device as both lullaby and alarm clock. Sometimes we put our phones down while we eat, but usually faceup, just in case something important happens.

Our iPhones, Androids and other smartphones have led us to effortlessly adjust our behavior. Portable technology has overhauled our driving habits, our dating styles and even our posture. Despite the occasional headlines claiming that digital technology is rotting our brains, not to mention what it’s doing to our children, we’ve welcomed this alluring life partner with open arms and swiping thumbs.

Scientists suspect that these near-constant interactions with digital technology influence our brains. Small studies are turning up hints that our devices may change how we remember, how we navigate and how we create happiness — or not.

Somewhat limited, occasionally contradictory findings illustrate how science has struggled to pin down this slippery, fast-moving phenomenon. Laboratory studies hint that technology, and its constant interruptions, may change our thinking strategies. Like our husbands and wives, our devices have become “memory partners,” allowing us to dump information there and forget about it — an off-loading that comes with benefits and drawbacks. Navigational strategies may be shifting in the GPS era, a change that might be reflected in how the brain maps its place in the world. Constant interactions with technology may even raise anxiety in certain settings.

Yet one large study that asked people about their digital lives suggests that moderate use of digital technology has no ill effects on mental well-being.

The question of how technology helps and hinders our thinking is incredibly hard to answer. Both lab and observational studies have drawbacks. The artificial confines of lab experiments lead to very limited sets of observations, insights that may not apply to real life, says experimental psychologist Andrew Przybylski of the University of Oxford. “This is a lot like drawing conclusions about the effects of baseball on players’ brains after observing three swings in the batting cage.”

Observational studies of behavior in the real world, on the other hand, turn up associations, not causes. It’s hard to pull out real effects from within life’s messiness. The goal, some scientists say, is to design studies that bring the rigors of the lab to the complexities of real life, and then to use the resulting insights to guide our behavior. But that’s a big goal, and one that scientists may never reach.

Evolutionary neurobiologist Leah Krubitzer is comfortable with this scientific ambiguity. She doesn’t put a positive or negative value on today’s digital landscape. Neither good nor bad, it just is what it is: the latest iteration on the continuum of changing environments, says Krubitzer, of the University of California, Davis.

“I can tell you for sure that technology is changing our brains,” she says. It’s just that so far, no one knows what those changes mean.

Of course, nearly everything changes the brain. Musical training reshapes parts of the brain. Learning the convoluted streets of London swells a mapmaking structure in the brains of cabbies. Even getting a good night’s sleep changes the brain. Every aspect of our environment can influence brain and behaviors. In some ways, digital technology is no different. Yet some scientists suspect that there might be something particularly pernicious about digital technology’s grip on the brain.

“We are information-seeking creatures,” says neuroscientist Adam Gazzaley of the University of California, San Francisco. “We are driven to it in very powerful ways.” Today’s digital tools give us unprecedented exposure to information that doesn’t wait for you to seek it out; it seeks you out, he says. That pull is nearly irresistible.

Despite the many unanswered questions about whether our digital devices are influencing our brains and behaviors, and whether for good or evil, technology is galloping ahead. “We should have been asking ourselves [these sorts of questions] in the ’70s or ’80s,” Krubitzer says. “It’s too late now. We’re kind of closing the barn doors after the horses got out.”

Attention grabber
One way in which today’s digital technology is distinct from earlier advances (like landline telephones) is the sheer amount of time people spend with it. In just a decade, smartphones have saturated the market, enabling instant internet access to an estimated 2 billion people around the world. In one small study reported in 2015, 23 adults, ages 18 to 33, spent an average of five hours a day on their phones, broken up into 85 distinct daily sessions. When asked how many times they thought they used their phones, participants underestimated by half.

In a different study, Larry Rosen, a psychologist at California State University, Dominguez Hills, used an app to monitor how often college students unlocked their phones. The students checked their phones an average of 60 times a day, each session lasting about three to four minutes for a total of 220 minutes a day. That’s a lot of interruption, Rosen says.

Smartphones are “literally omnipresent 24-7, and as such, it’s almost like an appendage,” he says. And often, we are compelled to look at this new, alluring rectangular limb instead of what’s around us. “This device is really powerful,” Rosen says. “It’s really influencing our behavior. It’s changed the way we see the world.”
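The two studies' figures imply similar session lengths; a quick check, assuming the averages quoted above:

```python
# Average session length implied by each study's figures.
rosen_minutes = 220 / 60       # 220 minutes/day over 60 unlocks
earlier_minutes = 5 * 60 / 85  # five hours/day over 85 sessions

print(round(rosen_minutes, 1), round(earlier_minutes, 1))  # 3.7 3.5
```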

Technology does that. Printing presses, electricity, televisions and telephones all shifted people’s habits drastically, Przybylski says. He proposes that the furor over digital technology melting brains and crippling social lives is just the latest incarnation of the age-old fear of change. “You have to ask yourself, ‘Is there something magical about the power of an LCD screen?’ ” Przybylski says.

Yet some researchers suspect that there is something particularly compelling about this advance. “It just feels different. Computers and the internet and the cloud are embedded in our lives,” says psychologist Benjamin Storm of the University of California, Santa Cruz. “The scope of the amount of information we have at our fingertips is beyond anything we’ve ever experienced. The temptation to become really reliant on it seems to be greater.”

Memory outsourcing
Our digital reliance may encourage even more reliance, at least for memory, Storm’s work suggests. Sixty college undergraduates were given a mix of trivia questions — some easy, some hard. Half of the students had to answer the questions on their own; the other half were told to use the internet. Later, the students were given an easier set of questions, such as “What is the center of a hurricane called?” This time, the students were told they could use the internet if they wanted.

People who had used the internet initially were more likely to rely on internet help for the second, easy set of questions, Storm and colleagues reported online last July in Memory. “People who had gotten used to using the internet continued to do so, even though they knew the answer,” Storm says. This kind of overreliance may signal a change in how people use their memory. “No longer do we just rely on what we know,” he says.

That work builds on results published in a 2011 paper in Science. A series of experiments showed that people who expected to have access to the internet later made less effort to remember things. In this way, the internet has taken the place formerly filled by spouses who remember birthdays, grandparents who remember recipes and coworkers who remember the correct paperwork codes — officially known as “transactive memory partners.”
“We are becoming symbiotic with our computer tools,” Betsy Sparrow, then at Columbia University, and colleagues wrote in 2011. “The experience of losing our internet connection becomes more and more like losing a friend. We must remain plugged in to know what Google knows.”

That digital crutch isn’t necessarily a bad thing, Storm points out. Human memory is notoriously squishy, susceptible to false memories and outright forgetting. The internet, though imperfect, can be a resource of good information. And it’s not clear, he says, whether our memories are truly worse, or whether we perform at the same level, but just reach the answer in a different way.

“Some people think memory is absolutely declining as a result of us using technology,” he says. “Others disagree. Based on the current data, though, I don’t think we can really make strong conclusions one way or the other.”

The potential downsides of this memory outsourcing are nebulous, Storm says. It’s possible that digital reliance influences — and perhaps even weakens — other parts of our thinking. “Does it change the way we learn? Does it change the way we start to put information together, to build our own stories, to generate new ideas?” Storm asks. “There could be consequences that we’re not necessarily aware of yet.”

Research by Gazzaley and others has documented effects of interruptions and multitasking, which are hard to avoid with incessant news alerts, status updates and Instagrams waiting in our pockets. Siphoning attention can cause trouble for a long list of thinking skills, including short- and long-term memory, attention, perception and reaction time. Those findings, however, come from experiments in labs that ask a person to toggle between two tasks while undergoing a brain scan, for instance. Similar effects have not been as obvious for people going about their daily lives, Gazzaley says. But he is convinced that constant interruptions — the dings and buzzes, our own restless need to check our phones — are influencing our ability to think.

Making maps
Consequences of technology are starting to show up for another cognitive task — navigating, particularly while driving. Instead of checking a map and planning a route before a trip, people can now rely on their smartphones to do the work for them. Anecdotal news stories describe people who obeyed the tinny GPS voice that instructed them to drive into a lake or through barricades at the entrance of a partially demolished bridge. Our navigational skills may be at risk as we shift to neurologically easier ways to find our way, says cognitive neuroscientist Véronique Bohbot of McGill University in Montreal.

Historically, getting to the right destination required a person to have the lay of the land, a mental map of the terrain. That strategy takes more work than one that’s called a “response strategy,” the type of navigating that starts with an electronic voice command. “You just know the response — turn right, turn left, go straight. That’s all you know,” Bohbot says. “You’re on autopilot.”

A response strategy is easier, but it leaves people with less knowledge. People who walked through a town in Japan with human guides did a better job later navigating the same route than people who had walked with GPS as a companion, researchers have found.

Scientists are looking for signs that video games, which often expose people to lots of response-heavy situations, influence how people get around. In a small study, Bohbot and colleagues found that people who averaged 18 hours a week playing action video games such as Call of Duty navigated differently from people who didn't play the games. When tested on a virtual maze, players of action video games were more likely to use the simpler response learning strategy to make their way through, Bohbot and colleagues reported in 2015 in Proceedings of the Royal Society B.

That easier type of response navigation depends on the caudate nucleus, a brain area thought to be involved in habit formation and addiction. In contrast, nerve cells in the brain’s hippocampus help create mental maps of the world and assist in the more complex navigation. Some results suggest that people who use the response method have bigger caudate nuclei, and more brain activity there. Conversely, people who use spatial strategies that require a mental map have larger, busier hippocampi.

Those results on video game players are preliminary and show an association within a group that may share potentially confounding similarities. Yet it’s possible that getting into a habit of mental laxity may change the way people navigate. Digital technology isn’t itself to blame, Bohbot says. “It’s not the technology that’s necessarily good or bad for our brain. It’s how we use the technology,” she says. “We have a tendency to use it in the way that seems to be easiest for us. We’re not making the effort.”

Parts of the brain, including those used to navigate, have many jobs. Changing one aspect of brain function with one type of behavior might have implications for other aspects of life. A small study by Bohbot showed that people who navigate by relying on the addiction-related caudate nucleus smoke more cigarettes, drink more alcohol and are more likely to use marijuana than people who rely on the hippocampus. What to make of that association is still very much up in the air.

Sweating the smartphone
Other researchers are trying to tackle questions of how technology affects our psychological outlooks. Rosen and colleagues have turned up clues that digital devices have become a new source of anxiety for people.
In diabolical experiments, Cal State’s Rosen takes college students’ phones away, under the ruse that the devices are interfering with laboratory measurements of stress, such as heart rate and sweating. The phones are left on, but placed out of reach of the students, who are reading a passage. Then, the researchers start texting the students, who are forced to listen to the dings without being able to see the messages or respond. Measurements of anxiety spike, Rosen has found, and reading comprehension dwindles.

Other experiments have found that heavy technology users last about 10 minutes without their phones before showing signs of anxiety.

Fundamentally, an interruption in smartphone access is no different from interruptions in the days before smartphones, when the landline rang as you were walking into the house with bags full of groceries, so you missed the call. Both situations can raise anxiety over a connection missed. But Rosen suspects that our dependence on digital technology causes these situations to occur much more often.

“The technology is magnificent,” he says. “Having said that, I think that this constant bombardment of needing to check in, needing to be connected, this feeling of ‘I can’t be disconnected, I can’t cut the tether for five minutes,’ that’s going to have a long-term effect.”

The question of whether digital technology is good or bad for people is nearly impossible to answer, but a survey of 120,000 British 15-year-olds (99.5 percent reported using technology daily) takes a stab at it. Oxford’s Przybylski and Netta Weinstein at Cardiff University in Wales have turned up hints that moderate use of digital technology — TV, computers, video games and smartphones — correlates with good mental health, measured by questions that asked about happiness, life satisfaction and social activity.

When the researchers plotted technology use against mental well-being, an umbrella-shaped curve emerged, highlighting what the researchers call the “Goldilocks spot” of technology use — not too little and not too much.

“We found that you’ve got to do a lot of texting before it hurts,” Przybylski says. For smartphone use, the shift from benign to potentially harmful came after about two hours of use on weekdays, mathematical analyses revealed. Weekday recreational computer use had a longer limit: four hours and 17 minutes, the researchers wrote in the February Psychological Science.
For even the heaviest users, the relationship between technology use and poorer mental health wasn't all that strong. For scale, the potential negative effects of all that screen time were less than a third of the size of the positive effects of eating breakfast, Przybylski and Weinstein found.
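The basic idea behind finding such a "Goldilocks spot" can be sketched with a simple curve fit: plot well-being against hours of use, fit an inverted-U (quadratic) curve, and read off where the curve peaks and where it turns downward. The hours and scores below are invented for illustration only — they are not the study's data — and the peak is placed at two hours by construction.

```python
import numpy as np

# Hypothetical well-being scores (made up for demonstration) at
# increasing hours of daily smartphone use, shaped as an inverted U.
hours = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
wellbeing = 50 - 2.0 * (hours - 2.0) ** 2   # peak placed at 2 hours

# Fit wellbeing = a*h^2 + b*h + c; an inverted U has a < 0.
a, b, c = np.polyfit(hours, wellbeing, deg=2)

# The vertex of the parabola marks the "Goldilocks spot".
peak_hours = -b / (2 * a)
print(round(peak_hours, 2))
```

With real survey data the fit would be noisy and the peak an estimate with uncertainty, but the shape of the analysis — quadratic term negative, vertex as the sweet spot — is the same.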

Even if a relationship is found between technology use and poorer mental health, scientists still wouldn’t know why, Przybylski says. Perhaps the effect comes from displacing something, such as exercise or socializing, and not the technology itself.

We may never know just how our digital toys shape our brains. Technology is constantly changing, and fast. Our brains are responding and adapting to it.

“The human neocortex basically re-creates itself over successive generations,” Krubitzer says. It’s a given that people raised in a digital environment are going to have brains that reflect that environment. “We went from using stones to crack nuts to texting on a daily basis,” she says. “Clearly the brain has changed.”

It’s possible that those changes are a good thing, perhaps better preparing children to succeed in a fast-paced digital world. Or maybe we will come to discover that when we no longer make the effort to memorize our best friend’s phone number, something important is quietly slipping away.

It’s time to redefine what qualifies as a planet, scientists propose

Pluto is a planet. It always has been, and it always will be, says Will Grundy of Lowell Observatory in Flagstaff, Arizona. Now he just has to convince the world of that.

For centuries, the word planet meant “wanderer” and included the sun, the moon, Mercury, Venus, Mars, Jupiter and Saturn. Eventually the moon and sun were dropped from the definition, but Pluto was included, after its discovery in 1930. That idea of a planet as a rocky or gaseous body that orbited the sun stuck, all the way up until 2006.
Then, the International Astronomical Union narrowed the definition, describing a planet as any round object that orbits the sun and has moved any pesky neighbors out of its way, either by consuming them or flinging them off into space. Pluto failed to meet the last criterion (SN: 9/2/06, p. 149), so it was demoted to a dwarf planet.

Almost overnight, the solar system was down to eight planets. “The public took notice,” Grundy says. It latched onto the IAU’s definition — perhaps a bit prematurely. The definition has flaws, he and other planetary scientists argue. First, it discounts the thousands of exotic worlds that orbit other stars and also rogue ones with no star to call home (SN: 4/4/15, p. 22).

Second, it requires that a planet cut a clear path around the sun. But no planet does that; Earth, Mars, Jupiter and Neptune share their paths with asteroids, and objects crisscross planets’ paths all the time.

The third flaw is related to the second. Objects farther from the sun need to be pretty bulky to cut a clear path. You could have a rock the size of Earth in the Kuiper Belt and it wouldn’t have the heft required to gobble down or eject objects from its path. So, it couldn’t be considered a planet.

Grundy and colleagues (all members of NASA’s New Horizons mission to Pluto) laid out these arguments against the IAU definition of a planet March 21 at the Lunar and Planetary Science Conference in The Woodlands, Texas.
A more suitable definition of a planet, says Grundy, is simpler: It’s any round object in space that is smaller than a star. By that definition, Pluto is a planet. So is the asteroid-belt object Ceres. So is Earth’s moon. “There’d be about 110 known planets in our solar system,” Grundy says, and plenty of exoplanets and rogue worlds would fit the bill as well.

The reason for the tweak is to keep the focus on the features — the physics, the geology, the atmosphere — of the world itself, rather than worry about what’s going on around it, he says.

The New Horizons mission has shown that Pluto is an interesting world with active geology, an intricate atmosphere and other features associated with planets in the solar system. It makes no sense to write Pluto off because it doesn’t fit one criterion. Grundy seems convinced the public could easily readopt the small world as a planet. Though he admits astronomers might be a tougher sell.

“People have been using the word correctly all along,” Grundy says. He suggests we stick with the original definition. That’s his plan.