Slight variations in the moon’s gravitational tug have hinted that kilometers-wide caverns lurk beneath the lunar surface. Like the lava tubes of Hawaii and Iceland, these structures probably formed when underground rivers of molten rock ran dry, leaving behind a cylindrical channel. On Earth, such structures max out at around 30 meters across, but the gravitational data suggest that the moon’s tubes are vastly wider.
Assessing the sturdiness of lava tubes under lunar gravity, planetary geophysicist Dave Blair of Purdue University in West Lafayette, Ind., and colleagues estimate that the caves could remain structurally sound up to 5 kilometers across. That’s wide enough to fit the Golden Gate Bridge, Brooklyn Bridge and London Bridge end to end.
Such colossal caves will be prime real estate for lunar pioneers, the researchers report in the Jan. 15 Icarus. Lava tubes could offer protection from the extreme temperatures, harsh radiation and meteorite impacts on the surface.
New research is stirring the pot about an ancient Egyptian burial practice.
Many ancient peoples, including Egyptians, buried some of their dead in ceramic pots or urns. Researchers have long thought these pot burials, which often recycled containers used for domestic purposes, were a common, make-do burial for poor children.
But at least in ancient Egypt, the practice was not limited to children or to impoverished families, according to a new analysis. Bioarchaeologist Ronika Power and Egyptologist Yann Tristant, both of Macquarie University in Sydney, reviewed published accounts of pot burials at 46 sites, most near the Nile River and dating from about 3300 B.C. to 1650 B.C. Their results appear in the December Antiquity. A little over half of the sites contained the remains of adults. For children, pot burials were less common than expected: Of 746 children, infants and fetuses interred in some type of burial container, 338 were buried in wooden coffins despite wood’s relative scarcity and cost. Another 329 were buried in pots. Most of the rest were placed in baskets or, in a few cases, containers fashioned from materials such as reeds or limestone.
In the tomb of a wealthy governor, an infant was found in a pot containing beads covered in gold foil. Other pot burials held myriad goods — gold, ivory, ostrich eggshell beads, clothing or ceramics. Bodies were either placed directly into urns, or sometimes pots were broken or cut to fit the deceased.
People deliberately chose the containers, in part for symbolic reasons, the researchers now propose. The hollow vessels, which echo the womb, may have been used to represent a rebirth into the afterlife, the scientists say.
Gold’s glimmer is not the only reason the element is so captivating. For decades, scientists have puzzled over why theoretical predictions of gold’s properties don’t match up with experiments. Now, highly detailed calculations have erased the discrepancy, according to a paper published in the Jan. 13 Physical Review Letters.
At issue was the energy required to remove an electron from a gold atom, or ionize it. Theoretical calculations of this ionization energy differed from scientists’ measurements. Likewise, the energy released when adding an electron — a quantity known as the electron affinity — was also off the mark. How easily an atom gives up or accepts electrons is important for understanding how elements react with other substances. “It was well known that gold is a difficult system,” says chemist Sourav Pal of the Indian Institute of Technology Bombay, who was not involved with the study. Even gold’s most obvious feature can’t be explained without calling Einstein’s special theory of relativity into play: The theory accounts for gold’s yellowish color. (Special relativity shifts around the energy levels of electrons in gold atoms, causing the metal to absorb blue light, and thereby making reflected light appear more yellow.)
With this new study, scientists have finally resolved the lingering questions about the energy involved in removing or adding an electron to the atom. “That is the main significance of this paper,” Pal says.
Early calculations, performed in the 1990s, differed from the measured energies by more than a percent, and improved calculations since then still didn’t close the gap. “Every time I went to a conference, people discussed that and asked, ‘What the hell is going on?’” says study coauthor Peter Schwerdtfeger, a chemist at Massey University Auckland in New Zealand.
The solution required a more complete consideration of the complex interplay among gold’s 79 electrons. Using advanced supercomputers to calculate the interactions of up to five of gold’s electrons at a time, the scientists resolved the discrepancy. Previous calculations had considered up to three electrons at a time. Also essential to include in the calculation were the effects of special relativity and the theory of quantum electrodynamics, which describes the quantum physics of particles like electrons.
The result indicates that gold indeed adheres to expectations — when calculations are detailed enough. “Quantum theory works perfectly well, and that makes me extremely happy,” says Schwerdtfeger.
WASHINGTON — Researchers have devised a test to see if pairs of black holes — famous for creating gravitational waves when they merge — themselves formed from multiple mergers of smaller black holes.
The Advanced Laser Interferometer Gravitational-Wave Observatory, LIGO, has detected spacetime ripples from two sets of merging black holes (SN: 7/9/16, p. 8). Scientists typically assume that such black holes formed in the collapse of a massive star. But in especially crowded patches of the universe, black holes could have formed over generations of unions, astrophysicist Maya Fishbach of the University of Chicago explained January 28 at a meeting of the American Physical Society. Or the merging cycle could have occurred in the very early universe, starting with primordial black holes — objects that may have formed when extremely dense pockets of matter directly collapsed. Fishbach and colleagues studied how quickly black holes whirl around. In simulations, black holes that repeatedly merged reached a high rate of spin, the scientists found. That result didn’t depend on certain properties of the initial black holes, like whether they were spinning to begin with or not. “It’s cool,” says Fishbach. “The predictions from this in terms of spin are very robust,” making the idea easy to test.
So far, the spins of LIGO’s black holes are lower than the predictions. If the multiple merging process occurs, it could be very rare, so to conclusively test the idea would require tens to hundreds of black hole detections, Fishbach says.
A small, poorly understood segment of the population stays mentally healthy from age 11 to 38, a new study of New Zealanders finds. Everyone else encounters either temporary or long-lasting mental disorders.
Only 171 of 988 participants, or 17 percent, experienced no anxiety disorders, depression or other mental ailments from late childhood to middle age, researchers report in the February Journal of Abnormal Psychology. Of the rest, half experienced a transient mental disorder, typically just a single bout of depression, anxiety or substance abuse by middle age. “For many, an episode of mental disorder is like influenza, bronchitis, kidney stones, a broken bone or other highly prevalent conditions,” says study coauthor Jonathan Schaefer, a psychologist at Duke University. “Sufferers experience impaired functioning, many seek medical care, but most recover.”
The remaining 408 individuals (41 percent) experienced one or more mental disorders that lasted several years or more. Their diagnoses included more severe conditions such as bipolar and psychotic disorders.
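As a rough arithmetic check on the proportions above (using the article’s rounded figures), a few lines of Python reproduce the reported percentages:

```python
# Numbers from the Dunedin study as quoted above.
total = 988        # participants followed from age 11 to 38
never_ill = 171    # no diagnosed mental disorder over that span
lasting = 408      # one or more disorders lasting several years or more

transient = total - never_ill - lasting  # the remainder: transient disorders

print(round(100 * never_ill / total))    # 17 percent with enduring mental health
print(round(100 * lasting / total))      # 41 percent with lasting disorders
print(round(100 * transient / (total - never_ill)))  # ~50: "half" of the rest
```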
Researchers analyzed data for individuals born between April 1972 and March 1973 in Dunedin, New Zealand. Each participant’s general health and behavior were assessed 13 times from birth to age 38. Eight mental health assessments occurred from age 11 to 38. Surprisingly, those who experienced lasting mental health did not display several characteristics previously linked to a lower likelihood of developing mental disorders. Those attributes consist of growing up in unusually affluent families, enjoying especially sound physical health and scoring exceptionally high on intelligence tests. Instead, mentally healthy participants tended to possess advantageous personality traits starting in childhood, Schaefer and colleagues found. These participants rarely expressed strongly negative emotions, had lots of friends and displayed superior self-control. Kiwis with rock-solid mental health also had fewer first- and second-degree relatives with mental disorders compared with their peers.
As adults, participants with enduring mental health reported, on average, more education, better jobs, higher-quality relationships and more satisfaction with their lives than their peers did. But lasting mental health doesn’t guarantee an exceptional sense of well-being, Schaefer says. Nearly one-quarter of never-diagnosed individuals scored below the entire sample’s average score for life satisfaction.
Less surprising was the 83 percent overall prevalence rate for mental disorders. That coincides with recent estimates from four other long-term projects. In those investigations — two in the United States, one in Switzerland and another in New Zealand — between 61 percent and 85 percent of participants developed mental disorders over 12- to 30-year spans.
Comparably high rates of emotional disorders were reported in 1962 for randomly selected Manhattan residents. Many researchers doubted those findings, which relied on a diagnostic system that was less strict than the three versions of psychiatry’s diagnostic manual that were introduced and used to evaluate New Zealand participants as they got older, says psychiatric epidemiologist William Eaton of Johns Hopkins Bloomberg School of Public Health. But the Manhattan study appears to have been on the right track, Eaton says.
Increased awareness that most people will eventually develop a mental disorder (SN: 10/10/09, p. 5), at least briefly, can reduce stigma attached to these conditions (SN Online: 10/13/16), he suspects.
Psychiatric epidemiologist Ronald Kessler suspects the numbers of people experiencing a mental disorder may be even higher than reported. Many participants deemed to have enduring mental health likely developed brief mental disorders that got overlooked, such as a couple of weeks of serious depression after a romantic breakup, says Kessler of Harvard Medical School, who directs U.S. surveys of mental disorders. Rather than focusing on rare cases of lasting mental health, “the more interesting thing is to compare people with persistent mental illness to those with temporary disorders,” he says.
Hydras, petite pond polyps known for their seemingly eternal youth, exemplify the art of bouncing back (SN: 7/23/16, p. 26). The animals’ cellular scaffolding, or cytoskeleton, can regrow from a slice of tissue that’s just 2 percent of the original hydra’s full body size. Researchers thought that molecular signals told cells where and how to rebuild, but new evidence suggests there are other forces at play.
Physicist Anton Livshits and colleagues at the Technion-Israel Institute of Technology in Haifa genetically engineered Hydra vulgaris specimens so that stretchy protein fibers called actins, which form the cytoskeleton, lit up under a microscope. Then, the team sliced and diced to look for mechanical patterns in the regeneration process. Actin fibers in pieces of hydra exert mechanical force that lines up new cells and guides the growth of the animal’s head and tentacles, the researchers found. Turning off motor proteins that move actin stopped regeneration, and physically manipulating actin fiber alignment resulted in hydras with multiple heads. Providing hydras with further structural stability encouraged tissue slices to grow normally. Both mechanical and molecular forces may mold hydras in regeneration, the researchers report in the Feb. 7 Cell Reports. When researchers anchored rings of hydra tissue to a wire, the added mechanical stability made a hydra grow normally along one body axis, and thus grow one head. Without this stability, the actin scaffolding was more disrupted and the animal grew two heads.
BOSTON — For a lawn that helps the environment — and doesn’t need to be mowed — look to the ocean. Meadows of underwater seagrass plants might lower levels of harmful bacteria in nearby ocean waters, researchers reported February 16 during a news conference at the annual meeting of the American Association for the Advancement of Science. That could make the whole ecosystem — from corals to fish to humans — healthier.
Not truly grasses, seagrasses are flowering plants with long, narrow leaves. They grow in shallow ocean water, spreading into vast underwater lawns. Seagrasses are “a marine powerhouse, almost equal to the rainforest. They’re one of the largest stores of carbon in the ocean,” says study coauthor Joleah Lamb, an ecologist at Cornell University. “But they don’t get a lot of attention.” It’s no secret that seagrasses improve water quality, says James Fourqurean, a biologist at Florida International University in Miami who wasn’t involved in the research, which appears in the Feb. 17 Science. The plants are great at removing excess nitrogen and phosphorus from coastal waters. But now, it seems, they might take away harmful bacteria, too.
A few years ago, Lamb’s colleagues became ill with amoebic dysentery while studying coral reefs in Indonesia, an archipelagic nation that straddles the Indian and Pacific oceans. When a city or village on one of the country’s thousands of islands dumps raw sewage into the ocean, shoreline bacteria populations can spike to dangerous levels. Water sampled close to the shores of four small and densely populated Indonesian islands had 10 times the U.S. Environmental Protection Agency’s recommended exposure limit of Enterococcus bacteria, which can cause illness in humans and often signals the presence of other pathogens. But water collected from offshore tidal flats and coral reefs with seagrass beds had lower levels of the bacteria compared with similar sites without the plants less than 20 meters away. The water had lower levels of numerous bacterial species that can make fish and marine invertebrates sick, too. And field surveys of more than 8,000 coral heads showed that those growing adjacent to or within seagrass beds had fewer diseases than those growing farther away. It’s unclear how far from seagrass beds this cleaner water extends, but the benefits can ripple through the entire ecosystem, Lamb said at the news conference. Healthier corals help protect the islands from erosion. And fish less contaminated with bacteria make a better source of food for people.
Lamb is planning follow-up studies to figure out exactly how the seagrasses clean the water. Like a shag carpet, seagrasses trap small particulates drifting through the ocean and prevent them from flowing on. The plants might ensnare bacteria in the same way, building up biofilms on their blades. Or, she suggests, the leaves could be giving off antimicrobial compounds that directly kill the bacteria.
The findings are one more reason to conserve seagrasses, study coauthor Jeroen van de Water, an ecologist at the Scientific Center of Monaco, said at the news conference. Worldwide, seagrass beds are declining by 7 percent each year, thanks to pollution and habitat loss. And while restoration efforts are underway in some areas, “it’s better to stop what we’re doing to the meadows than to try to replant them,” Lamb added. “Seagrasses are quite particular in the depth they want to be at and the environment they want to have. It’s hard to start doing restoration projects if the environment isn’t exactly what the seagrass prefers.”
Volcanoes that belch hydrogen could bump up the number of potentially habitable planets in the universe.
Ramses Ramirez and Lisa Kaltenegger, both of Cornell University, modeled the atmospheres of planets blemished with hydrogen-spewing volcanoes. These gaseous eruptions could warm planets and ultimately widen a star’s habitable zone, the region where liquid water can exist on a planet’s surface, by about 30 to 60 percent, researchers report in the March 1 Astrophysical Journal Letters. That would be like extending the outer edge of the sun’s habitable zone from about 254 million kilometers — just beyond Mars’ orbit — to 359 million kilometers, or roughly to the asteroid belt between Mars and Jupiter.
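As a back-of-the-envelope check, using only the outer-edge distances quoted above (not the paper’s full habitable-zone bounds), the outer-edge shift lands inside the reported 30 to 60 percent range:

```python
# Outer edge of the sun's habitable zone, in kilometers, as quoted above.
outer_edge_now = 254e6   # just beyond Mars' orbit
outer_edge_h2 = 359e6    # with hydrogen-volcano greenhouse warming

extension = (outer_edge_h2 - outer_edge_now) / outer_edge_now
print(f"{extension:.0%}")  # ~41%, within the reported 30-60 percent widening
```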
Exoplanets that astronomers had previously thought too cold to support life might, in fact, be ripe for habitability if they have hydrogen volcanoes, the researchers say. One example is TRAPPIST-1h, the farthest-out planet identified in an odd system of seven Earth-sized planets 39 light-years from Earth (SN Online: 2/22/17). That world is thought to be icy like Jupiter’s moon Europa.
Adding planets to a star’s habitable zone means more exotic worlds could be targets in the search for signatures of life beyond our solar system. Astronomers plan to search for these signatures with the James Webb Space Telescope, slated to launch in 2018, and later with the European Extremely Large Telescope, scheduled to begin operations in 2024.
Just six weeks of training can turn average people into memory masters.
Boosting these prodigious mnemonic skills came with overhauls in brain activity, resulting in brains that behaved more like those of experts who win World Memory Championships, scientists report March 8 in Neuron.
The findings are notable because they show just how remarkably adaptable the human brain is, says neuroscientist Craig Stark of the University of California, Irvine. “The brain is plastic,” he says. “Through use, it changes.” It’s not yet clear how long the changes in the newly trained brains last, but the memory gains persisted for four months.
In an initial matchup, a group of 17 memory experts, people who place high in World Memory Championships, throttled a group of people with average memories. Twenty minutes after seeing a list of 72 words, the experts remembered an average of 70.8 words; the nonexperts recalled, on average, only 39.9 words.
In subsequent matchups, some nonexperts got varying levels of help. Fifty-one novices were split into three groups. A third of these people spent six weeks learning the method of loci, a memorization strategy used by ancient Greek and Roman orators. To use the technique, a person must imagine an elaborate mental scene, such as a palace or a familiar walking path, and populate it with memorable items. New information can then be placed onto this scaffold, offering a way to quickly “see” long lists of items.
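The method of loci described above amounts to a simple mapping from fixed locations to items. A toy sketch, with an invented “palace” and shopping list for illustration:

```python
# Toy sketch of the method of loci: attach each item to a fixed stop
# on a familiar mental walk, then revisit the stops in order to recall.
palace = ["front door", "hallway", "kitchen", "staircase"]
to_remember = ["milk", "eggs", "bread", "coffee"]

# Encode: place one item at each locus along the walk.
memory = dict(zip(palace, to_remember))

# Recall: walk the route in its fixed order and read off the items.
recalled = [memory[locus] for locus in palace]
print(recalled)  # ['milk', 'eggs', 'bread', 'coffee']
```

The fixed walking order is what makes recall reliable: the route, not the items, carries the sequence.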
Other participants spent six weeks training to improve short-term memory, performing a tricky task that required people to simultaneously keep track of series of locations they see and numbers they hear. The rest of the participants had no training at all.
After the training, the people who learned the method of loci performed nearly as well as the memory experts. But the rest didn’t show such improvement. Study coauthor Martin Dresler, a neuroscientist at the Radboud University Medical Center in the Netherlands, knew that the method of loci works quite well; he wasn’t surprised to see those memory scores spike. To him, the more interesting changes happened in the trained people’s brains. Before and after training, nonexperts underwent scans that pinpointed brain areas that were active at the same time, an indication that these brain areas work together closely. Dresler and colleagues looked at 2,485 connections in brain networks important for memory and visual and spatial thinking. Training in the method of loci seemed to reconfigure many of those connections, making some of the connections stronger and others weaker. The overall effect of training was to make brains “look like those of the world’s best memorizers,” Dresler says. The results suggest that large-scale changes across the brain, as opposed to changes in individual areas, drive the increased memory capacity.
These new memory skills were still obvious four months after training ended, particularly for the people whose brain behavior became more similar to that of the memory experts. The researchers didn’t scan participants’ brains four months out, so they don’t know whether the brain retains its reshaped connections. No such brain changes or big increases in memory skills were seen in the other groups.
Memorization techniques have been criticized as interesting tricks that have little use in real life. But “that’s not the case,” Dresler says. Boris Konrad, a coauthor of the study also at Radboud, is a memory master who trained in the method of loci. The technique “really helped him get much better grades” in physics and other complex studies, Dresler says.
Improvements in mnemonic memory, like other types of cognitive training, might not improve a broader range of thinking skills. The current study can’t answer bigger questions about whether brain training has more general benefits.
Not too long ago, the internet was stationary. Most often, we’d browse the Web from a desktop computer in our living room or office. If we were feeling really adventurous, maybe we’d cart our laptop to a coffee shop. Looking back, those days seem quaint.
Today, the internet moves through our lives with us. We hunt Pokémon as we shuffle down the sidewalk. We text at red lights. We tweet from the bathroom. We sleep with a smartphone within arm’s reach, using the device as both lullaby and alarm clock. Sometimes we put our phones down while we eat, but usually faceup, just in case something important happens. Our iPhones, Androids and other smartphones have led us to effortlessly adjust our behavior. Portable technology has overhauled our driving habits, our dating styles and even our posture. Despite the occasional headlines claiming that digital technology is rotting our brains, not to mention what it’s doing to our children, we’ve welcomed this alluring life partner with open arms and swiping thumbs.
Scientists suspect that these near-constant interactions with digital technology influence our brains. Small studies are turning up hints that our devices may change how we remember, how we navigate and how we create happiness — or not. Somewhat limited, occasionally contradictory findings illustrate how science has struggled to pin down this slippery, fast-moving phenomenon. Laboratory studies hint that technology, and its constant interruptions, may change our thinking strategies. Like our husbands and wives, our devices have become “memory partners,” allowing us to dump information there and forget about it — an off-loading that comes with benefits and drawbacks. Navigational strategies may be shifting in the GPS era, a change that might be reflected in how the brain maps its place in the world. Constant interactions with technology may even raise anxiety in certain settings.
Yet one large study that asked people about their digital lives suggests that moderate use of digital technology has no ill effects on mental well-being.
The question of how technology helps and hinders our thinking is incredibly hard to answer. Both lab and observational studies have drawbacks. The artificial confines of lab experiments lead to very limited sets of observations, insights that may not apply to real life, says experimental psychologist Andrew Przybylski of the University of Oxford. “This is a lot like drawing conclusions about the effects of baseball on players’ brains after observing three swings in the batting cage.”
Observational studies of behavior in the real world, on the other hand, turn up associations, not causes. It’s hard to pull out real effects from within life’s messiness. The goal, some scientists say, is to design studies that bring the rigors of the lab to the complexities of real life, and then to use the resulting insights to guide our behavior. But that’s a big goal, and one that scientists may never reach.
Evolutionary neurobiologist Leah Krubitzer is comfortable with this scientific ambiguity. She doesn’t put a positive or negative value on today’s digital landscape. Neither good nor bad, it just is what it is: the latest iteration on the continuum of changing environments, says Krubitzer, of the University of California, Davis.
“I can tell you for sure that technology is changing our brains,” she says. It’s just that so far, no one knows what those changes mean.
Of course, nearly everything changes the brain. Musical training reshapes parts of the brain. Learning the convoluted streets of London swells a mapmaking structure in the brains of cabbies. Even getting a good night’s sleep changes the brain. Every aspect of our environment can influence brain and behaviors. In some ways, digital technology is no different. Yet some scientists suspect that there might be something particularly pernicious about digital technology’s grip on the brain.
“We are information-seeking creatures,” says neuroscientist Adam Gazzaley of the University of California, San Francisco. “We are driven to it in very powerful ways.” Today’s digital tools give us unprecedented exposure to information that doesn’t wait to be sought out; it seeks us out, he says. That pull is nearly irresistible.
Despite the many unanswered questions about whether our digital devices are influencing our brains and behaviors, and whether for good or evil, technology is galloping ahead. “We should have been asking ourselves [these sorts of questions] in the ’70s or ’80s,” Krubitzer says. “It’s too late now. We’re kind of closing the barn doors after the horses got out.”

Attention grabber

One way in which today’s digital technology is distinct from earlier advances (like landline telephones) is the sheer amount of time people spend with it. In just a decade, smartphones have saturated the market, enabling instant internet access to an estimated 2 billion people around the world. In one small study reported in 2015, 23 adults, ages 18 to 33, spent an average of five hours a day on their phones, broken up into 85 distinct daily sessions. When asked how many times they thought they used their phones, participants underestimated by half.
In a different study, Larry Rosen, a psychologist at California State University, Dominguez Hills, used an app to monitor how often college students unlocked their phones. The students checked their phones an average of 60 times a day, each session lasting about three to four minutes for a total of 220 minutes a day. That’s a lot of interruption, Rosen says. Smartphones are “literally omnipresent 24-7, and as such, it’s almost like an appendage,” he says. And often, we are compelled to look at this new, alluring rectangular limb instead of what’s around us. “This device is really powerful,” Rosen says. “It’s really influencing our behavior. It’s changed the way we see the world.”
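A quick sanity check on the figures above, assuming the quoted averages: the per-session length implied by Rosen’s totals matches the “three to four minutes” he reports, and the 2015 study’s sessions come out similar:

```python
# Rosen's study: 60 unlocks a day totaling 220 minutes.
print(220 / 60)     # ~3.7 minutes per session: "three to four minutes"

# The 2015 study: five hours a day across 85 sessions.
print(5 * 60 / 85)  # ~3.5 minutes per session
```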
Technology does that. Printing presses, electricity, televisions and telephones all shifted people’s habits drastically, Przybylski says. He proposes that the furor over digital technology melting brains and crippling social lives is just the latest incarnation of the age-old fear of change. “You have to ask yourself, ‘Is there something magical about the power of an LCD screen?’ ” Przybylski says.
Yet some researchers suspect that there is something particularly compelling about this advance. “It just feels different. Computers and the internet and the cloud are embedded in our lives,” says psychologist Benjamin Storm of the University of California, Santa Cruz. “The scope of the amount of information we have at our fingertips is beyond anything we’ve ever experienced. The temptation to become really reliant on it seems to be greater.”
Memory outsourcing Our digital reliance may encourage even more reliance, at least for memory, Storm’s work suggests. Sixty college undergraduates were given a mix of trivia questions — some easy, some hard. Half of the students had to answer the questions on their own; the other half were told to use the internet. Later, the students were given an easier set of questions, such as “What is the center of a hurricane called?” This time, the students were told they could use the internet if they wanted.
People who had used the internet initially were more likely to rely on internet help for the second, easy set of questions, Storm and colleagues reported online last July in Memory. “People who had gotten used to using the internet continued to do so, even though they knew the answer,” Storm says. This kind of overreliance may signal a change in how people use their memory. “No longer do we just rely on what we know,” he says.

That work builds on results published in a 2011 paper in Science. A series of experiments showed that people who expected to have access to the internet later made less effort to remember things. In this way, the internet has taken the place formerly filled by spouses who remember birthdays, grandparents who remember recipes and coworkers who remember the correct paperwork codes — officially known as “transactive memory partners.” “We are becoming symbiotic with our computer tools,” Betsy Sparrow, then at Columbia University, and colleagues wrote in 2011. “The experience of losing our internet connection becomes more and more like losing a friend. We must remain plugged in to know what Google knows.”
That digital crutch isn’t necessarily a bad thing, Storm points out. Human memory is notoriously squishy, susceptible to false memories and outright forgetting. The internet, though imperfect, can be a resource of good information. And it’s not clear, he says, whether our memories are truly worse, or whether we perform at the same level, but just reach the answer in a different way.
“Some people think memory is absolutely declining as a result of us using technology,” he says. “Others disagree. Based on the current data, though, I don’t think we can really make strong conclusions one way or the other.”
The potential downsides of this memory outsourcing are nebulous, Storm says. It’s possible that digital reliance influences — and perhaps even weakens — other parts of our thinking. “Does it change the way we learn? Does it change the way we start to put information together, to build our own stories, to generate new ideas?” Storm asks. “There could be consequences that we’re not necessarily aware of yet.”
Research by Gazzaley and others has documented effects of interruptions and multitasking, which are hard to avoid with incessant news alerts, status updates and Instagrams waiting in our pockets. Siphoning attention can cause trouble for a long list of thinking skills, including short- and long-term memory, attention, perception and reaction time. Those findings, however, come from experiments in labs that ask a person to toggle between two tasks while undergoing a brain scan, for instance. Similar effects have not been as obvious for people going about their daily lives, Gazzaley says. But he is convinced that constant interruptions — the dings and buzzes, our own restless need to check our phones — are influencing our ability to think.
Making maps

Consequences of technology are starting to show up for another cognitive task — navigating, particularly while driving. Instead of checking a map and planning a route before a trip, people can now rely on their smartphones to do the work for them. Anecdotal news stories describe people who obeyed the tinny GPS voice that instructed them to drive into a lake or through barricades at the entrance of a partially demolished bridge. Our navigational skills may be at risk as we shift to neurologically easier ways to find our way, says cognitive neuroscientist Véronique Bohbot of McGill University in Montreal.
Historically, getting to the right destination required a person to have the lay of the land, a mental map of the terrain. That strategy takes more work than one that’s called a “response strategy,” the type of navigating that starts with an electronic voice command. “You just know the response — turn right, turn left, go straight. That’s all you know,” Bohbot says. “You’re on autopilot.” A response strategy is easier, but it leaves people with less knowledge. People who walked through a town in Japan with human guides did a better job later navigating the same route than people who had walked with GPS as a companion, researchers have found.
Scientists are looking for signs that video games, which often expose people to lots of response-heavy situations, influence how people get around. In a small study, Bohbot and colleagues found that people who average 18 hours a week playing action video games such as Call of Duty navigated differently than people who don’t play the games. When tested on a virtual maze, players of action video games were more likely to use the simpler response learning strategy to make their way through, Bohbot and colleagues reported in 2015 in Proceedings of the Royal Society B.
That easier type of response navigation depends on the caudate nucleus, a brain area thought to be involved in habit formation and addiction. In contrast, nerve cells in the brain’s hippocampus help create mental maps of the world and assist in the more complex navigation. Some results suggest that people who use the response method have bigger caudate nuclei, and more brain activity there. Conversely, people who use spatial strategies that require a mental map have larger, busier hippocampi.
Those results on video game players are preliminary and show an association within a group that may share potentially confounding similarities. Yet it’s possible that getting into a habit of mental laxity may change the way people navigate. Digital technology isn’t itself to blame, Bohbot says. “It’s not the technology that’s necessarily good or bad for our brain. It’s how we use the technology,” she says. “We have a tendency to use it in the way that seems to be easiest for us. We’re not making the effort.”
Parts of the brain, including those used to navigate, have many jobs. Changing one aspect of brain function with one type of behavior might have implications for other aspects of life. A small study by Bohbot showed that people who navigate by relying on the addiction-related caudate nucleus smoke more cigarettes, drink more alcohol and are more likely to use marijuana than people who rely on the hippocampus. What to make of that association is still very much up in the air.
Sweating the smartphone
Other researchers are trying to tackle questions of how technology affects our psychological outlooks. Cal State's Rosen and colleagues have turned up clues that digital devices have become a new source of anxiety for people. In diabolical experiments, Rosen takes college students' phones away, under the ruse that the devices are interfering with laboratory measurements of stress, such as heart rate and sweating. The phones are left on, but placed out of reach of the students, who are reading a passage. Then, the researchers start texting the students, who are forced to listen to the dings without being able to see the messages or respond. Measurements of anxiety spike, Rosen has found, and reading comprehension dwindles.
Other experiments have found that heavy technology users last about 10 minutes without their phones before showing signs of anxiety.
Fundamentally, an interruption in smartphone access is no different from the interruptions of the days before smartphones, when the landline rang just as you walked into the house with bags full of groceries, so you missed the call. Both situations can raise anxiety over a connection missed. But Rosen suspects that our dependence on digital technology causes these situations to occur much more often.
“The technology is magnificent,” he says. “Having said that, I think that this constant bombardment of needing to check in, needing to be connected, this feeling of ‘I can’t be disconnected, I can’t cut the tether for five minutes,’ that’s going to have a long-term effect.”
The question of whether digital technology is good or bad for people is nearly impossible to answer, but a survey of 120,000 British 15-year-olds (99.5 percent reported using technology daily) takes a stab at it. Oxford’s Przybylski and Netta Weinstein at Cardiff University in Wales have turned up hints that moderate use of digital technology — TV, computers, video games and smartphones — correlates with good mental health, measured by questions that asked about happiness, life satisfaction and social activity.
When the researchers plotted technology use against mental well-being, an umbrella-shaped curve emerged, highlighting what the researchers call the “Goldilocks spot” of technology use — not too little and not too much.
“We found that you’ve got to do a lot of texting before it hurts,” Przybylski says. For smartphone use, the shift from benign to potentially harmful came after about two hours of use on weekdays, mathematical analyses revealed. Weekday recreational computer use had a longer limit: four hours and 17 minutes, the researchers wrote in the February Psychological Science. For even the heaviest users, the relationship between technology use and poorer mental health wasn’t all that strong. For scale, the potential negative effects of all that screen time were less than a third the size of the positive effects of eating breakfast, Przybylski and Weinstein found.
Even if a relationship is found between technology use and poorer mental health, scientists still wouldn’t know why, Przybylski says. Perhaps the effect comes from displacing something, such as exercise or socializing, and not the technology itself.
We may never know just how our digital toys shape our brains. Technology is constantly changing, and fast. Our brains are responding and adapting to it.
“The human neocortex basically re-creates itself over successive generations,” Krubitzer says. It’s a given that people raised in a digital environment are going to have brains that reflect that environment. “We went from using stones to crack nuts to texting on a daily basis,” she says. “Clearly the brain has changed.”
It’s possible that those changes are a good thing, perhaps better preparing children to succeed in a fast-paced digital world. Or maybe we will come to discover that when we no longer make the effort to memorize our best friend’s phone number, something important is quietly slipping away.