Experts issue warning on problems with P values

Here’s a good idea for the next presidential candidate debate: They can insult each other about their ignorance of statistics.

Actually, it’s a pertinent topic for political office seekers, as public opinion polls use statistical methods to measure the electorate’s support (or lack thereof) for a particular candidate. But such polls are notoriously unreliable, as Hillary Clinton found out in Michigan.

It probably wouldn’t be a very informative debate, of course — just imagine how Donald Trump would respond to a question asking what he thought about P values. Sadly, though, he and the other candidates might actually understand P values just about as well as many practicing scientists — which is to say, not very well at all.
In recent years criticism about P values — statistical measures widely used to analyze experimental data in most scientific disciplines — has finally reverberated loudly enough for the scientific community to listen. A watershed acknowledgment of P value problems appeared this week when the American Statistical Association issued a statement warning the rest of the world about the limitations of P values and their widespread misuse.

“While the p-value can be a useful statistical measure, it is commonly misused and misinterpreted,” the statistical association report stated. “This has led to some scientific journals discouraging the use of p-values, and some scientists and statisticians recommending their abandonment.”

In light of these issues, the association convened a group of experts to formulate a document listing six “principles” regarding P values for the guidance of “researchers, practitioners and science writers who are not primarily statisticians.” Of those six principles, the most pertinent for people in general (and science journalists in particular) is No. 5: “A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.”

What, then, does it measure? That’s principle No. 1: “… how incompatible the data are with a specified statistical model.” But note well principle No. 2: “P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.” And therefore, always remember principle No. 3: “Scientific conclusions … or policy decisions should not be based only on whether a p-value passes a specific threshold.”

In other words, the common convention of judging a P value less than .05 to be “statistically significant” is not really a proper basis for assigning significance at all. Except that scientific journals still regularly use that criterion for deciding whether a paper gets published. Which in turn drives researchers to finagle their data to get a P value of less than .05. As a result, the scientific process is tarnished and the published scientific literature is often unreliable.
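
To make principle No. 5 concrete, consider a minimal simulation, sketched below in Python (a hypothetical illustration, not anything taken from the ASA statement): with a huge sample, a trivially small difference between two groups slides under the .05 threshold, while a much larger difference measured in a small study often does not. The group sizes and effect sizes are invented for illustration.

```python
# Illustrative only: P values track sample size and noise, not the size of an effect.
# Assumed setup: two-sample z-test on normally distributed data (an approximation
# for the small sample, which is fine for this sketch), alpha = 0.05.
import math
import random

random.seed(1)

def two_sample_p(a, b):
    """Two-sided P value from a large-sample z-test comparing two group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# A trivial effect (0.02 standard deviations) with an enormous sample: "significant."
big_a = [random.gauss(0.00, 1) for _ in range(200_000)]
big_b = [random.gauss(0.02, 1) for _ in range(200_000)]

# A respectable effect (0.5 standard deviations) with a small sample: often not.
small_a = [random.gauss(0.0, 1) for _ in range(15)]
small_b = [random.gauss(0.5, 1) for _ in range(15)]

print(f"trivial effect, n = 200,000 per group: p = {two_sample_p(big_a, big_b):.3g}")
print(f"moderate effect, n = 15 per group:     p = {two_sample_p(small_a, small_b):.3g}")
```

The point is the statisticians' point: crossing the .05 line says more about how much data was collected than about whether the effect is big enough to matter.
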
As the statistical association statement points out, this situation is far more than a merely academic concern.

“The issues touched on here affect not only research, but research funding, journal practices, career advancement, scientific education, public policy, journalism, and law,” the authors write in the report, published online March 7 in The American Statistician.

Many of the experts who participated in the process wrote commentaries on the document, some stressing that it did not go far enough in condemning P values’ pernicious influences on science.

“Viewed alone, p-values calculated from a set of numbers and assuming a statistical model are of limited value and frequently are meaningless,” wrote biostatistician Donald Berry of MD Anderson Cancer Center in Houston. He cited the serious negative impact that misuse and misinterpretation of P values has had not only on science, but also on society. “Patients with serious diseases have been harmed. Researchers have chased wild geese, finding too often that statistically significant conclusions could not be reproduced. The economic impacts of faulty statistical conclusions are great.”

Echoing Berry’s concerns was Boston University epidemiologist Kenneth Rothman. “It is a safe bet that people have suffered or died because scientists (and editors, regulators, journalists and others) have used significance tests to interpret results,” Rothman wrote. “The correspondence between results that are statistically significant and those that are truly important is far too low to be useful. Consequently, scientists have embraced and even avidly pursued meaningless differences solely because they are statistically significant, and have ignored important effects because they failed to pass the screen of statistical significance.”

Stanford University epidemiologist John Ioannidis compared the scientific community’s attachment to P values with drug addiction, fueled by the institutional rewards that accompany the publication process.

“Misleading use of P-values is so easy and automated that, especially when rewarded with publication and funding, it can become addictive,” Ioannidis commented. “Investigators generating these torrents of P-values should be seen with sympathy as drug addicts in need of rehabilitation that will help them live a better, more meaningful scientific life in the future.”

Although a handful of P value defenders can still be found among the participants in this discussion, it should be clear by now that P values, as currently used in science, do more harm than good. They may be valid and useful under certain specific circumstances, but those circumstances are rarely relevant in most experimental contexts. As Berry notes, statisticians can correctly define P values in a technical sense, but “most statisticians do not really understand the issues in applied settings.”

In its statement, the statistical association goes a long way toward validating the concerns about P values that have been expressed for decades by many critical observers. This validation may succeed in initiating change where previous efforts have failed. But that won’t happen without identifying some alternative to the P value system, and while many have been proposed, no candidate has emerged as an acceptable nominee for a majority of the scientific world’s electorate. So the next debate should not be about P values — it should be about what to replace them with.

Microbes can play games with the mind

The 22 men took the same pill for four weeks. When interviewed, they said they felt less daily stress and their memories were sharper. The brain benefits were subtle, but the results, reported at last year’s annual meeting of the Society for Neuroscience, got attention. That’s because the pills were not a precise chemical formula synthesized by the pharmaceutical industry.

The capsules were brimming with bacteria.

In the ultimate PR turnaround, once-dreaded bacteria are being welcomed as health heroes. People gobble them up in probiotic yogurts, swallow pills packed with billions of bugs and recoil from hand sanitizers. Helping us nurture the microbial gardens in and on our bodies has become big business, judging by grocery store shelves.
These bacteria are possibly working at more than just keeping our bodies healthy: They may be changing our minds. Recent studies have begun turning up tantalizing hints about how the bacteria living in the gut can alter the way the brain works. These findings raise a question with profound implications for mental health: Can we soothe our brains by cultivating our bacteria?
By tinkering with the gut’s bacterial residents, scientists have changed the behavior of lab animals and small numbers of people. Microbial meddling has turned anxious mice bold and shy mice social. Rats inoculated with bacteria from depressed people develop signs of depression themselves. And small studies of people suggest that eating specific kinds of bacteria may change brain activity and ease anxiety. Because gut bacteria can make the very chemicals that brain cells use to communicate, the idea makes a certain amount of sense.

Though preliminary, such results suggest that the right bacteria in your gut could brighten mood and perhaps even combat pernicious mental disorders including anxiety and depression. The wrong microbes, however, might lead in a darker direction.
This perspective might sound a little too much like our minds are being controlled by our bacterial overlords. But consider this: Microbes have been with us since even before we were humans. Human and bacterial cells evolved together, like a pair of entwined trees, growing and adapting into a (mostly) harmonious ecosystem.

Our microbes (known collectively as the microbiome) are “so innate in who we are,” says gastroenterologist Kirsten Tillisch of UCLA. It’s easy to imagine that “they’re controlling us, or we’re controlling them.” But it’s becoming increasingly clear that no one is in charge. Instead, “it’s a conversation that our bodies are having with our microbiome,” Tillisch says.

Figuring out what’s being said in this body-microbe exchange, and how to shift the tone in a way that improves mental health, won’t be easy. For starters, no one knows the exact ingredients for a healthy microbial community, and the recipe probably differs from person to person. And it’s not always simple to deliver microbes to the gut and persuade them to stay. Nor is it clear how messages travel between microbes and brain, though scientists have some ideas.

It’s early days, but so far, the results are compelling, says neuroscientist John Cryan of University College Cork in Ireland, who has been trying to clarify how microbes influence the brain. “It’s all slightly weird and it’s all fascinating,” he says.

Cryan and others are amassing evidence that they hope will lead to “psychobiotics” — bacteria-based drugs made of live organisms that could improve mental health.

We’re not alone
Ted Dinan, the psychiatrist who coined the term “psychobiotics,” was fascinated by a tragedy in Walkerton, Canada, in May 2000. Floods caused the small town’s water supply to be overrun with dangerous strains of two bacteria: Escherichia coli and Campylobacter. About half the town’s population got ill, and a handful of people died. For most residents, the illness was short-lived, about 10 days on average, says Dinan, who collaborates with Cryan at University College Cork. But years later, scientists who had been following the health of Walkerton residents noticed something surprising. “The rates of depression in Walkerton were clearly and significantly up,” Dinan says. That spike raised suspicion that the infection had caused the depression.

Other notorious bacteria have been tied to depression, such as those behind syphilis and the cattle-related brucellosis, and not just because ill people feel sad, Dinan says. He suspects there’s something specific about an off-kilter microbiome that can harm mental health.
This possibility, though it raises troubling questions about free will, has certainly been borne out in lab animals. Mice born and raised without bacteria behave in all sorts of bizarre ways, exhibiting antisocial tendencies, memory troubles and, in some cases, recklessness. Microbes in fruit flies can influence who mates with whom (SN: 1/11/14, p. 14), and bacteria in stinging wasps can interfere with reproduction in a way that prevents separate species from merging. Those findings, some by evolutionary biologist Seth Bordenstein of Vanderbilt University in Nashville, show that “there’s this potential for [microbes] to influence behavior in this complex and vast way,” he says.

By sheer numbers, human bodies are awash in bacteria. A recent study estimates there are just as many bacterial cells as human cells in our bodies (SN: 2/6/16, p. 6). Just how legions of bacteria get messages to the brain isn’t clear, though scientists have already found some likely communication channels. Chemically, gut microbes and the brain actually speak the same language. The microbiome churns out the mood-influencing neurotransmitters serotonin, norepinephrine and dopamine. Bacteria can also change how the central nervous system uses these chemicals. Cryan calls microbes in the gut “little factories for producing lots of different neuroactive substances.”

Signals between the gut and the brain may zip along the vagus nerve, a multilane highway that connects the two (SN: 11/28/15, p. 18). Although scientists don’t understand the details of how messages move along the vagus nerve, they do know that this highway is important. Snip the nerve in mice and the bacteria no longer have an effect on behavior, a 2011 study found. And when the gut-to-brain messages change, problems can arise.

New bacteria, new behavior
Wholesale microbe swaps can also influence behavior. In unpublished work, Dinan and his colleagues took stool samples from people with depression and put those bacteria (called “melancholic microbes” by Dinan in a 2013 review in Neurogastroenterology and Motility) into rats. The formerly carefree rodents soon began showing signs of depression and anxiety, forgoing a sweet water treat and showing more anxiety in a variety of tests. “Their behavior does quite dramatically change,” Dinan says. Rats that got a microbiome from a person without depression showed no changes in behavior.

Cryan and colleagues have found that the microbiomes of people with depression differ from those of people without depression, raising the possibility that a diseased microbiome could be to blame.
The fecal-transplant results suggest that depression — and perhaps other mental disorders — are contagious, in a sense. And a mental illness that could be caught from microbe swaps could pose problems. Fecal transplants have recently emerged as powerful ways to treat serious gut infections (SN Online: 10/16/14). Fecal donors ought to be screened for a history of mental illness along with other potentially communicable diseases, Dinan says.

“Gastroenterologists obviously check for HIV and hepatitis C. They don’t want to transmit an infection,” he says. The psychiatric characteristics of the donor should be taken into account as well, he says.

A fecal transplant is an extreme microbiome overhaul. But there are hints that introducing just one or several bacterial species can also change the way the brain works. One such example comes from Cryan, Dinan and colleagues. After taking a probiotic pill containing a bacterium called Bifidobacterium longum for a month, 22 healthy men reported feeling less stress than when they took a placebo. The men also had lower levels of the stress-related hormone cortisol while under duress, the researchers reported at the Society for Neuroscience meeting in Chicago last October. After taking the probiotic, the men also showed slight improvements on a test of visual memory, benefits that were reflected in the brain. EEG recordings revealed brain wave signatures that have been tied to memory skill, Cryan says.

The researchers had previously published similar effects in mice, but the new results move those findings into people. “What’s going to be important is to mechanistically find out why this specific bacteria is inducing these effects,” Cryan says. And whether there could be a benefit for people with heightened anxiety. “It’s a very exciting study, but it’s a small study,” Cryan cautions.

Bacteria in an even more palatable form — yogurt — affected brain activity in response to upsetting scenes in one study. After eating a carefully concocted yogurt every morning and evening for a month, 12 healthy women showed a blunted brain reaction to pictures of angry or scared faces compared with 11 women who had eaten a yogurtlike food without bacteria.

Brain response was gauged by functional MRI, which measures changes in blood flow as a proxy for neural activity. In particular, brain areas involved in processing emotions and sensations such as pain were calmed, says Tillisch, coauthor of the study, published in 2013 in Gastroenterology. “In this small group, we saw that the brain responded differently” when shown the pictures, she says. It’s not clear whether a blunted response would be good or bad, particularly since the study participants were all healthy women who didn’t suffer from anxiety. Nonetheless, Tillisch says, the results raise the questions: “Can probiotics change your mood? Can they make you feel better if you feel bad?”

So far, the human studies have been very small. But coupled with the increasing number of animal studies, the results are hard to ignore, Tillisch says. “Most of us in this field think there is something definitely happening,” she says. “But it’s pretty complicated and probably quite subtle…. Otherwise, we’d all be aware of this.” If the effects were dramatic, anyone who has taken a course of antibiotics, fallen ill from a bacterial infection or even changed diets would have noticed an obvious change in mood, she says.

Two-way traffic
If it turns out that bacteria can influence our brains and behaviors, even if just in subtle ways, it doesn’t mean we are passive vessels at the mercy of our gut residents. Our behavior can influence the microbiome right back.

“We usually give up our power pretty quickly in this conversation,” Tillisch says. “We say, ‘Oh, we’re at the mercy of the bacteria that we got from our mothers when we were born and the antibiotics we got at the pediatrician’s office.’ ” But our microbes aren’t our destiny, she says. “We can mess with them too.”

One of the easiest ways to do so is through food: eating probiotics, such as yogurt or kefir, that contain bacteria, and choosing a diet packed with “prebiotic” foods rich in fiber, such as garlic, onions and asparagus. Prebiotics nourish what are thought to be beneficial microbes, offering a simple way to cultivate the microbiome, and in turn, health.
That a good diet is a gateway to good health is not a new idea, Cryan says. Take the old adage: “Let food be thy medicine and let medicine be thy food.” He suspects that it’s our microbiome that makes this advice work.

Combating stress may be another way to change the microbiome, Tillisch and others suspect. Mouse studies have shown that stress, particularly early in life, can change microbial communities, and not in a good way.

She and her colleagues are testing a relaxation technique called mindfulness-based stress reduction to influence the microbiome. In people with gut pain and discomfort, the meditation-based practice reduced symptoms and changed their brains in clinically interesting ways, according to unpublished work. The researchers suspect that the microbiome was also altered by the meditation. They are testing that hypothesis now.

If the mind can affect the microbiome and the microbiome can affect the mind, it makes little sense to talk about who is in charge, Bordenstein says. In an essay in PLOS Biology last year, he and colleague Kevin Theis, of Wayne State University in Detroit, make the case that the definition of “I” should be expanded. An organism, Bordenstein and Theis argued, includes the microbes that live in and on it, a massive conglomerate of diverse parts called a holobiont. Giving a name to this complex and diverse consortium could shift scientists’ views of humans in a way that leads to deeper insights. “What we need to do,” Bordenstein says, “is add microbes to the ‘me, myself and I’ concept.”

Japan’s new X-ray space telescope has gone silent

A new X-ray telescope run by the Japan Aerospace Exploration Agency (JAXA) has gone silent a little more than a month after its launch. JAXA reported online March 27 that the telescope, ASTRO-H (aka Hitomi), stopped communicating with Earth. U.S. Strategic Command’s Joint Space Operations Center also reported seeing five pieces of debris alongside the satellite on March 26.

Attempts to figure out what went wrong with the spacecraft, which launched February 17, have not been successful. Until it went quiet, though, ASTRO-H seemed to be functioning well. In late February, mission operators successfully switched on the spacecraft’s cooling system and tested some of its instruments.

ASTRO-H carries four instruments to study cosmic X-rays over an energy range from 0.3 to 600 kiloelectron volts. By studying X-rays, astronomers hope to learn more about some of the more feisty denizens of the universe such as exploding stars, gorging black holes, and dark matter swirling around within galaxy clusters. Earth’s atmosphere absorbs X-rays, so the only way to see them is to put a telescope in space.

In the Coral Triangle, clownfish figured out how to share

Clownfish and anemones depend on one another. The stinging tentacles of the anemones provide clownfish with protection against predators. In return, the fish keep the anemone clean and provide nutrients, in the form of poop. Usually, several individual clownfish occupy a single anemone — a large and dominant female, an adult male and several subordinates — all from the same species. But with 28 species of clownfish and 10 species of anemone, there can be a lot of competition for who gets to occupy which anemone.

In the highly diverse waters of the Coral Triangle of Southeast Asia, however, clownfish have figured out how to share, researchers report March 30 in the Proceedings of the Royal Society B. Anemones in these waters are often home to multiple species of clownfish that live together peacefully.

From 2005 to 2014, Emma Camp of the University of Technology Sydney and colleagues gathered data on clownfish and their anemone homes from 20 locations that had more than one resident clownfish species. In 981 underwater survey transects, they encountered 1,508 clownfish, 377 of which lived in groups consisting of two or more fish species in a single anemone.

Most of those cohabiting clownfish could be found in the waters of the Coral Triangle, the team found, with the highest levels of species cohabitation occurring off Hoga Island in Indonesia. There, the researchers found 437 clownfish from six species living among 114 anemones of five species. Every anemone was occupied by clownfish, and half had two species of the fish.

In general, “when the number of clownfish species exceeded the number of host anemone species, cohabitation was almost always documented,” the researchers write.

The multiple-species groups divvied up space in an anemone similar to the way that a single-species group does, with subordinate fish sticking to the peripheries. That way, those subordinate fish can avoid fights — and potentially getting kicked off the anemone or even dying. “Living on the periphery of an anemone, despite the higher risk of predation, is a better option than having no host anemone,” the team writes.

These multi-species groups might even be better for both of the clownfish species, since they wouldn’t have to compete so much over mates, and perhaps even less over food, if the species had different diets.

This isn’t the first time that scientists have found cohabitation to be an effective strategy in an area of high biodiversity. This has also been demonstrated with scorpions in the Amazon. But it does show how important it is to conserve species in regions such as this, the researchers say — because losing one species can easily wipe out several more.

Lip-readers ‘hear’ silent words

NEW YORK — Lip-readers’ minds seem to “hear” the words their eyes see being formed. And the better a person is at lipreading, the more neural activity there is in the brain’s auditory cortex, scientists reported April 4 at the annual meeting of the Cognitive Neuroscience Society.

Earlier studies have found that auditory brain areas are active during lipreading. But most of those studies focused on small bits of language — simple sentences or even single words, said study coauthor Satu Saalasti of Aalto University in Finland. In contrast, Saalasti and colleagues studied lipreading in more natural situations. Twenty-nine people read the silent lips of a person who spoke Finnish for eight minutes in a video. “We can all lip-read to some extent,” Saalasti said, and the participants, who had no lipreading experience, varied widely in their comprehension of the eight-minute story.

In the best lip-readers, activity in the auditory cortex was quite similar to that evoked when the story was read aloud, brain scans revealed. The results suggest that lipreading success depends on a person’s ability to “hear” the words formed by moving lips, Saalasti said.

Zika’s role as a cause of severe birth defects confirmed

It’s official: Zika virus causes microcephaly and other birth defects.

A new analysis by the U.S. Centers for Disease Control and Prevention confirms what many earlier studies had suggested: The virus, typically passed via the bite of an infected mosquito, can travel from a pregnant woman to her fetus and wreak havoc in the brain.

“There is no longer any doubt that Zika causes microcephaly,” CDC director Tom Frieden said in a news briefing Wednesday. The findings, reported April 13 in the New England Journal of Medicine, follow a March 31 report from the World Health Organization that concluded nearly the same thing.

Because the connection between a mosquito-borne illness and such birth defects is so unprecedented, the CDC took time to carefully weigh the evidence, Frieden said. “Never before in history has there been a situation where a bite from a mosquito could result in a devastating malformation.”

In the NEJM analysis, researchers factored in molecular, epidemiological and clinical data, including recent reports of babies born with microcephaly in Colombia. The country has been suffering from a Zika outbreak for months, and thousands of pregnant women have been infected with the virus. Based on what scientists know about the virus, now is about the time they would have expected to see birth defects, said CDC public health researcher and study coauthor Sonja Rasmussen. WHO reports 50 cases of microcephaly in Colombia, seven of which have a confirmed link to Zika.

Researchers still can’t pin down the odds that an infection during pregnancy will lead to microcephaly, though. “What we don’t know right now is if the risk is somewhere in the range of 1 percent or in the range of 30 percent,” Rasmussen said.

Scientists do believe, however, that women who aren’t pregnant would probably clear a Zika infection within eight weeks, and not have problems with future pregnancies, Rasmussen said.

Ions may be in charge of when you sleep and wake

To rewrite an Alanis Morissette song, the brain has a funny way of waking you up (and putting you to sleep). Isn’t it ionic? Some scientists think so.

Changes in ion concentrations, not nerve cell activity, switch the brain from asleep to awake and back again, researchers report in the April 29 Science. Scientists knew that levels of potassium, calcium and magnesium ions bathing brain cells changed during sleep and wakefulness. But they thought neurons — electrically active cells responsible for most of the brain’s processing power — drove those changes.
Instead, the study suggests, neurons aren’t the only sandmen or roosters in the brain. “Neuromodulator” brain chemicals, which pace neuron activity, can bypass neurons altogether to directly wake the brain or lull it to sleep by changing ion concentrations.

Scientists hadn’t found this direct connection between ions and sleep and wake before because they were mostly focused on what neurons were doing, says neuroscientist Maiken Nedergaard, who led the study. She got interested in sleep after her lab at the University of Rochester in New York found a drainage system that washes the brain during sleep (SN: 11/16/13, p. 7).

When measuring changes in the fluid between brain cells, Nedergaard and colleagues realized that ion changes followed predictable patterns: Potassium ion levels are high when mice (and presumably people) are awake, and drop during sleep. Calcium and magnesium ions follow the opposite pattern; they are higher during sleep and lower when mice are awake.
In the study, Nedergaard’s group administered a “wake cocktail” of neuromodulator chemicals to mouse brains. Levels of potassium ions floating between brain cells increased rapidly after the treatment, the researchers found. That ion change happened even when the researchers added tetrodotoxin to stop neuron activity. The results suggest that the brain chemicals — norepinephrine, acetylcholine, dopamine, orexin and histamine — directly affect ion levels with no help from neurons. Exactly how the chemicals manage ion levels still isn’t known.
Similar changes happen under anesthesia. When awake mice were anesthetized, potassium ion levels in their brains dropped sharply, while levels of calcium and magnesium rose, the researchers found. As mice awoke from anesthesia, potassium ion levels rose quickly. But calcium and magnesium levels took longer to drop. As a result, the mice “are totally confused,” says Nedergaard. “They bump into their cages, they run around and they don’t know what they are doing.”

Those results may help explain why people are groggy after waking up from anesthesia; their ion levels haven’t returned to “awake” levels yet, says Amita Sehgal, a sleep researcher at the University of Pennsylvania School of Medicine.

Learning more about how ions affect wake and sleep may eventually lead to a better understanding of sleep, consciousness and coma, Nedergaard says.

But, says neuroscientist Chiara Cirelli of the University of Wisconsin‒Madison, practical implications of the work, such as improved sleep drugs, are probably far in the future. “How they make use of it will take some time, but just knowing this is certainly very eye-opening.” It would be interesting to find out what happens to ion concentrations during REM sleep, when neurons are as active as they are when a person is awake, she says.

High-fashion goes high-tech in ‘#techstyle’

#techstyle

Through July 10
Museum of Fine Arts, Boston

You wouldn’t expect wardrobe classics like leather jackets or denim jeans at an exhibit celebrating fashion at its most forward. But “#techstyle” at the Museum of Fine Arts in Boston features those sartorial mainstays and others, each with a technological twist.

A feast for the eyes, the diversity of pieces is matched by the diversity of artists and approaches. Yet a single theme unites: The fusion of technology and fashion will increasingly influence both. Visitors are introduced to this theme via a room featuring works by prominent designers already known for merging fashion and tech: A digitally printed silk dress by Alexander McQueen hangs next to a fiberglass “airplane dress” by Hussein Chalayan that has flaps that open and shut via remote control.
The largest part of the exhibit focuses on how technology is changing design and construction strategies. In addition to clothes made with mainstream techniques like laser-cutting, several 3-D printed garments are on display. These include a kinematic dress made of more than 1,600 interlocking pieces that can be customized to a wearer’s body via a 3-D scan. The dress comes off the printer fully assembled. Other pieces are made with technologies still being developed, such as the laser-welded fabrics from sustainable textile researcher Kate Goldsworthy.
The real standouts are in the “Performance” section, which displays attire that uses data from the immediate environment to generate some visible aspect of the garment. These interactive pieces “reveal something to the eye that you wouldn’t see normally, something that science often captures with graphs and charts,” says Pamela Parmal, a curator of the exhibit. For instance, the interactive dress “Incertitudes” is adorned with pins that flex in response to nearby voices, creating waves in the fabric; a dress embedded with thousands of tiny LEDs can display tweeted messages or other illuminated patterns.
And there are two leather jackets that, at first glance, look like their innovation is merely a stylish cut. But the jackets are coated in reactive inks that shimmer with iridescent colors in response to the wind and heat generated by heat guns in the display case. (These creations were born after designer and trained chemist Lauren Bowker used the reactive compounds to reveal the aerodynamics of race cars in a wind tunnel in a project for Formula One.)

Visitors seeking in-depth explanations of the science behind the fashions will have to look elsewhere. But “#techstyle” still has something for everyone, whether fashionista or engineer. And while the fashions represented are all cutting edge, the show harks back to an era when clothes were custom-made. Technology might have brought us mass-produced cookie-cutter clothing, but it can also enable clothing tailored to the individual.

Bayesian reasoning implicated in some mental disorders

From within the dark confines of the skull, the brain builds its own version of reality. By weaving together expectations and information gleaned from the senses, the brain creates a story about the outside world. For most of us, the brain is a skilled storyteller, but to spin a sensible yarn, it has to fill in some details itself.

“The brain is a guessing machine, trying at each moment of time to guess what is out there,” says computational neuroscientist Peggy Seriès.
Guesses just slightly off — like mistaking a smile for a smirk — rarely cause harm. But guessing gone seriously awry may play a part in mental illnesses such as schizophrenia, autism and even anxiety disorders, Seriès and other neuroscientists suspect. They say that a mathematical expression known as Bayes’ theorem — which quantifies how prior expectations can be combined with current evidence — may provide novel insights into pernicious mental problems that have so far defied explanation.
Bayes’ theorem “offers a new vocabulary, new tools and a new way to look at things,” says Seriès, of the University of Edinburgh.

Experiments guided by Bayesian math reveal that the guessing process differs in people with some disorders. People with schizophrenia, for instance, can have trouble tying together their expectations with what their senses detect. And people with autism and high anxiety don’t flexibly update their expectations about the world, some lab experiments suggest. That missed step can muddy their decision-making abilities.
Given the complexity of mental disorders such as schizophrenia and autism, it is no surprise that many theories of how the brain works have fallen short, says psychiatrist and neuroscientist Rick Adams of University College London. Current explanations for the disorders are often vague and untestable. Against that frustrating backdrop, Adams sees great promise in a strong mathematical theory, one that can be used to make predictions and actually test them.

“It’s really a step up from the old-style cognitive psychology approach, where you had flowcharts with boxes and labels on them with things like ‘attention’ or ‘reading,’ but nobody having any idea about what was going on in [any] box,” Adams says.

Applying math to mental disorders “is a very young field,” he adds, pointing to Computational Psychiatry, which plans to publish its first issue this summer. “You know a field is young when it gets its first journal.”

A mind for math
Bayesian reasoning may be new to the mental illness scene, but the math itself has been around for centuries. First described by the Rev. Thomas Bayes in the 18th century, this computational approach truly embraces history: Evidence based on previous experience, known as a “prior,” is essential to arriving at a good answer, Bayes argued. He may have been surprised to see his math meticulously applied to people with mental illness, but the logic holds. To make a solid guess about what’s happening in the world, the brain must not rely just on current input from occasionally unreliable senses. The brain must also use its knowledge about what has happened before. Merging these two streams of information correctly is at the heart of perceiving the world as accurately as possible.

Bayes figured out a way to put numbers to this process. By combining probabilities that come from prior evidence and current observations, Bayes’ formula can be used to calculate an overall estimate of the likelihood that a given suspicion is true. A properly functioning brain seems to do this calculation intuitively, behaving in many cases like a skilled Bayesian statistician, some studies show (SN: 10/8/11, p. 18).
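
Written out (this is just the standard statement of the theorem, not anything specific to the psychiatric work described here), the formula says that the probability of a hypothesis H given data D is P(H | D) = P(D | H) × P(H) / P(D), where P(H) is the prior and P(D | H) measures how well the hypothesis predicts the current observations.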

This reckoning requires the brain to give the right amount of weight to prior expectations and current information. Depending on the circumstances, those weights change. When the senses falter, for instance, the brain should lean more heavily on prior expectations. Say the mail carrier comes each day at 4 p.m. On a stormy afternoon when visual cues are bad, we rely less on sight and more on prior knowledge to guess that the late-afternoon noise on the front porch is probably the mail carrier delivering letters. In certain mental illnesses, this flexible balancing act may falter.
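
When both the prior expectation and the sensory evidence are treated as Gaussian estimates, this balancing act takes a standard form: each source of information is weighted by its reliability, that is, by its precision (one over its variance). The Python sketch below applies that weighting to the mail carrier example; all of the numbers are made up for illustration and come from none of the studies discussed here.

```python
# A minimal sketch of precision-weighted Bayesian averaging of a prior and an observation.
# Scenario and numbers are illustrative assumptions only.
# Prior belief: the mail carrier arrives around 4:00 p.m. (16.0 hours), give or take 15 minutes.
# Observation: a noise on the porch seems to happen at 4:30 (16.5 hours).

def combine(prior_mean, prior_sd, obs, obs_sd):
    """Posterior mean and standard deviation for a Gaussian prior combined with a
    Gaussian observation; each source is weighted by its precision (1 / variance)."""
    prior_precision = 1.0 / prior_sd ** 2
    obs_precision = 1.0 / obs_sd ** 2
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
    return post_mean, post_var ** 0.5

prior_mean, prior_sd = 16.0, 0.25   # expect the carrier near 4:00 p.m.
observation = 16.5                  # the noise seems to come at 4:30

# Clear day: the senses are reliable, so the evidence dominates the guess (about 16.43).
print(combine(prior_mean, prior_sd, observation, obs_sd=0.1))

# Stormy day: the senses are noisy, so the prior dominates instead (about 16.03).
print(combine(prior_mean, prior_sd, observation, obs_sd=1.0))
```

On the clear day the guess lands near the observed 4:30; on the stormy day it stays near the expected 4:00. It is this kind of reweighting that, researchers suspect, goes awry in the disorders described below.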

People with schizophrenia often suffer from hallucinations and delusions, debilitating symptoms that arise when lines between reality and imagination blur. That confusion can lead to hearing voices that aren’t there and believing things that can’t possibly be true. These departures from reality could arise from differences in how people integrate new evidence with previous beliefs.
There’s evidence for such distorted calculations. People with schizophrenia don’t fall for certain visual illusions that trick most people, for instance. When shown a picture of the inside of a hollowed-out face mask, most people’s brains mistakenly convert the image to a face that pops outward off the page. People with schizophrenia, however, are more likely to see the face as it actually is — a concave mask. In that instance, people with schizophrenia give more weight to information that’s coming from their eyes than to their expectation that noses protrude from the rest of the face.
To complicate matters, the opposite can be true, too, says neuropsychologist Chris Frith of the Wellcome Trust Centre for Neuroimaging at University College London. “In this case, their prior is too weak, but in other cases, their prior is too strong,” he says.

In a recent study, healthy people and those who recently began experiencing psychosis, a symptom of schizophrenia, were shown confusing shadowy black-and-white images. Participants then saw color versions of the images that were easier to interpret. When shown the black-and-white images again, people with early psychosis were better at identifying the images, suggesting that they used their prior knowledge — the color pictures — to truly “see” the images. For people without psychosis, the color images weren’t as much help. That difference suggests that the way people with schizophrenia balance past knowledge and present observations is distinct from the behavior of people without the disorder. Sometimes the balance tips too far — in either direction.

In a talk at the annual Computational and Systems Neuroscience meeting in February in Salt Lake City, Seriès described the results of a different visual test: A small group of people with schizophrenia had to describe which way a series of dots were moving on a screen. The dots moved in some directions more frequently than others — a statistical feature that let the scientists see how well people could learn to predict the dots’ directions. The 11 people with schizophrenia seemed just as good at learning which way the dots were likely to move as the 10 people without, Seriès said. In this situation, people with schizophrenia seemed able to learn priors just fine.

But when another trick was added, a split between the two groups emerged. Sometimes, the dots were almost impossible to see, and sometimes, there were no dots at all. People with schizophrenia were less likely to claim that they saw dots when the screen was blank. Perhaps they didn’t hallucinate dots because of the medication they were on, Seriès says. In fact, very early results from unmedicated people with schizophrenia suggest that they actually see dots that aren’t there more often than healthy volunteers do.
Preliminary results so far on schizophrenia are sparse and occasionally conflicting, Seriès admits. “It’s the beginning,” she says. “We don’t understand much.”

The research is so early that no straightforward story exists yet. But that’s not unexpected. “If 100 years of schizophrenia research have taught us anything, it’s that there’s not going to be a nice, simple explanation,” Adams says. But using math to describe how people perceive the world may lead to new hunches about how that process goes wrong in mental illnesses, he argues.

“You can instill expectations in subjects in many different ways, and you can control what evidence they see,” Adams says. Bayesian theory “tells you what they should conclude from those prior beliefs and that evidence.” If their conclusions diverge from predictions, scientists can take the next step. Brain scans, for instance, may reveal how the wrong answers arise. With a clear description of these differences, he says, “we might be able to measure people’s cognition in a new way, and diagnose their disorders in a new way.”

Now vs. then
The way the brain combines incoming sensory information with existing knowledge may also be different in autism, some researchers argue. In some cases, people with autism might put excess weight on what their senses take in about the world and rely less on their expectations. Old observations fit with this idea. In the 1960s, psychologists had discovered that children with autism were just as good at remembering nonsense sentences (“By is go tree stroke lets”) as meaningful ones (“The fish swims in the pond”). Children without autism struggled to remember the non sequiturs. But the children with autism weren’t thrown by the random string of words, suggesting that their expectations of sentence meaning weren’t as strong as their ability to home in on each word in the series.

Another study supports the notion that sensory information takes priority in people with autism. People with and without autism were asked to judge whether a sight and a sound happened at the same time. They saw a white ring on a screen, and a tone played before, after or at the same time. Adults without autism were influenced by previous trials in which the ring and tone were slightly off. But adults with autism were not swayed by earlier trials, researchers reported in February in Scientific Reports.

This literal perception might get in the way of speech perception, Marco Turi of the University of Pisa in Italy and colleagues suggest. Comprehending speech requires a listener to mentally stitch together sights and sounds that may not arrive at the eyes and ears at the same time. Losing that flexibility could make speech harder to understand.

A different study found that children with autism perceive moving dots more clearly than children without autism (SN Online: 5/5/15). The brains of people with autism seem to prioritize incoming sensory information over expectations about how things ought to work. Elizabeth Pellicano of University College London and David Burr of the University of Western Australia in Perth described the concept in 2012 in an opinion paper in Trends in Cognitive Sciences. Intensely attuned to information streaming in from the senses, people with autism experience the world as “too real,” Pellicano and Burr wrote.

New data, however, caution against a too-simple explanation. In an experiment presented in New York City in April at the annual meeting of the Cognitive Neuroscience Society, 20 adults with and without autism had to quickly hit a certain key on a keyboard when they saw its associated target on a screen. Their job was made easier because the targets came in a certain sequence. All of the participants improved as they learned which keys to expect. But when the sequence changed to a new one, people with autism faltered. This result suggests that they learned prior expectations just fine, but had trouble updating them as conditions changed, said cognitive neuroscientist Owen Parsons of the University of Cambridge.
Distorted calculations — and the altered versions of the world they create — may also play a role in depression and anxiety, some researchers think. While suffering from depression, people may hold on to distorted priors — believing that good things are out of reach, for instance. And people with high anxiety can have trouble making good choices in a volatile environment, neuroscientist Sonia Bishop of the University of California, Berkeley and colleagues reported in 2015 in Nature Neuroscience.

In their experiment, people had to choose a shape, which sometimes came with a shock. People with low anxiety quickly learned to avoid the shock, even when the relationship between shape and shock changed. But people with high anxiety performed worse when those relationships changed, the researchers found. “High-anxious individuals didn’t seem able to adjust their learning to handle how volatile or how stable the environment was,” Bishop says.
Scientists can’t yet say what causes this difficulty adjusting to a new environment in anxious people and in people with autism. It could be that once some rule is learned (a sequence of computer keys, or the link between a shape and a shock), these two groups struggle to update that prior with newer information.

This rigidity might actually contribute to anxiety in the first place, Bishop speculates. “When something unexpected happens that is bad, you wouldn’t know how to respond,” and that floundering “is likely to be a huge source of anxiety and stress.”

Recalculating
“There’s been a lot of frustration with a failure to make progress” on psychiatric disorders, Bishop says. Fitting mathematical theories to the brain may be a way to move forward. Researchers “are very excited about computational psychiatry in general,” she says.

Computational psychiatrist Quentin Huys of the University of Zurich is one of those people. Math can help clarify mental illnesses in a way that existing approaches can’t, he says. In the March issue of Nature Neuroscience, Huys and colleagues argued that math can demystify psychiatric disorders, and that thinking of the brain as a Bayesian number cruncher might lead to a more rigorous understanding of mental illness. Huys says that a computational approach is essential. “We can’t get away without it.” If people with high anxiety perform differently on a perceptual test, then that test could be used to both diagnose people and monitor how well a treatment works, for instance.

Scientists hope that a deeper description of mental illnesses may lead to clearer ways to identify a disorder, chart how well treatments work and even improve therapies. Bishop raises the possibility of developing apps to help people with high anxiety evaluate situations — outsourcing the decision making for people who have trouble. Frith points out that cognitive behavioral therapy could help depressed people recalculate their experiences by putting less weight on negative experiences and perhaps breaking out of cycles of despondence.

Beyond these potential interventions, simply explaining to people how their brains are working might ease distress, Adams says. “If you can give people an explanation that makes sense of some of the experiences they’ve had, that can be a profoundly helpful thing,” he says. “It destigmatizes the experience.”

The center of Earth is younger than the outer surface

Our home planet is young at heart. According to new calculations, Earth’s center is more than two years younger than its surface.

In Einstein’s general theory of relativity, massive objects warp the fabric of spacetime, creating a gravitational pull and slowing time nearby. So a clock placed at Earth’s center will tick ever-so-slightly slower than a clock at its surface. Such time shifts are determined by the gravitational potential, a measure of the amount of work it would take to move an object from one place to another. Since climbing up from Earth’s center would be a struggle against gravity, clocks down deep would run slow relative to surface timepieces.
Over the 4.5 billion years of Earth’s history, the gradual shaving off of fractions of a second adds up to a core that’s 2.5 years younger than the planet’s crust, researchers estimate in the May European Journal of Physics. Theoretical physicist Richard Feynman had suggested in the 1960s that the core was younger, but only by a few days.
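
For readers who want to see roughly where the number comes from, here is a back-of-the-envelope sketch in Python. It assumes a uniform-density Earth, for which the gravitational potential difference between the surface and the center is GM/(2R); the published 2.5-year figure uses a realistic density profile, whose denser core deepens the central potential and enlarges the gap.

```python
# Rough estimate of how much younger Earth's center is than its surface.
# Assumes a uniform-density Earth; a realistic density profile gives roughly 2.5 years.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth's mass, kg
R = 6.371e6            # Earth's mean radius, m
c = 2.998e8            # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7
AGE = 4.5e9 * SECONDS_PER_YEAR   # Earth's age in seconds

# For a uniform sphere, the potential difference between surface and center is GM/(2R);
# dividing by c^2 gives the fractional rate difference between a clock at the center
# and one at the surface.
fractional_slowdown = G * M / (2 * R * c ** 2)
age_gap_seconds = fractional_slowdown * AGE

print(f"fractional slowdown at the center: {fractional_slowdown:.2e}")                   # ~3.5e-10
print(f"age gap over 4.5 billion years: {age_gap_seconds / SECONDS_PER_YEAR:.1f} years")  # ~1.6
```

The uniform-density assumption lands around a year and a half; the rest of the published figure comes from Earth's actual mass distribution.
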
The new calculation neglects geological processes, which have a larger impact on the planet’s age. For example, Earth’s core probably formed earlier than its crust. Instead, says study author Ulrik Uggerhøj of Aarhus University in Denmark, the calculation serves as an illustration of gravity’s influence on time — very close to home.