Scientists investigating what keeps lungs from overinflating can quit holding their breath.
Experiments in mice have identified a protein that senses when the lungs are full of air. This protein helps regulate breathing in adult mice and gets breathing going in newborn mice, researchers report online December 21 in Nature.
If the protein plays a similar role in people — and a few studies suggest that it does — exploring its activity could help explain disorders such as sleep apnea or chronic obstructive pulmonary disease. “These are extremely well done, very elegant studies,” says neonatologist Shabih Hasan of the University of Calgary in Canada, a specialist in breathing disorders in newborns. Researchers knew that feedback between the lungs and brain maintains normal breathing. But “this research gives us an understanding at the cellular level,” says Hasan. “It’s a major advance.”
Called Piezo2, the protein forms channels in the membranes of nerve cells in the lungs. When the lungs stretch, the Piezo2 channels detect the distortion caused by the mechanical force of breathing and spring open, triggering the nerves to send a signal.
Led by neuroscientist Ardem Patapoutian, researchers discovered that the channels send signals along three different pathways. Mice bred to lack Piezo2 in a cluster of nerve cells that send messages to the spinal cord had trouble breathing and died within 24 hours. Similarly, newborn mice missing Piezo2 channels in nerves that communicate with the brain stem via a structure called the jugular ganglion also died.
Mice lacking Piezo2 in the nodose ganglion, a structure that also links to the brain stem, lived to adulthood. But their breathing was abnormal and an important safety mechanism in the lungs of these mice didn’t work. Called the Hering-Breuer reflex, it kicks in when the lungs are in danger of overinflating. When functioning properly, Piezo2’s signal prevents potentially harmful overinflation by temporarily halting breathing. Known as apnea, this cessation of breathing can be dangerous in other instances but prevents damage in this case.
“Breathing is a mechanical process,” says Patapoutian, a Howard Hughes Medical Institute investigator at the Scripps Research Institute in La Jolla, Calif. “Intuitively, you could imagine that lung stretch sensors could play an important role in regulating breathing pattern. Amazingly, however, no definitive proof for such a feedback mechanism existed.”
Previous work in mice by Patapoutian and colleagues found that Piezo2 channels play a major role in sensing touch. The channels also function in proprioception, the sense of where body parts are in relation to each other, Patapoutian and colleagues reported last year.
Two recent studies by different research teams have found that people with mutations in a Piezo2 gene have problems with touch, proprioception and, in one study, breathing. Although small, the studies suggested that investigating Piezo2 in people could shed light on breathing disorders or other problems. The protein channels might play a role in sensing the “fullness” of the stomach and bladder and perhaps other mechanical processes such as heart rate control, Patapoutian says.
Investigating Piezo2 could also help explain how newborn lungs transition from being fluid-filled to breathing air, says neuroscientist Christo Goridis of the École des Neurosciences Paris.
New research is stirring the pot about an ancient Egyptian burial practice.
Many ancient peoples, including Egyptians, buried some of their dead in ceramic pots or urns. Researchers have long thought these pot burials, which often recycled containers used for domestic purposes, were a common, make-do burial for poor children.
But at least in ancient Egypt, the practice was not limited to children or to impoverished families, according to a new analysis. Bioarchaeologist Ronika Power and Egyptologist Yann Tristant, both of Macquarie University in Sydney, reviewed published accounts of pot burials at 46 sites, most near the Nile River and dating from about 3300 B.C. to 1650 B.C. Their results appear in the December Antiquity. A little over half of the sites contained the remains of adults. For children, pot burials were less common than expected: Of 746 children, infants and fetuses interred in some type of burial container, 338 were buried in wooden coffins despite wood’s relative scarcity and cost. Another 329 were buried in pots. Most of the rest were placed in baskets or, in a few cases, containers fashioned from materials such as reeds or limestone.
In the tomb of a wealthy governor, an infant was found in a pot containing beads covered in gold foil. Other pot burials held myriad goods — gold, ivory, ostrich eggshell beads, clothing or ceramics. Some bodies were placed directly into urns; in other cases, pots were broken or cut to fit the deceased.
People deliberately chose the containers, in part for symbolic reasons, the researchers now propose. The hollow vessels, which echo the womb, may have been used to represent a rebirth into the afterlife, the scientists say.
Everybody wants more juice from their batteries. Smartphones and laptops always need recharging. Electric car drivers must carefully plan their routes to avoid being stranded far from a charging station. Anyone who struggles with a tangle of chargers every night would prefer a battery that can last for weeks or months.
For researchers who specialize in batteries, though, the drive for a better battery is less about the luxury of an always-charged iPad (though that would be nice) and more about kicking our fossil fuel habit. Given the right battery, smog-belching cars and trucks could be replaced with vehicles that run whisper-quiet on electricity alone. No gasoline engine, no emissions. Even airplanes could go electric. And the power grid could be modernized to use cheaper, greener fuels such as sunlight or wind even on days when the sun doesn’t shine bright enough or the wind doesn’t blow hard enough to meet electricity demand.
A better battery has the potential to jolt people into the future, just like the lithium-ion battery did. When they became popular in the early 1990s, lithium-ion batteries offered twice as much energy as the next best alternative. They changed the way people communicate.
“What the lithium-ion battery did to personal electronics was transformational,” says materials scientist George Crabtree, director of the Joint Center for Energy Storage Research at Argonne National Laboratory in Illinois. “The cell phone not only made landlines obsolete for many, but [the lithium-ion battery] put cameras and the internet into the hands of millions.” That huge leap didn’t happen overnight. “It was the sum of many incremental steps forward, and decades of work,” says Crabtree, who coordinates battery research in dozens of U.S. labs.
Lithium-ion batteries have their limits, however, especially for use in the power grid and in electric vehicles. Fortunately, like their Energizer mascot, battery researchers never rest. Over the last 10 years, universities, tech companies and car manufacturers have explored hundreds of new battery technologies, reaching for an elusive and technically difficult goal: next-generation batteries that hold more energy, last longer and are cheaper, safer and easier to recharge.
A decade of incremental steps is beginning to pay off. In late 2017, scientists will introduce a handful of prototype batteries to be developed by manufacturers for potential commercialization. Some contain new ingredients — sulfur and magnesium — that help store energy more efficiently, delivering power for longer periods. Others employ new designs. “These prototypes are proof-of-principle batteries, miniature working versions,” Crabtree says. Getting the batteries into consumer hands will take five to 10 years. Making leaps in battery technology, he says, is surprisingly hard to do.
Power struggle
Batteries operate like small chemical plants. Technically, a battery is a combination of two or more “electrochemical cells” in which energy released by chemical reactions produces a flow of electrons. The greater the energy produced by the chemical reactions, the greater the electron flow. Those electrons provide a current to whatever the battery is powering — kitchen clock, smoke alarm, car engine.
To power any such device, the electrons must flow through a circuit connecting two electrodes, known as an anode and a cathode, separated by a substance called an electrolyte. At the anode, chemical oxidation reactions release electrons. At the cathode, electrons are taken up in reduction reactions. The electrolyte enables ions created by the oxidation and reduction reactions to pass back and forth between the two electrodes, completing the circuit.
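To make the oxidation-reduction picture concrete, here is a sketch of the discharge half-reactions in one common lithium-ion chemistry (a graphite anode paired with a lithium cobalt oxide cathode; the specific materials are an illustration, not something the article specifies):
At the anode: LiC6 → C6 + Li+ + e−
At the cathode: CoO2 + Li+ + e− → LiCoO2
The electron travels through the external circuit to do useful work, while the lithium ion crosses the electrolyte to meet it at the cathode.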
Depending on the materials used for the electrodes and the electrolyte, a battery may be recharged by supplying current that drives the chemical reactions in reverse. In creating new recipes for a rechargeable electrochemical soup, though, battery researchers must beware of side reactions that can spoil everything. “There’s the chemical reaction you want — the one that stores energy and releases it,” Crabtree says. “But there are dozens of other … reactions that also take place.” Those side reactions can disable a battery, or worse, lead to a risk of catastrophic discharge. (Consider the recent fires in Samsung’s Galaxy Note 7 smartphones.)
Early versions of the lithium-ion battery from the 1970s carried an anode made of pure lithium metal. Through repeated use, lithium ions were stripped off and replated onto the anode, creating fingerlike extensions that reached across to the cathode, shorting out the battery. Today’s lithium-ion batteries have an anode made of graphite (a form of carbon) so that loose lithium ions can snuggle in between sheets of carbon atoms.
Lithium-ion batteries were originally developed with small electronics in mind; they weren’t designed for storing electricity on the grid or powering electric vehicles. Electric cars need lots of power, a quick burst of energy to move from stop to start. Electric car manufacturers now bundle thousands of such batteries together to provide power for up to 200 miles before recharging, but that range still falls far short of what a tank of gas can offer. And lithium-ion batteries drain too quickly to feed long hours of demand on the grid.
Simply popping more batteries into a car or the grid isn’t the answer, Crabtree says. Stockpiling doesn’t improve the charging time or the lifetime of the battery. It’s also bulky. Carmakers have to leave drivers room for their passengers plus some trunk space. To make electric vehicles competitive with, or better than, vehicles run by internal-combustion engines, manufacturers will need low-cost, high-energy batteries that last up to 15 years. Likewise, grid batteries need to store energy for later use at low cost, and stand up to decades of use.
“There’s no one battery that’s going to meet all our needs,” says MIT materials scientist Yet-Ming Chiang. Batteries needed for portable devices are very different from those needed for transportation or grid-scale storage. Expect to see a variety of new battery types, each designed for a specific application.
Switching to sulfur
For electric vehicles, lithium-sulfur batteries are the next great hope. The cathode is made mainly of sulfur, an industrial waste product that is cheap, abundant and environmentally friendly. The anode is made of lithium metal.
During discharge, lithium ions break away from the anode and swim through the liquid electrolyte to reach the sulfur cathode. There, the ions form a covalent bond with the sulfur atoms. Each sulfur atom bonds to two lithium ions, rather than just one, doubling the number of bonds in the cathode of the battery. More chemical bonds mean more stored energy, so a lithium-sulfur battery creates more juice than a lithium-ion one. That, combined with sulfur’s light weight, means that, in principle, manufacturers can pack more punch for a given amount of weight, storing four or five times as much energy per gram.
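A rough calculation with standard textbook numbers (not figures given in the article) shows where that severalfold gain comes from. The overall discharge reaction is approximately 16 Li + S8 → 8 Li2S, so each sulfur atom, with a molar mass of about 32 grams per mole, takes up two electrons. That works out to a theoretical capacity of roughly (2 × 96,485 coulombs) ÷ 32 grams ≈ 6,030 coulombs per gram, or about 1,675 milliamp-hours per gram of sulfur, several times the few hundred milliamp-hours per gram offered by conventional lithium-ion electrode materials.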
Ultimately, that upgrade could boost an electric vehicle’s range up to as much as 500 miles on a single charge. But first, researchers have to get past the short lifetime of existing lithium-sulfur batteries, which, Crabtree says, is due to a loss of lithium and sulfur in each charge-discharge cycle.
When lithium combines with sulfur, it also forms compounds called polysulfides, which quickly gum up the battery’s insides. Polysulfides form within the cathode during battery discharge, when stored energy is released. Once they dissolve in the battery’s liquid electrolyte, the polysulfides shuttle to the anode and react with it, forming a film that renders the battery useless within a few dozen cycles — or one to two months of use.
At Sandia National Laboratories in Albuquerque, N.M., a team led by Kevin Zavadil is trying to block the formation of polysulfides in the electrolyte. The electrolyte consists of salt and a solvent, and current lithium-sulfur batteries require a large amount of electrolyte to achieve a moderate life span. Zavadil and his team are developing “lean” electrolyte mixtures less likely to dissolve the sulfur molecules that create polysulfides.
Described September 9 in ACS Energy Letters, the new electrolyte mix contains a higher-than-usual salt concentration and a “sparing” amount of solvent. The researchers also reduced the overall amount of electrolyte in the batteries. In test runs, the tweaks dropped the concentration of polysulfides by several orders of magnitude, Zavadil says. “We [also] have some ideas on how to use membranes to protect the lithium surface to prevent polysulfides from happening in the first place,” Zavadil says. The goal is to produce a working prototype of the new battery — one that can last through thousands of cycles — by the end of 2017.
At the University of Texas at Austin, materials engineer Guihua Yu, along with colleagues at Zhejiang University of Technology in Hangzhou, China, is investigating another work-around for this battery type: replacing the solid sulfur cathode with an intricate structure that encapsulates the sulfur in an array of nanotubes. As reported in the November issue of Nano Letters, the nanotubes that encase the sulfur are fashioned from manganese dioxide, a material that can attract and hold on to polysulfides. The nanotubes are coated with polypyrrole, a conductive polymer that helps boost the flow of electrons.
This approach reduces buildup and boosts overall conductivity and efficiency, Yu says. So far, with the group’s new architecture, the battery loses less than 0.07 percent of its capacity per charge and discharge cycle. After 500 cycles, the battery maintained about 65 percent of its original capacity, a great improvement over the short lifetime of current lithium-sulfur batteries. Still, for use in electric vehicles, scientists want batteries that can last through thousands of cycles, or 10 to 15 years.
Scientists at Argonne are constructing another battery type: one that replaces lithium ions at the anode with magnesium. This switch could instantly boost the electrical energy released for the same volume, says Argonne materials engineer Brian Ingram, noting that a magnesium ion has a charge of +2, double lithium’s +1. Magnesium’s ability to produce twice the electrical current of lithium ions could allow for smaller, more energy-dense batteries, Ingram says.
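The arithmetic behind that claim is a simple application of Faraday’s law (a standard estimate, not one quoted in the article): the charge a mole of ions can deliver is Q = z × F, where z is the ion’s charge and F ≈ 96,485 coulombs per mole. For lithium, z = 1 and Q ≈ 96,500 coulombs; for magnesium, z = 2 and Q ≈ 193,000 coulombs, so each magnesium ion shuttled between the electrodes carries twice the charge.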
Magnesium comes with its own challenge, however. Whereas lithium ions zip through a battery’s electrolyte, magnesium ions slowly trudge. A team of researchers at Argonne, Northwestern University and Oak Ridge National Laboratory shot high-energy X-rays at magnesium in various batteries and learned that the drag is due to interactions with molecules that the magnesium attracts within the electrolyte. Ingram and his group are experimenting with new materials to find a molecular recipe that reduces such drag.
Ingram’s team is trying to nudge its “highly functioning, long-lasting” prototype to 3 volts by December. Today’s typical lithium-ion battery has 3.8 to 4 volts. At 3 volts, Ingram says, the magnesium battery would pack more power than a 4-volt lithium-ion battery and “create a tremendous amount of excitement within the field.”
Going with the flow
Together, transportation and the electricity grid account for about two-thirds of U.S. energy use. But today, only 10 percent of the electricity on the grid is from renewable sources, according to the U.S. Energy Information Administration. If wind and solar power are ever to wrest energy production away from fossil fuels, big changes must happen in energy storage. What is needed, Crabtree says, is a battery that can store energy, and lots of it, for later use. “Though the sun shines in the middle of the afternoon, peak demand comes at sunset when people go home, turn on lights and cook dinner,” he says.
To reliably supply electricity at night or on cloudy, windless days requires a different type of battery. By design, flow batteries fit the bill. Instead of having solid electrodes, flow batteries store energy in two separate tanks filled with chemicals — one positively charged, the other negatively charged. Pumps move the liquids from the tanks into a central chamber, or “stack,” where dissolved molecules in the liquids undergo chemical reactions that store and give up energy. A membrane located in the stack keeps the positive and negative ions separated. Flow batteries can store energy for a long time and provide power as needed. Because the energy-storing liquids are kept in external tanks, the batteries are unlikely to catch fire, and can be built large or small depending on need. To store more power, use a larger tank.
So far, however, flow batteries are expensive to make and maintain, and have had limited use for providing backup power on the grid. Today’s flow batteries are packed with rare and toxic metal components, usually vanadium. With many moving parts — tanks, pumps, seals and sensors — breakdowns and leakage are common.
At MIT, Chiang and colleagues are developing flow batteries that can bypass those drawbacks. One, an hourglass flow battery, does away with the need for costly and troublesome pumps. The stack where the chemical reactions occur is in the constricted middle, with tanks at either end. Gravity allows the liquids to flow through the stack, like sand in an hourglass. A motor adjusts the battery’s angle to speed or slow the flow.
The hourglass design is like a “concept car,” Chiang says. Though the final product is likely to take a slightly different shape, the design could serve as a model for future flow batteries. Simply changing the tilt of the device could add a short infusion of power to the grid during periods of peak demand, or slowly release energy over a period of many hours to keep air conditioners and heaters running when the sun is down.
In another design, the group has replaced vanadium with sulfur, which is inexpensive and abundant. Dissolved in water (also cheap and plentiful), sulfur is circulated into and out of the battery’s stack, creating a reaction that stores or gives up energy, similar to commercial flow batteries. The group is now refining the battery, first described in 2014 in Nano Letters, aiming for higher levels of energy.
Another challenge in developing flow batteries is finding ways to keep active materials confined to their tanks. That’s the job of the battery membrane, but because the organic molecules under consideration for battery use are almost always small, they too easily slip through the membrane, reducing the battery’s lifetime and performance. Rather than change the membrane, a group led by chemist Joaquín Rodríguez-López of the University of Illinois at Urbana-Champaign devised ways to bulk up the battery’s active materials by changing their size or configuration. The scientists linked tens to millions of active molecules together to create large, ringed structures, long strings of molecules hooked onto a polymer backbone, or suspensions of polymers containing up to a billion molecules, they reported in the Journal of the American Chemical Society in 2014.
With the oversized molecules, even “simple, inexpensive porous membranes are effective at preventing crossover,” Crabtree says. A prototype flow battery that provides low-cost power and lasts 20 to 30 years is expected to be completed in the coming year.
Getting air
Looking beyond 2017, scientists envision a new generation of batteries made of low-cost, or even no-cost, materials. The lithium-air battery, still in early development, uses oxygen sucked in from the atmosphere to drive the chemical reaction that produces electricity. In the process, oxygen combines with lithium ions to form a solid compound (lithium peroxide). During charging, the reaction reverses and the oxygen returns to its gaseous form.
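Written out, that chemistry corresponds to the standard lithium-air reaction (the equation is a textbook statement, not one taken from the paper): during discharge, 2 Li + O2 → Li2O2, the solid lithium peroxide; during charging, the reaction runs backward, Li2O2 → 2 Li + O2, releasing the oxygen again.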
“Lithium-air potentially offers the highest energy density possible,” says MIT materials engineer Ju Li. “You basically react lithium metal with oxygen in air, and in principle you get as much useful energy as gasoline.”
Lithium-air has problems, though. The batteries are hard to recharge, losing much of their power during the process. And the chemical reaction that powers the battery generates heat, cutting the battery’s energy-storage capacity and life span.
Using electron microscopy to study the reaction products of a lithium-air prototype, Li and his group came up with a possible solution: Keep oxygen in a solid form sealed within the battery to prevent the oxygen from forming a gas. By encasing oxygen and lithium in tiny glasslike particles, the scientists created a fully sealed battery. The new strategy, published July 25 online in Nature Energy, curbed energy loss during recharging and prevented heat buildup.
“If it works on a large scale, we would have an electrical vehicle that’s competitive with gasoline-driven cars,” Li says. Reaching that goal would be a big step toward a greener planet.
One hundred and ninety-two atoms have tied the knot.
Chains of carbon, hydrogen, oxygen and nitrogen atoms, woven together in a triple braid, form the most complex molecular knot ever described, chemists from the University of Manchester in England report in the Jan. 13 Science.
Learning how to tie such knots could one day help researchers weave molecular fabrics with all sorts of snazzy properties. “We might get the strength of Kevlar with a lighter and more flexible material,” says study coauthor David Leigh. That’s still a long way away, but molecular knot tying has an appeal that’s purely intellectual, too, says University of Cambridge chemist Jeremy Sanders. “It’s like the answer to why you climb Everest,” he says. “It’s a challenge.”
Mathematicians know of more than six billion types of prime knots, which, like prime numbers, cannot be broken down into simpler components. “Prime knots can’t be built up by sticking other knots together,” Leigh explains. For years, chemists were able to synthesize just one type of prime knot out of small molecules. “We thought that was pretty ridiculous,” says Leigh.
That molecular knot was a trefoil, like a three-leaf clover. Jean-Pierre Sauvage and colleagues wove it from chemical strands in 1989. Sauvage won a Nobel Prize in 2016 for earlier work that used the same principles explored in his knots (SN: 10/29/16, p. 6).
In the decades since Sauvage’s trefoil, chemists have tried to synthesize other types of molecular knots, but “they’ve always found it incredibly difficult,” says chemist Sophie Jackson, also at the University of Cambridge.
Persuading nanoscale strands to interlock together in an orderly fashion isn’t simple. “You can’t just grab the ends and tie them like you would a shoelace,” Leigh says. Instead, scientists choose molecular ingredients that assemble themselves. In 2012, Leigh and colleagues used the self-assembly technique to make a molecular pentafoil knot, a star-shaped structure made up of 160 atoms and with strands that cross five times (SN: 1/28/12, p. 12). This latest knot, with eight crossing points, is even more intricate.
Leigh’s team mixed together building blocks containing carbon, hydrogen, oxygen and nitrogen atoms with iron ions and chloride ions. “You dump them all in, heat them all up and they self-assemble,” he says. Sticky metal ions hold the building blocks in the correct position, and a single chloride ion sitting in the middle of the structure anchors it all together. Then, a chemical catalyst links the building blocks, forming the completed knot. The new knot is the tightest ever created, Leigh says, with just 24 atoms between each crossing point.
It’s beautiful, Sanders says. “It’s a string of atoms rolled up in a spherical shape, with an astonishing amount of symmetry.” Sanders is reluctant to speculate how such a knot might be used, but it’s round and very dense, he says. That could give it some interesting materials properties.
Leigh suspects that different molecular knots might behave differently, like the various knots used by fishermen and sailors. “We want to make specific knots, see what they do and then figure out how to best exploit that,” he says.
Gold’s glimmer is not the only reason the element is so captivating. For decades, scientists have puzzled over why theoretical predictions of gold’s properties don’t match up with experiments. Now, highly detailed calculations have erased the discrepancy, according to a paper published in the Jan. 13 Physical Review Letters.
At issue was the energy required to remove an electron from a gold atom, or ionize it. Theoretical calculations of this ionization energy differed from scientists’ measurements. Likewise, the energy released when adding an electron — a quantity known as the electron affinity — was also off the mark. How easily an atom gives up or accepts electrons is important for understanding how elements react with other substances. “It was well known that gold is a difficult system,” says chemist Sourav Pal of the Indian Institute of Technology Bombay, who was not involved with the study. Even gold’s most obvious feature can’t be explained without calling Einstein’s special theory of relativity into play: The theory accounts for gold’s yellowish color. (Special relativity shifts around the energy levels of electrons in gold atoms, causing the metal to absorb blue light, and thereby making reflected light appear more yellow.)
With this new study, scientists have finally resolved the lingering questions about the energy involved in removing or adding an electron to the atom. “That is the main significance of this paper,” Pal says.
Early calculations, performed in the 1990s, differed from the measured energies by more than a percent, and improved calculations since then still didn’t match the measured value. “Every time I went to a conference, people discussed that and asked, ‘What the hell is going on?’” says study coauthor Peter Schwerdtfeger, a chemist at Massey University Auckland in New Zealand.
The solution required a more complete consideration of the complex interplay among gold’s 79 electrons. Using advanced supercomputers to calculate the interactions of up to five of gold’s electrons at a time, the scientists resolved the discrepancy. Previous calculations had considered up to three electrons at a time. Also essential to include in the calculation were the effects of special relativity and the theory of quantum electrodynamics, which describes the quantum physics of particles like electrons.
The result indicates that gold indeed adheres to expectations — when calculations are detailed enough. “Quantum theory works perfectly well, and that makes me extremely happy,” says Schwerdtfeger.
WASHINGTON — Researchers have devised a test to see if pairs of black holes — famous for creating gravitational waves when they merge — themselves formed from multiple mergers of smaller black holes.
The Advanced Laser Interferometer Gravitational-Wave Observatory, LIGO, has detected spacetime ripples from two sets of merging black holes (SN: 7/9/16, p. 8). Scientists typically assume that such black holes formed in the collapse of a massive star. But in especially crowded patches of the universe, black holes could have formed over generations of unions, astrophysicist Maya Fishbach of the University of Chicago explained January 28 at a meeting of the American Physical Society. Or the merging cycle could have occurred in the very early universe, starting with primordial black holes — objects that may have formed when extremely dense pockets of matter directly collapsed.
Fishbach and colleagues studied how quickly black holes whirl around. In simulations, black holes that repeatedly merged reached a high rate of spin, the scientists found. That result didn’t depend on certain properties of the initial black holes, like whether they were spinning to begin with or not. “It’s cool,” says Fishbach. “The predictions from this in terms of spin are very robust,” making the idea easy to test.
So far, the spins of LIGO’s black holes are lower than the predictions. If the multiple merging process occurs, it could be very rare, so to conclusively test the idea would require tens to hundreds of black hole detections, Fishbach says.
WASHINGTON — Before astronomers could discover the expansion of the universe, they had to expand their minds.
When the 20th century began, astronomers not only didn’t know the universe was expanding, they didn’t even care.
“Astronomers in the late 19th century and the very start of the 20th century were very little interested in what we would call the broader universe or its history,” says historian of science Robert Smith of the University of Alberta in Canada.
Some astronomers were interested in the structure of the Milky Way galaxy, the vast collection of stars in which the sun, Earth and known planets reside. “But astronomers played next to no part in the debates at the end of the 19th century about the wider nature of the cosmos,” Smith said in a talk January 28 at a meeting of the American Physical Society. In fact, many scientists believed there was no wider cosmos. Majority opinion held that the Milky Way galaxy, more or less, constituted the entire universe.
“As far as almost all astronomers were concerned, the universe beyond our own limited system of stars was the realm of metaphysics, and working astronomers did not engage in metaphysics,” Smith said.
Astronomers left others to do the wondering.
“The infinite universe beyond our stellar system was territory that professional astronomers really were very happy to leave to mathematicians, physicists, philosophers and some popularizers,” Smith said.
Even among those groups, pre-20th century consensus limited the universe to the Milky Way and its immediate environs. Clues to the contrary were mostly dismissed. Most prominent among those clues was the existence of “spiral nebulae,” fuzzy patches of light clearly distinct from the pointlike stars. Photos of the spiral-shaped blobs suggested that they were solar systems in the making within or around the Milky Way; many people believed the galaxy was home to countless populated planets. Very few people believed that the nebulae were distant replicas of the Milky Way, galaxies in their own right. In a book published in 1890, for instance, astronomer and respected science popularizer Agnes Clerke wrote that “no competent thinker” believed that nebulae could be galaxies. She retained that view in a later edition published in 1905, Smith said.
But after around 1905, he said, the modern conception of the cosmos began to emerge. Philanthropic contributions in support of new, large telescopes, particularly in the American West, led to observations that slowly transformed the restricted view of a one-galaxy universe into the current commodious cosmos, with billions and billions (technically, gazillions) of galaxies. At Lick Observatory in California, for instance, James Keeler undertook the task of counting the spiral nebulae. At the time, astronomers knew of a few dozen. Keeler found hundreds of thousands.
“So the spiral nebulae are elevated in importance by Keeler,” Smith said.
By 1912, Vesto Slipher, at Lowell Observatory in Arizona, began reporting measurements of the light emanating from the nebulae, determining how far colors were shifted to the red end of the spectrum, a way to measure how fast the nebulae were flying away from the Earth.
“He would actually start arguing that the spiral nebulae were distant galaxies,” Smith said.
By the 1920s, more and more astronomers took the idea of distant galaxies seriously. Finally Edwin Hubble, at the Mount Wilson Observatory in Southern California, provided the deathblow to the one-galaxy universe. In 1923, his observations of the Andromeda nebula turned up a couple of Cepheid variable stars. Because Cepheids varied in brightness on a regular schedule that depended on their intrinsic brightness, they provided surefire clues to Andromeda’s distance from Earth. Andromeda resided 900,000 light-years away, vastly farther than even the most exaggerated estimates of the Milky Way’s diameter.
Hubble’s use of Cepheids depended on the earlier pioneering work of Henrietta Swan Leavitt at the Harvard observatory. “Her discovery of the period-luminosity relationship in Cepheid variable stars is absolutely fundamental in transforming people’s ideas about first, our own galactic system and second, providing the means to demonstrate that galaxies do in fact exist,” Smith said.
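The underlying logic, in modern notation (not spelled out in Smith’s talk), is that Leavitt’s relation converts a Cepheid’s pulsation period into its intrinsic brightness M; comparing M with the star’s apparent brightness m then gives the distance d through the distance modulus m − M = 5 log10(d / 10 parsecs). A few such stars were enough to show how far away Andromeda lies.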
By the end of the 1920s Hubble, combining his distance measurements with velocity measurements made by astronomer Milton Humason, had demonstrated that the farther a nebula was from Earth, the faster it appeared to fly away. That relationship formed the observational basis for the expanding universe. Hubble suggested as much in 1929. Others also realized that the new view of the cosmos implied an expanding universe; one, Georges Lemaître, proposed something very much like today’s Big Bang theory of the universe’s origin.
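That proportionality is now written as Hubble’s law (the modern shorthand, not a formula quoted in the talk): v = H0 × d, where v is a galaxy’s recession velocity, d its distance and H0 the Hubble constant.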
It took a while, though, for the idea of the universe as the expanding aftermath of a big explosion to open everybody’s mind. In 1935, for instance, the astronomer J.S. Plaskett called Lemaître’s ideas “speculation run wild without a shred of evidence.” Even Hubble was not entirely sure of his own discovery. In 1938, Smith pointed out, Hubble assessed the evidence as consistent with a static universe, while acknowledging that expansion could not be ruled out.
Today’s claims that other big bangs may have happened many times, creating a multitude of cosmic spacetime bubbles known as the multiverse, face similar objections. It’s true that the evidence for a multiverse is not conclusive, just as evidence in the 19th century was not conclusive that spiral nebulae were distant galaxies or “island universes” of their own. But given the historical precedent, it would be silly to say that “no competent thinker” would believe in the possibility of multiple universes today.
A small, poorly understood segment of the population stays mentally healthy from age 11 to 38, a new study of New Zealanders finds. Everyone else encounters either temporary or long-lasting mental disorders.
Only 171 of 988 participants, or 17 percent, experienced no anxiety disorders, depression or other mental ailments from late childhood to middle age, researchers report in the February Journal of Abnormal Psychology. Of the rest, half experienced a transient mental disorder, typically just a single bout of depression, anxiety or substance abuse by middle age. “For many, an episode of mental disorder is like influenza, bronchitis, kidney stones, a broken bone or other highly prevalent conditions,” says study coauthor Jonathan Schaefer, a psychologist at Duke University. “Sufferers experience impaired functioning, many seek medical care, but most recover.”
The remaining 408 individuals (41 percent) experienced one or more mental disorders that lasted several years or more. Their diagnoses included more severe conditions such as bipolar and psychotic disorders.
Researchers analyzed data for individuals born between April 1972 and March 1973 in Dunedin, New Zealand. Each participant’s general health and behavior were assessed 13 times from birth to age 38. Eight mental health assessments occurred from age 11 to 38.
Surprisingly, those who experienced lasting mental health did not display several characteristics previously linked to a lower likelihood of developing mental disorders. Those attributes consist of growing up in unusually affluent families, enjoying especially sound physical health and scoring exceptionally high on intelligence tests. Instead, mentally healthy participants tended to possess advantageous personality traits starting in childhood, Schaefer and colleagues found. These participants rarely expressed strongly negative emotions, had lots of friends and displayed superior self-control. Kiwis with rock-solid mental health also had fewer first- and second-degree relatives with mental disorders compared with their peers.
As adults, participants with enduring mental health reported, on average, more education, better jobs, higher-quality relationships and more satisfaction with their lives than their peers did. But lasting mental health doesn’t guarantee an exceptional sense of well-being, Schaefer says. Nearly one-quarter of never-diagnosed individuals scored below the entire sample’s average score for life satisfaction.
Less surprising was the 83 percent overall prevalence rate for mental disorders. That coincides with recent estimates from four other long-term projects. In those investigations — two in the United States, one in Switzerland and another in New Zealand — between 61 percent and 85 percent of participants developed mental disorders over 12- to 30-year spans.
Comparably high rates of emotional disorders were reported in 1962 for randomly selected Manhattan residents. Many researchers doubted those findings, which relied on a diagnostic system that was less strict than the three versions of psychiatry’s diagnostic manual that were introduced and used to evaluate New Zealand participants as they got older, says psychiatric epidemiologist William Eaton of Johns Hopkins Bloomberg School of Public Health. But the Manhattan study appears to have been on the right track, Eaton says.
Increased awareness that most people will eventually develop a mental disorder (SN: 10/10/09, p. 5), at least briefly, can reduce stigma attached to these conditions (SN Online: 10/13/16), he suspects.
Psychiatric epidemiologist Ronald Kessler suspects the numbers of people experiencing a mental disorder may be even higher than reported. Many participants deemed to have enduring mental health likely developed brief mental disorders that got overlooked, such as a couple of weeks of serious depression after a romantic breakup, says Kessler of Harvard Medical School, who directs U.S. surveys of mental disorders. Rather than focusing on rare cases of lasting mental health, “the more interesting thing is to compare people with persistent mental illness to those with temporary disorders,” he says.
Hydras, petite pond polyps known for their seemingly eternal youth, exemplify the art of bouncing back (SN: 7/23/16, p. 26). The animals’ cellular scaffolding, or cytoskeleton, can regrow from a slice of tissue that’s just 2 percent of the original hydra’s full body size. Researchers thought that molecular signals told cells where and how to rebuild, but new evidence suggests there are other forces at play.
Physicist Anton Livshits and colleagues at the Technion-Israel Institute of Technology in Haifa genetically engineered Hydra vulgaris specimens so that stretchy protein fibers called actins, which form the cytoskeleton, lit up under a microscope. Then, the team sliced and diced to look for mechanical patterns in the regeneration process. Actin fibers in pieces of hydra exert mechanical force that lines up new cells and guides the growth of the animal’s head and tentacles, the researchers found. Turning off motor proteins that move actin stopped regeneration, and physically manipulating actin fiber alignment resulted in hydras with multiple heads. Providing hydras with further structural stability encouraged tissue slices to grow normally. Both mechanical and molecular forces may mold hydras in regeneration, the researchers report in the Feb. 7 Cell Reports. When the researchers anchored rings of hydra tissue to a wire, for instance, the added mechanical stability made a hydra grow normally along one body axis and sprout a single head; without that stability, the actin scaffolding was more disrupted and the animal grew two heads.
WASHINGTON — A nasty stomach virus that can linger on fruits and veggies may have met its match in cold plasma.
In experiments, the ionized gas, created by passing room-temperature air through an electric field, virtually eliminated norovirus from lettuce, researchers reported February 7 at the American Society for Microbiology Biothreats meeting.
Norovirus is the leading cause of foodborne illness in the United States, infecting more than 20 million people every year. Sterilizing food with heat is one way to kill the virus, but that approach doesn’t work for fresh produce. Cold plasma could be a way to sterilize fruits and vegetables without damaging them, said Hamada Aboubakr, a food microbiologist at the University of Minnesota in St. Paul. Aboubakr and colleagues used a cold plasma device to blast contaminated romaine lettuce leaves and stainless steel surfaces. After five minutes, the plasma wiped out about 99 percent of norovirus particles.
The researchers are testing the device on other foodborne viruses such as hepatitis A, which sickened more than 140 people last year after they ate contaminated strawberries. Unpublished experiments have shown that cold plasma also can destroy drug-resistant bacteria on chicken breasts and leafy greens. Aboubakr hopes to adapt the technology for use in restaurants, on cruise ships and in the produce aisles of grocery stores.