An oddball superconductor is the first of its kind — and if scientists are lucky, its discovery may lead to others.
At a frigid temperature of 5 ten-thousandths of a degree above absolute zero, bismuth becomes a superconductor — a material that conducts electricity without resistance — physicists from the Tata Institute of Fundamental Research in Mumbai, India, report online December 1 in Science.
Bismuth, a semimetallic element, conducts electricity less efficiently than an ordinary metal. It is unlike most other known superconductors in that it has very few mobile electrons. Consequently, the prevailing theory of superconductivity doesn’t apply. The result is “quite important,” says theoretical physicist Marvin Cohen of the University of California, Berkeley. New ideas — either a different theory or a tweak to the standard one — are needed to explain bismuth’s superconductivity. “It might lead us to a better theory of superconductivity with more details,” Cohen says.
An improved theoretical understanding might lead scientists to other superconductors, potentially ones that work at more practical temperatures, says Srinivasan Ramakrishnan, a coauthor of the paper. “It opens a new path for discovering new superconducting materials.”
Physicists’ ultimate goal is to find a superconductor that operates at room temperature. Such a material could be used to replace standard metals in wires and electronics, providing massive energy savings and technological leaps, from advanced supercomputers to magnetically levitating trains.
To confirm that bismuth was superconducting, Ramakrishnan and collaborators chilled ultrapure crystals of bismuth, while shielding the crystals from magnetic fields. Below 0.00053 kelvins (about –273° Celsius), the researchers observed a hallmark of superconductivity known as the Meissner effect, in which the superconductor expunges magnetic fields from within itself.
In the standard theory of superconductivity, electrons partner up in a fashion that removes resistance to their flow, thanks to the electrons’ interactions with ions in the material. But the theory, known as the Bardeen-Cooper-Schrieffer, or BCS, theory, works only for materials with many free-floating electrons. A typical superconductor has about one mobile electron for each atom in the material; in bismuth, each mobile electron is shared by about 100,000 atoms. Bismuth has previously been made to superconduct when subjected to high pressure, when formed into nanoparticles, or when its atoms are disordered rather than neatly arranged in a crystal. But under those conditions, bismuth behaves differently, so the BCS theory still applies. The new result is the first sign of superconductivity in bismuth in its normal form.
Another class of superconductors, known as high-temperature superconductors, likewise remains enigmatic (SN: 8/8/15, p. 12). Scientists have yet to reach a consensus on how they work. Though these superconductors must be cooled, they operate at relatively high temperatures, above the boiling point of liquid nitrogen (77 kelvins, or –196° Celsius).
Bismuth’s unusual behavior provides another handle with which to investigate the still-mysterious phenomenon of superconductivity. In addition to its low electron density and unexpected superconductivity, bismuth has several anomalous properties, including unusual optical and magnetic behavior. “A good global picture is missing” for explaining the abnormal element, says theoretical physicist Ganapathy Baskaran of the Institute of Mathematical Sciences in Chennai, India. “I think it’s only a tip of an iceberg.”
A child mummy buried in a church crypt in Lithuania could hold the oldest genetic evidence of smallpox.
Traces of the disease-causing variola virus linger in the mummy, which dates to about 1654, evolutionary geneticist Ana Duggan and colleagues report December 8 in Current Biology. Previously, a team of researchers had reported variola DNA in a roughly 300-year-old Siberian mummy.
Some Egyptian mummies, dating back more than 3,000 years, have pockmarks that scientists have interpreted as signs of smallpox, indicating the disease may have tormented humans for millennia. “The definitive feature of smallpox is a pustular rash,” says Duggan of McMaster University in Hamilton, Canada. “But it isn’t easy to say whether a rash comes from smallpox or chicken pox or measles.” Duggan’s team analyzed skin from the mummy, believed to be a boy who died between ages 2 and 4. They found DNA from an ancient strain of variola, and compared it with dozens of strains from the 20th century. The ancient and modern strains weren’t all that different, the researchers found. They shared a common ancestor that dates to around the late 16th century, not long before the boy died.
“It’s a little bit curious,” Duggan says. More diversity might be expected of a virus that had been kicking around since ancient Egyptian times. The find could suggest that “the timeline of smallpox existing in humans isn’t that deep at all.”
In fact, historical mortality records suggest that around the late 16th century, smallpox seemed to go “from something that occasionally caused infection to more of an epidemic disease,” Duggan says.
But researchers can’t say for sure when smallpox started affecting humans on a large scale, she cautions. Whether or not those ancient Egyptian mummies had smallpox is still an open question, she says. “We haven’t closed it.”
Self-driving cars promise to transform roadways. There’d be fewer traffic accidents and jams, say proponents, and greater mobility for people who can’t operate a vehicle. The cars could fundamentally change the way we think about getting around.
The technology is already rolling onto American streets: Uber has introduced self-driving cabs in Pittsburgh and is experimenting with self-driving trucks for long-haul commercial deliveries. Google’s prototype vehicles are also roaming the roads. (In all these cases, though, human supervisors are along for the ride.) Automakers like Subaru, Toyota and Tesla are also including features such as automatic braking and guided steering on new cars. “I don’t think the ‘self-driving car train’ can be stopped,” says Sebastian Thrun, who established and previously led Google’s self-driving car project.
But don’t sell your minivan just yet. Thrun estimates at least 15 years before self-driving cars outnumber conventional cars; others say longer. Technical and scientific experts have weighed in on what big roadblocks remain, and how research can overcome them.

To a computer, a highway on a clear day looks completely different than it does in fog or at dusk. Self-driving cars have to detect road features in all conditions, regardless of weather or lighting. “I’ve seen promising results for rain, but snow is a hard one,” says John Leonard, a roboticist at MIT. Sensors need to be reliable, compact and reasonably priced — and paired with detailed maps so a vehicle can make sense of what it sees. Leonard is working with Toyota to help cars respond safely in variable environments, while others are using data from cars’ onboard cameras to create up-to-date maps. “Modern algorithms run on data,” he says. “It’s their fuel.”

Self-driving cars struggle to interpret unusual situations, like a traffic officer waving vehicles through a red light. Simple rule-based programming won’t always work because it’s impossible to code for every scenario in advance, says Missy Cummings, who directs a Duke University robotics lab. Body language and other contextual clues help people navigate these situations, but it’s challenging for a computer to tell if, for example, a kid is about to dart into the road. The car “has to be able to abstract; that’s what artificial intelligence is all about,” Cummings says.
In a new approach, her team is investigating whether displays on the car can instead alert pedestrians to what the car is going to do. But results suggest walkers ignore the newfangled displays in favor of more old-fashioned cues — say, eyeballing the speed of the car.

Even with fully autonomous vehicles on the horizon, most self-driving cars will be semiautonomous for at least the foreseeable future. But figuring out who has what responsibilities at what time can be tricky. How does the car notify a passenger who has been reading or taking a nap that it’s time to take over a task, and how does the car confirm that the passenger is ready to act? “In a sense, you are still concentrating on some of the driving, but you are not really driving,” says Chris Janssen, a cognitive scientist at Utrecht University in the Netherlands.
His lab is studying how people direct their attention in these scenarios. One effort uses EEG machines to look at how people’s brains respond to an alert sound when the people are driving versus riding as a passive passenger (as they would in a self-driving car). Janssen is also interested in the best time to deliver instructions and how explicit the instructions should be.
In exploring the ethical questions of self-driving cars, Iyad Rahwan, an MIT cognitive scientist, has confirmed that people are selfish: “People buying these cars, they want cars that prioritize the passenger,” says Rahwan — but they want other people’s cars to protect pedestrians instead (SN Online: 6/23/16). In an online exercise called the Moral Machine, players choose whom to save in different scenarios. Does it matter if the pedestrian is an elderly woman? What if she is jaywalking? Society will need to decide what rules and regulations should govern self-driving cars. For the technology to catch on, decisions will have to incorporate moral judgments while still enticing consumers to embrace automation.

In 2015, hackers brought a Jeep to a halt on a St. Louis highway by wirelessly accessing its braking and steering via the onboard entertainment system. The demonstration proved that even conventional vehicles have vulnerabilities that, if exploited, could lead to accidents. Self-driving cars, which would get updates and maps through the cloud, would be at even greater risk. “The more computing permeates into everyday objects, the harder it is going to be to keep track of the vulnerabilities,” says Sean Smith, a computer scientist at Dartmouth College.
And while terrorists might want to crash cars, Smith can imagine other nefarious acts: For instance, hackers could disable someone’s car and hold it for ransom until receiving a digital payment.
In a better world, it would be the big news of the year just to report that Arctic sea ice shrank to 4.14 million square kilometers this summer, well below the 1981–2010 average of 6.22 million square kilometers (SN Online: 9/19/16). But in this world of changing climate, extreme summer ice loss has become almost expected. More novel in 2016 were glimpses of the complex biological consequences of melting at the poles and the opening of Arctic passageways, talked about for at least a decade and now well under way.
With top-of-the-world trade and tourist shortcuts opening, less ice means more travel. Europe-to-Asia shipping routes will typically shorten by about 10 days by midcentury, a report in Geophysical Research Letters predicted. Hopes for Northwest Passage routes obsessed (and killed) explorers in previous centuries, but in 2016, the thousand-passenger cruise ship Crystal Serenity offered the first megascale tourist trip from Alaska to New York with fine dining, casino gambling and an escort icebreaker vessel.

Biologists are delving into consequences for organisms other than human tourists — or the much-discussed polar bear. “There’s been a marked shift in the research community,” says climate change ecologist Eric Post of the University of California, Davis. There’s new interest in considering more than just species that dwell on sea ice, with researchers looking for the less direct effects of declining ice.

In the February Global Change Biology, eight scientists issued a call for observations of what could be early signs of faunal exchange: the mingling of Atlantic and Pacific species. One possible indicator is the sighting of gray whales off the coast of Namibia and also off Israel, even though that species went extinct in the Atlantic two centuries ago. These whales feed by snouting around in soft ocean bottoms, adding another predator to the system but also creating new habitat opportunities for some creatures (SN: 1/23/16, p. 14).
Since the call was published, biodiversity scientist Seabird McKeon of Colby College in Waterville, Maine, has heard new reports, such as a sighting of an ancient murrelet off the coast of Maine. It’s not the first wrong-coast report for the bird, which typically resides in the northern Pacific, but repeat sightings could be important, too. “What I think we’re seeing is not just new species coming across, but also perhaps an increased chance of survival and reproduction if more come over,” McKeon says. He is hoping to get new data from the online Encyclopedia of Life’s upcoming Fresh Data system, which connects scientists to people reporting nature observations.

For terrestrial northerners, melting ice often means loss of mobility. Peary caribou on the 36,000 or more islands of Canada’s northern archipelago occasionally use ice bridges to travel to new territories and mix genes with other populations. Yet ice losses since 1979 have made it some 15 percent harder to find traveling paths, researchers reported in September in Biology Letters (SN: 10/29/16, p. 8).
Even some plants such as dwarf birch probably travel by ice, scientists also reported in September in Biology Letters. Reconstructing long-ago sea ice extent and plant colonization dates suggests that seeds hitchhiked on slowly creeping frozen conveyors around northern Europe to colonize new territory at the end of the Ice Age. Losing ice roads could lead to tattered, disconnected populations as recolonization becomes less likely. Yet, there are pluses and minuses, says Post, who is helping to develop a package of scientific articles for Biology Letters on the biological effects of sea ice loss. Reseeding populations after a wipeout could be more difficult with tattered ice, but for the highly specialized and vulnerable plants very far north, the loss of sea ice could slow the arrival of invasive species that threaten the natives.
The minimum summer sea ice extent since 1979 has declined by about 87,000 square kilometers per year, equivalent to an area more than three times the size of New Jersey disappearing annually, as Post has put it. The September 2016 sea ice minimum didn’t break a record, as some had expected it might. It tied for second lowest, behind the 2012 minimum, and roughly equaled the 2007 minimum. 2016 did set a new record low for winter Arctic ice extent (SN Online: 3/28/16).

Sea ice changes reverberate through the ecosystem. Ice melting cues the springtime phytoplankton blooms that feed copepods and other tiny marine grazers. The grazers feed their predators and, in turn, the predators of those predators. In years when spring warming brings an early ice retreat, the phytoplankton bloom is not a huge, rich burst. It favors smaller grazing zooplankton that don’t fuel as much of a boom in their predators, marine ecologist Martin Renner of Homer, Alaska, and colleagues reported in a paper for the Biology Letters special collection.
Tracing the effects of shrinking ice through these grazers to fish to seabirds revealed a tangled web of ups and downs and shifting foraging grounds. In the end, Renner and colleagues predict “a very different eastern Bering Sea ecosystem and fishery than we know today.” And that may be far from the only sea change in the far north.
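Post’s New Jersey comparison is easy to sanity-check. A minimal sketch, assuming New Jersey’s total area is roughly 22,591 square kilometers (a commonly cited figure that does not appear in the article):

```python
# Rough check of the sea ice decline comparison quoted above.
# Assumed value: New Jersey's area (~22,591 km^2) is not from the article.
DECLINE_PER_YEAR_KM2 = 87_000   # reported average annual loss in minimum extent
NEW_JERSEY_KM2 = 22_591         # assumed total area of New Jersey

ratio = DECLINE_PER_YEAR_KM2 / NEW_JERSEY_KM2
print(f"Annual loss is about {ratio:.1f} New Jerseys")  # close to 3.9
```

The result, just under four New Jerseys per year, matches the article’s “more than three times” phrasing.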
Slight variations in the moon’s gravitational tug have hinted that kilometers-wide caverns lurk beneath the lunar surface. Like the lava tubes of Hawaii and Iceland, these structures probably formed when underground rivers of molten rock ran dry, leaving behind a cylindrical channel. On Earth, such structures max out at around 30 meters across, but the gravitational data suggest that the moon’s tubes are vastly wider.
Assessing the sturdiness of lava tubes under lunar gravity, planetary geophysicist Dave Blair of Purdue University in West Lafayette, Ind., and colleagues estimate that the caves could remain structurally sound up to 5 kilometers across. That’s wide enough to fit the Golden Gate Bridge, Brooklyn Bridge and London Bridge end to end.
Such colossal caves will be prime real estate for lunar pioneers, the researchers report in the Jan. 15 Icarus. Lava tubes could offer protection from the extreme temperatures, harsh radiation and meteorite impacts on the surface.
Scientists investigating what keeps lungs from overinflating can quit holding their breath.
Experiments in mice have identified a protein that senses when the lungs are full of air. This protein helps regulate breathing in adult mice and gets breathing going in newborn mice, researchers report online December 21 in Nature.
If the protein plays a similar role in people — and a few studies suggest that it does — exploring its activity could help explain disorders such as sleep apnea or chronic obstructive pulmonary disease. “These are extremely well done, very elegant studies,” says neonatologist Shabih Hasan of the University of Calgary in Canada, a specialist in breathing disorders in newborns. Researchers knew that feedback between the lungs and brain maintains normal breathing. But “this research gives us an understanding at the cellular level,” says Hasan. “It’s a major advance.”
Called Piezo2, the protein forms channels in the membranes of nerve cells in the lungs. When the lungs stretch, the Piezo2 channels detect the distortion caused by the mechanical force of breathing and spring open, triggering the nerves to send a signal.

Led by neuroscientist Ardem Patapoutian, researchers discovered that the channels send signals along three different pathways. Mice bred to lack Piezo2 in a cluster of nerve cells that send messages to the spinal cord had trouble breathing and died within 24 hours. Similarly, newborn mice missing Piezo2 channels in nerves that communicate with the brain stem via a structure called the jugular ganglion also died.

Mice lacking Piezo2 in the nodose ganglion, a structure that also links to the brain stem, lived to adulthood. But their breathing was abnormal and an important safety mechanism in the lungs of these mice didn’t work. Called the Hering-Breuer reflex, it kicks in when the lungs are in danger of overinflating. When functioning properly, Piezo2’s signal prevents potentially harmful overinflation by temporarily halting breathing. Known as apnea, this cessation of breathing can be dangerous in other instances but prevents damage in this case.
“Breathing is a mechanical process,” says Patapoutian, a Howard Hughes Medical Institute investigator at the Scripps Research Institute in La Jolla, Calif. “Intuitively, you could imagine that lung stretch sensors could play an important role in regulating breathing pattern. Amazingly, however, no definitive proof for such a feedback mechanism existed.”
Previous work in mice by Patapoutian and colleagues found that Piezo2 channels play a major role in sensing touch. The channels also function in proprioception, the sense of where body parts are in relation to each other, Patapoutian and colleagues reported last year.
Two recent studies by different research teams have found that people with mutations in a Piezo2 gene have problems with touch, proprioception and, in one study, breathing. Although small, the studies suggested that investigating Piezo2 in people could shed light on breathing disorders or other problems. The protein channels might play a role in sensing the “fullness” of the stomach and bladder and perhaps other mechanical processes such as heart rate control, Patapoutian says.
Investigating Piezo2 could also help explain how newborn lungs transition from being fluid-filled to breathing air, says neuroscientist Christo Goridis of the École des Neurosciences Paris.
New research is stirring the pot about an ancient Egyptian burial practice.
Many ancient peoples, including Egyptians, buried some of their dead in ceramic pots or urns. Researchers have long thought these pot burials, which often recycled containers used for domestic purposes, were a common, make-do burial for poor children.
But at least in ancient Egypt, the practice was not limited to children or to impoverished families, according to a new analysis. Bioarchaeologist Ronika Power and Egyptologist Yann Tristant, both of Macquarie University in Sydney, reviewed published accounts of pot burials at 46 sites, most near the Nile River and dating from about 3300 B.C. to 1650 B.C. Their results appear in the December Antiquity. A little over half of the sites contained the remains of adults. For children, pot burials were less common than expected: Of 746 children, infants and fetuses interred in some type of burial container, 338 were buried in wooden coffins despite wood’s relative scarcity and cost. Another 329 were buried in pots. Most of the rest were placed in baskets or, in a few cases, containers fashioned from materials such as reeds or limestone.
In the tomb of a wealthy governor, an infant was found in a pot containing beads covered in gold foil. Other pot burials held myriad goods — gold, ivory, ostrich eggshell beads, clothing or ceramics. Bodies were either placed directly into urns, or sometimes pots were broken or cut to fit the deceased.
People deliberately chose the containers, in part for symbolic reasons, the researchers now propose. The hollow vessels, which echo the womb, may have been used to represent a rebirth into the afterlife, the scientists say.
Everybody wants more juice from their batteries. Smartphones and laptops always need recharging. Electric car drivers must carefully plan their routes to avoid being stranded far from a charging station. Anyone who struggles with a tangle of chargers every night would prefer a battery that can last for weeks or months.
For researchers who specialize in batteries, though, the drive for a better battery is less about the luxury of an always-charged iPad (though that would be nice) and more about kicking our fossil fuel habit. Given the right battery, smog-belching cars and trucks could be replaced with vehicles that run whisper-quiet on electricity alone. No gasoline engine, no emissions. Even airplanes could go electric. And the power grid could be modernized to use cheaper, greener fuels such as sunlight or wind even on days when the sun doesn’t shine bright enough or the wind doesn’t blow hard enough to meet electricity demand.
A better battery has the potential to jolt people into the future, just like the lithium-ion battery did. When they became popular in the early 1990s, lithium-ion batteries offered twice as much energy as the next best alternative. They changed the way people communicate.
“What the lithium-ion battery did to personal electronics was transformational,” says materials scientist George Crabtree, director of the Joint Center for Energy Storage Research at Argonne National Laboratory in Illinois. “The cell phone not only made landlines obsolete for many, but [the lithium-ion battery] put cameras and the internet into the hands of millions.” That huge leap didn’t happen overnight. “It was the sum of many incremental steps forward, and decades of work,” says Crabtree, who coordinates battery research in dozens of U.S. labs.

Lithium-ion batteries have their limits, however, especially for use in the power grid and in electric vehicles. Fortunately, like their Energizer mascot, battery researchers never rest. Over the last 10 years, universities, tech companies and car manufacturers have explored hundreds of new battery technologies, reaching for an elusive and technically difficult goal: next-generation batteries that hold more energy, last longer and are cheaper, safer and easier to recharge.
A decade of incremental steps is beginning to pay off. In late 2017, scientists will introduce a handful of prototype batteries to be developed by manufacturers for potential commercialization. Some contain new ingredients — sulfur and magnesium — that help store energy more efficiently, delivering power for longer periods. Others will employ new designs. “These prototypes are proof-of-principle batteries, miniature working versions,” Crabtree says. Getting the batteries into consumer hands will take five to 10 years. Making leaps in battery technology, he says, is surprisingly hard to do.
Power struggle

Batteries operate like small chemical plants. Technically, a battery is a combination of two or more “electrochemical cells” in which energy released by chemical reactions produces a flow of electrons. The greater the energy produced by the chemical reactions, the greater the electron flow. Those electrons provide a current to whatever the battery is powering — kitchen clock, smoke alarm, car engine.
To power any such device, the electrons must flow through a circuit connecting two electrodes, known as an anode and a cathode, separated by a substance called an electrolyte. At the anode, chemical oxidation reactions release electrons. At the cathode, electrons are taken up in reduction reactions. The electrolyte enables ions created by the oxidation and reduction reactions to pass back and forth between the two electrodes, completing the circuit.
Depending on the materials used for the electrodes and the electrolyte, a battery may be recharged by supplying current that drives the chemical reactions in reverse. In creating new recipes for a rechargeable electrochemical soup, though, battery researchers must beware of side reactions that can spoil everything. “There’s the chemical reaction you want — the one that stores energy and releases it,” Crabtree says. “But there are dozens of other … reactions that also take place.” Those side reactions can disable a battery, or worse, lead to a risk of catastrophic discharge. (Consider the recent fires in Samsung’s Galaxy Note 7 smartphones.)
Early versions of the lithium-ion battery from the 1970s carried an anode made of pure lithium metal. Through repeated use, lithium ions were stripped off and replated onto the anode, creating fingerlike extensions that reached across to the cathode, shorting out the battery. Today’s lithium-ion batteries have an anode made of graphite (a form of carbon) so that loose lithium ions can snuggle in between sheets of carbon atoms.
Lithium-ion batteries were originally developed with small electronics in mind; they weren’t designed for storing electricity on the grid or powering electric vehicles. Electric cars need lots of power, a quick burst of energy to move from stop to start. Electric car manufacturers now bundle thousands of such batteries together to provide power for up to 200 miles before recharging, but that range still falls far short of what a tank of gas can offer. And lithium-ion batteries drain too quickly to feed long hours of demand on the grid.
Simply popping more batteries into a car or the grid isn’t the answer, Crabtree says. Stockpiling doesn’t improve the charging time or the lifetime of the battery. It’s also bulky. Carmakers have to leave drivers room for their passengers plus some trunk space. To make electric vehicles competitive with, or better than, vehicles run by internal-combustion engines, manufacturers will need low-cost, high-energy batteries that last up to 15 years. Likewise, grid batteries need to store energy for later use at low cost, and stand up to decades of use.
“There’s no one battery that’s going to meet all our needs,” says MIT materials scientist Yet-Ming Chiang. Batteries needed for portable devices are very different from those needed for transportation or grid-scale storage. Expect to see a variety of new battery types, each designed for a specific application.

Switching to sulfur

For electric vehicles, lithium-sulfur batteries are the next great hope. The cathode is made mainly of sulfur, an industrial waste product that is cheap, abundant and environmentally friendly. The anode is made of lithium metal.
During discharge, lithium ions break away from the anode and swim through the liquid electrolyte to reach the sulfur cathode. There, the ions form a covalent bond with the sulfur atoms. Each sulfur atom bonds to two lithium ions, rather than just one, doubling the number of bonds in the cathode of the battery. More chemical bonds mean more stored energy, so a lithium-sulfur battery creates more juice than a lithium-ion one. That, combined with sulfur’s light weight, means that, in principle, manufacturers can pack more punch for a given amount of weight, storing four or five times as much energy per gram.
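The “more bonds, more energy” argument can be made semi-quantitative with Faraday’s law. A back-of-envelope sketch, using standard textbook values that do not appear in the article (sulfur taking up two electrons per atom, compared with a conventional lithium cobalt oxide cathode):

```python
# Theoretical specific charge capacity from Faraday's law:
#   capacity (mAh/g) = n * F / (3.6 * M)
# where n = electrons per formula unit, F = Faraday constant (C/mol),
# M = molar mass (g/mol). Values assumed, not from the article.
F = 96485.0  # Faraday constant, C/mol

def capacity_mah_per_g(n_electrons, molar_mass):
    return n_electrons * F / (3.6 * molar_mass)

sulfur = capacity_mah_per_g(2, 32.06)   # S + 2 Li+ + 2 e- -> Li2S: ~1672 mAh/g
licoo2 = capacity_mah_per_g(1, 97.87)   # LiCoO2 cathode, full delithiation: ~274 mAh/g

print(f"sulfur: {sulfur:.0f} mAh/g, LiCoO2: {licoo2:.0f} mAh/g")
```

The per-gram capacity gap is even larger than the article’s “four or five times,” because a cell’s energy also depends on its voltage (lithium-sulfur cells run near 2 volts versus roughly 3.7 volts for lithium-ion) and on the weight of inactive components.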
Ultimately, that upgrade could boost an electric vehicle’s range up to as much as 500 miles on a single charge. But first, researchers have to get past the short lifetime of existing lithium-sulfur batteries, which, Crabtree says, is due to a loss of lithium and sulfur in each charge-discharge cycle.
When lithium combines with sulfur, it also forms compounds called polysulfides, which quickly gum up the battery’s insides. Polysulfides form within the cathode during battery discharge, when stored energy is released. Once they dissolve in the battery’s liquid electrolyte, the polysulfides shuttle to the anode and react with it, forming a film that renders the battery useless within a few dozen cycles — or one to two months of use.
At Sandia National Laboratories in Albuquerque, N.M., a team led by Kevin Zavadil is trying to block the formation of polysulfides in the electrolyte. The electrolyte consists of salt and a solvent, and current lithium-sulfur batteries require a large amount of electrolyte to achieve a moderate life span. Zavadil and his team are developing “lean” electrolyte mixtures less likely to dissolve the sulfur molecules that create polysulfides.
Described September 9 in ACS Energy Letters, the new electrolyte mix contains a higher-than-usual salt concentration and a “sparing” amount of solvent. The researchers also reduced the overall amount of electrolyte in the batteries. In test runs, the tweaks dropped the concentration of polysulfides by several orders of magnitude, Zavadil says. “We [also] have some ideas on how to use membranes to protect the lithium surface to prevent polysulfides from happening in the first place,” Zavadil says. The goal is to produce a working prototype of the new battery — one that can last through thousands of cycles — by the end of 2017.
At the University of Texas at Austin, materials engineer Guihua Yu, along with colleagues at Zhejiang University of Technology in Hangzhou, China, is investigating another work-around for this battery type: replacing the solid sulfur cathode with an intricate structure that encapsulates the sulfur in an array of nanotubes. Reported in the November issue of Nano Letters, the nanotubes that encase the sulfur are fashioned from manganese dioxide, a material that can attract and hold on to polysulfides. The nanotubes are coated with polypyrrole, a conductive polymer that helps boost the flow of electrons.
This approach reduces buildup and boosts overall conductivity and efficiency, Yu says. So far, with the group’s new architecture, the battery loses less than 0.07 percent of its capacity per charge and discharge cycle. After 500 cycles, the battery maintained about 65 percent of its original capacity, a great improvement over the short lifetime of current lithium-sulfur batteries. Still, for use in electric vehicles, scientists want batteries that can last through thousands of cycles, or 10 to 15 years.

Scientists at Argonne are constructing another battery type: one that replaces lithium ions at the anode with magnesium. This switch could instantly boost the electrical energy released for the same volume, says Argonne materials engineer Brian Ingram, noting that a magnesium ion has a charge of +2, double that of lithium’s +1. Magnesium’s ability to produce twice the electrical current of lithium ions could allow for smaller, more energy-dense batteries, Ingram says.
Magnesium comes with its own challenge, however. Whereas lithium ions zip through a battery’s electrolyte, magnesium ions slowly trudge. A team of researchers at Argonne, Northwestern University and Oak Ridge National Laboratory shot high-energy X-rays at magnesium in various batteries and learned that the drag is due to interactions with molecules that the magnesium attracts within the electrolyte. Ingram and his group are experimenting with new materials to find a molecular recipe that reduces such drag.
Ingram’s team is trying to nudge its “highly functioning, long-lasting” prototype to 3 volts by December. Today’s typical lithium-ion battery has 3.8 to 4 volts. At 3 volts, Ingram says, the magnesium battery would pack more power than a 4-volt lithium-ion battery and “create a tremendous amount of excitement within the field.”
Going with the flow
Together, transportation and the electricity grid account for about two-thirds of U.S. energy use. But today, only 10 percent of the electricity on the grid is from renewable sources, according to the U.S. Energy Information Administration. If wind and solar power are ever to wrest energy production away from fossil fuels, big changes must happen in energy storage. What is needed, Crabtree says, is a battery that can store energy, and lots of it, for later use. “Though the sun shines in the middle of the afternoon, peak demand comes at sunset when people go home, turn on lights and cook dinner,” he says.
To reliably supply electricity at night or on cloudy, windless days requires a different type of battery. By design, flow batteries fit the bill. Instead of having solid electrodes, flow batteries store energy in two separate tanks filled with chemicals — one positively charged, the other negatively charged. Pumps move the liquids from the tanks into a central chamber, or “stack,” where dissolved molecules in the liquids undergo chemical reactions that store and give up energy. A membrane located in the stack keeps the positive and negative ions separated. Flow batteries can store energy for a long time and provide power as needed. Because the energy-storing liquids are kept in external tanks, the batteries are unlikely to catch fire, and can be built large or small depending on need. To store more power, use a larger tank.
So far, however, flow batteries are expensive to make and maintain, and have had limited use for providing backup power on the grid. Today’s flow batteries are packed with rare and toxic metal components, usually vanadium. With many moving parts — tanks, pumps, seals and sensors — breakdowns and leakage are common.
At MIT, Chiang and colleagues are developing flow batteries that can bypass those drawbacks. One, an hourglass flow battery, does away with the need for costly and troublesome pumps. The stack where the chemical reactions occur is in the constricted middle, with tanks at either end. Gravity allows the liquids to flow through the stack, like sand in an hourglass. A motor adjusts the battery’s angle to speed or slow the flow.
The hourglass design is like a “concept car,” Chiang says. Though the final product is likely to take a slightly different shape, the design could serve as a model for future flow batteries. Simply changing the tilt of the device could add a short infusion of power to the grid during periods of peak demand, or slowly release energy over a period of many hours to keep air conditioners and heaters running when the sun is down.
In another design, the group has replaced vanadium with sulfur, which is inexpensive and abundant. Dissolved in water (also cheap and plentiful), sulfur is circulated into and out of the battery’s stack, creating a reaction that stores or gives up energy, similar to commercial flow batteries. The group is now refining the battery, first described in 2014 in Nano Letters, aiming for higher levels of energy.
Another challenge in developing flow batteries is finding ways to keep active materials confined to their tanks. That’s the job of the battery membrane, but because the organic molecules under consideration for battery use are almost always small, they too easily slip through the membrane, reducing the battery’s lifetime and performance. Rather than change the membrane, a group led by chemist Joaquín Rodríguez-López of the University of Illinois at Urbana-Champaign devised ways to bulk up the battery’s active materials by changing their size or configuration. The scientists linked tens to millions of active molecules together to create large, ringed structures, long strings of molecules hooked onto a polymer backbone, or suspensions of polymers containing up to a billion molecules, they reported in the Journal of the American Chemical Society in 2014.
With the oversized molecules, even “simple, inexpensive porous membranes are effective at preventing crossover,” Crabtree says. A prototype flow battery that provides low-cost power and lasts 20 to 30 years is expected to be completed in the coming year.
Getting air
Looking beyond 2017, scientists envision a new generation of batteries made of low-cost, or even no-cost, materials. The lithium-air battery, still in early development, uses oxygen sucked in from the atmosphere to drive the chemical reaction that produces electricity. In the process, oxygen combines with lithium ions to form a solid compound, lithium peroxide. During charging, the reaction reverses and the oxygen returns to its gaseous form.
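The discharge chemistry described above is the standard lithium-air reaction (the reverse runs during charging):

```latex
2\,\mathrm{Li} + \mathrm{O_2} \;\longrightarrow\; \mathrm{Li_2O_2}
```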
“Lithium-air potentially offers the highest energy density possible,” says MIT materials engineer Ju Li. “You basically react lithium metal with oxygen in air, and in principle you get as much useful energy as gasoline.”
Lithium-air has problems, though. The batteries are hard to recharge, losing much of their power during the process. And the chemical reaction that powers the battery generates heat, cutting the battery’s energy-storage capacity and life span.
Using electron microscopy to study the reaction products of a lithium-air prototype, Li and his group came up with a possible solution: Keep oxygen in a solid form sealed within the battery to prevent the oxygen from forming a gas. By encasing oxygen and lithium in tiny glasslike particles, the scientists created a fully sealed battery. The new strategy, published July 25 online in Nature Energy, curbed energy loss during recharging and prevented heat buildup.
“If it works on a large scale, we would have an electrical vehicle that’s competitive with gasoline-driven cars,” Li says. Reaching that goal would be a big step toward a greener planet.
One hundred and ninety-two atoms have tied the knot.
Chains of carbon, hydrogen, oxygen and nitrogen atoms, woven together in a triple braid, form the most complex molecular knot ever described, chemists from the University of Manchester in England report in the Jan. 13 Science.
Learning how to tie such knots could one day help researchers weave molecular fabrics with all sorts of snazzy properties. “We might get the strength of Kevlar with a lighter and more flexible material,” says study coauthor David Leigh. That’s still a long way away, but molecular knot tying has an appeal that’s purely intellectual, too, says University of Cambridge chemist Jeremy Sanders. “It’s like the answer to why you climb Everest,” he says. “It’s a challenge.”
Mathematicians know of more than six billion types of prime knots, which, like prime numbers, cannot be broken down into simpler components. “Prime knots can’t be built up by sticking other knots together,” Leigh explains. For years, chemists were able to synthesize just one type of prime knot out of small molecules. “We thought that was pretty ridiculous,” says Leigh.
That molecular knot was a trefoil, like a three-leaf clover. Jean-Pierre Sauvage and colleagues wove it from chemical strands in 1989. Sauvage won a Nobel Prize in 2016 for earlier work that used the same principles explored in his knots (SN: 10/29/16, p. 6).
In the decades since Sauvage’s trefoil, chemists have tried to synthesize other types of molecular knots, but “they’ve always found it incredibly difficult,” says chemist Sophie Jackson, also at the University of Cambridge.
Persuading nanoscale strands to interlock together in an orderly fashion isn’t simple. “You can’t just grab the ends and tie them like you would a shoelace,” Leigh says. Instead, scientists choose molecular ingredients that assemble themselves. In 2012, Leigh and colleagues used the self-assembly technique to make a molecular pentafoil knot, a star-shaped structure made up of 160 atoms and with strands that cross five times (SN: 1/28/12, p. 12). This latest knot, with eight crossing points, is even more intricate.
Leigh’s team mixed together building blocks containing carbon, hydrogen, oxygen and nitrogen atoms with iron ions and chloride ions. “You dump them all in, heat them all up and they self-assemble,” he says. Sticky metal ions hold the building blocks in the correct position, and a single chloride ion sitting in the middle of the structure anchors it all together. Then, a chemical catalyst links the building blocks, forming the completed knot. The new knot is the tightest ever created, Leigh says, with just 24 atoms between each crossing point.
It’s beautiful, Sanders says. “It’s a string of atoms rolled up in a spherical shape, with an astonishing amount of symmetry.” Sanders is reluctant to speculate how such a knot might be used, but it’s round and very dense, he says. That could give it some interesting materials properties.
Leigh suspects that different molecular knots might behave differently, like the various knots used by fishermen and sailors. “We want to make specific knots, see what they do and then figure out how to best exploit that,” he says.
Gold’s glimmer is not the only reason the element is so captivating. For decades, scientists have puzzled over why theoretical predictions of gold’s properties don’t match up with experiments. Now, highly detailed calculations have erased the discrepancy, according to a paper published in the Jan. 13 Physical Review Letters.
At issue was the energy required to remove an electron from a gold atom, or ionize it. Theoretical calculations of this ionization energy differed from scientists’ measurements. Likewise, the energy released when adding an electron — a quantity known as the electron affinity — was also off the mark. How easily an atom gives up or accepts electrons is important for understanding how elements react with other substances. “It was well known that gold is a difficult system,” says chemist Sourav Pal of the Indian Institute of Technology Bombay, who was not involved with the study. Even gold’s most obvious feature can’t be explained without calling Einstein’s special theory of relativity into play: The theory accounts for gold’s yellowish color. (Special relativity shifts around the energy levels of electrons in gold atoms, causing the metal to absorb blue light, and thereby making reflected light appear more yellow.)
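In symbols, the two quantities at issue correspond to the standard reactions (textbook definitions, not specific to this study):

```latex
\mathrm{Au} \;\rightarrow\; \mathrm{Au^{+}} + e^{-} \quad \text{(costs the ionization energy)}
```

```latex
\mathrm{Au} + e^{-} \;\rightarrow\; \mathrm{Au^{-}} \quad \text{(releases the electron affinity)}
```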
With this new study, scientists have finally resolved the lingering questions about the energy involved in removing or adding an electron to the atom. “That is the main significance of this paper,” Pal says.
Early calculations, performed in the 1990s, differed from the measured energies by more than a percent, and improved calculations since then still didn’t match the measured value. “Every time I went to a conference, people discussed that and asked, ‘What the hell is going on?’” says study coauthor Peter Schwerdtfeger, a chemist at Massey University Auckland in New Zealand.
The solution required a more complete consideration of the complex interplay among gold’s 79 electrons. Using advanced supercomputers to calculate the interactions of up to five of gold’s electrons at a time, the scientists resolved the discrepancy. Previous calculations had considered up to three electrons at a time. Also essential to include in the calculation were the effects of special relativity and the theory of quantum electrodynamics, which describes the quantum physics of particles like electrons.
The result indicates that gold indeed adheres to expectations — when calculations are detailed enough. “Quantum theory works perfectly well, and that makes me extremely happy,” says Schwerdtfeger.