A coral pollution study unexpectedly helped explain Hurricane Maria’s fury

Hurricane Maria struck the island of Puerto Rico early on September 20, 2017, with 250-kilometer-per-hour winds, torrential rains and a storm surge up to three meters high. In its wake: nearly 3,000 people dead, an almost yearlong power outage and over $90 billion in damages to homes, businesses and essential infrastructure, including roads and bridges.

Geologist and diver Milton Carlo took shelter at his house in Cabo Rojo on the southwest corner of the island with his wife, daughter and infant grandson. He watched the raging winds of the Category 4 hurricane lift his neighbor’s SUV into the air, and remembers those hours as some of the worst of his life.
For weeks, the rest of the world was in the dark about the full extent of the devastation, because Maria had destroyed the island’s main weather radar and almost all cell phone towers.

Far away on the U.S. West Coast, in Santa Cruz, Calif., oceanographer Olivia Cheriton watched satellite radar images of Maria passing over the instruments she and her U.S. Geological Survey team had anchored a few kilometers southwest of Puerto Rico. The instruments, placed offshore from the seaside town of La Parguera, were there to track pollution circulating around some of the island’s endangered corals.

More than half a year went by before she learned the improbable fate of those instruments: They had survived and had captured data revealing hurricane-related ocean dynamics that no scientist had ever recorded.

The wind-driven coastal currents interacted with the seafloor in a way that prevented Maria from drawing cold water from the depths of the sea up to the surface. The sea surface stayed as warm as bathwater. Heat is a hurricane’s fuel source, so a warmer sea surface leads to a more intense storm. As Cheriton figured out later, the phenomenon she stumbled upon likely played a role in maintaining Maria’s Category 4 status as it raked Puerto Rico for eight hours.

“There was absolutely no plan to capture the impact of a storm like Maria,” Cheriton says. “In fact, if we somehow could’ve known that a storm like that was going to occur, we wouldn’t have put hundreds of thousands of dollars’ worth of scientific instrumentation in the water.”

A storm’s path is guided by readily observable, large-scale atmospheric features such as trade winds and high-pressure zones. Its intensity, on the other hand, is driven by weather events inside the hurricane and wave action deep below the ocean’s surface. The findings by Cheriton and colleagues, published May 2021 in Science Advances, help explain why hurricanes often get stronger before making landfall and can therefore help forecasters make more accurate predictions.

Reef pollution
Cheriton’s original research objective was to figure out how sea currents transport polluted sediments from Guánica Bay — where the Lajas Valley drains into the Caribbean Sea — to the pristine marine ecosystems 10 kilometers west in La Parguera Natural Reserve, famous for its bioluminescent waters.

Endangered elkhorn and mountainous star corals, called “the poster children of Caribbean reef decline” by marine geologist Clark Sherman, live near shore in some of the world’s highest recorded concentrations of now-banned industrial chemicals. Those polychlorinated biphenyls, or PCBs, hinder coral reproduction, growth, feeding and defensive responses, says Sherman, of the University of Puerto Rico–Mayagüez.
Half of corals in the Caribbean have died since monitoring began in the 1970s, and pollution is a major cause, according to an April 2020 study in Science Advances. Of particular interest to Cheriton, Sherman and their colleagues was whether the pollution had reached deepwater, or mesophotic, reefs farther offshore, which could be a refuge for coral species that were known to be dying in shallower areas.

The main artery for this pollution is the Rio Loco — which translates to “Crazy River.” It spews a toxic runoff of eroded sediments from the Lajas Valley’s dirt roads and coffee plantations into Guánica Bay, which supports a vibrant fishing community. Other possible contributors to the pollution — oil spills, a fertilizer plant, sewage and now-defunct sugar mills — are the subject of investigations by public health researchers and the U.S. Environmental Protection Agency.

In June 2017, the team convened in La Parguera to install underwater sensors to measure and track the currents in this threatened marine environment. From Sherman’s lab on a tiny islet overrun with iguanas the size of house cats, he and Cheriton, along with team leader and USGS research geologist Curt Storlazzi and USGS physical scientist Joshua Logan, launched a boat into choppy seas.
At six sites near shore, Storlazzi, Sherman and Logan dove to the seafloor and used epoxy to anchor pressure gauges and batonlike current meters. Together the instruments measured hourly temperature, wave height and current speed. The team then moved farther offshore where the steep island shelf drops off at a 45-degree angle to a depth of 60 meters, but the heavy ocean chop scuttled their efforts to install instruments there.
For help working in the difficult conditions, Sherman enlisted two expert divers for a second attempt: Carlo, the geologist and diving safety officer, and marine scientist Evan Tuohy, both of the University of Puerto Rico–Mayagüez. The two were able to install the most important and largest piece, a hydroacoustic instrument comprising several drums fastened to a metal grid, which tracked the direction and speed of currents every minute using pulsating sound waves. A canister containing temperature and salinity sensors took readings every two minutes. Above this equipment, a string of electronic temperature sensors extended to within 12 meters of the surface, registering the temperature at five-meter depth intervals every few seconds.

Working in concert, the instruments gave a high-resolution, seafloor-to-surface snapshot of the ocean’s hydrodynamics on a near-continuous basis. The equipment had to sit level on the sloping seafloor so as not to skew the measurements and remain firmly in place. Little did the researchers know that the instruments would soon be battered by one of the most destructive storms in history.
Becoming Maria
The word hurricane derives from the Caribbean Taino people’s Huricán, god of evil. Some of the strongest of these Atlantic tropical cyclones begin where scorching winds from the Sahara clash with moist subtropical air over the island nation of Cape Verde off western Africa. The worst of these atmospheric disturbances create severe thunderstorms with giant cumulonimbus clouds that flatten out against the stratosphere. Propelled by the Earth’s rotation, they begin to circle counterclockwise around each other — a phenomenon known as the Coriolis effect.

Weather conditions that summer had already spawned two monster hurricanes: Harvey and Irma. By mid-September, the extremely warm sea surface — 29° Celsius or hotter in some places — gave up its heat energy by way of evaporation into Maria’s rushing winds. All hurricanes begin as an area of low pressure, which in turn sucks in more wind, accelerating the rise of hot air, or convection. Countervailing winds known as shear can sometimes topple the cone of moist air spiraling upward. But that didn’t happen, so Maria continued to grow in size and intensity.

Meteorologists hoped that Maria would lose force as it moved across the Caribbean, weakened by the wake of cooler water Irma had churned up two weeks earlier. Instead, Maria tracked south, steaming toward the eastern Caribbean island of Dominica. In the 15 hours before it made landfall there, its maximum sustained wind speed doubled, reaching a house-leveling 260 kilometers per hour. That doubling intensified the storm from a milder, though still dangerous, Category 1 to a powerful Category 5.

NOAA’s computer forecasting models did not anticipate such rapid intensification. Irma had also raged with unforeseen intensity.

After Maria struck Dominica hard, its eyewall broke down and was replaced by an outer band of whipping thunderstorms. This slightly weakened Maria, to 250 kilometers per hour, before it hit Puerto Rico, while expanding the diameter of the storm’s eyewall — the area of strong winds and heaviest precipitation — to 52 kilometers. That’s close to the width of the island.
It’s still not fully understood why Maria suddenly went berserk. Various theories point to the influence of hot towers — convective bursts of heat energy from thunderclouds that punch up into the stratosphere — or deep warm pools, buoyant freshwater eddies spilling out of the Amazon and Orinoco rivers into the Atlantic, where currents carry these pockets of hurricane-fueling heat to the Gulf of Mexico and the Caribbean Sea.

But even though these smaller-scale events may have a big impact on intensity, they aren’t fully accounted for in weather models, says Hua Leighton, a scientist at the National Oceanic and Atmospheric Administration’s hurricane research division and the University of Miami’s Cooperative Institute for Marine and Atmospheric Studies. Leighton develops forecasting models and investigates rapid intensification of hurricanes.

“We cannot measure everything in the atmosphere,” Leighton says.

Without accurate data on all the factors that drive hurricane intensity, computer models can’t easily predict when the catalyzing events will occur, she says. Nor can models account for everything that happens inside the ocean during a hurricane. They don’t have the data.

Positioning instruments just before a hurricane hits is a major challenge. But NOAA is making progress. It has launched a new generation of hurricane weather buoys in the western North Atlantic and remote-controlled surface sensors called Saildrones that examine the air-sea interface between hurricanes and the ocean (SN: 6/8/19, p. 24).

Underwater, NOAA uses other drones, or gliders, to profile the vast areas regularly traversed by tropical storms. These gliders collected 13,200 temperature and salinity readings in 2020. By contrast, the instruments that the team set in Puerto Rico’s waters in 2017 collected over 250 million data points, including current velocity and direction — a rare and especially valuable glimpse of hurricane-induced ocean dynamics at a single location.

A different view
After the storm passed, Storlazzi was sure the hurricane had destroyed his instruments. They weren’t designed to take that kind of punishment. The devices generally work in much calmer conditions, not the massive swells generated by Maria, which could increase water pressure to a level that would almost certainly crush instrument sensors.

But remarkably, the instruments were battered but not lost. Sherman, Carlo and Tuohy retrieved them after Maria passed and put them in crates awaiting the research group’s return.
When Storlazzi and USGS oceanographer Kurt Rosenberger pried open the instrument casings in January 2018, no water gushed out. Good sign. The electronics appeared intact. And the lithium batteries had powered the rapid-fire sampling enterprise for the entire six-month duration. The researchers quickly downloaded a flood of data, backed it up and started transmitting it to Cheriton, who began sending back plots and graphs of what the readings showed.

Floodwaters from the massive rains brought by Maria had pushed a whole lot of polluted sediment to the reefs outside Guánica Bay, spiking PCB concentrations and threatening coral health. As of a few months after the storm, the pollution hadn’t reached the deeper reefs.

Then the researchers realized that their data told another story: what happens underwater during a massive hurricane. They presumed that other researchers had previously captured a profile of the churning ocean depths beneath a hurricane at the edge of a tropical island.

Remarkably, that was not the case.

“Nobody’s even measured this, let alone reported it in any published literature,” Cheriton says. The team began to explore the hurricane data not knowing where it might lead.

“What am I looking at here?” Cheriton kept asking herself as she plotted and analyzed temperature, current velocity and salinity values using computer algorithms. The temperature gradient that revealed the ocean’s internal, or underwater, waves was different from anything she’d seen before.
During the hurricane, the top 20 meters of the Caribbean Sea had consistently remained at or above 26° C, a few degrees warmer than the layers beneath. But the surface waters should have been cooled if, as expected, Maria’s winds had acted like a big spoon, mixing the warm surface with cold water stirred up from the seafloor 50 to 80 meters below. Normally, the cooler surface temperature restricts the heat supply, weakening the hurricane. But the cold water wasn’t reaching the surface.
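
To see why that matters, consider what full mixing would normally do to the sea surface temperature. The sketch below is a back-of-the-envelope illustration, not the team's analysis: it simply takes a depth-weighted average of assumed layer temperatures, loosely based on the values in the story.

```python
# Minimal sketch (illustrative numbers, not the study's data): if a hurricane fully
# stirs a warm surface layer into the cooler water beneath, the sea surface cools,
# cutting off the storm's heat supply.

def mixed_surface_temp(layers):
    """Depth-weighted average temperature after the layers are fully mixed.
    `layers` is a list of (thickness_m, temp_c) pairs, surface downward."""
    total_depth = sum(thickness for thickness, _ in layers)
    return sum(thickness * temp for thickness, temp in layers) / total_depth

# Assumed profile: a 20-meter layer at 28.5 C over 60 meters of 24 C water.
before_mixing = [(20, 28.5), (60, 24.0)]
print(f"{mixed_surface_temp(before_mixing):.1f} C")  # ~25.1 C after full mixing

# Off La Parguera, that mixing never happened: the offshore current trapped the
# cold water below, so the surface stayed at or above 26 C and kept feeding Maria.
```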

To try to make sense of what she was seeing, Cheriton imagined herself inside the data, in a protective bubble on the seafloor with the instruments as Maria swept over. Storlazzi worked alongside her analyzing the data, but focused on the sediments circulating around the coral reefs.

Cheriton was listening to “An Awesome Wave” by indie-pop band Alt-J and getting goosebumps while the data swirled before them. Drawing on instincts from her undergraduate astronomy training, she focused her mind’s eye on a constellation of data overhead and told Storlazzi to do the same.

“Look up, Curt!” she said.

Up at the crest of the island shelf, where the seafloor drops off, the current velocity data revealed a broad stream of water gushing from the shore at almost 1 meter per second, as if from a fire hose. Several hours before Maria arrived, the wind-driven current had reversed direction and was now moving an order of magnitude faster. The rushing surface water thus became a barrier, trapping the cold water beneath it.

As a result, the surface stayed warm, increasing the force of the hurricane. The cooler water below piled up into distinct layers, one on top of the other, beneath the gushing surface flow.

Cheriton calculated that, thanks to the fire hose phenomenon, the coastal waters in this area contributed on average 65 percent more to Maria’s intensity than they would have otherwise.
Oceanographer Travis Miles of Rutgers University in New Brunswick, N.J., who was not involved in the research, calls Cheriton and the team’s work a “frontier study” that draws researchers’ attention to near-shore processes. Miles can relate to Cheriton and her team’s accidental hurricane discovery from personal experience: When his water quality–sampling gliders wandered into Hurricane Irene’s path in 2011, they revealed that the ocean off the Jersey Shore had cooled in front of the storm. Irene’s onshore winds had induced seawater mixing across the broad continental shelf and lowered sea surface temperatures.

The Puerto Rico data show that offshore winds over a steep island shelf produced the opposite effect and should help researchers better understand storm-induced mixing of coastal areas, says NOAA senior scientist Hyun-Sook Kim, who was not involved in the research. It can help with identifying deficiencies in the computer models she relies on when providing guidance to storm-tracking meteorologists at the National Hurricane Center in Miami and the Joint Typhoon Warning Center in Hawaii.

And the unexpected findings also could help scientists get a better handle on coral reefs and the role they play in protecting coastlines. “The more we study the ocean, especially close to the coast,” Carlo says, “the more we can improve conditions for the coral and the people living on the island.”

Lifting sap to their leaves takes plants a prodigious amount of power

When it comes to hoisting water, plants are real power lifters.

For a tall tree, slurping hundreds of liters of water each day up to its leaves or needles, where photosynthesis takes place, can be quite a haul. Even for short grasses and shrubs, rising sap must somehow overcome gravity and resistance from plant tissues. Now, a first-of-its-kind study has estimated the power needed to lift sap to plants’ foliage worldwide — and it’s a prodigious amount, almost as much as all hydroelectric power generated globally.
Over the course of a year, plants worldwide expend 9.4 quadrillion watt-hours of energy pumping sap, climatologist Gregory Quetin and colleagues report August 17 in the Journal of Geophysical Research: Biogeosciences. That’s about 90 percent of the amount of hydroelectric power produced worldwide in 2019.

Evaporation of water from foliage drives the suction that pulls sap upward, says Quetin, of the University of California, Santa Barbara (SN: 3/24/22). To estimate the total evaporative power for all plants on Earth annually, the team divided up a map of the world’s land area into cells that span 0.5° of latitude by 0.5° of longitude and analyzed data for the mix of plants in each cell that were actively pumping sap each month. The power required was highest, unsurprisingly, in tree-rich areas, especially in the rainforests of the tropics.
If plants in forest ecosystems had to tap their own energy stores rather than rely on evaporation to pump sap, they’d need to expend about 14 percent of the energy they generated via photosynthesis, the researchers found. Grasses and other plants in nonforest ecosystems would need to expend just over 1 percent of their energy stores, largely because such plants are far shorter and have less resistance to the flow of sap within their tissues than woody plants do.
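
The gravitational part of that cost is straightforward to estimate. The sketch below is a rough illustration under assumed numbers (a single tree's daily water use and canopy height are invented for the example); it ignores the resistance of plant tissues, which the study also accounts for.

```python
# Rough sketch (assumed, illustrative numbers): the minimum power a tree needs
# just to lift its daily transpired water against gravity, ignoring friction
# within the plant's tissues.

RHO_WATER = 1000.0       # kilograms per cubic meter
G = 9.81                 # meters per second squared
SECONDS_PER_DAY = 86_400

def sap_lift_power_watts(liters_per_day, lift_height_m):
    """Average power (watts) to raise a day's worth of water to the canopy."""
    mass_kg = (liters_per_day / 1000.0) * RHO_WATER   # liters -> cubic meters -> kg
    energy_joules = mass_kg * G * lift_height_m       # gravitational potential energy gained
    return energy_joules / SECONDS_PER_DAY

# Example: a tall tree moving 500 liters per day up 30 meters.
print(f"{sap_lift_power_watts(500, 30):.1f} W")  # about 1.7 watts, around the clock
```

Scaling estimates like this across the mix of plants in each 0.5° grid cell, month by month, is roughly the spirit of the global tally, though the researchers worked from far more detailed vegetation and climate data.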

Indigenous Americans ruled democratically long before the U.S. did

On sunny summer days, powerboats pulling water-skiers zip across Georgia’s Lake Oconee, a reservoir located about an hour-and-a-half drive east of Atlanta. For those without a need for speed, fishing beckons.

Little do the lake’s visitors suspect that here lie the remains of a democratic institution that dates to around 500 A.D., more than 1,200 years before the founding of the U.S. Congress.

Reservoir waters, which flooded the Oconee Valley in 1979 after the construction of a nearby dam, partly cover remnants of a 1,500-year-old plaza once bordered by flat-topped earthen mounds and at least three large, circular buildings. Such structures, which have been linked to collective decision making, are known from other southeastern U.S. sites that date to as early as around 1,000 years ago.
At the Oconee site, called Cold Springs, artifacts were excavated before the valley became an aquatic playground. Now, new older-than-expected radiocarbon dates for those museum-held finds push back the origin of democratic institutions in the Americas several centuries, a team led by archaeologist Victor Thompson of the University of Georgia in Athens reported May 18 in American Antiquity.

Institutions such as these highlight a growing realization among archaeologists that early innovations in democratic rule emerged independently in many parts of the world. Specifically, these findings add to evidence that Native American institutions devoted to promoting broad participation in political decisions emerged in various regions, including what’s now Canada, the United States and Mexico, long before 18th century Europeans took up the cause of democratic rule by the people.

That conclusion comes as no surprise to members of some Indigenous groups today. “Native people have been trying to convey for centuries that many communities have long-standing institutions [of] democratic and/or republican governance,” says University of Alberta archaeologist S. Margaret Spivey-Faulkner, a citizen of the Pee Dee Indian Nation of Beaver Creek in South Carolina.

Democratic innovations
Scholars have traditionally thought that democracy — generally referring to rule by the people, typically via elected representatives — originated around 2,500 years ago in Greece before spreading elsewhere in Europe. From that perspective, governments in the Americas that qualified as democratic didn’t exist before Europeans showed up.

That argument is as misguided as Christopher Columbus’ assumption that he had arrived in the East Indies, not the Caribbean, in 1492, says archaeologist Jacob Holland-Lulewicz of Penn State, a coauthor of the Cold Springs report. Institutions that enabled representatives of large communities to govern collectively, without kings or ruling chiefs, characterized an unappreciated number of Indigenous societies long before the Italian explorer’s fateful first voyage, Holland-Lulewicz asserts.

In fact, collective decision-making arrangements that kept anyone from amassing too much power and wealth go back thousands, and probably tens of thousands of years in many parts of the world (SN: 11/9/21). The late anthropologist David Graeber and archaeologist David Wengrow of University College London describe evidence for that scenario in their 2021 book The Dawn of Everything.

But only in the last 20 years have archaeologists begun to take seriously claims that ancient forms of democratic rule existed. Scientific investigations informed by Indigenous partners will unveil past political realities “most of us in Indian country take for granted,” Spivey-Faulkner says.

Early consensus
Thompson’s Cold Springs project shows how such a partnership can work.

Ancestors of today’s Muscogee people erected Cold Springs structures within their original homelands, which once covered a big chunk of southeastern North America before the government-forced exodus west along the infamous Trail of Tears. Three members of the Muscogee Nation’s Department of Historic and Cultural Preservation in Okmulgee, Okla., all study coauthors, provided archaeologists with first-hand knowledge of Muscogee society. They emphasized to the researchers that present-day Muscogee councils where open debate informs consensus decisions carry on a tradition that goes back hundreds of generations.

A set of 44 new radiocarbon dates going back 1,500 years for material previously unearthed at the Georgia site, including what were likely interior posts from some structures, then made perfect sense. Earlier analyses in the 1970s of excavated pottery and six radiocarbon dates from two earthen mounds at Cold Springs suggested that they had been constructed at least 1,000 years ago.

Based on the new dating, Thompson’s team found that from roughly 500 A.D. to 700 A.D., Indigenous people at Cold Springs constructed not only earthen mounds but at least three council-style roundhouses — each 12 to 15 meters in diameter — and several smaller structures possibly used as temporary housing during meetings and ceremonies.

Small communities spread across the Oconee Valley formed tight-knit social networks called clans that gathered at council houses through the 1700s, Thompson’s group suspects. Spanish expeditions through the region from 1539 to 1543 did not cause those societies and their traditions to collapse, as has often been assumed, the researchers contend.
Excavations and radiocarbon dating at another Oconee Valley Muscogee site called Dyar support that view. A square ground connected to Dyar includes remains of a council house. Activity at the site began as early as 1350 and continued until as late as about 1670, or about 130 years after first encounters with the Spanish, Holland-Lulewicz and colleagues reported in the October 2020 American Antiquity.

Spanish historical accounts mistakenly assumed that powerful chiefs ran Indigenous communities in what have become known as chiefdoms. Many archaeologists have similarly, and just as wrongly, assumed that starting around 1,000 years ago, chiefs monopolized power in southeastern Native American villages, the scientists argue.

Today, members of the Muscogee (Creek) Nation in Oklahoma gather, sometimes by the hundreds or more, in circular structures called council houses to reach collective decisions about various community issues. Council houses typically border public square grounds. That’s a modern-day parallel to the story being told by the ancient architecture at Cold Springs.

“Muscogee councils are the longest-surviving democratic institution in the world,” Holland-Lulewicz says.

Indigenous influencers
Political consensus building by early Muscogee people didn’t occur in a vacuum. Across different regions of precontact North America, institutions that enabled broad participation in democratic governing characterized Indigenous societies that had no kings, central state governments or bureaucracies, Holland-Lulewicz and colleagues report March 11 in Frontiers in Political Science.

The researchers dub such organizations keystone institutions. Representatives of households, communities, clans and religious societies, to name a few, met on equal ground at keystone institutions. Here, all manner of groups and organizations followed common rules to air their opinions and hammer out decisions about, say, distributing crops, organizing ceremonial events and resolving disputes.
For example, in the early 1600s, nations of the neighboring Wendat (Huron) and Haudenosaunee people in northeastern North America had formed political alliances known as confederacies, says coauthor Jennifer Birch, a University of Georgia archaeologist. Each population contained roughly 20,000 to 30,000 people. Despite their size, these confederacies did not hold elections in which individuals voted for representatives to a central governing body. Governing consisted of negotiations among intertwined segments of society orchestrated by clans, which claimed members across society.

Clans, in which membership was inherited through the female line, were — and still are — the social glue holding together Wendat (Huron) and Haudenosaunee politics. Residents of different villages or nations among, say, the Haudenosaunee, could belong to the same clan, creating a network of social ties. Excavations of Indigenous villages in eastern North America suggest that the earliest clans date to at least 3,000 years ago, Birch says.

Within clans, men and women held separate council meetings. Some councils addressed civil affairs. Others addressed military and foreign policy, typically after receiving counsel from senior clan women.

Clans controlled seats on confederacy councils of the Wendat and Haudenosaunee. But decisions hinged on negotiation and consensus. A member of a particular clan had no right to interfere in the affairs of any other clan. Members of villages or nations could either accept or reject a clan leader as their council representative. Clans could also join forces to pursue political or military objectives.

Some researchers, including Graeber and Wengrow, suspect a Wendat philosopher and statesman named Kandiaronk influenced ideas about democracy among Enlightenment thinkers in France and elsewhere. A 1703 book based on a French aristocrat’s conversations with Kandiaronk critiqued authoritarian European states and provided an Indigenous case for decentralized, representative governing.

Although Kandiaronk was a real person, it’s unclear whether that book presented his actual ideas or altered them to resemble what Europeans thought of as a “noble savage,” Birch says.

Researchers also debate whether writers of the U.S. Constitution were influenced by how the Haudenosaunee Confederacy distributed power among allied nations. Benjamin Franklin learned about Haudenosaunee politics during the 1740s and 1750s as colonists tried to establish treaties with the confederacy.

Colonists took selected political ideas from the Haudenosaunee Confederacy without grasping its underlying cultural concerns, says University of Alberta anthropological archaeologist Kisha Supernant, a member of an Indigenous population in Canada called Métis. The U.S. Constitution stresses individual freedoms, whereas the Indigenous system addresses collective responsibilities to manage the land, water, animals and people, she says.

Anti-Aztec equality
If democratic institutions are cultural experiments in power sharing, one of the most interesting examples emerged around 700 years ago in central Mexico.

In response to growing hostilities from surrounding allies of the Aztec Empire, a multi-ethnic confederation of villages called Tlaxcallan built a densely occupied city of the same name. When Spaniards arrived in 1519, they wrote of Tlaxcallan as a city without kings, rulers or wealthy elites.
Until the last decade, Mexican historians had argued that Tlaxcallan was a minor settlement, not a city. They dismissed historical Spanish accounts as exaggerations of the newcomers’ exploits.

Opinions changed after a team led by archaeologist Lane Fargher of Mexico’s Centro de Investigación y Estudios Avanzados del Instituto Politécnico Nacional (Cinvestav del IPN) in Mérida surveyed and mapped visible remains of Tlaxcallan structures from 2007 to 2010. Excavations followed from 2015 through 2018, revealing a much larger and denser settlement than previously suspected.

The ancient city covers a series of hilltops and hillsides, Fargher says. Large terraces carved out of hillsides supported houses, public structures, plazas, earthen mounds and roadways. Around 35,000 people inhabited an area of about 4.5 square kilometers in the early 1500s.

Artifacts recovered at plazas indicate that those open spaces hosted commercial, political and religious activities. Houses clustered around plazas. Even the largest residences were modest in size, not much larger than the smallest houses. Palaces of kings and political big shots in neighboring societies, including the Aztecs, dwarfed Tlaxcallan houses.
Excavations and Spanish accounts add up to a scenario in which all Tlaxcallan citizens could participate in governmental affairs. Anyone known to provide good advice on local issues could be elected by their neighbors in a residential district to a citywide ruling council, or senate, consisting of between 50 and 200 members. Council meetings were held at a civic-ceremonial center built on a hilltop about one kilometer from Tlaxcallan.

As many as 4,000 people attended council meetings regarding issues of utmost importance, such as launching military campaigns, Fargher says.

Those chosen for council positions had to endure a public ceremony in which they were stripped naked, shoved, hit and insulted as a reminder that they served the people. Political officials who accumulated too much wealth could be publicly punished, replaced or even killed.

Tlaxcallan wasn’t a social utopia. Women, for instance, had limited political power, possibly because the main route to government positions involved stints of military service. But in many ways, political participation at Tlaxcallan equaled or exceeded that documented for ancient Greek democracy, Fargher and colleagues reported March 29 in Frontiers in Political Science. Greeks from all walks of life gathered in public spaces to speak freely about political issues. But commoners and the poor could not hold the highest political offices. And again, women were excluded.

Good government
Tlaxcallan aligned itself with Spanish conquerors against their common Aztec enemy. Then in 1545, the Spanish divided the Tlaxcallan state into four fiefdoms, ending Tlaxcallan’s homegrown style of democratic rule.

The story of this fierce, equality-minded government illustrates the impermanence of political systems that broadly distribute power, Fargher says. Research on past societies worldwide “shows us how bad the human species is at building and maintaining democratic governments,” he contends.

Archaeologist Richard Blanton of Purdue University and colleagues, including Fargher, analyzed whether 30 premodern societies dating to as early as around 3,000 years ago displayed signs of “good government.” An overall score of government quality included evidence of systems for providing equal justice, fair taxation, control over political officials’ power and a political voice for all citizens.

Only eight societies received high scores, versus 12 that scored low, Blanton’s group reported in the February 2021 Current Anthropology. The remaining 10 societies partly qualified as good governments. Many practices of societies scoring highest on good government mirrored policies of liberal democracies over the past century, the researchers concluded.

That’s only a partial view of how past governments operated. But surveys of modern nations suggest that no more than half feature strong democratic institutions, Fargher says.

Probing the range of democratic institutions that societies have devised over the millennia may inspire reforms to modern democratic nations facing growing income disparities and public distrust of authorities, Holland-Lulewicz suspects. Leaders and citizens of stressed democracies today might start with a course on power sharing in Indigenous societies. School will be in session at the next meeting of the Muscogee National Council.

Nobel laureate foresees mind-expanding future of physics

A century from now, when biologists are playing games of clones and engineers are playing games of drones, physicists will still pledge their loyalty to the Kingdoms of Substance and Force.

Physicists know the subjects of these kingdoms as fermions and bosons. Fermions are the fundamental particles of matter; bosons transmit forces that govern the behavior of the matter particles. The math describing these particles and their relationships forms the “standard model” of particle physics. Or as Nobel laureate Frank Wilczek calls it, “The Core Theory.”
Wilczek’s core theory differs from the usual notion of the standard model. His core includes gravity, as described by Einstein’s general theory of relativity. General relativity is an exquisite theory of gravity, but it doesn’t fit in with the math for the standard model’s three forces (the strong and weak nuclear forces and electromagnetism). But maybe someday it will. Perhaps even by 100 years from now.

At least, that’s among the many predictions that Wilczek has made for the century ahead. In a recent paper titled “Physics in 100 Years,” he offers a forecast for future discoveries and inventions that science writers of the future will be salivating over. (The paper is based on a talk celebrating the 250th anniversary of Brown University. He was asked to make predictions for 250 years from now, but regarded 100 as more reasonable.)

Wilczek does not claim that his forecast will be accurate. He considers it more an exercise in imagination, anchored in thorough knowledge of today’s major questions and the latest advances in scientific techniques and capabilities. Where those two factors meet, Wilczek sees the potential for premonition. His ruminations result in a vision of the future suitable for a trilogy or two of science fiction films. They would involve the unification of the kingdoms of physics and a more intimate relationship between them and the human mind.

Among Wilczek’s prognostications is the discovery of supersymmetric particles, heavyweight partners to the matter and force particles of the Core Theory. Such partner particles would reveal a deep symmetry underlying matter and force, thereby combining the kingdoms and further promoting the idea of unification as a key path to truth about nature. Wilczek also foresees the discovery of proton decay, even though exhaustive searches for it have so far come up empty. If protons disintegrate (after, on average, trillions upon trillions of years), matter as we know it has a limited lease on life. On the other hand, the failure so far to find proton decay has been a barrier to devising a theory that successfully unifies the math for all of nature’s particles and forces. And Wilczek predicts that:

The unification of gravity with the other forces will become more intimate, and have observable consequences.

He also anticipates that gravitational waves will be observed and used to probe the physics of the distant (and early) universe; that the laws of physics, rather than emphasizing energy, will someday be rewritten in terms of “information and its transformations”; and that “biological memory, cognitive processing, motivation, and emotion will be understood at the molecular level.”

And all that’s just the beginning. He then assesses the implications of future advances in computing. Part of the coming computation revolution, he foresees, will focus on its use for doing science:

Calculation will increasingly replace experimentation in design of useful materials, catalysts, and drugs, leading to much greater efficiency and new opportunities for creativity.

Advanced calculational power will also be applied to understanding the atomic nucleus more precisely, conferring the ability…

to manipulate atomic nuclei dexterously … enabling (for example) ultradense energy storage and ultrahigh energy lasers.

Even more dramatically, computing power will be employed to enhance itself:

Capable three-dimensional, fault-tolerant, self-repairing computers will be developed.… Self-assembling, self-reproducing, and autonomously creative machines will be developed.

And those achievements will imply that:

Bootstrap engineering projects wherein machines, starting from crude raw materials, and with minimal human supervision, build other sophisticated machines (notably including titanic computers) will be underway.

Ultimately, such sophisticated computing machines will enable artificial intelligence that would even impress Harold Finch on Person of Interest (which is probably Edward Snowden’s favorite TV show).

Imagine, for instance, the ways that superpowerful computing could enhance the human senses. Aided by electronic prosthetics, people could experience the full continuous range of colors in the visible part of the electromagnetic spectrum, not just those accessible to the tricolor-sensitive human eye. Perhaps the beauty that physicists and mathematicians “see” in their equations can be transformed into works of art beamed directly into the brain.

Artificial intelligence endowed with such power would enable many other futuristic fantasies. As Wilczek notes, the “life of mind” could be altered in strange new ways. For one thing, computationally precise knowledge of a state of mind would permit new possibilities for manipulating it. “An entity capable of accurately recording its state could purposefully enter loops, to re-live especially enjoyable episodes,” Wilczek points out.

And if all that doesn’t sound weird enough, we haven’t even invoked quantum mechanics yet. Wilczek forecasts that large-scale quantum computers will be realized, in turn leading to “quantum artificial intelligence.”

“A quantum mind could experience a superposition of ‘mutually contradictory’ states, or allow different parts of its wave function to explore vastly different scenarios in parallel,” Wilczek points out. “Being based on reversible computation, such a mind could revisit the past at will, and could be equipped to superpose past and present.”

And with quantum artificial intelligence at its disposal, the human mind’s sensory tentacles will not merely be enhanced but also dispersed. With quantum communication, humans can be linked by quantum messaging to sensory devices at vast distances from their bodies. “An immersive experience of ‘being there’ will not necessarily involve being there, physically,” Wilczek writes. “This will be an important element of the expansion of human culture beyond Earth.”

In other words, it will be a web of intelligence, rather than a network of physical settlements, that will expand human culture throughout the cosmos. Such “expanded identities” will be able to comprehend the kingdoms of substance and force on their own quantum terms, as the mind itself merges with space and time.

Wilczek’s visions imply a future existence in which nature is viewed from a vastly different perspective, conditioned by a radical reorientation of the human mind to its world. And perhaps messing with the mind so drastically should be worrisome. But let’s not forget that the century gone by has also messed with the mind and its perspectives in profound ways — with television, for instance, talk radio, the Internet, smartphones and blogs. A little quantum computer mind manipulation is unlikely to make things any worse.

Editing human germline cells sparks ethics debate

Sci-fi novels and films like Gattaca no longer have a monopoly on genetically engineered humans. Real research scripts about editing the human genome are now appearing in scientific and medical journals. But the reviews are mixed.

In Gattaca, nearly everyone was genetically altered, their DNA adjusted to prevent disease, enhance intelligence and make them look good. Today, only people treated with gene therapy have genetically engineered DNA. But powerful new gene editing tools could expand the scope of DNA alteration, forever changing humans’ genetic destiny.

Not everyone thinks scientists should wield that power. Kindling the debate is a report by scientists from Sun Yat-sen University in Guangzhou, China, who have edited a gene in fertilized human eggs, called zygotes. The team used new gene editing technology known as the CRISPR/Cas9 system. That technology can precisely snip out a disease-causing mutation and replace it with healthy DNA. CRISPR/Cas9 has edited DNA in stem cells and cancer cells in humans. Researchers have also deployed the molecules to engineer other animals, including mice and monkeys (SN Online: 3/31/14; SN: 3/8/14, p. 7). But it had never before been used to alter human embryos.
The team’s results, reported April 18 in Protein & Cell, sparked a flurry of headlines because their experiment modified human germline tissue (SN Online: 4/23/15). While most people think it is all right to fix faulty genes in mature body, or somatic, cells, tinkering with the germ line — eggs, sperm or tissues that produce those reproductive cells — crosses an ethical line for many. Germline changes can be passed on to future generations, and critics worry that allowing genetic engineering to correct diseases in germline tissues could pave the way for creating designer babies or other abuses whose genetic legacy would persist indefinitely.

“How do you draw a clear, meaningful line between therapy and enhancement?” ponders Marcy Darnovsky, executive director of the Center for Genetics and Society in Berkeley, Calif. About 40 countries ban or restrict such inherited DNA modifications.

Rumors about human germline editing experiments prompted scientists to gather in January in Napa, Calif. Discussions there led two groups to publish recommendations. One group, reporting March 26 in Nature, called for scientists to “agree not to modify the DNA of human reproductive cells,” including the nonviable zygotes used in the Chinese study. A second group, writing in Science April 3, called for a moratorium on the clinical use of human germline engineering, but stopped short of saying the technology shouldn’t be used in research. Those researchers say that while CRISPR technology is still too primitive for safe use in patients, further research is needed to improve it. But those publishing in Nature disagreed.

“Are there ever any therapeutic uses that would demand … modification of the human germ line? We don’t think there are any,” says Edward Lanphier, president of Sangamo BioSciences in Richmond, Calif. “Modifying the germ line is crossing the line that most countries on our planet have said is never appropriate to cross.”

If germline editing is never going to be allowed, there is no reason to conduct research using human embryos or reproductive cells, he says. Sangamo BioSciences is developing gene editing tools for use in somatic cells, an approach that germline editing might render unneeded. Lanphier denies that financial interests play a role in his objection to germline editing.

Other researchers, including Harvard University geneticist George Church, think germline editing may well be the only solution for some people with rare, inherited diseases. “What people want is safety and efficacy,” says Church. “If you ban experiments aimed at improving safety and efficacy, we’ll never get there.”

The zygote experiments certainly demonstrate that CRISPR technology is not ready for daily use yet. The researchers attempted to edit the beta globin, or HBB, gene. Mutations in that gene cause the inherited blood disorder beta-thalassemia. CRISPR/Cas9 molecules were engineered to seek out HBB and cut it where a piece of single-stranded DNA could heal the breach, creating a copy of the gene without mutations. That strategy succeeded in only four of the 86 embryos that the researchers attempted to edit. Those edited embryos contained a mix of cells, some with the gene edited and some without.

In an additional seven embryos, the HBB gene cut was repaired using the nearby HBD gene instead of the single-stranded DNA. The researchers also found that the molecular scissors snipped other genes that the researchers never intended to touch.

“Taken together, our work highlights the pressing need to further improve the fidelity and specificity of the CRISPR/Cas9 platform, a prerequisite for any clinical applications,” the researchers wrote.

The Chinese researchers crossed no ethical lines, Church contends. “They tried to dot i’s and cross t’s on the ethical questions.” The zygotes could not develop into a person, for instance: They had three sets of chromosomes, having been fertilized by two sperm in lab dishes.

Viable or not, germline cells should be off limits, says Darnovsky. She opposes all types of human germline modification, including a procedure approved in the United Kingdom in February for preventing mitochondrial diseases. The U.K. prohibits all other germline editing.

Mitochondria, the power plants that churn out energy in a cell, each carry a circle of DNA containing genes necessary for the organelle’s function. Mothers pass mitochondria on to their offspring through the egg. About one in 5,000 babies worldwide are born with mitochondrial DNA mutations that cause disease, particularly in energy-greedy organs such as the muscles, heart and brain.

Such diseases could be circumvented with a germline editing method known as mitochondrial replacement therapy (SN: 11/17/12, p. 5). In a procedure pioneered by scientists at Oregon Health & Science University, researchers first pluck the nucleus, where the bulk of genetic instructions for making a person are stored, out of the egg of a woman who carries mutant mitochondria. That nucleus is then inserted into a donor egg containing healthy mitochondria. The transfer would produce a person with three genetic parents: Most of the person’s genes would be inherited from the mother and father, with mitochondrial DNA coming from the anonymous donor. The first babies produced through that technology could be born in the U.K. next year.

Yet another new gene-editing technique could eliminate the need to use donor eggs by specifically destroying only disease-carrying mitochondria, researchers from the Salk Institute for Biological Studies in La Jolla, Calif., reported April 23 in Cell (SN Online: 4/23/15).

Such unproven technologies shouldn’t be attempted when alternatives already exist, Darnovsky says, such as screening embryos created through in vitro fertilization and discarding those likely to develop the disease.

But banning genome-altering technology could leave people with genetic diseases, and society in general, in the lurch, says molecular biologist Matthew Porteus of Stanford University.

“There is no benefit in my mind of having a child born with a devastating genetic disease,” he says.

Alternatives to germline editing come with their own ethical quandaries, he says. Gene testing of embryos may require creating a dozen or more embryos before finding one that doesn’t carry the disease. The rest of the embryos would be destroyed. Many people find that prospect ethically questionable.

But that doesn’t argue for sliding into Gattaca territory, where genetic modification becomes mandatory. “If we get there,” says Porteus, “we’ve really screwed up.”

This pitcher plant species sets its deathtraps underground

Biologist Martin Dančák didn’t set out to find a plant species new to science. But on a hike through a rainforest in Borneo, he and colleagues stumbled on a subterranean surprise.

Hidden beneath the soil and inside dark, mossy pockets below tree roots, carnivorous pitcher plants dangled their deathtraps underground. The pitchers can look like hollow eggplants and probably lure unsuspecting prey into their sewer hole-like traps. Once an ant or a beetle steps in, the insect falls to its death, drowning in a stew of digestive juices (SN: 11/22/16). Until now, scientists had never observed pitcher plants with traps almost exclusively entombed in earth.
“We were, of course, astonished as nobody would expect that a pitcher plant with underground traps could exist,” says Dančák, of Palacký University in Olomouc, Czech Republic.

That’s because pitchers tend to be fragile. But the new species’ hidden traps have fleshy walls that may help them push against soil as they grow underground, Dančák and colleagues report June 23 in PhytoKeys. Because the buried pitchers stay concealed from sight, the team named the species Nepenthes pudica, a nod to the Latin word for bashful.

The work “highlights how much biodiversity still exists that we haven’t fully discovered,” says Leonora Bittleston, a biologist at Boise State University in Idaho who was not involved with the study. It’s possible that other pitcher plant species may have traps lurking underground and scientists just haven’t noticed yet, she says. “I think a lot of people don’t really dig down.”

A supersensitive dark matter search found no signs of the substance — yet

The next generation of dark matter detectors has arrived.

A massive new effort to detect the elusive substance has reported its first results. Following a time-honored tradition of dark matter hunters, the experiment, called LZ, didn’t find dark matter. But it has done that better than ever before, physicists report July 7 in a virtual webinar and a paper posted on LZ’s website. And with several additional years of data-taking planned from LZ and other experiments like it, physicists are hopeful they’ll finally get a glimpse of dark matter.
“Dark matter remains one of the biggest mysteries in particle physics today,” LZ spokesperson Hugh Lippincott, a physicist at the University of California, Santa Barbara, said during the webinar.

LZ, or LUX-ZEPLIN, aims to discover the unidentified particles that are thought to make up most of the universe’s matter. Although no one has ever conclusively detected a particle of dark matter, its influence on the universe can be seen in the motions of stars and galaxies, and via other cosmic observations (SN: 7/24/18).

Located about 1.5 kilometers underground at the Sanford Underground Research Facility in Lead, S.D., the detector is filled with 10 metric tons of liquid xenon. If dark matter particles crash into the nuclei of any of those xenon atoms, they would produce flashes of light that the detector would pick up.

The LZ experiment is one of a new generation of bigger, badder dark matter detectors based on liquid xenon, which also includes XENONnT in Gran Sasso National Laboratory in Italy and PandaX-4T in the China Jinping Underground Laboratory. The experiments aim to detect a theorized type of dark matter called Weakly Interacting Massive Particles, or WIMPs (SN: 12/13/16). Scientists scaled up the search to allow for a better chance of spying the particles, with each detector containing multiple tons of liquid xenon.

Using only about 60 days’ worth of data, LZ has already surpassed earlier efforts to pin down WIMPs (SN: 5/28/18). “It’s really impressive what they’ve been able to pull off; it’s a technological marvel,” says theoretical physicist Dan Hooper of Fermilab in Batavia, Ill., who was not involved with the study.

Although LZ’s search came up empty, “the way something’s going to be discovered is when you have multiple years in a row of running,” says LZ collaborator Matthew Szydagis, a physicist at the University at Albany in New York. LZ is expected to run for about five years, and data from that extended period may provide physicists’ best chance to find the particles.

Now that the detector has proven its potential, says LZ physicist Kevin Lesko of Lawrence Berkeley National Laboratory in California, “we’re excited about what we’re going to see.”

A newfound dinosaur had tiny arms before T. rex made them cool

Tyrannosaurus rex’s tiny arms have launched a thousand sarcastic memes: I love you this much; can you pass the salt?; row, row, row your … oh.

But back off, snarky jokesters. A newfound species of big-headed carnivorous dinosaur with tiny forelimbs suggests those arms weren’t just an evolutionary punchline. Arm reduction — alongside giant heads — evolved independently in different dinosaur lineages, researchers report July 7 in Current Biology.

Meraxes gigas, named for a dragon in George R. R. Martin’s “A Song of Ice and Fire” book series, lived between 100 million and 90 million years ago in what’s now Argentina, says Juan Canale, a paleontologist with the country’s CONICET research network who is based in Buenos Aires. Despite the resemblance to T. rex, M. gigas wasn’t a tyrannosaur; it was a carcharodontosaur — a member of a distantly related, lesser-known group of predatory theropod dinosaurs. M. gigas went extinct nearly 20 million years before T. rex walked on Earth.
The M. gigas individual described by Canale and colleagues was about 45 years old and weighed more than four metric tons when it died, they estimate. The fossilized specimen is about 11 meters long, and its skull is heavily ornamented with crests and bumps and tiny hornlets, ornamentations that probably helped attract mates.

Why these dinosaurs had such tiny arms is an enduring mystery. They weren’t for hunting: Both T. rex and M. gigas used their massive heads to hunt prey (SN: 10/22/18). The arms may have shrunk so they were out of the way during the frenzy of group feeding on carcasses.

But, Canale says, M. gigas’ arms were surprisingly muscular, suggesting they were more than just an inconvenient limb. One possibility is that the arms helped lift the animal from a reclining to a standing position. Another is that they aided in mating — perhaps showing a mate some love.

College COVID-19 testing can reduce coronavirus deaths in local communities

Getting a COVID-19 test has become a regular part of many college students’ lives. That ritual may protect not just those students’ classmates and professors but also their municipal bus drivers, neighbors and other members of the local community, a new study suggests.

Counties where colleges and universities did COVID-19 testing saw fewer COVID-19 cases and deaths than ones with schools that did not do any testing in the fall of 2020, researchers report June 23 in PLOS Digital Health. While previous analyses have shown that counties with colleges that brought students back to campus had more COVID-19 cases than those that continued online instruction, this is the first look at the impact of campus testing on those communities on a national scale (SN: 2/23/21).
“It’s tough to think of universities as just silos within cities; it’s just much more permeable than that,” says Brennan Klein, a network scientist at Northeastern University in Boston.

Colleges that tested their students generally did not see significantly lower case counts than schools that didn’t do testing, Klein and his colleagues found. But the communities surrounding these schools did see fewer cases and deaths. That’s because towns with colleges conducting regular testing had a more accurate sense of how much COVID-19 was circulating in their communities, Klein says, which allowed those towns to understand the risk level and put masking policies and other mitigation strategies in place.

The results highlight the crucial role testing can continue to play as students return to campus this fall, says Sam Scarpino, vice president of pathogen surveillance at the Rockefeller Foundation’s Pandemic Prevention Institute in Washington, D.C. Testing “may not be optional in the fall if we want to keep colleges and universities open safely,” he says.
Finding a flight path
As SARS-CoV-2, the virus that causes COVID-19, rapidly spread around the world in the spring of 2020, it had a swift impact on U.S. college students. Most were abruptly sent home from their dorm rooms, lecture halls, study abroad programs and even spring break outings to spend what would be the remainder of the semester online. And with the start of the fall semester just months away, schools were “flying blind” as to how to bring students back to campus safely, Klein says.

That fall, Klein, Scarpino and their collaborators began to put together a potential flight path for schools by collecting data from COVID-19 dashboards created by universities and the counties surrounding those schools to track cases. The researchers classified schools based on whether they had opted for entirely online learning or in-person teaching. They then divided the schools with in-person learning based on whether they did any testing.

It’s not a perfect comparison, Klein says, because this method groups schools that did one round of testing with those that did consistent surveillance testing. But the team’s analyses still generally show how colleges’ pandemic response impacted their local communities.

Overall, counties with colleges saw more cases and deaths than counties without schools. However, testing helped minimize the increase in cases and deaths. During the fall semester, from August to December, counties with colleges that did testing saw on average 14 fewer deaths per 100,000 people than counties with colleges that brought students back with no testing — 56 deaths per 100,000 versus about 70.
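
For readers curious how such comparisons are made, here is a minimal sketch of the per-capita arithmetic. The raw counts and county populations below are invented for illustration only, chosen so that they reproduce the averages quoted above.

```python
# Illustrative sketch: converting raw death counts into deaths per 100,000 residents
# so counties of different sizes can be compared. Counts and populations are made up.

def deaths_per_100k(deaths, population):
    return deaths / population * 100_000

county_with_testing = deaths_per_100k(140, 250_000)     # county whose college tested students
county_without_testing = deaths_per_100k(175, 250_000)  # county whose college did no testing

print(round(county_with_testing), "vs", round(county_without_testing))  # 56 vs 70
print(round(county_without_testing - county_with_testing),
      "fewer deaths per 100,000 in the testing county")                 # 14
```
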
The University of Massachusetts Amherst, with nearly 30,000 undergraduate and graduate students in 2020, is one case study of the value of testing, Klein says. Throughout the fall semester, the school tested students twice a week. That meant that three times as many tests occurred in the city of Amherst as in neighboring cities, he says. For much of the fall and winter, Amherst had fewer COVID-19 cases per 1,000 residents than its neighboring counties and statewide averages.

Once students left for winter break, campus testing stopped, so overall local testing dropped. When students returned for spring semester in February 2021, area cases spiked — possibly driven by students bringing the coronavirus back from their travels and by being exposed to local residents whose cases may have been missed due to the drop in local testing. Students returned "to a town that has more COVID than they realize," Klein says.

Renewed campus testing not only picked up the spike but also quickly prompted mitigation strategies. The university moved classes to Zoom and asked students to remain in their rooms, at one point even telling them that they should not go on walks outdoors. By mid-March, the university had reduced the spread of cases on campus, and the town once again had a lower COVID-19 case rate than its neighbors for the remainder of the semester, the team found.

The value of testing
It’s helpful to know that testing overall helped protect local communities, says David Paltiel, a public health researcher at the Yale School of Public Health who was not involved with the study. Paltiel was one of the first researchers to call for routine testing on college campuses, regardless of whether students had symptoms.

“I believe that testing and masking and all those things probably were really useful, because in the fall of 2020 we didn’t have a vaccine yet,” he says. Quickly identifying cases and isolating affected students, he adds, was key at the time.
But each school is unique, he says, and the benefit of testing probably varied between schools. And today, two and a half years into the pandemic, the cost-benefit calculation is different now that vaccines are widely available and schools are faced with newer variants of SARS-CoV-2. Some of those variants spread so quickly that even testing twice a week may not catch all cases on campus quickly enough to stop their spread, he says.

As colleges and universities prepare for the fall 2022 semester, he would recommend schools consider testing students as they return to campus with less frequent follow-up surveillance testing to “make sure things aren’t spinning crazy out of control.”

Still, the study shows that regular campus testing can benefit the broader community, Scarpino says. In fact, he hopes to capitalize on the interest in testing for COVID-19 to roll out more expansive public health testing for multiple respiratory viruses, including the flu, in places like college campuses. In addition to PCR tests — the kind that involve sticking a swab up your nose — such efforts might also analyze wastewater and air within buildings for pathogens (SN: 05/28/20).

Unchecked coronavirus transmission continues to disrupt lives — in the United States and globally — and new variants will continue to emerge, he says. “We need to be prepared for another surge of SARS-CoV-2 in the fall when the schools reopen, and we’re back in respiratory season.”

Wiggling metal beams offer a new way to test gravity’s strength

In the quest to measure the fundamental constant that governs the strength of gravity, scientists are getting a wiggle on.

Using a pair of meter-long, vibrating metal beams, scientists have made a new measurement of “Big G,” also known as Newton’s gravitational constant, researchers report July 11 in Nature Physics. The technique could help physicists get a better handle on the poorly measured constant.

Big G is notoriously difficult to determine (SN: 9/12/13). Previous estimates of the constant disagree with one another, leaving scientists in a muddle over its true value. It is the least precisely known of the fundamental constants, a group of numbers that commonly show up in equations, making them a prime target for precise measurements.
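
Part of what makes Big G so slippery is how feeble gravity is between laboratory-sized objects. The sketch below is a simple illustration using Newton's law of gravitation; the masses and separation are arbitrary assumptions, not the dimensions of the experiment's beams.

```python
# Illustrative sketch: the gravitational force between two lab-sized masses,
# computed from Newton's law F = G * m1 * m2 / r^2. The masses and separation
# here are assumed values, not the actual experimental setup.

G = 6.674e-11  # currently accepted Big G, in cubic meters per kilogram per second squared

def gravitational_force(m1_kg, m2_kg, separation_m):
    return G * m1_kg * m2_kg / separation_m**2

# Two 10-kilogram masses held half a meter apart:
force = gravitational_force(10, 10, 0.5)
print(f"{force:.2e} N")  # ~2.7e-08 newtons, the weight of just a few micrograms
```
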
Because the vibrating beam test is a new type of experiment, “it might help to understand what’s really going on,” says engineer and physicist Jürg Dual of ETH Zurich.

The researchers repeatedly bent one of the beams back and forth and used lasers to measure how the second beam responded to the first beam’s varying gravitational pull. To help maintain a stable temperature and avoid external vibrations that could stymie the experiment, the researchers performed their work 80 meters underground, in what was once a military fortress in the Swiss Alps.

Big G, according to the new measurement, is approximately 6.82 × 10⁻¹¹ meters cubed per kilogram per square second. But the estimate has an uncertainty of about 1.6 percent, which is large compared to other measurements (SN: 8/29/18). So the number is not yet precise enough to sway the debate over Big G’s value. But the team now plans to improve their measurement, for example by adding a modified version of the test with rotating bars. That might help cut down on Big G’s wiggle room.