When it comes to hoisting water, plants are real power lifters.
For a tall tree, slurping hundreds of liters of water each day up to its leaves or needles, where photosynthesis takes place, can be quite a haul. Even for short grasses and shrubs, rising sap must somehow overcome gravity and resistance from plant tissues. Now, a first-of-its-kind study has estimated the power needed to lift sap to plants’ foliage worldwide — and it’s a prodigious amount, almost as much as all hydroelectric power generated globally. Over the course of a year, plants expend 9.4 quadrillion watt-hours of energy pumping sap, climatologist Gregory Quetin and colleagues report August 17 in the Journal of Geophysical Research: Biogeosciences. That’s about 90 percent of the hydroelectric energy generated worldwide in 2019.
Evaporation of water from foliage drives the suction that pulls sap upward, says Quetin, of the University of California, Santa Barbara (SN: 3/24/22). To estimate the total evaporative power for all plants on Earth annually, the team divided up a map of the world’s land area into cells that span 0.5° of latitude by 0.5° of longitude and analyzed data for the mix of plants in each cell that were actively pumping sap each month. The power required was highest, unsurprisingly, in tree-rich areas, especially in the rainforests of the tropics. If plants in forest ecosystems had to tap their own energy stores rather than rely on evaporation to pump sap, they’d need to expend about 14 percent of the energy they generated via photosynthesis, the researchers found. Grasses and other plants in nonforest ecosystems would need to expend just over 1 percent of their energy stores, largely because such plants are far shorter and have less resistance to the flow of sap within their tissues than woody plants do.
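The basic physics behind such estimates can be sketched as the work needed to lift water against gravity. Here is a minimal illustration using made-up numbers for a single hypothetical tree; the study’s actual accounting also includes resistance to flow through plant tissues, which this sketch ignores.

```python
# Minimal sketch: average power needed just to lift water against gravity.
# The tree height and daily volume below are illustrative assumptions,
# not figures from the study.

G = 9.81                # gravitational acceleration, m/s^2
SECONDS_PER_DAY = 86_400

def lift_power_watts(liters_per_day, height_m):
    """Average power to raise a daily volume of water to a given height."""
    mass_kg = liters_per_day            # 1 liter of water is about 1 kg
    energy_j = mass_kg * G * height_m   # work against gravity, in joules
    return energy_j / SECONDS_PER_DAY

# A hypothetical 30-meter tree moving 500 liters per day:
print(round(lift_power_watts(500, 30), 2))  # ~1.7 W for this one tree
```

Summed over every plant on Earth, and with tissue resistance included, numbers on this order add up to the study’s continent-scale total.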
On sunny summer days, powerboats pulling water-skiers zip across Georgia’s Lake Oconee, a reservoir located about an hour-and-a-half drive east of Atlanta. For those without a need for speed, fishing beckons.
Little do the lake’s visitors suspect that here lie the remains of a democratic institution that dates to around 500 A.D., more than 1,200 years before the founding of the U.S. Congress.
Reservoir waters, which flooded the Oconee Valley in 1979 after the construction of a nearby dam, partly cover remnants of a 1,500-year-old plaza once bordered by flat-topped earthen mounds and at least three large, circular buildings. Such structures, which have been linked to collective decision making, are known from other southeastern U.S. sites that date to as early as around 1,000 years ago. At the Oconee site, called Cold Springs, artifacts were excavated before the valley became an aquatic playground. Now, new older-than-expected radiocarbon dates for those museum-held finds push back the origin of democratic institutions in the Americas several centuries, a team led by archaeologist Victor Thompson of the University of Georgia in Athens reported May 18 in American Antiquity.
Institutions such as these highlight a growing realization among archaeologists that early innovations in democratic rule emerged independently in many parts of the world. Specifically, these findings add to evidence that Native American institutions devoted to promoting broad participation in political decisions emerged in various regions, including what’s now Canada, the United States and Mexico, long before 18th century Europeans took up the cause of democratic rule by the people.
That conclusion comes as no surprise to members of some Indigenous groups today. “Native people have been trying to convey for centuries that many communities have long-standing institutions [of] democratic and/or republican governance,” says University of Alberta archaeologist S. Margaret Spivey-Faulkner, a citizen of the Pee Dee Indian Nation of Beaver Creek in South Carolina.
Democratic innovations

Scholars have traditionally thought that democracy — generally referring to rule by the people, typically via elected representatives — originated around 2,500 years ago in Greece before spreading elsewhere in Europe. From that perspective, governments in the Americas that qualified as democratic didn’t exist before Europeans showed up.
That argument is as misguided as Christopher Columbus’ assumption that he had arrived in the East Indies, not the Caribbean, in 1492, says archaeologist Jacob Holland-Lulewicz of Penn State, a coauthor of the Cold Springs report. Institutions that enabled representatives of large communities to govern collectively, without kings or ruling chiefs, characterized an unappreciated number of Indigenous societies long before the Italian explorer’s fateful first voyage, Holland-Lulewicz asserts.
In fact, collective decision-making arrangements that kept anyone from amassing too much power and wealth go back thousands, and probably tens of thousands of years in many parts of the world (SN: 11/9/21). The late anthropologist David Graeber and archaeologist David Wengrow of University College London describe evidence for that scenario in their 2021 book The Dawn of Everything.
But only in the last 20 years have archaeologists begun to take seriously claims that ancient forms of democratic rule existed. Scientific investigations informed by Indigenous partners will unveil past political realities “most of us in Indian country take for granted,” Spivey-Faulkner says.
Early consensus

Thompson’s Cold Springs project shows how such a partnership can work.
Ancestors of today’s Muscogee people erected Cold Springs structures within their original homelands, which once covered a big chunk of southeastern North America before the government-forced exodus west along the infamous Trail of Tears. Three members of the Muscogee Nation’s Department of Historic and Cultural Preservation in Okmulgee, Okla., all study coauthors, provided archaeologists with first-hand knowledge of Muscogee society. They emphasized to the researchers that present-day Muscogee councils where open debate informs consensus decisions carry on a tradition that goes back hundreds of generations.
A set of 44 new radiocarbon dates going back 1,500 years for material previously unearthed at the Georgia site, including what were likely interior posts from some structures, then made perfect sense. Earlier analyses in the 1970s of excavated pottery and six radiocarbon dates from two earthen mounds at Cold Springs suggested that they had been constructed at least 1,000 years ago.
Based on the new dating, Thompson’s team found that from roughly 500 A.D. to 700 A.D., Indigenous people at Cold Springs constructed not only earthen mounds but at least three council-style roundhouses — each 12 to 15 meters in diameter — and several smaller structures possibly used as temporary housing during meetings and ceremonies.
Small communities spread across the Oconee Valley formed tight-knit social networks called clans that gathered at council houses through the 1700s, Thompson’s group suspects. Spanish expeditions through the region from 1539 to 1543 did not cause those societies and their traditions to collapse, as has often been assumed, the researchers contend. Excavations and radiocarbon dating at another Oconee Valley Muscogee site called Dyar support that view. A square ground connected to Dyar includes remains of a council house. Activity at the site began as early as 1350 and continued until as late as about 1670, or about 130 years after first encounters with the Spanish, Holland-Lulewicz and colleagues reported in the October 2020 American Antiquity.
Spanish historical accounts mistakenly assumed that powerful chiefs ran Indigenous communities in what have become known as chiefdoms. Many archaeologists have similarly, and just as wrongly, assumed that starting around 1,000 years ago, chiefs monopolized power in southeastern Native American villages, the scientists argue.
Today, members of the Muscogee (Creek) Nation in Oklahoma gather, sometimes by the hundreds or more, in circular structures called council houses to reach collective decisions about various community issues. Council houses typically border public square grounds. That’s a modern-day parallel to the story being told by the ancient architecture at Cold Springs.
“Muscogee councils are the longest-surviving democratic institution in the world,” Holland-Lulewicz says.
Indigenous influencers

Political consensus building by early Muscogee people didn’t occur in a vacuum. Across different regions of precontact North America, institutions that enabled broad participation in democratic governing characterized Indigenous societies that had no kings, central state governments or bureaucracies, Holland-Lulewicz and colleagues report March 11 in Frontiers in Political Science.
The researchers dub such organizations keystone institutions. Representatives of households, communities, clans and religious societies, to name a few, met on equal ground at keystone institutions. Here, all manner of groups and organizations followed common rules to air their opinions and hammer out decisions about, say, distributing crops, organizing ceremonial events and resolving disputes. For example, in the early 1600s, nations of the neighboring Wendat (Huron) and Haudenosaunee people in northeastern North America had formed political alliances known as confederacies, says coauthor Jennifer Birch, a University of Georgia archaeologist. Each population contained roughly 20,000 to 30,000 people. Despite their size, these confederacies did not hold elections in which individuals voted for representatives to a central governing body. Governing consisted of negotiations among intertwined segments of society orchestrated by clans, which claimed members across society.
Clans, in which membership was inherited through the female line, were — and still are — the social glue holding together Wendat (Huron) and Haudenosaunee politics. Residents of different villages or nations among, say, the Haudenosaunee, could belong to the same clan, creating a network of social ties. Excavations of Indigenous villages in eastern North America suggest that the earliest clans date to at least 3,000 years ago, Birch says.
Within clans, men and women held separate council meetings. Some councils addressed civil affairs. Others addressed military and foreign policy, typically after receiving counsel from senior clan women.
Clans controlled seats on confederacy councils of the Wendat and Haudenosaunee. But decisions hinged on negotiation and consensus. A member of a particular clan had no right to interfere in the affairs of any other clan. Members of villages or nations could either accept or reject a clan leader as their council representative. Clans could also join forces to pursue political or military objectives.
Some researchers, including Graeber and Wengrow, suspect a Wendat philosopher and statesman named Kandiaronk influenced ideas about democracy among Enlightenment thinkers in France and elsewhere. A 1703 book based on a French aristocrat’s conversations with Kandiaronk critiqued authoritarian European states and provided an Indigenous case for decentralized, representative governing.
Although Kandiaronk was a real person, it’s unclear whether that book presented his actual ideas or altered them to resemble what Europeans thought of as a “noble savage,” Birch says.
Researchers also debate whether writers of the U.S. Constitution were influenced by how the Haudenosaunee Confederacy distributed power among allied nations. Benjamin Franklin learned about Haudenosaunee politics during the 1740s and 1750s as colonists tried to establish treaties with the confederacy.
Colonists took selected political ideas from the Haudenosaunee Confederacy without grasping its underlying cultural concerns, says University of Alberta anthropological archaeologist Kisha Supernant, a member of an Indigenous population in Canada called Métis. The U.S. Constitution stresses individual freedoms, whereas the Indigenous system addresses collective responsibilities to manage the land, water, animals and people, she says.
Anti-Aztec equality

If democratic institutions are cultural experiments in power sharing, one of the most interesting examples emerged around 700 years ago in central Mexico.
In response to growing hostilities from surrounding allies of the Aztec Empire, a multi-ethnic confederation of villages called Tlaxcallan built a densely occupied city of the same name. When Spaniards arrived in 1519, they wrote of Tlaxcallan as a city without kings, rulers or wealthy elites. Until the last decade, Mexican historians had argued that Tlaxcallan was a minor settlement, not a city. They dismissed historical Spanish accounts as exaggerations of the newcomers’ exploits.
Opinions changed after a team led by archaeologist Lane Fargher of Mexico’s Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional (Cinvestav del IPN) in Merida surveyed and mapped visible remains of Tlaxcallan structures from 2007 to 2010. Excavations followed from 2015 through 2018, revealing a much larger and denser settlement than previously suspected.
The ancient city covers a series of hilltops and hillsides, Fargher says. Large terraces carved out of hillsides supported houses, public structures, plazas, earthen mounds and roadways. Around 35,000 people inhabited an area of about 4.5 square kilometers in the early 1500s.
Artifacts recovered at plazas indicate that those open spaces hosted commercial, political and religious activities. Houses clustered around plazas. Even the largest residences were modest in size, not much larger than the smallest houses. Palaces of kings and political big shots in neighboring societies, including the Aztecs, dwarfed Tlaxcallan houses. Excavations and Spanish accounts add up to a scenario in which all Tlaxcallan citizens could participate in governmental affairs. Anyone known to provide good advice on local issues could be elected by their neighbors in a residential district to a citywide ruling council, or senate, consisting of between 50 and 200 members. Council meetings were held at a civic-ceremonial center built on a hilltop about one kilometer from Tlaxcallan.
As many as 4,000 people attended council meetings regarding issues of utmost importance, such as launching military campaigns, Fargher says.
Those chosen for council positions had to endure a public ceremony in which they were stripped naked, shoved, hit and insulted as a reminder that they served the people. Political officials who accumulated too much wealth could be publicly punished, replaced or even killed.
Tlaxcallan wasn’t a social utopia. Women, for instance, had limited political power, possibly because the main route to government positions involved stints of military service. But in many ways, political participation at Tlaxcallan equaled or exceeded that documented for ancient Greek democracy, Fargher and colleagues reported March 29 in Frontiers in Political Science. Greeks from all walks of life gathered in public spaces to speak freely about political issues. But commoners and the poor could not hold the highest political offices. And again, women were excluded.
Good government

Tlaxcallan aligned itself with Spanish conquerors against their common Aztec enemy. Then in 1545, the Spanish divided the Tlaxcallan state into four fiefdoms, ending Tlaxcallan’s homegrown style of democratic rule.
The story of this fierce, equality-minded government illustrates the impermanence of political systems that broadly distribute power, Fargher says. Research on past societies worldwide “shows us how bad the human species is at building and maintaining democratic governments,” he contends.
Archaeologist Richard Blanton of Purdue University and colleagues, including Fargher, analyzed whether 30 premodern societies dating to as early as around 3,000 years ago displayed signs of “good government.” An overall score of government quality included evidence of systems for providing equal justice, fair taxation, control over political officials’ power and a political voice for all citizens.
Only eight societies received high scores, versus 12 that scored low, Blanton’s group reported in the February 2021 Current Anthropology. The remaining 10 societies partly qualified as good governments. Many practices of societies scoring highest on good government mirrored policies of liberal democracies over the past century, the researchers concluded.
That’s only a partial view of how past governments operated. But surveys of modern nations suggest that no more than half feature strong democratic institutions, Fargher says.
Probing the range of democratic institutions that societies have devised over the millennia may inspire reforms to modern democratic nations facing growing income disparities and public distrust of authorities, Holland-Lulewicz suspects. Leaders and citizens of stressed democracies today might start with a course on power sharing in Indigenous societies. School will be in session at the next meeting of the Muscogee National Council.
A century from now, when biologists are playing games of clones and engineers are playing games of drones, physicists will still pledge their loyalty to the Kingdoms of Substance and Force.
Physicists know the subjects of these kingdoms as fermions and bosons. Fermions are the fundamental particles of matter; bosons transmit the forces that govern the behavior of the matter particles. The math describing these particles and their relationships forms the “standard model” of particle physics. Or as Nobel laureate Frank Wilczek calls it, “The Core Theory.”

Wilczek’s core theory differs from the usual notion of the standard model. His core includes gravity, as described by Einstein’s general theory of relativity. General relativity is an exquisite theory of gravity, but it doesn’t fit in with the math for the standard model’s three forces (the strong and weak nuclear forces and electromagnetism). But maybe someday it will. Perhaps even by 100 years from now.
At least, that’s among the many predictions that Wilczek has made for the century ahead. In a recent paper titled “Physics in 100 Years,” he offers a forecast for future discoveries and inventions that science writers of the future will be salivating over. (The paper is based on a talk celebrating the 250th anniversary of Brown University. He was asked to make predictions for 250 years from now, but regarded 100 as more reasonable.)
Wilczek does not claim that his forecast will be accurate. He considers it more an exercise in imagination, anchored in thorough knowledge of today’s major questions and the latest advances in scientific techniques and capabilities. Where those two factors meet, Wilczek sees the potential for premonition. His ruminations result in a vision of the future suitable for a trilogy or two of science fiction films. They would involve the unification of the kingdoms of physics and a more intimate relationship between them and the human mind.
Among Wilczek’s prognostications is the discovery of supersymmetric particles, heavyweight partners to the matter and force particles of the Core Theory. Such partner particles would reveal a deep symmetry underlying matter and force, thereby combining the kingdoms and further promoting the idea of unification as a key path to truth about nature. Wilczek also foresees the discovery of proton decay, even though exhaustive searches have so far failed to find it. If protons disintegrate (after, on average, trillions upon trillions of years), matter as we know it has a limited lease on life. On the other hand, the failure to observe proton decay has been a barrier to figuring out a theory that successfully unifies the math for all of nature’s particles and forces. And Wilczek predicts that:
The unification of gravity with the other forces will become more intimate, and have observable consequences.
He also anticipates that gravitational waves will be observed and used to probe the physics of the distant (and early) universe; that the laws of physics, rather than emphasizing energy, will someday be rewritten in terms of “information and its transformations”; and that “biological memory, cognitive processing, motivation, and emotion will be understood at the molecular level.”
And all that’s just the beginning. He then assesses the implications of future advances in computing. Part of the coming computation revolution, he foresees, will focus on its use for doing science:
Calculation will increasingly replace experimentation in design of useful materials, catalysts, and drugs, leading to much greater efficiency and new opportunities for creativity.
Advanced calculational power will also be applied to understanding the atomic nucleus more precisely, conferring the ability…
to manipulate atomic nuclei dexterously … enabling (for example) ultradense energy storage and ultrahigh energy lasers.
Even more dramatically, computing power will be employed to enhance itself:
Capable three-dimensional, fault-tolerant, self-repairing computers will be developed.… Self-assembling, self-reproducing, and autonomously creative machines will be developed.
And those achievements will imply that:
Bootstrap engineering projects wherein machines, starting from crude raw materials, and with minimal human supervision, build other sophisticated machines (notably including titanic computers) will be underway.
Ultimately, such sophisticated computing machines will enable artificial intelligence that would even impress Harold Finch on Person of Interest (which is probably Edward Snowden’s favorite TV show).
Imagine, for instance, the ways that superpowerful computing could enhance the human senses. Aided by electronic prosthetics, people could experience the full continuous range of colors in the visible part of the electromagnetic spectrum, not just those accessible to the tricolor-sensitive human eye. Perhaps the beauty that physicists and mathematicians “see” in their equations can be transformed into works of art beamed directly into the brain.
Artificial intelligence endowed with such power would enable many other futuristic fantasies. As Wilczek notes, the “life of mind” could be altered in strange new ways. For one thing, computationally precise knowledge of a state of mind would permit new possibilities for manipulating it. “An entity capable of accurately recording its state could purposefully enter loops, to re-live especially enjoyable episodes,” Wilczek points out.
And if all that doesn’t sound weird enough, we haven’t even invoked quantum mechanics yet. Wilczek forecasts that large-scale quantum computers will be realized, in turn leading to “quantum artificial intelligence.”
“A quantum mind could experience a superposition of ‘mutually contradictory’ states, or allow different parts of its wave function to explore vastly different scenarios in parallel,” Wilczek points out. “Being based on reversible computation, such a mind could revisit the past at will, and could be equipped to superpose past and present.”
And with quantum artificial intelligence at its disposal, the human mind’s sensory tentacles will not merely be enhanced but also dispersed. With quantum communication, humans can be linked by quantum messaging to sensory devices at vast distances from their bodies. “An immersive experience of ‘being there’ will not necessarily involve being there, physically,” Wilczek writes. “This will be an important element of the expansion of human culture beyond Earth.”
In other words, it will be a web of intelligence, rather than a network of physical settlements, that will expand human culture throughout the cosmos. Such “expanded identities” will be able to comprehend the kingdoms of substance and force on their own quantum terms, as the mind itself merges with space and time.
Wilczek’s visions imply a future existence in which nature is viewed from a vastly different perspective, conditioned by a radical reorientation of the human mind to its world. And perhaps messing with the mind so drastically should be worrisome. But let’s not forget that the century gone by has also messed with the mind and its perspectives in profound ways — with television, for instance, talk radio, the Internet, smartphones and blogs. A little quantum computer mind manipulation is unlikely to make things any worse.
Sci-fi novels and films like Gattaca no longer have a monopoly on genetically engineered humans. Real research scripts about editing the human genome are now appearing in scientific and medical journals. But the reviews are mixed.
In Gattaca, nearly everyone was genetically altered, their DNA adjusted to prevent disease, enhance intelligence and make them look good. Today, only people treated with gene therapy have genetically engineered DNA. But powerful new gene editing tools could expand the scope of DNA alteration, forever changing humans’ genetic destiny.
Not everyone thinks scientists should wield that power. Kindling the debate is a report by scientists from Sun Yat-sen University in Guangzhou, China, who have edited a gene in fertilized human eggs, called zygotes. The team used new gene editing technology known as the CRISPR/Cas9 system, which can precisely snip out a disease-causing mutation and replace it with healthy DNA. CRISPR/Cas9 has edited DNA in human stem cells and cancer cells. Researchers have also deployed the molecules to engineer other animals, including mice and monkeys (SN Online: 3/31/14; SN: 3/8/14, p. 7). But it had never before been used to alter human embryos.

The team’s results, reported April 18 in Protein & Cell, sparked a flurry of headlines because the experiment modified human germline tissue (SN Online: 4/23/15). While most people think it is all right to fix faulty genes in mature body, or somatic, cells, tinkering with the germ line — eggs, sperm or the tissues that produce those reproductive cells — crosses an ethical line for many. Germline changes can be passed on to future generations, and critics worry that allowing genetic engineering to correct diseases in germline tissues could pave the way for creating designer babies or other abuses whose consequences would persist for generations.
“How do you draw a clear, meaningful line between therapy and enhancement?” ponders Marcy Darnovsky, executive director of the Center for Genetics and Society in Berkeley, Calif. About 40 countries ban or restrict such inherited DNA modifications.
Rumors about human germline editing experiments prompted scientists to gather in January in Napa, Calif. Discussions there led two groups to publish recommendations. One group, reporting March 26 in Nature, called for scientists to “agree not to modify the DNA of human reproductive cells,” including the nonviable zygotes used in the Chinese study. A second group, writing in Science April 3, called for a moratorium on the clinical use of human germline engineering, but stopped short of saying the technology shouldn’t be used in research. Those researchers say that while CRISPR technology is still too primitive for safe use in patients, further research is needed to improve it. But those publishing in Nature disagreed.
“Are there ever any therapeutic uses that would demand … modification of the human germ line? We don’t think there are any,” says Edward Lanphier, president of Sangamo BioSciences in Richmond, Calif. “Modifying the germ line is crossing the line that most countries on our planet have said is never appropriate to cross.”
If germline editing is never going to be allowed, there is no reason to conduct research using human embryos or reproductive cells, he says. Sangamo BioSciences is developing gene editing tools for use in somatic cells, an approach that germline editing might render unneeded. Lanphier denies that financial interests play a role in his objection to germline editing.
Other researchers, including Harvard University geneticist George Church, think germline editing may well be the only solution for some people with rare, inherited diseases. “What people want is safety and efficacy,” says Church. “If you ban experiments aimed at improving safety and efficacy, we’ll never get there.”
The zygote experiments certainly demonstrate that CRISPR technology is not ready for clinical use yet. The researchers attempted to edit the beta globin, or HBB, gene. Mutations in that gene cause the inherited blood disorder beta-thalassemia. CRISPR/Cas9 molecules were engineered to seek out HBB and cut it where a piece of single-stranded DNA could heal the breach, creating a copy of the gene without mutations. That strategy succeeded in only four of the 86 embryos that the researchers attempted to edit. Those edited embryos contained a mix of cells, some with the gene edited and some without.
In an additional seven embryos, the HBB gene cut was repaired using the nearby HBD gene instead of the single-stranded DNA. The researchers also found that the molecular scissors snipped other genes that the researchers never intended to touch.
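Tallied up, those counts make the technique’s current unreliability concrete. A quick back-of-the-envelope calculation using only the numbers reported above:

```python
# Back-of-the-envelope tally of the zygote-editing outcomes described above.
# The counts come from the text; the percentages are simple derived figures.

attempted = 86    # embryos the researchers tried to edit
edited = 4        # carried the intended HBB repair (and even these were mosaic)
off_template = 7  # repaired using the nearby HBD gene instead of the template

print(f"intended edits:       {edited / attempted:.1%}")        # ~4.7%
print(f"off-template repairs: {off_template / attempted:.1%}")  # ~8.1%
```

Under 5 percent intended success, plus mosaicism and unintended cuts elsewhere in the genome, is far from the safety and efficacy bar that any clinical application would require.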
“Taken together, our work highlights the pressing need to further improve the fidelity and specificity of the CRISPR/Cas9 platform, a prerequisite for any clinical applications,” the researchers wrote.
The Chinese researchers crossed no ethical lines, Church contends. “They tried to dot i’s and cross t’s on the ethical questions.” The zygotes could not develop into a person, for instance: They had three sets of chromosomes, having been fertilized by two sperm in lab dishes.
Viable or not, germline cells should be off limits, says Darnovsky. She opposes all types of human germline modification, including a procedure approved in the United Kingdom in February for preventing mitochondrial diseases. The U.K. prohibits all other germline editing.
Mitochondria, the power plants that churn out energy in a cell, each carry a circle of DNA containing genes necessary for the organelle’s function. Mothers pass mitochondria on to their offspring through the egg. About one in 5,000 babies worldwide are born with mitochondrial DNA mutations that cause disease, particularly in energy-greedy organs such as the muscles, heart and brain.
Such diseases could be circumvented with a germline editing method known as mitochondrial replacement therapy (SN: 11/17/12, p. 5). In a procedure pioneered by scientists at Oregon Health & Science University, researchers first pluck the nucleus, where the bulk of genetic instructions for making a person are stored, out of the egg of a woman who carries mutant mitochondria. That nucleus is then inserted into a donor egg containing healthy mitochondria. The transfer would produce a person with three genetic parents: most genes inherited from the mother and father, plus mitochondrial DNA from the anonymous donor. The first babies produced through that technology could be born in the U.K. next year.
Yet another new gene-editing technique could eliminate the need to use donor eggs by specifically destroying only disease-carrying mitochondria, researchers from the Salk Institute for Biological Studies in La Jolla, Calif., reported April 23 in Cell (SN Online: 4/23/15).
Such unproven technologies shouldn’t be attempted when alternatives already exist, Darnovsky says, such as screening embryos created through in vitro fertilization and discarding those likely to develop the disease.
But banning genome-altering technology could leave people with genetic diseases, and society in general, in the lurch, says molecular biologist Matthew Porteus of Stanford University.
“There is no benefit in my mind of having a child born with a devastating genetic disease,” he says.
Alternatives to germline editing come with their own ethical quandaries, he says. Gene testing of embryos may require creating a dozen or more embryos before finding one that doesn’t carry the disease. The rest of the embryos would be destroyed. Many people find that prospect ethically questionable.
But that doesn’t argue for sliding into Gattaca territory, where genetic modification becomes mandatory. “If we get there,” says Porteus, “we’ve really screwed up.”
Biologist Martin Dančák didn’t set out to find a plant species new to science. But on a hike through a rainforest in Borneo, he and colleagues stumbled on a subterranean surprise.
Hidden beneath the soil and inside dark, mossy pockets below tree roots, carnivorous pitcher plants dangled their deathtraps underground. The pitchers can look like hollow eggplants and probably lure unsuspecting prey into their sewer hole-like traps. Once an ant or a beetle steps in, the insect falls to its death, drowning in a stew of digestive juices (SN: 11/22/16). Until now, scientists had never observed pitcher plants with traps almost exclusively entombed in earth. “We were, of course, astonished as nobody would expect that a pitcher plant with underground traps could exist,” says Dančák, of Palacký University in Olomouc, Czech Republic.
That’s because pitchers tend to be fragile. But the new species’ hidden traps have fleshy walls that may help them push against soil as they grow underground, Dančák and colleagues report June 23 in PhytoKeys. Because the buried pitchers stay concealed from sight, the team named the species Nepenthes pudica, a nod to the Latin word for bashful.
The work “highlights how much biodiversity still exists that we haven’t fully discovered,” says Leonora Bittleston, a biologist at Boise State University in Idaho who was not involved with the study. It’s possible that other pitcher plant species may have traps lurking underground and scientists just haven’t noticed yet, she says. “I think a lot of people don’t really dig down.”
The next generation of dark matter detectors has arrived.
A massive new effort to detect the elusive substance has reported its first results. Following a time-honored tradition of dark matter hunters, the experiment, called LZ, didn’t find dark matter. But it has done so better than ever before, physicists report July 7 in a virtual webinar and a paper posted on LZ’s website. And with several additional years of data-taking planned from LZ and other experiments like it, physicists are hopeful they’ll finally get a glimpse of dark matter. “Dark matter remains one of the biggest mysteries in particle physics today,” LZ spokesperson Hugh Lippincott, a physicist at the University of California, Santa Barbara, said during the webinar.
LZ, or LUX-ZEPLIN, aims to discover the unidentified particles that are thought to make up most of the universe’s matter. Although no one has ever conclusively detected a particle of dark matter, its influence on the universe can be seen in the motions of stars and galaxies, and via other cosmic observations (SN: 7/24/18).
Located about 1.5 kilometers underground at the Sanford Underground Research Facility in Lead, S.D., the detector is filled with 10 metric tons of liquid xenon. If dark matter particles crash into the nuclei of any of those xenon atoms, they would produce flashes of light that the detector would pick up.
The LZ experiment is one of a new generation of bigger, badder dark matter detectors based on liquid xenon, which also includes XENONnT in Gran Sasso National Laboratory in Italy and PandaX-4T in the China Jinping Underground Laboratory. The experiments aim to detect a theorized type of dark matter called Weakly Interacting Massive Particles, or WIMPs (SN: 12/13/16). Scientists scaled up the search to allow for a better chance of spying the particles, with each detector containing multiple tons of liquid xenon.
Using only about 60 days’ worth of data, LZ has already surpassed earlier efforts to pin down WIMPs (SN: 5/28/18). “It’s really impressive what they’ve been able to pull off; it’s a technological marvel,” says theoretical physicist Dan Hooper of Fermilab in Batavia, Ill., who was not involved with the study.
Although LZ’s search came up empty, “the way something’s going to be discovered is when you have multiple years in a row of running,” says LZ collaborator Matthew Szydagis, a physicist at the University at Albany in New York. LZ is expected to run for about five years, and data from that extended period may provide physicists’ best chance to find the particles.
Now that the detector has proven its potential, says LZ physicist Kevin Lesko of Lawrence Berkeley National Laboratory in California, “we’re excited about what we’re going to see.”
Tyrannosaurus rex’s tiny arms have launched a thousand sarcastic memes: I love you this much; can you pass the salt?; row, row, row your … oh.
But back off, snarky jokesters. A newfound species of big-headed carnivorous dinosaur with tiny forelimbs suggests those arms weren’t just an evolutionary punchline. Arm reduction — alongside giant heads — evolved independently in different dinosaur lineages, researchers report July 7 in Current Biology.
Meraxes gigas, named for a dragon in George R. R. Martin’s “A Song of Ice and Fire” book series, lived between 100 million and 90 million years ago in what’s now Argentina, says Juan Canale, a paleontologist with the country’s CONICET research network who is based in Buenos Aires. Despite the resemblance to T. rex, M. gigas wasn’t a tyrannosaur; it was a carcharodontosaur — a member of a distantly related, lesser-known group of predatory theropod dinosaurs. M. gigas went extinct nearly 20 million years before T. rex walked on Earth. The M. gigas individual described by Canale and colleagues was about 45 years old and weighed more than four metric tons when it died, they estimate. The fossilized specimen is about 11 meters long, and its skull is heavily ornamented with crests, bumps and tiny hornlets, features that probably helped attract mates.
Why these dinosaurs had such tiny arms is an enduring mystery. They weren’t for hunting: Both T. rex and M. gigas used their massive heads to hunt prey (SN: 10/22/18). The arms may have shrunk so they were out of the way during the frenzy of group feeding on carcasses.
But, Canale says, M. gigas’ arms were surprisingly muscular, suggesting they were more than just an inconvenient limb. One possibility is that the arms helped lift the animal from a reclining to a standing position. Another is that they aided in mating — perhaps showing a mate some love.
Getting a COVID-19 test has become a regular part of many college students’ lives. That ritual may protect not just those students’ classmates and professors but also their municipal bus drivers, neighbors and other members of the local community, a new study suggests.
Counties where colleges and universities did COVID-19 testing saw fewer COVID-19 cases and deaths than ones with schools that did not do any testing in the fall of 2020, researchers report June 23 in PLOS Digital Health. While previous analyses have shown that counties with colleges that brought students back to campus had more COVID-19 cases than those that continued online instruction, this is the first look at the impact of campus testing on those communities on a national scale (SN: 2/23/21). “It’s tough to think of universities as just silos within cities; it’s just much more permeable than that,” says Brennan Klein, a network scientist at Northeastern University in Boston.
Colleges that tested their students generally did not see significantly lower case counts than schools that didn’t do testing, Klein and his colleagues found. But the communities surrounding these schools did see fewer cases and deaths. That’s because towns with colleges conducting regular testing had a more accurate sense of how much COVID-19 was circulating in their communities, Klein says, which allowed those towns to understand the risk level and put masking policies and other mitigation strategies in place.
The results highlight the crucial role testing can continue to play as students return to campus this fall, says Sam Scarpino, vice president of pathogen surveillance at the Rockefeller Foundation’s Pandemic Prevention Institute in Washington, D.C. Testing “may not be optional in the fall if we want to keep colleges and universities open safely,” he says.

Finding a flight path

As SARS-CoV-2, the virus that causes COVID-19, rapidly spread around the world in the spring of 2020, it had a swift impact on U.S. college students. Most were abruptly sent home from their dorm rooms, lecture halls, study abroad programs and even spring break outings to spend what would be the remainder of the semester online. And with the start of the fall semester just months away, schools were “flying blind” as to how to bring students back to campus safely, Klein says.
That fall, Klein, Scarpino and their collaborators began to put together a potential flight path for schools by collecting data from COVID-19 dashboards created by universities and the counties surrounding those schools to track cases. The researchers classified schools based on whether they had opted for entirely online learning or in-person teaching. They then divided the schools with in-person learning based on whether they did any testing.
It’s not a perfect comparison, Klein says, because this method groups schools that did one round of testing with those that did consistent surveillance testing. But the team’s analyses still generally show how colleges’ pandemic response impacted their local communities.
Overall, counties with colleges saw more cases and deaths than counties without schools. However, testing helped minimize the increase in cases and deaths. During the fall semester, from August to December, counties with colleges that did testing saw on average 14 fewer deaths per 100,000 people than counties with colleges that brought students back with no testing — 56 deaths per 100,000 versus about 70. The University of Massachusetts Amherst, with nearly 30,000 undergraduate and graduate students in 2020, is one case study of the value of testing, Klein says. Throughout the fall semester, the school tested students twice a week. That meant that three times as many tests occurred in the city of Amherst as in neighboring cities, he says. For much of the fall and winter, Amherst had fewer COVID-19 cases per 1,000 residents than its neighboring counties and statewide averages.
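The county-level comparison above is simple rate arithmetic. A minimal sketch, using only the figures quoted in the article:

```python
# Death rates per 100,000 residents during the fall 2020 semester,
# as reported in the PLOS Digital Health study (figures from the article).
deaths_with_testing = 56     # counties whose colleges tested students
deaths_without_testing = 70  # counties whose colleges brought students back untested

# The gap the study associates with campus testing programs
fewer_deaths = deaths_without_testing - deaths_with_testing
print(fewer_deaths)  # 14 fewer deaths per 100,000 people
```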
Once students left for winter break, campus testing stopped, so overall local testing dropped. When students returned for spring semester in February 2021, area cases spiked — possibly driven by students bringing the coronavirus back from their travels and by exposure to local residents whose cases may have been missed due to the drop in local testing. Students returned “to a town that has more COVID than they realize,” Klein says.
Renewed campus testing not only picked up the spike but quickly prompted mitigation strategies. The university moved classes to Zoom and asked students to remain in their rooms, at one point even telling them that they should not go on walks outdoors. By mid-March, the university had reduced the spread of cases on campus, and the town once again had a lower COVID-19 case rate than its neighbors for the remainder of the semester, the team found.
The value of testing

It’s helpful to know that testing overall helped protect local communities, says David Paltiel, a public health researcher at the Yale School of Public Health who was not involved with the study. Paltiel was one of the first researchers to call for routine testing on college campuses, regardless of whether students had symptoms.
“I believe that testing and masking and all those things probably were really useful, because in the fall of 2020 we didn’t have a vaccine yet,” he says. Quickly identifying cases and isolating affected students, he adds, was key at the time. But each school is unique, he says, and the benefit of testing probably varied between schools. And two and a half years into the pandemic, the cost-benefit calculation has changed now that vaccines are widely available and schools are faced with newer variants of SARS-CoV-2. Some of those variants spread so quickly that even testing twice a week may not catch all cases on campus quickly enough to stop their spread, he says.
As colleges and universities prepare for the fall 2022 semester, he would recommend schools consider testing students as they return to campus with less frequent follow-up surveillance testing to “make sure things aren’t spinning crazy out of control.”
Still, the study shows that regular campus testing can benefit the broader community, Scarpino says. In fact, he hopes to capitalize on the interest in testing for COVID-19 to roll out more expansive public health testing for multiple respiratory viruses, including the flu, in places like college campuses. In addition to PCR tests — the kind that involve sticking a swab up your nose — such efforts might also analyze wastewater and air within buildings for pathogens (SN: 5/28/20).
Unchecked coronavirus transmission continues to disrupt lives — in the United States and globally — and new variants will continue to emerge, he says. “We need to be prepared for another surge of SARS-CoV-2 in the fall when the schools reopen, and we’re back in respiratory season.”
In the quest to measure the fundamental constant that governs the strength of gravity, scientists are getting a wiggle on.
Using a pair of meter-long, vibrating metal beams, scientists have made a new measurement of “Big G,” also known as Newton’s gravitational constant, researchers report July 11 in Nature Physics. The technique could help physicists get a better handle on the poorly measured constant.
Big G is notoriously difficult to determine (SN: 9/12/13). Previous estimates of the constant disagree with one another, leaving scientists in a muddle over its true value. It is the least precisely known of the fundamental constants, a group of numbers that commonly show up in equations, making them a prime target for precise measurements. Because the vibrating beam test is a new type of experiment, “it might help to understand what’s really going on,” says engineer and physicist Jürg Dual of ETH Zurich.
The researchers repeatedly bent one of the beams back and forth and used lasers to measure how the second beam responded to the first beam’s varying gravitational pull. To help maintain a stable temperature and avoid external vibrations that could stymie the experiment, the researchers performed their work 80 meters underground, in what was once a military fortress in the Swiss Alps.
Big G, according to the new measurement, is approximately 6.82 × 10⁻¹¹ cubic meters per kilogram per second squared. But the estimate has an uncertainty of about 1.6 percent, which is large compared with other measurements (SN: 8/29/18). So the number is not yet precise enough to sway the debate over Big G’s value. But the team now plans to improve its measurement, for example by adding a modified version of the test with rotating bars. That might help cut down on Big G’s wiggle room.
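For a sense of scale, the quoted relative uncertainty translates into an absolute error band with one multiplication. A minimal sketch, using only the value and percentage reported above:

```python
# Big G as reported by the vibrating-beam experiment (figures from the article).
G = 6.82e-11      # cubic meters per kilogram per second squared
rel_unc = 0.016   # roughly 1.6 percent relative uncertainty

# Half-width of the implied uncertainty band, in the same units as G
abs_unc = G * rel_unc
print(f"G = ({G:.2e} ± {abs_unc:.1e}) m^3 kg^-1 s^-2")
```

That band, about ±1.1 × 10⁻¹², spans a range far wider than the disagreements among earlier, more precise measurements — which is why the new result can’t yet settle the debate.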
No beast on Earth is tougher than the tiny tardigrade. It can survive being frozen at -272° Celsius, being exposed to the vacuum of outer space and even being blasted with 500 times the dose of X-rays that would kill a human.
In other words, the creature can endure conditions that don’t even exist on Earth. This otherworldly resilience, combined with their endearing looks, has made tardigrades a favorite of animal lovers. But beyond that, researchers are looking to the microscopic animals, about the size of a dust mite, to learn how to prepare humans and crops to handle the rigors of space travel. The tardigrade’s indestructibility stems from its adaptations to its environment — which may seem surprising, since it lives in seemingly cushy places, like the cool, wet clumps of moss that dot a garden wall. In homage to such habitats, along with a pudgy appearance, some people call tardigrades water bears or, adorably, moss piglets.
But it turns out that a tardigrade’s damp, mossy home can dry out many times each year. Drying is pretty catastrophic for most living things. It damages cells in some of the same ways that freezing, vacuum and radiation do.
For one thing, drying leads to high levels of peroxides and other reactive oxygen species. These toxic molecules chisel a cell’s DNA into short fragments — just as radiation does. Drying also causes cell membranes to wrinkle and crack. And it can lead delicate proteins to unfold, rendering them as useless as crumpled paper airplanes. Tardigrades have evolved special strategies for dealing with these kinds of damage. As a tardigrade dries out, its cells gush out several strange proteins that are unlike anything found in other animals. In water, the proteins are floppy and shapeless. But as water disappears, the proteins self-assemble into long, crisscrossing fibers that fill the cell’s interior. Like Styrofoam packing peanuts, the fibers support the cell’s membranes and proteins, preventing them from breaking or unfolding.
At least two species of tardigrade also produce another protein found in no other animal on Earth. This protein, dubbed Dsup, short for “damage suppressor,” binds to DNA and may physically shield it from reactive forms of oxygen.
Emulating tardigrades could one day help humans colonize outer space. Food crops, yeast and insects could be engineered to produce tardigrade proteins, allowing these organisms to grow more efficiently on spacecraft where levels of radiation are elevated compared with on Earth.
Scientists have already inserted the gene for the Dsup protein into human cells in the lab. Many of those modified cells survived levels of X-rays or peroxide chemicals that kill ordinary cells (SN: 11/9/19, p. 13). And when inserted into tobacco plants — an experimental model for food crops — the gene for Dsup seemed to protect the plants from exposure to a DNA-damaging chemical called ethyl methanesulfonate. Plants with the extra gene grew more quickly than those without it. Plants with Dsup also incurred less DNA damage when exposed to ultraviolet radiation. Tardigrades’ “packing peanut” proteins show early signs of being protective for humans. When modified to produce those proteins, human cells became resistant to camptothecin, a cell-killing chemotherapy agent, researchers reported in the March 18 ACS Synthetic Biology. The tardigrade proteins did this by inhibiting apoptosis, a cellular self-destruct program that is often triggered by exposure to harmful chemicals or radiation.
So if humans ever succeed in reaching the stars, they may accomplish this feat, in part, by standing on the shoulders of the tiny eight-legged endurance specialists in your backyard.