The infant universe transforms from a featureless landscape to an intricate web in a new supercomputer simulation of the cosmos’s formative years.
An animation from the simulation shows our universe changing from a smooth, cold gas cloud to the lumpy scattering of galaxies and stars that we see today. It’s the most complete, detailed and accurate reproduction of the universe’s evolution yet produced, researchers report in the November Monthly Notices of the Royal Astronomical Society.
This virtual glimpse into the cosmos’s past is the result of CoDaIII, the third iteration of the Cosmic Dawn Project, which traces the history of the universe, beginning with the “cosmic dark ages” about 10 million years after the Big Bang. At that point, hot gas produced at the very beginning of time, about 13.8 billion years ago, had cooled to a featureless cloud devoid of light, says astronomer Paul Shapiro of the University of Texas at Austin. Roughly 100 million years later, tiny ripples in the gas left over from the Big Bang caused the gas to clump together (SN: 2/19/15). This led to long, threadlike strands that formed a web of matter where galaxies and stars were born.
As radiation from the early galaxies illuminated the universe, it ripped electrons from atoms in the once-cold gas clouds during a period called the epoch of reionization, which continued until about 700 million years after the Big Bang (SN: 2/6/17).
CoDaIII is the first simulation to fully account for the complicated interaction between radiation and the flow of matter in the universe, Shapiro says. It spans the time from the cosmic dark ages through the next several billion years, as the distribution of matter in the modern universe took shape.
The animation from the simulation, Shapiro says, graphically shows how the structure of the early universe is “imprinted on the galaxies today, which remember their youth, or their birth or their ancestors from the epoch of reionization.”
An ancient hominid dubbed Homo naledi may have lit controlled fires in the pitch-dark chambers of an underground cave system, new discoveries hint.
Researchers have found remnants of small fireplaces and sooty wall and ceiling smudges in passages and chambers throughout South Africa’s Rising Star cave complex, paleoanthropologist Lee Berger announced in a December 1 lecture hosted by the Carnegie Institution of Science in Washington, D.C.
“Signs of fire use are everywhere in this cave system,” said Berger, of the University of the Witwatersrand, Johannesburg.
H. naledi presumably lit the blazes in the caves, since remains of no other hominids have turned up there, the team says. But the researchers have yet to determine the age of the fire remains. And researchers outside Berger’s group have yet to evaluate the new finds.
H. naledi fossils date to between 335,000 and 236,000 years ago (SN: 5/9/17), around the time Homo sapiens originated (SN: 6/7/17). Many researchers suspect that regular use of fire by hominids for light, warmth and cooking began roughly 400,000 years ago (SN: 4/2/12).
Such behavior has not been attributed to H. naledi before, largely because of its small brain. But it’s now clear that a brain roughly one-third the size of human brains today still enabled H. naledi to achieve control of fire, Berger contends.
Last August, Berger climbed down a narrow shaft and examined two underground chambers where H. naledi fossils had been found. He noticed stalactites and thin rock sheets that had partly grown over older ceiling surfaces. Those surfaces displayed blackened, burned areas and were also dotted by what appeared to be soot particles, Berger said.
Meanwhile, expedition codirector and Wits paleoanthropologist Keneiloe Molopyane led excavations of a nearby cave chamber. There, the researchers uncovered two small fireplaces containing charred bits of wood, and burned bones of antelopes and other animals. Remains of a fireplace and nearby burned animal bones were then discovered in a more remote cave chamber where H. naledi fossils have been found.
Still, the main challenge for investigators will be to date the burned wood and bones and other fire remains from the Rising Star chambers and demonstrate that the fireplaces there come from the same sediment layers as H. naledi fossils, says paleoanthropologist W. Andrew Barr of George Washington University in Washington, D.C., who wasn’t involved in the work.
“That’s an absolutely critical first step before it will be possible to speculate about who may have made fires for what reason,” Barr says.
Marsupials may have richer social lives than previously thought.
Generally considered loners, the pouched animals have a wide diversity of social relationships that have gone unrecognized, a new analysis published October 26 in Proceedings of the Royal Society B suggests. The findings could have implications for how scientists think about the lifestyles of early mammals.
“These findings are helpful to move us away from a linear thinking that used to exist in some parts of evolutionary theory, that species develop from supposedly simple into more complex forms,” says Dieter Lukas, an evolutionary ecologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who was not involved with the study.
Mammals run the gamut of social organization systems, ranging from loose, ephemeral interactions like aggregations of jaguars in the South American wetlands to the antlike subterranean societies of naked mole-rats (SN: 10/13/21; SN: 10/20/20).
But marsupials — a subgroup of mammals that give birth to relatively underdeveloped young reared in pouches — have traditionally been considered largely solitary. Some kangaroo species were known to form transient or permanent groups of dozens of individuals. But among marsupials, long-term bonds between males and females were thought rare and there were no known examples of group members cooperating to raise young. Previous work on patterns of mammalian social evolution regarded about 90 percent of examined marsupial species to be solitary.
“If you look at other [studies] about some specific species, you will see [the researchers] tend to assume that the marsupials are solitary,” says Jingyu Qiu, a behavioral ecologist at CNRS in Strasbourg, France.
Sorting social lives
Qiu and her colleagues developed a database of field studies that illuminated marsupial social organization, taking into account how populations vary within a species and delving into the evolutionary history of marsupial social lives. The researchers compiled data from 120 studies on 149 populations of 65 marsupial species, categorizing each population as solitary, living in pairs — such as one male and one female — or falling into four kinds of group living: one male and multiple females, one female and multiple males, multiple males and females, or single-sex groups.
While 19 species, or 31 percent of those studied, appear to go strictly solo, nearly half of the species always live in pairs or groups. The team also found lots of variation within species; 27 of the 65 species — more than 40 percent — fell into multiple social organization classifications. When the researchers looked at this social variation against climatic conditions in Australia, they found that social variability was more common in drier environments with less predictable rainfall. It’s possible that being able to switch between solitary and group living acts as a buffer against resource unpredictability.
The researchers’ focus on social flexibility “highlights that there is nothing simple even about a supposedly solitary species,” Lukas says.
Implications for the earliest mammals
Qiu and her colleagues also ran computer analyses comparing the evolutionary relationships of the marsupials with how they form social relationships. This let the team predict the social organization of the earliest marsupials, which split from placental mammals about 160 million years ago. Because modern marsupials have been considered solitary, the marsupial ancestors — and the earliest mammals on the whole — have generally been assumed to be solitary as well.
The team found that solitary was the most likely social category of the ancestral marsupials, a 35 percent probability. But Qiu points out that the varied combinations where pair and group living are possible options make up the other 65 percent. So “it is more likely that the ancestor was also non-solitary,” she says. The findings also give insights into the range of possible lifestyles experienced by the earliest mammals, she says.
But Robert Voss, a mammalogist at the American Museum of Natural History in New York City, questions the analyses’ insights about a potentially social ancestral marsupial. The uncertainty about the solitary alternative, he says, is largely due to the researchers’ benchmarks for what does and what doesn’t constitute social behavior — thresholds that Voss views as too permissive. For example, Voss disagrees with the team’s characterization of opossum social organization.
“Anecdotal observations of [members of the same species] occasionally denning together is not compelling evidence for social behavior,” says Voss. “None of the cited studies suggest that opossums are anything other than solitary.”
Future work, Qiu says, will involve gathering data on a larger subset of mammals outside of marsupials to get a clearer picture of how social traits have evolved among mammals.
A sacrificed spider monkey is shedding new light on an ancient Mesoamerican relationship.
The remains of a 1,700-year-old monkey found in the ancient city of Teotihuacan outside modern-day Mexico City suggest the primate was a diplomatic gift from the Maya. The find is the earliest evidence of a primate held in captivity in the Americas, researchers report November 21 in Proceedings of the National Academy of Sciences.
Unearthed in 2018 at the base of a pyramid in Teotihuacan, the monkey’s skeleton lay beside the corpses of other animals — including an eagle and several rattlesnakes — in an area of the city where visiting Maya elites may have resided.
Evidence of animal sacrifices, including of predators like jaguars, has been found in the city before. But “up to that point, we did not have any instances of sacrificed primates in Teotihuacan,” says Nawa Sugiyama, an anthropological archaeologist at the University of California, Riverside.
Chemical analysis of the spider monkey’s bones and teeth showed that the female had likely been captured in a humid environment at a young age sometime in the third century. The monkey then lived in captivity for a few years before meeting her end between the years 250 and 300.
The highlands around Mexico City are a long way from the natural habitat of spider monkeys (Ateles geoffroyi), which require wet tropical forests to thrive. This fact, along with the presence of Maya murals and vessels, suggests to Sugiyama and her colleagues that the spider monkey was a gift from elite Mayas to the people of Teotihuacan.
The find is an example of diplomatic relations between two cultures that sometimes had violent interactions. Maya hieroglyphs indicated that military forces from Teotihuacan invaded the Maya city of Tikal in 378, marking the start of a roughly 70-year period in which Teotihuacan meddled in Maya politics (SN: 10/22/21).
The “striking” discovery of the monkey shows that the relationship between these two cultures far predates the invasion, says David Stuart, an archaeologist and epigraphist at the University of Texas at Austin who was not involved in the study.
“The war of 378 had a long history leading up to it,” he says. “The monkey is a really compelling illustration of this long relationship.”
Josep Cornella doesn’t deal in absolutes. While chemists typically draw rigid lines between organic and inorganic chemistry, Cornella, a researcher at Max-Planck-Institut für Kohlenforschung in Mülheim an der Ruhr, Germany, believes in just the opposite.
“You have to be open to cross boundaries,” he says, “and learn from it.” The fringes are “where the rich new things are.”
Cornella is an organic chemist by industry standards; he synthesizes molecules that contain carbon. But he’s put together a team from a wide range of backgrounds: inorganic chemists, physical organic chemists, computational chemists. Together, the team brainstorms novel approaches to designing new catalysts, so that chemical reactions essential to pharmaceuticals and agriculture can be made more efficient and friendly for the environment. Along the way, Cornella has unlocked mysteries that stumped chemists for years.
“He has told us about catalysts … that we didn’t have before, and which were just pipe dreams,” says Hosea Nelson, a chemist at Caltech who has not worked with Cornella.

Bold idea
When Cornella heard a speaker at a 2014 conference say that bismuth was nontoxic, he was sure it was a mistake. Bismuth is a heavy metal that sits between toxic lead and polonium on the periodic table. But it is indeed relatively nontoxic — it’s even used in the over-the-counter nausea medicine Pepto-Bismol.
Still, bismuth remains poorly understood. That’s one reason it attracted him. “It was a rather forgotten element of the periodic table,” Cornella says. But, “it’s there for a reason.”
Cornella started wondering if an element like bismuth could be trained for use as a catalyst. For the last century, scientists have been using transition metals, like palladium and iron, as the main catalysts in industrial synthesis. “Could we actually train [bismuth] to do what these guys do so well?” he asked. It was a conceptual question that “was completely naïve, or maybe stupid.”
Far from stupid: His team successfully used bismuth as a catalyst to make a carbon-fluorine bond. And bismuth didn’t just mimic a transition metal’s role — it worked better. Only a small amount of bismuth was required, much less than the amount of transition metal needed to complete the same task.
“A lot of people, including myself and other [researchers] around the world, have spent a lot of time thinking about how to make bismuth reactions catalytic,” Nelson says. “He’s the guy who cracked that nut.”
Standout research
While the bismuth research is “weird” and “exciting,” Cornella says, it remains a proof of concept. Bismuth, though cheap, is not as abundant as he had hoped, so it’s not a very sustainable option for industry.
But other Cornella team findings are already being used in the real world. In 2019, the group figured out how to make an alternative to Ni(COD)2, a finicky catalyst commonly used by chemists in the lab. If it’s not kept at freezing temperatures and protected from oxygen by a layer of inert gases, the nickel complex falls apart.
The alternative complex, developed by Lukas Nattmann, a Ph.D. student in Cornella’s lab at the time, stays stable in oxygen at room temperature. It’s a game changer: It saves energy and materials, and it’s universal. “You can basically take all those reactions that were developed for 60 years of Ni(COD)2 and basically replace all of them with our catalyst, and it works just fine,” Cornella says.

Cornella’s lab is also developing new reagents, substances that transform one material into another. The researchers are looking to transform atoms in functional groups — specific groupings of atoms that behave in specific ways regardless of the molecules they are found in — into other atoms in a single step. Doing these reactions in one step could cut preparation time from two weeks to a day, which would be very useful in the pharmaceutical industry.
Taking risks
It’s the success that gets attention, but failure is “our daily basis,” Cornella says. “It’s a lot of failure.” As a student, when he couldn’t get a reaction to work, he’d set up a simple reaction called a TBS protection — the kind of reaction that’s impossible to get wrong — to remind himself that he wasn’t “completely useless.”
Today he runs a lab that champions taking risks. He encourages students to learn from one another about areas they know nothing about. For instance, a pure organic chemist could come into Cornella’s lab and leave with a good understanding of organometallic chemistry after spending long days working alongside a colleague who is an expert in that area.
To Cornella, this sharing of knowledge is crucial. “If you tackle a problem from just one unique perspective,” he says, “maybe you’re missing some stuff.”
While Cornella might not like absolutes, Phil Baran, who advised Cornella during his postdoctoral work at Scripps Research in San Diego, sees Cornella as fitting into one of two distinct categories: “There are chemists who do chemistry in order to eat, like it’s a job. And there are chemists who eat in order to do chemistry,” Baran says. Cornella fits into the latter group. “It’s his oxygen.”
Growing up in Brazil, Marcos Simões-Costa often visited his grandparents’ farm in the Amazon. That immersion in nature — squawking toucans and all — sparked his fascination with science and evolution. But a video of a developing embryo, shown in his middle school science class, cemented his desire to become a developmental biologist.
“It’s such a beautiful process,” he says. “I was always into drawing and art, and it was very visual — the shapes of the embryo changing, the fact that you start with one cell and the complexity is increasing. I just got lost in that video.”
Today, Simões-Costa, of Harvard Medical School and Boston Children’s Hospital, is honoring his younger self by demystifying how the embryo develops. He studies the embryos and stem cells of birds and mice to learn how networks of genes and the elements that control them influence the identity of cells. The work could lead to new treatments for various diseases, including cancer.
“The embryo is our best teacher,” he says.

Standout research
Simões-Costa focuses on the embryo’s neural crest cells, a population of stem cells that form in the developing central nervous system. The cells migrate to other parts of the embryo and give rise to many different cell types, from the bone cells of the face to muscle cells to brain and nerve cells.
Scientists have wondered for years why, despite being so similar, neural crest cells in the cranial region of the embryo can form bone and cartilage, while those in the trunk region can’t form either. While a postdoc at Caltech, Simões-Costa studied the cascade of molecules that govern how genes are expressed in each cell type. With his adviser, developmental biologist Marianne Bronner, he identified transcription factors — proteins that can turn genes on and off — that were present only in cranial cells. Transplanting the genes for those proteins into trunk cells endowed the cells with the ability to create cartilage and bone.
Now in his own lab, he continues to piece together just how this vast regulatory network influences the specialization of cells. His team reconstructed how neural crest cells’ full set of genetic instructions, or the genome, folds into a compact, 3-D shape. The researchers identified short DNA sequences, called enhancers, that are located in faraway regions of the genome, but end up close to key genes when the genome folds. These enhancers work with transcription factors and other regulatory elements to control gene activity.
Simões-Costa is also using neural crest cells to elucidate a strange behavior shared by cancer cells and some embryonic cells. These cells produce energy anaerobically, without oxygen, even when oxygen is present. Called the Warburg effect, this metabolic process has been studied extensively in cancer cells, but its function remained unclear.
Through experiments manipulating the metabolism of neural crest cells, Simões-Costa’s team found that the Warburg effect is necessary for the cells to move around during early development. The mechanism, which should stay turned off in nonembryonic cells, somehow “gets reactivated in adult cells in the context of cancer, leading those cells to become more migratory and more invasive,” Simões-Costa says.
“He’s one of the few people who’s really looked at [this process in neural crest cells] at a molecular level and done a deep dive into the mechanisms underlying it,” says Bronner.
Cleverly combining classical embryological methods with the latest genomic technologies to address fundamental questions in developmental biology is what makes Simões-Costa special, says Kelly Liu, a developmental biologist at Cornell University. He wants to understand not only what individual genes do, but how they work at a systems level, she says.
What’s next
How does the genetic blueprint tell cells where they are in the embryo, and what they should be doing? How do cancer cells hijack the Warburg effect, and could understanding of that process lead to new treatments? These are some of the questions Simões-Costa wants to tackle next.
“It’s been 20 years since the Human Genome Project came to a conclusion,” he says, referring to the massive effort to read the human genetic instruction book. “But there’s still so much mystery in the genetic code.”
Those mysteries, plus a deep passion for lab work, fuel Simões-Costa’s research. “Being at the bench is when I’m the happiest,” he says. He likens the delicate craft of performing precise surgeries on tissues and cells to meditation. “It does not get old.”
The glossy leaves and branching roots of mangroves are downright eye-catching, and now a study finds that the moon plays a special role in the vigor of these trees.
Long-term tidal cycles set in motion by the moon drive, in large part, the expansion and contraction of mangrove forests in Australia, researchers report in the Sept. 16 Science Advances. This discovery is key to predicting when stands of mangroves, which are good at sequestering carbon and could help fight climate change, are most likely to proliferate (SN: 11/18/21). Such knowledge could inform efforts to protect and restore the forests.

Mangroves are coastal trees that provide habitat for fish and buffer against erosion (SN: 9/14/22). But in some places, the forests face a range of threats, including coastal development, pollution and land clearing for agriculture.

To get a bird’s-eye view of these forests, Neil Saintilan, an environmental scientist at Macquarie University in Sydney, and his colleagues turned to satellite imagery. Using NASA and U.S. Geological Survey Landsat data from 1987 to 2020, the researchers calculated how the size and density of mangrove forests across Australia changed over time.
After accounting for persistent increases in these trees’ growth — probably due to rising carbon dioxide levels, higher sea levels and increasing air temperatures — Saintilan and his colleagues noticed a curious pattern. Mangrove forests tended to expand and contract in both extent and canopy cover in a predictable manner. “I saw this 18-year oscillation,” Saintilan says.
That regularity got the researchers thinking about the moon. Earth’s nearest celestial neighbor has long been known to help drive the tides, which deliver water and necessary nutrients to mangroves. A rhythm called the lunar nodal cycle could explain the mangroves’ growth pattern, the team hypothesized.
Over the course of 18.6 years, the plane of the moon’s orbit around Earth slowly tips. When the moon’s orbit is the least tilted relative to our planet’s equator, semidiurnal tides — which consist of two high and two low tides each day — tend to have a larger range. That means that in areas that experience semidiurnal tides, higher high tides and lower low tides are generally more likely. The effect is caused by the angle at which the moon tugs gravitationally on the Earth.
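The cycle described above is periodic, so its effect on tidal range can be sketched with simple arithmetic. The toy code below is illustrative only (not from the study): it assumes a hypothetical reference year for peak tidal range and flags years that fall near a peak of the 18.6-year nodal cycle.

```python
# Illustrative sketch of the lunar nodal cycle's timing (not the study's
# actual model). PEAK_YEAR is a hypothetical reference year chosen for
# demonstration; only the 18.61-year period comes from the text.
NODAL_PERIOD_YEARS = 18.61   # period of the lunar nodal cycle
PEAK_YEAR = 2015.0           # hypothetical year of maximum semidiurnal tidal range

def nodal_phase(year):
    """Fraction of the nodal cycle elapsed since the reference peak, in [0, 1)."""
    return ((year - PEAK_YEAR) / NODAL_PERIOD_YEARS) % 1.0

def near_peak_range(year, tolerance=0.1):
    """True if the year lies within ~10% of the cycle of a tidal-range peak,
    when mangroves would be expected to expand under the study's pattern."""
    phase = nodal_phase(year)
    return min(phase, 1.0 - phase) < tolerance

print(near_peak_range(2015.0))   # True: at the reference peak
print(near_peak_range(2024.3))   # False: near mid-cycle, smaller tidal range
```

A peak recurs every 18.61 years, so a year one full period after the reference peak is flagged as a peak again.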
Saintilan and his colleagues found that mangrove forests experiencing semidiurnal tides tended to be larger and denser precisely when higher high tides were expected based on the moon’s orbit. The effect even seemed to outweigh other climatic drivers of mangrove growth, such as El Niño conditions. Other regions with mangroves, such as Vietnam and Indonesia, probably experience the same long-term trends, the team suggests.
Having access to data stretching back decades was key to this discovery, Saintilan says. “We’ve never really picked up before some of these longer-term drivers of vegetation dynamics.”
It’s important to recognize this effect on mangrove populations, says Octavio Aburto-Oropeza, a marine ecologist at the Scripps Institution of Oceanography in La Jolla, Calif., who was not involved in the research.
Scientists now know when some mangroves are particularly likely to flourish and should make an extra effort at those times to promote the growth of these carbon-sequestering trees, Aburto-Oropeza says. That might look like added limitations on human activity nearby that could harm the forests, he says. “We should be more proactive.”
Cocooned within the bowels of the Earth, one mineral’s metamorphosis into another may trigger some of the deepest earthquakes ever detected.
These cryptic tremors — known as deep-focus earthquakes — are a seismic conundrum. They violently rupture at depths greater than 300 kilometers, where intense temperatures and pressures are thought to force rocks to flow smoothly. Now, experiments suggest that those same hellish conditions might also sometimes transform olivine — the primary mineral in Earth’s mantle — into the mineral wadsleyite. This mineral switch-up can destabilize the surrounding rock, enabling earthquakes at otherwise impossible depths, mineral physicist Tomohiro Ohuchi and colleagues report September 15 in Nature Communications.

“It’s been a real puzzle for many scientists because earthquakes shouldn’t occur deeper than 300 kilometers,” says Ohuchi, of Ehime University in Matsuyama, Japan.
Deep-focus earthquakes usually occur at subduction zones where tectonic plates made of oceanic crust — rich in olivine — plunge toward the mantle (SN: 1/13/21). Since the quakes’ seismic waves lose strength during their long ascent to the surface, they aren’t typically dangerous. But that doesn’t mean the quakes aren’t sometimes powerful. In 2013, a magnitude 8.3 deep-focus quake struck around 609 kilometers below the Sea of Okhotsk, just off Russia’s eastern coast.
Past studies hinted that unstable olivine crystals could spawn deep quakes. But those studies tested other minerals that were similar in composition to olivine but deform at lower pressures, Ohuchi says, or the experiments didn’t strain samples enough to form faults.
He and his team decided to put olivine itself to the test. To replicate conditions deep underground, the researchers heated and squeezed olivine crystals up to nearly 1100° Celsius and 17 gigapascals. Then the team used a mechanical press to further compress the olivine slowly and monitored the deformation.
From 11 to 17 gigapascals and about 800° to 900° C, the olivine recrystallized into thin layers containing new wadsleyite and smaller olivine grains. The researchers also found tiny faults and recorded bursts of sound waves — indicative of miniature earthquakes. Along subducting tectonic plates, many of these thin layers grow and link to form weak regions in the rock, upon which faults and earthquakes can initiate, the researchers suggest.
“The transformation really wreaks havoc with the [rock’s] mechanical stability,” says geophysicist Pamela Burnley of the University of Nevada, Las Vegas, who was not involved in the research. The findings help confirm that olivine transformations are enabling deep-focus earthquakes, she says.
Next, Ohuchi’s team plans to experiment on olivine at even higher pressures to gain insights into the mineral’s deformation at greater depths.
The key to landing your dream job could be connecting with and then sending a single message to a casual acquaintance on social media.
That’s the conclusion of a five-year study of over 20 million users on the professional networking site LinkedIn, researchers report in the Sept. 16 Science. The study is the first large-scale effort to experimentally test a nearly 50-year-old social science theory that says weak social ties matter more than strong ones for getting ahead in life, including finding a good job.

“The weak tie theory is one of the most celebrated and cited findings in social science,” says network scientist Dashun Wang of Northwestern University in Evanston, Ill., who coauthored a perspective piece in the same issue of Science. This study “provides the first causal evidence for this idea of weak ties explaining job mobility.”
Sociologist Mark Granovetter of Stanford University proposed the weak tie theory in 1973. The theory, which has garnered nearly 67,000 scientific citations, hinges on the idea that humans cluster into social spheres that connect via bridges (SN: 8/13/03). Those bridges represent weak social ties between people, and give individuals who cross access to realms of new ideas and information, including about job markets.
But the influential theory has come under fire in recent years. In particular, a 2017 analysis in the Journal of Labor Economics of 6 million Facebook users showed that increasing interaction with a friend online, thereby strengthening that social tie, increased the likelihood of working with that friend.
In the new study, LinkedIn gave Sinan Aral, a managerial economist at MIT, and his team access to data from the company’s People You May Know algorithm, which recommends new connections to users. Over five years, the social media site’s operators used seven variations of the algorithm for users actively seeking connections, each recommending varying levels of weak and strong ties to users. During that time, 2 billion new ties and 600,000 job changes were noted on the site.
Aral and his colleagues measured tie strength via the number of mutual LinkedIn connections and direct messages between users. Job transitions occurred when two criteria were met: A pair connected on LinkedIn at least one year prior to the job seeker joining the same company as the other user; and the user who first joined the company was there for at least a year before the second user came onboard. Those criteria were meant in part to weed out situations where the two could have ended up at the same company by chance.

Overall, weak ties were more likely to lead to job changes than strong ones, the team found. But the study adds a twist to the theory: When job hunting, mid-tier friends are more helpful than either one’s closest friends or near strangers. Those are the friends with whom you share roughly 10 connections and seldom interact, Aral says. “They’re still weak ties, but they are not the weakest ties.”
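The two job-transition criteria described above amount to a simple filtering rule. The sketch below expresses them in code; the data structure and field names are invented for illustration and are not LinkedIn's actual schema, but the two one-year thresholds come from the study as described.

```python
# Hypothetical encoding of the study's two job-transition criteria.
# The Connection class and its fields are illustrative assumptions;
# only the two one-year thresholds reflect the text.
from dataclasses import dataclass

@dataclass
class Connection:
    connected_year: float       # when the pair connected on LinkedIn
    helper_joined_year: float   # when the first user joined the company
    seeker_joined_year: float   # when the job seeker joined the same company

def counts_as_job_transition(c: Connection) -> bool:
    # Criterion 1: the pair connected at least one year before the
    # job seeker joined the other user's company.
    connected_early = c.seeker_joined_year - c.connected_year >= 1.0
    # Criterion 2: the first user had been at the company at least a
    # year before the job seeker came onboard.
    helper_tenured = c.seeker_joined_year - c.helper_joined_year >= 1.0
    return connected_early and helper_tenured

print(counts_as_job_transition(Connection(2016.0, 2017.0, 2019.0)))  # True
print(counts_as_job_transition(Connection(2018.5, 2017.0, 2019.0)))  # False: connected too recently
```

Requiring both conditions is what screens out coincidental co-employment: a pair who connected shortly before, or after, ending up at the same company would fail the first check.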
The researchers also found that when a user added more weak ties to their network, that person applied to more jobs overall, which converted to getting more jobs. But that finding applied only to highly digitized jobs, such as those heavily reliant on software and amenable to remote work. Strong ties were more beneficial than weak ties for some job seekers outside the digital realm. Aral suspects those sorts of jobs may be more local and thus reliant on members of tight-knit communities.
The finding that job seekers should lean on mid-level acquaintances corroborates smaller studies, says network scientist Cameron Piercy of the University of Kansas in Lawrence who wasn’t involved in either the 2017 study or this more recent one.
That evidence suggests that the weakest acquaintances lack enough information about the job candidate, while the closest friends know too much about the candidate’s strengths — and flaws. “There’s this medium-ties sweet spot where you are willing to vouch for them because they know a couple people that you know,” Piercy says.
But he and others also raise ethical concerns about the new study. Piercy worries about research that manipulates people’s social media spaces without clearly and obviously indicating that it’s being done. In the new study, LinkedIn users who visited the “My Network” page for connection recommendations — who make up less than 5 percent of the site’s monthly active users — were automatically enrolled in the experiment.
And it’s unclear how LinkedIn, whose researchers are coauthors of the study, will use this information moving forward. “When you are talking about people’s work, their ability to make money, this is important,” Piercy says. The company “should recommend weak ties, the version of the algorithm that led to more job attainment, if its purpose is to connect people with work. But they don’t make that conclusion in the paper.”
Another limitation is that the analyzed data lacked vital demographic information on users. That was for privacy reasons, the researchers say. But breaking down the results by gender is crucial as some evidence suggests that women — but not men — must rely on both weak and strong ties for professional advancement, Northwestern’s Wang says.
Still, with over half of jobs generally found through social ties, the findings could point people toward better ways to hunt for a job in today’s tumultuous environment. “You may have seen these recommendations on LinkedIn and you may have ignored them. You think ‘Oh, I don’t really know that person,’” Aral says. “But you may be doing yourself a disservice.”
A single, doomed moon could clear up a couple of mysteries about Saturn.
This hypothetical missing moon, dubbed Chrysalis, could have helped tilt Saturn over, researchers suggest September 15 in Science. The ensuing orbital chaos might then have led to the moon’s demise, shredding it to form the iconic rings that encircle the planet today.
“We like it because it’s a scenario that explains two or three different things that were previously not thought to be related,” says study coauthor Jack Wisdom, a planetary scientist at MIT. “The rings are related to the tilt, who would ever have guessed that?”

Saturn’s rings appear surprisingly young, a mere 150 million years or so old (SN: 12/14/17). If the dinosaurs had telescopes, they might have seen a ringless Saturn. Another mysterious feature of the gas giant is its nearly 27-degree tilt relative to its orbit around the sun. That tilt is too large to have formed when Saturn did or to be the result of collisions knocking the planet over. Planetary scientists have long suspected that the tilt is related to Neptune, because of a coincidence in timing between the way the two planets move: Saturn’s axis wobbles, or precesses, like a spinning top, while Neptune’s entire orbit around the sun also wobbles, like a struggling hula hoop.
The periods of both precessions are almost the same, a phenomenon known as resonance. Scientists theorized that gravity from Saturn’s moons — especially the largest moon, Titan — helped the planetary precessions line up. But some features of Saturn’s internal structure were not known well enough to prove that the two timings were related.
Wisdom and colleagues used precision measurements of Saturn’s gravitational field from the Cassini spacecraft, which plunged into Saturn in 2017 after 13 years orbiting the gas giant, to figure out the details of its internal structure (SN: 9/15/17). Specifically, the team worked out Saturn’s moment of inertia, a measure of how much force is needed to tip the planet over. The team found that the moment of inertia is close to, but not exactly, what it would be if Saturn’s spin were in perfect resonance with Neptune’s orbit.
“We argue that it’s so close, it couldn’t have occurred by chance,” Wisdom says. “That’s where this satellite Chrysalis came in.”
After considering a volley of other explanations, Wisdom and colleagues realized that another smallish moon would have helped Titan bring Saturn and Neptune into resonance by adding its own gravitational tugs. Titan drifted away from Saturn until its orbit synced up with that of Chrysalis. The enhanced gravitational kicks from the larger moon sent the doomed smaller moon on a chaotic dance. Eventually, Chrysalis swooped so close to Saturn that it grazed the giant planet’s cloud tops. Saturn ripped the moon apart, and slowly ground its pieces down into the rings.
Calculations and computer simulations showed that the scenario works, though not all the time. Out of 390 simulated scenarios, only 17 ended with Chrysalis disintegrating to create the rings. Then again, massive, striking rings like Saturn’s are rare, too.
The name Chrysalis came from that spectacular ending: “A chrysalis is a cocoon of a butterfly,” Wisdom says. “The satellite Chrysalis was dormant for 4.5 billion years, presumably. Then suddenly the rings of Saturn emerged from it.”
The story hangs together, says planetary scientist Larry Esposito of the University of Colorado Boulder, who was not involved in the new work. But he’s not entirely convinced. “I think it’s all plausible, but maybe not so likely,” he says. “If Sherlock Holmes is solving a case, even the improbable explanation may be the right one. But I don’t think we’re there yet.”