The Language of Liars by S L Huang
Mar. 13th, 2026 09:08 am
A linguist goes undercover to unravel a xenological puzzle whose answer is in plain view.


Fandom: Final Fantasy XIV
Rating: Mature
Archive Warnings: Major Character Death
Relationships: Urianger Augurelt/Moenbryda Wilfsunnwyn, Urianger Augurelt & Moenbryda Wilfsunnwyn, Ardbert & Urianger Augurelt, Unrealized Ardbert/Urianger Augurelt, Pre-Urianger Augurelt/Warrior of Light
Characters: Urianger Augurelt, Moenbryda Wilfsunnwyn, Ardbert Hylfyst, Elidibus, Unukalhai, Tataru Taru, Minfilia Warde, Warrior of Light, Dewlala Dewla, Y'shtola Rhul, Yugiri Mistwalker, Thancred Waters, J'Rhoomale, Blanhaerz, Lamimi, Naillebert, Haneko Burneko
Additional Tags: Grief/Mourning, Angst, Religion, Isolation, Loneliness, Patch 3.4: Soul Surrender Spoilers (Final Fantasy XIV), Elezen Warrior of Light, Female Warrior of Light, Canon-Typical Violence, Guilt, Emotional Repression, Child Neglect, Childhood Memories, Unresolved Sexual Tension
Series: With Lilies and With Laurel
Length: 57,340 / 92,000
Chapter: 9/15
Summary:
Heartbroken after the loss of his dearest companion, Urianger labors to save two worlds in which he has never felt more alone.
Notes:
If you're new here, please start with Chapter 1!
Final Fantasy XIV is owned by Square Enix. This is a non-commercial work of fanfiction.
( Read on AO3 )
( ...or below! )
Previous Chapter | Next Chapter
In 2025, Google, Amazon, Microsoft and Meta collectively spent US$380 billion on building artificial-intelligence tools. That number is expected to surge still higher this year, to $650 billion, to fund the building of physical infrastructure, such as data centers (see go.nature.com/3lzf79q). Moreover, these firms are spending lavishly on one particular segment: top technical talent.
Meta reportedly offered a single AI researcher, who had cofounded a start-up firm focused on training AI agents to use computers, a compensation package of $250 million over four years (see go.nature.com/4qznsq1). Technology firms are also spending billions on “reverse-acquihires”—poaching the star staff members of start-ups without acquiring the companies themselves. Eyeing these generous payouts, technical experts earning more modest salaries might well reconsider their career choices.
Academia is already losing out. Since the launch of ChatGPT in 2022, concerns have grown about an “AI brain drain.” Studies point to a sharp rise in university machine-learning and AI researchers moving to industry roles. A 2025 paper, drawing on a model built from data on nearly seven million papers, reported that this was especially true for young, highly cited scholars: researchers about five years into their careers whose work ranked among the most cited were 100 times more likely to move to industry the following year than were ten-year veterans whose work received an average number of citations.1
This outflow threatens the distinct roles of academic research in the scientific enterprise: innovation driven by curiosity rather than profit, as well as providing independent critique and ethical scrutiny. The fixation of “big tech” firms on skimming the very top talent also risks eroding the idea of science as a collaborative endeavor, in which teams—not individuals—do the most consequential work.
Here, we explore the broader implications for science and suggest alternative visions of the future.
Astronomical salaries for AI talent buy into a legend as old as the software industry: the 10x engineer. This is someone who is supposedly capable of ten times the impact of their peers. Why hire and manage an entire group of scientists or software engineers when one genius—or an AI agent—can outperform them?
That proposition is increasingly attractive to tech firms that are betting that a large number of entry-level and even mid-level engineering jobs will be replaced by AI. It’s no coincidence that Google’s Gemini 3 Pro AI model was launched with boasts of “PhD-level reasoning,” a marketing strategy that is appealing to executives seeking to replace people with AI.
But the lone-genius narrative is increasingly out of step with reality. Research backs up a fundamental truth: science is a team sport. A large-scale study of scientific publishing from 1900 to 2011 found that papers produced by larger collaborations consistently have greater impact than do those of smaller teams, even after accounting for self-citation.2 Analyses of the most highly cited scientists show a similar pattern: their highest-impact works tend to be those papers with many authors.3 A 2020 study of Nobel laureates reinforces this trend, revealing that—much like the wider scientific community—the average size of the teams that they publish with has steadily increased over time as scientific problems increase in scope and complexity.4
From the detection of gravitational waves, which are ripples in space-time caused by massive cosmic events, to CRISPR-based gene editing, a precise method for cutting and modifying DNA, to recent AI breakthroughs in protein-structure prediction, the most consequential advances in modern science have been collective achievements. Although these successes are often associated with prominent individuals—senior scientists, Nobel laureates, patent holders—the work itself was driven by teams ranging from dozens to thousands of people and was built on decades of open science: shared data, methods, software and accumulated insight.
Building strong institutions is a much more effective use of resources than is betting on any single individual. Examples demonstrating this include the LIGO Scientific Collaboration, the global team that first detected gravitational waves; the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, a leading genomics and biomedical-research center behind many CRISPR advances; and even for-profit laboratories such as Google DeepMind in London, which drove advances in protein-structure prediction with its AlphaFold tool. If the aim of the tech giants and other AI firms that are spending lavishly on elite talent is to accelerate scientific progress, the current strategy is misguided.
By contrast, well-designed institutions amplify individual ability, sustain productivity beyond any one person’s career and endure long after any single contributor is gone.
Equally important, effective institutions distribute power in beneficial ways. Rather than vesting decision-making authority in the hands of one person, they have mechanisms for sharing control. Allocation committees decide how resources are used, scientific advisory boards set collective research priorities, and peer review determines which ideas enter the scientific record.
And although the term “innovation by committee” might sound disparaging, such an approach is crucial to make the scientific enterprise act in concert with the diverse needs of the broader public. This is especially true in science, which continues to suffer from pervasive inequalities across gender, race and socio-economic and cultural differences.5
This is why scientists, academics and policymakers should pay more attention to how AI research is organized and led, especially as the technology becomes essential across scientific disciplines. Used well, AI can support a more equitable scientific enterprise by empowering junior researchers who currently have access to few resources.
Instead, some of today’s wealthiest scientific institutions might think that they can deploy the same strategies as the tech industry uses and compete for top talent on financial terms—perhaps by getting funding from the same billionaires who back big tech. Indeed, wage inequality has been steadily growing within academia for decades.6 But this is not a path that science should follow.
The ideal model for science is a broad, diverse ecosystem in which researchers can thrive at every level. Here are three strategies that universities and mission-driven labs should adopt instead of engaging in a compensation arms race.
First, universities and institutions should stay committed to the public interest. An excellent example of this approach can be found in Switzerland, where several institutions are coordinating to build AI as a public good rather than a private asset. Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) and the Swiss Federal Institute of Technology (ETH) in Zurich, working with the Swiss National Supercomputing Centre, have built Apertus, a freely available large language model. Unlike the controversially labelled “open source” models built by commercial labs—such as Meta’s LLaMa, which has been criticized for not complying with the open-source definition (see go.nature.com/3o56zd5)—Apertus is not only open in its source code and its weights (meaning its core parameters), but also in its data and development process. Crucially, Apertus is not designed to compete with “frontier” AI labs pursuing superintelligence at enormous cost and with little regard for data ownership. Instead, it adopts a more modest and sustainable goal: to make AI trustworthy for use in industry and public administration, strictly adhering to data-licensing restrictions and including local European languages.7
Principal investigators (PIs) at other institutions globally should follow this path, aligning public funding agencies and public institutions to produce a more sustainable alternative to corporate AI.
Second, universities should bolster networks of researchers from the undergraduate to senior-professor levels—not only because they make for effective innovation teams, but also because they serve a purpose beyond next quarter’s profits. The scientific enterprise galvanizes its members at all levels to contribute to the same projects, the same journals and the same open, international scientific literature—to perpetuate itself across generations and to distribute its impact throughout society.
Universities should take precisely the opposite hiring strategy to that of the big tech firms. Instead of lavishing top dollar on a select few researchers, they should equitably distribute salaries. They should raise graduate-student stipends and postdoc salaries and limit the growth of pay for high-profile PIs.
Third, universities should show that they can offer more than just financial benefits: they must offer distinctive intellectual and civic rewards. Although money is unquestionably a motivator, researchers also value intellectual freedom and the recognition of their work. Studies show that research roles in industry that allow publication attract talent at salaries roughly 20% lower than comparable positions that prohibit it (see go.nature.com/4cbjxzu).
Beyond the intellectual recognition of publications and citation counts, universities should recognize and reward the production of public goods. The tenure and promotion process at universities should reward academics who supply expertise to local and national governments, who communicate with and engage the public in research, who publish and maintain open-source software for public use and who provide services for non-profit groups.
Furthermore, institutions should demonstrate that they will defend the intellectual freedom of their researchers and shield them from corporate or political interference. In the United States today, we see a striking juxtaposition between big tech firms, which curry favour with the administration of US President Donald Trump to win regulatory and trade benefits, and higher-education institutions, which suffer massive losses of federal funding and threats of investigation and sanction. Unlike big tech firms, universities should invest in enquiry that challenges authority.
We urge leaders of scientific institutions to reject the growing pay inequality rampant in the upper echelons of AI research. Instead, they should compete for talent on a different dimension: the integrity of their missions and the equitableness of their institutions. These institutions should focus on building sustainable organizations with diverse staff members, rather than bestowing a bounty on science’s 1%.
This essay was written with Nathan E. Sanders, and originally appeared in Nature.
Apple announcement:
…iPhone and iPad are the first and only consumer devices in compliance with the information assurance requirements of NATO nations. This enables iPhone and iPad to be used with classified information up to the NATO restricted level without requiring special software or settings—a level of government certification no other consumer mobile device has met.
This is out of the box, no modifications required.
Boing Boing post.
Naturally, from various angles of my interests, I am going to click on a link like this, no? Pornucopia: The World’s Largest Collection of Smut, and You Can’t See It.
And while I have a certain historianly interest in the contents of the collection (though I was having a conversation with somebody a little while ago and we reckoned we would love to take a gander at Anthony Comstock's Private Cupboard, because a leading smuthound must have accumulated a really outstanding filth collection, hmmmm?)
- I was going, to myself, with my archivist hat on, OMG, this is so many problems - there must be HUGE conservation issues, I just hope none of those porno movies are on nitrate film, but I do not think the smart money would be betting on it, and a lot of those relics are on degrading media even if they're not going to spontaneously combust. For some of them I wonder whether there are even still means of playing them.
(Tangentially I mention my wince when hearing a thrilled younger scholar recount how they had listened to a 78 rpm recording in a sound archive, and I was, really???)
Then it sounds as though they are Not Keeping Up With Basic Processing ('embarrassed about the unorganized conditions', heh), which sounds as though an ambitious collecting agenda has totally outrun the institution's capacity to keep on top of it (should I add 'fnar fnar, nudge wink' at this point???).
Plus on the access thing and being not entirely welcoming to visitors, while - perhaps - historically collections like The Private Case (in the BL), L'Enfer (Bibliothèque Nationale), etc, were only made available to selected readers for fear of contaminating the public, in more recent days this is because this material is particularly vulnerable to being mutilated - pages torn out or defaced, etc - which is why if you want to consult Cup. classification material in the BL you have to do so under the eye of the Librarian's Desk.
I suspect also in play is a probably legit fear of persons presenting themselves as SRS Scholars who, once they are in, will go BONFIRE OF THE VANITIES on the place ('wary about divulging warehouse locations', totally figures).
Over here, being niche.
Kenyan workers are still the underpaid labor behind AI training, moderation, and sex chatbots. The Data Labelers Association is fighting back.
When users select the 'expert review' button in the Grammarly sidebar, it analyzes their writing and surfaces AI-generated suggestions 'inspired by' related experts. Those 'industry-relevant perspectives' include the likes of Stephen King, Neil deGrasse Tyson, and Carl Sagan, among many others.
the endless dance around content bans requires constantly coming up with new ways to craft video titles and content that are frustrating not only for adult performers, but also their customers.
Age-verification systems require collecting sensitive data, including biometric information. In no time, the internet will become a fully surveilled digital panopticon.
Desmond Cole fact checks his misinformation and explains how blaming the most vulnerable distracts us from fighting for good health care for all.
But critics say the Canadian rights tribunal didn’t go far enough after finding police discrimination.
From Fairy Creek to university campuses, CRU-BC is positioning itself as the go-to police force for repressing dissent.

When you’re trying to get folks excited about their own digital rights, a lot will depend on the examples you give them to understand the fight. As the Executive Director of the Electronic Frontier Foundation, Cindy Cohn certainly has examples. But which ones to choose? In this Big Idea for Privacy’s Defender, Cohn offers up her choices and explains why they matter.
CINDY COHN:
Do we have the right to have a private conversation online?
In this age of constant, pervasive surveillance, both government and corporate, how do you get people to believe that they can and should have that right?
And how do you show that safeguarding privacy is part of safeguarding a free, open and democratic society?
In Privacy’s Defender, my Big Idea is that by telling some rollicking stories about my three big fights for digital privacy over the past 30 years, I might inspire people not only to understand why privacy matters, but to actually start fighting for it themselves.
The challenge was different for each of the three stories I told. The first one, about cryptography, was in many ways the easiest, since it had a pretty straightforward narrative. Before the beginning of the broad public internet, in the early 1990s, I led a ragtag bunch of hackers and lawyers who sued to fight a federal law that treated encryption – specifically “software with the capability of maintaining secrecy” – as a weapon. We argued that code is speech and put together a case based on the First Amendment. By pulling in help from academics, scientists, companies and others, and by the grace of several women judges who were willing to listen to us in spite of the government’s national security claims on the other side, we won.
Many other stories from the early public internet are about men and the products they built. This one is different: It tells how some scruffy underdogs beat the national security infrastructure and brought all of us the promise of a more secure internet. But it’s otherwise kind of a hero’s tale with a dramatic ending when I was called to DC to negotiate the government’s surrender.
The second and third stories don’t end in such clean wins, which perhaps makes them more typical of how actual change happens when you are up against the government.
The second set of stories are about the cases we brought against the National Security Agency’s mass spying, starting after the New York Times revealed in late 2005 that the government was spying on Americans on our home soil. The fight was pushed forward by a whistleblower named Mark Klein who literally knocked on our front door at the Electronic Frontier Foundation in early 2006 with details of how the NSA was tapping into the internet’s backbone at key junctures, including in a secret room in an AT&T building in downtown San Francisco. This is the most cloak-and-dagger of the stories, made possible both by Mark’s courage and that of Edward Snowden, who revealed even more about the NSA spying in 2013 because he was angry at watching the government lie repeatedly to the American people, including before Congress.
As a result, Congress rushed in to protect… the phone companies, killing our first lawsuit. Later, after Snowden’s revelations, lawmakers passed some reforms to some of the programs we had sought to stop, but not nearly enough. In the end, the Supreme Court supported the government’s argument that – even though the whole world knew about the NSA spying and that it relied on access to information collected and handled by major telephone companies – identifying which company participated would violate the state secrets privilege. But we had dramatically shifted how the government did mass spying: ending two of the three programs we had sued over, scaling back the third, and providing far more public information about what the government was doing. In writing my book, I wanted to tell the truth about the progress we made without sugarcoating that we had not succeeded at nearly the scale that we did in the cryptography fights.
The third set of cases had a similar trajectory – an early win in the courts and some reform in Congress but ultimately not enough. These were the “Alphabet Cases” – so named because we couldn’t even name our clients publicly, assigning the cases letters instead – that we brought from 2011 through 2022 to scale back a kind of governmental subpoena called National Security Letters (NSLs), which let the FBI require companies to provide metadata about their customers but gagged them from ever telling anyone what had happened.
Though an appellate court ultimately sided with the government, we did succeed in helping our clients participate in the public debate and use their own experiences as evidence to counter the government’s misleading assertions. We had increased the procedural protections for those receiving NSLs, including clearing the way to challenge them with standards that were not quite as stacked against them. And we had helped create a path for corporate transparency reports that at least gave some information to the public about how often these controversial tools were being used.
I wanted this book to bring readers with me into the actual work, the bumpy ride, the incremental progress of protecting privacy, especially in the courts, in the hope that people will think about how they too can join the fight. What we worried about in the 1990s, and fought to prevent in the 2000s and 2010s, seems closer than ever: that surveillance becomes the handmaiden of authoritarianism. But even in our troubled times, I’m confident that we are not powerless, and that we can prevail if we are patient, smart and thoughtful, and work together. The Big Idea is that privacy is not just a coat of anonymity that you throw on before doing something embarrassing – it’s a check against unbridled government power. And as it turns out, the actual work of protecting that privacy can make for a fun, exciting and surprising life.
Privacy’s Defender: Amazon|Barnes & Noble|Bookshop
Author socials: Website
