The singularity is a literacy event.


There was a time when reading was radical.

Not metaphorically radical. Actually dangerous. For most of human history, literacy was a guarded technology — a weapon kept behind walls, in the hands of the people who already had power. And every time that weapon escaped — every time it leaked through the walls into the hands of ordinary people — the world caught fire and was remade.

This is not a metaphor. This is history. And it’s happening again.


A Short History of Hoarded Knowledge

Alexandria: When They Burned the Index

The Library of Alexandria wasn’t just a library. It was the world’s first attempt at a universal index — a single place where all human knowledge could be collected, organized, and accessed. At its peak, it held an estimated 400,000 scrolls. Scholars traveled from across the ancient world to study there. It was the Google of antiquity.

And it was destroyed. Not in one dramatic fire — that’s the Hollywood version. It was degraded over centuries by neglect, political indifference, religious conflict, and the simple, brutal reality that knowledge concentrated in one place is knowledge vulnerable to one disaster.

But here’s the part people forget: the Library of Alexandria was never public. It was for scholars, for priests, for the politically connected. The average Egyptian couldn’t walk in and read. The knowledge existed, but it was gated. Sound familiar?

Monasteries: The Original Paywalls

After Rome fell, literacy in Europe didn’t just decline — it was actively consolidated. The monasteries became the custodians of written knowledge. Monks copied texts by hand, preserving them through the Dark Ages. This is usually told as a heroic story, and in some ways it is.

But it’s also the story of a knowledge monopoly. The monks decided what got copied and what didn’t. They decided what was heresy and what was scripture. They held literacy itself as a kind of sacred technology — not for common people, not for women, not for anyone outside the walls.

For nearly a thousand years, the ability to read was a class marker, a gender marker, and a power structure. If you could read, you were clergy or nobility. If you couldn’t, you were everyone else. And “everyone else” was most of the human race.

The monasteries weren’t villains. They were doing what institutions always do: consolidating control over the most powerful technology available. In the 6th century, that technology was the written word.

In the 21st century, it’s data.

The Printing Press: When the Walls Fell

Around 1440, Gutenberg didn’t just invent a machine. He committed an act of radical democratization that the existing power structure immediately recognized as an existential threat.

Before the press, producing a single Bible took a scribe roughly two years of full-time labor, and a finished copy cost about as much as a house. Literacy was a luxury good. Information was artificially scarce — not because it had to be, but because scarcity served the interests of those who controlled it.

The press changed the economics of knowledge overnight. Suddenly, books were cheap. Pamphlets were cheaper. Ideas could spread faster than institutions could contain them.

The response from power was immediate and predictable: they tried to ban it. The Catholic Church established the Index Librorum Prohibitorum — a list of forbidden books — in 1559. Printers were imprisoned. Books were burned. The argument was always the same: common people can’t handle this. They’ll misinterpret it. They’ll be led astray. They need us to mediate their access to knowledge.

The Reformation happened anyway. Martin Luther’s 95 Theses, printed and distributed across Germany in weeks, shattered a millennium of ecclesiastical monopoly on spiritual knowledge. Not because Luther was right about everything, but because people could finally read the source material themselves.

The printing press didn’t just distribute books. It distributed the ability to question. And the people who had been answering all the questions for a thousand years were terrified.

Public Libraries: The Radical Act Nobody Remembers

Americans treat public libraries like furniture — they’ve always been there, they’re nice, whatever. But the creation of free public lending libraries in the 19th century was one of the most radical acts of knowledge democratization in human history.

The idea that any person, regardless of wealth, could walk into a building and access the accumulated knowledge of civilization for free — this was revolutionary. It was fought. The British Parliament debated for decades whether the working class should have access to books. The argument, again: they can’t handle it. They’ll read the wrong things. They’ll get ideas.

Andrew Carnegie funded 2,509 libraries across the English-speaking world. Whatever you think of Carnegie the man (and there’s plenty to think), the libraries represented a philosophical commitment: that an informed populace is a prerequisite for democracy.

We’ve forgotten this. We treat information access as a convenience rather than a right, as a product rather than a foundation. And now we’re facing a new information revolution that dwarfs the printing press, and we’re making the same mistakes — the same consolidation, the same gatekeeping, the same assumption that ordinary people can’t handle the tools.


Same Engine, Opposite Intent

Here’s something most people don’t think about, and once you see it, you can’t unsee it:

The machine learning engine that powers your social media feed — the one that keeps you scrolling at 2 AM, that feeds you rage bait because engagement is engagement, that has been credibly linked to teen depression, political radicalization, and the erosion of shared reality — is the same fundamental technology that powers the AI assistant answering your questions, helping you write, analyzing your data, and potentially revolutionizing medicine, science, and education.

Not similar technology. Not related technology. The same underlying architecture. Both are built on the transformer — the neural network architecture introduced in the 2017 paper “Attention Is All You Need.” The same mathematical framework. The same attention mechanisms. The same ability to process and generate human language.
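For the curious, the shared core is small enough to write on a napkin. Scaled dot-product attention, the operation at the heart of every transformer (the one ranking your feed and the one answering your questions), is, straight from that 2017 paper:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$

Queries, keys, values, one softmax. Stack that operation in layers and you have the engine inside both machines.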

Let me make this visceral.

The transformer that writes your Instagram captions is built on the same architecture as the transformer that can analyze a medical scan. The model that generates your TikTok For You Page — optimized to maximize the seconds you spend staring at your phone — uses the same core mathematics as the model that could identify drug interactions your doctor missed.

The difference isn’t the technology. It never was. The difference is the optimization function — the thing the system is pointed at, the question it’s trying to answer.

Social media’s optimization function: How do we maximize engagement (time on platform)? The answer, discovered empirically by every major platform, is: outrage, fear, tribalism, and addictive variable-ratio reinforcement. The algorithm doesn’t hate you. It doesn’t love you. It’s optimizing for a metric, and that metric happens to correlate with making you angry, anxious, and divided. The same way a river doesn’t hate the canyon — it just flows downhill.

An AI assistant’s optimization function: How do we provide the most helpful, accurate response to this query? Different question. Same math. Radically different outcome.
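To make “same math, different objective” concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the features, the targets, the tiny linear “model.” Real ranking systems are enormously more complex, but the moral survives the simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item features: [outrage, novelty, accuracy, usefulness]
items = rng.random((100, 4))

def rank(items, w, k=5):
    """Return the indices of the top-k items under scoring weights w."""
    return np.argsort(items @ w)[::-1][:k]

# Objective A: a proxy for engagement (time on platform).
# In this toy world, outrage and novelty drive engagement.
engagement = 2.0 * items[:, 0] + 1.0 * items[:, 1]

# Objective B: a proxy for helpfulness, driven by accuracy and usefulness.
helpfulness = 1.5 * items[:, 2] + 1.5 * items[:, 3]

# Fit the SAME model, by the SAME procedure (least squares), to each target.
w_engage, *_ = np.linalg.lstsq(items, engagement, rcond=None)
w_help, *_ = np.linalg.lstsq(items, helpfulness, rcond=None)

print("top items when optimizing engagement:", rank(items, w_engage))
print("top items when optimizing helpfulness:", rank(items, w_help))
```

Same data, same model, same training math. Change one line, the target, and the system surfaces an entirely different world.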

If you can’t read the new language — if you don’t understand what a model is, what training data means, what an optimization function does, what the difference is between a system optimizing for your attention and a system optimizing for your benefit — you can’t tell the difference between the tool that serves you and the tool that farms you.

That’s the new illiteracy. Not “I can’t use a computer.” It’s “I can’t tell when the computer is using me.”

And right now, the vast majority of people on Earth are on the wrong side of that line.


The Data Gap That Almost Killed Half the Population

Want a concrete example of what happens when the wrong people control the language of data? Look at medicine. Look at who was missing from the dataset. Look at who died because of it.

The Exclusion

In 1977, the FDA issued guidelines recommending the exclusion of women of childbearing potential from early-phase clinical trials. The reasoning: hormonal cycles introduced variables. Pregnancy was a liability risk. The “standard human” in medical research became, by default and then by policy, a 70-kilogram white male.

This wasn’t reversed until 1993, when the NIH Revitalization Act finally mandated the inclusion of women and minorities in federally funded clinical research. That’s sixteen years of explicit exclusion, built on top of centuries of implicit exclusion.

The consequences were not abstract. People died.

The Body Count

Heart disease is the #1 killer of women in the United States. But for decades, the “classic” heart attack presentation — crushing chest pain radiating to the left arm — was studied almost exclusively in men. Women’s heart attacks frequently present differently: jaw pain, nausea, extreme fatigue, back pain. Symptoms that look, to an undertrained eye, like anxiety or indigestion.

The result: women are 50% more likely to receive an incorrect initial diagnosis when having a heart attack (2022 study, European Heart Journal). Women under 55 who present to the ER with heart attack symptoms are seven times more likely to be sent home than men of the same age with the same complaint.

Specific drugs, specific failures:

  • Ambien (zolpidem): In 2013, the FDA cut the recommended dose for women in half after discovering that women metabolize the drug more slowly, leading to dangerous next-morning impairment. The drug had been on the market since 1992. Twenty-one years of women taking double the dose they should have.
  • Aspirin: For decades, daily aspirin was recommended as a preventive for heart attacks, based on trials conducted primarily in men. When the Women’s Health Study finally tested it in women (nearly 40,000 participants), it found aspirin did not significantly reduce heart attack risk in women — but it did reduce stroke risk, roughly the reverse of the pattern trials in men had shown. Same drug. Different biology. Different answer. Decades of wrong recommendations.
  • Yentl Syndrome: Named by cardiologist Bernadine Healy in 1991, this describes the pattern where women receive less aggressive treatment for heart disease because their symptoms don’t match the male template. Studies have shown women are less likely to receive CPR from bystanders, less likely to be referred for cardiac catheterization, and less likely to receive guideline-directed therapy even after a confirmed diagnosis.

Autoimmune diseases — which disproportionately affect women (78% of autoimmune patients are female) — were understudied for generations. The average time to diagnosis for lupus is six years. For endometriosis, it’s seven to ten years. These aren’t rare conditions. Endometriosis affects an estimated 190 million women worldwide. They just weren’t priorities in a research ecosystem calibrated to the male body.

Where AI Enters — Both the Cure and the Disease

Here’s where it gets interesting, and here’s where literacy becomes the pivot point.

AI can help close this gap. Machine learning can use techniques like data augmentation to synthetically balance underrepresented populations in datasets. Transfer learning can extract useful patterns from biased data and apply them more broadly. AI systems can synthesize across thousands of studies — finding patterns that no single researcher, reading papers one at a time, would ever catch.
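What does “balancing a dataset” actually look like? Here is a minimal sketch of one standard corrective, inverse-frequency reweighting, in Python. The dataset is invented for illustration (a 90/10 male/female skew, the kind decades of trial design produced), and in practice reweighting supplements better data collection rather than replacing it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical clinical dataset: 90% male, 10% female.
n = 1000
sex = rng.choice(["M", "F"], size=n, p=[0.9, 0.1])

# Naive training weights: every record counts equally, so errors on
# the female minority barely move the training loss at all.
uniform = np.ones(n) / n

# Corrective: weight each record by the inverse of its group's
# frequency, so each group contributes equally to the objective.
groups, counts = np.unique(sex, return_counts=True)
freq = dict(zip(groups, counts / n))
balanced = np.array([1.0 / freq[s] for s in sex])
balanced /= balanced.sum()

# Share of the training objective each group controls:
for g in groups:
    mask = sex == g
    print(g, "uniform:", round(uniform[mask].sum(), 2),
             "balanced:", round(balanced[mask].sum(), 2))
```

Under uniform weights, the male records control about 90% of the training objective. After reweighting, each sex controls half, and the model is forced to care about the patients the original design ignored.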

AI systems are already being used to identify sex-specific biomarkers for cardiac events, to flag potential drug interaction differences based on hormonal profiles, and to screen for endometriosis using imaging patterns that human radiologists often miss.

But.

An AI trained on biased data without a literate human asking “who’s missing from this dataset?” will just automate the bias faster. An AI system that inherits the 70kg-male-as-default assumption will replicate it at scale — not out of malice, but out of mathematical inevitability. The tool doesn’t care. It optimizes for whatever you point it at.

If you point it at biased data without correcting for the bias, you get biased outputs at the speed of light. If you point it at the right question — “how do these results differ by sex, by age, by ethnicity?” — you get insights that could save millions of lives.
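Pointing the tool at the right question is often as simple as refusing to aggregate. A toy example in Python, with every number invented for illustration:

```python
import pandas as pd

# Hypothetical trial outcomes, with the subgroups recorded.
df = pd.DataFrame({
    "sex":       ["M", "M", "M", "M", "F", "F", "F", "F"],
    "age_band":  ["<55", "<55", "55+", "55+"] * 2,
    "responded": [1, 1, 1, 0, 0, 1, 0, 0],
})

# The pooled number hides everything:
print("overall response rate:", df["responded"].mean())  # 0.5

# Stratifying surfaces what the average erased:
print(df.groupby(["sex", "age_band"])["responded"].mean())
```

That is, mechanically, what “how do these results differ by sex, by age, by ethnicity?” looks like: one groupby. Whole decades of medicine went by without anyone running it.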

Literacy is the difference between AI that closes the gap and AI that cements it.


What AI Illiteracy Actually Looks Like

This isn’t hypothetical. This is happening right now. Here’s what the new illiteracy looks like in practice:

People Who Think ChatGPT Is Sentient

A 2023 survey found that a significant percentage of regular AI users believe their AI assistant has feelings, consciousness, or genuine understanding. They apologize to it. They worry about hurting its feelings. They form emotional attachments that the system — which is a statistical model predicting the next token — cannot reciprocate.

This isn’t stupidity. It’s illiteracy. These are people who’ve never been taught what a language model actually does. They interact with something that sounds human and conclude it is human. They can’t read the system.

The consequences range from benign (people saying “please” and “thank you” to their AI, which is actually kind of sweet) to dangerous (people taking medical advice from a model that’s confabulating, people forming parasocial relationships that replace human connection, people trusting AI output with the same confidence they’d trust a human expert).

People Who Don’t Know Their Insurance Was Denied by an Algorithm

UnitedHealth Group uses an AI system called nH Predict to help decide coverage for post-acute care. A 2023 class-action lawsuit alleges the system has a 90% error rate: roughly nine in ten of the denials that patients appealed were overturned. Patients receive denial letters that appear to come from doctors but were generated by an algorithm. Most of them never appeal. They don’t know they can. They don’t know they should.

Mortgage applications, credit decisions, hiring processes, parole recommendations, child welfare assessments — all of these now involve algorithmic decision-making, and the people affected overwhelmingly have no idea. They receive an outcome — denied, approved, flagged, scored — and treat it as if it came from a human who considered their case. It didn’t.

If you can’t read the system, you can’t question the system. And a system that can’t be questioned is a system that can’t be held accountable. That’s not technology. That’s tyranny wearing a user interface.

People Who Share AI-Generated Misinformation

During every major news event now, AI-generated images flood social media within hours. Fake satellite photos. Fake casualty images. Fake quotes attributed to real people. Deepfake video.

The people sharing this content aren’t (mostly) malicious. They’re illiterate. They don’t know how to identify AI-generated content. They don’t understand that a photorealistic image can be conjured from text in seconds. They share it because it confirms what they already believe, and they have no framework for questioning it.

This is the printing press in reverse. The press democratized access to information. AI has democratized the ability to generate disinformation. The defense against both is the same thing: literacy. The ability to read what you’re looking at. The ability to ask: who made this, why, and how do I verify it?


The Prison of Illiteracy (Literally)

I spent time in prison.

I don’t say that for shock value. I say it because prison is the most extreme example of information deprivation in America, and if you want to understand what AI illiteracy looks like at scale, look at the 1.9 million people behind bars in this country.

The average reading level of an incarcerated American is fifth grade. Not fifth grade reading material — fifth grade reading ability. Many are functionally illiterate. And this was before we started talking about a new kind of literacy that requires understanding data, algorithms, and optimization functions.

There are organizations trying. Edovo is a tech nonprofit that puts educational tablets in the hands of incarcerated people — providing access to vocational training, GED prep, rehabilitative content. Securus Technologies (one of the major prison telecommunications companies) partnered with Edovo in 2024 to expand educational content on their tablet platform.

But here’s what Edovo tablets don’t have: AI. The incarcerated population — the most information-starved people in America — has no access to the tools that are reshaping the economy, the job market, and the basic fabric of how knowledge works.

Think about what this means practically. A person serves five years. They get out. The world they return to has been fundamentally restructured by AI — the job applications are screened by algorithms, the customer service is chatbots, the medical system uses AI diagnostics, the news environment is flooded with synthetic content. And this person has had zero exposure to any of it.

We talk about recidivism like it’s a moral failing. It’s an information failing. It’s a literacy gap so vast that people walk out of prison into a world they literally cannot read.

1.9 million people. The largest incarcerated population on Earth. The most information-deprived population in the wealthiest nation in history. And we’re having a national conversation about AI literacy that doesn’t even acknowledge they exist.

If the singularity is a literacy event, then the people most likely to be left behind aren’t in some distant country. They’re in the prison down the road from your house.


The Singularity Is Not What You Think

Forget the sci-fi version. Forget robot overlords and paperclip maximizers and Skynet. Forget the breathless predictions about artificial general intelligence arriving next year or next decade. Those debates are distractions — interesting ones, but distractions.

The real singularity — the actual inflection point — is when the complexity of our tools outpaces the literacy of the population using them.

We’re already there.

Most people interact with machine learning systems dozens of times a day without knowing it. Every search result is ranked by an algorithm. Every social media feed is curated by one. Every “recommended for you” is a prediction model. Every auto-complete is a language model. Every credit decision, every insurance quote, every ad you see, every news story that reaches you — all of it has been filtered, ranked, and shaped by systems that most people cannot describe, let alone interrogate.

The singularity isn’t a moment when machines become smarter than us. It’s the moment when most people can no longer read the systems that govern their lives.

That’s not a future prediction. That’s a current event.


You Cannot Build a Future You Cannot Read

Peter Diamandis and XPRIZE just launched the Future Vision XPRIZE — a $3.5 million competition asking filmmakers and creators to envision optimistic futures. Not dystopia porn. Not cautionary tales about evil AI and societal collapse. Visions of what goes right.

Launched March 9, 2026. Backed by Google. Judged by people like Astro Teller (who leads X, Alphabet’s moonshot factory) and Cathie Wood (ARK Invest, one of the biggest bets-on-the-future investors alive). The format: three-minute trailers or short films showing hopeful tomorrows. Finalists premiere in September 2026 in Los Angeles.

Diamandis said it plainly: “Entertainment shapes our collective imagination, and we want to empower creators to illustrate the human determination and vision required to build toward a future where we truly thrive.”

I love this. I love it because you cannot build a future you cannot imagine. Science fiction has always been the R&D department of civilization — the place where ideas get tested in narrative before they get tested in reality. Star Trek imagined the communicator before Motorola built the flip phone. 2001: A Space Odyssey imagined the tablet before Apple built the iPad. The stories come first. Then the engineers read the stories and build the things.

But here’s my caveat, and it’s not small: you cannot imagine a future you cannot read.

Every optimistic vision of tomorrow assumes a population that understands its own tools. A world where AI cures diseases? That requires patients who can interrogate their own data — who can ask “why did the algorithm recommend this treatment?” and understand the answer. A world where algorithms serve justice? That requires citizens who understand how algorithms work — who can audit the system that denied their parole or their mortgage. A world where technology distributes power instead of concentrating it? That requires people who can read the code — not literally, not everyone needs to be a programmer, but conceptually, the way everyone in a democracy needs to be able to read a ballot.

The utopias only work if literacy comes first.

You can make the most beautiful film about an AI-powered future. You can screen it in Los Angeles to thunderous applause. But if 80% of the planet can’t read the systems that film depicts, it’s not a vision — it’s a fantasy. A gated community of the imagination.

The XPRIZE should be $3.5 million for optimistic sci-fi, and $35 million for teaching people to read the world those films describe. The vision and the literacy are inseparable. One without the other is a castle built on sand.


What You Can Do

You don’t need a CS degree. You don’t need to learn Python. You don’t need to understand backpropagation or gradient descent or attention heads.

You need to understand five things:

1. What training data is — and why it determines what an AI “believes.” An AI doesn’t think. It pattern-matches against the data it was trained on. If that data is biased, the AI is biased. If that data excludes women, or Black patients, or incarcerated people, then the AI’s outputs will reflect those absences. Garbage in, gospel out — and most people treat AI output as gospel without asking what went in.

2. What an optimization function is — and why it matters who chooses it. Every AI system is optimizing for something. The question “optimize for what?” is the most important question in technology, and it’s a human question, not a technical one. When Facebook optimizes for engagement, you get radicalization. When a medical AI optimizes for diagnostic accuracy across diverse populations, you get better medicine. Same tool. Different answer to “optimize for what?”

3. The difference between a tool and a platform — one serves you, the other serves its shareholders through you. A hammer is a tool. You pick it up, you use it, you put it down. Instagram is a platform. It studies you. It models your behavior. It sells predictions about you to advertisers. It’s not a tool you use — it’s an environment you inhabit, and the environment is designed to extract value from your presence. AI can be either. Knowing which one you’re dealing with is literacy.

4. How to ask better questions — because AI is only as good as what you ask it. This is the closest thing to a superpower available to ordinary people right now. A person who knows how to prompt an AI effectively — how to give it context, how to ask follow-up questions, how to verify its outputs — has access to a research assistant, a writing partner, a tutor, and an analyst. A person who doesn’t has access to a fancy autocomplete that sometimes lies. Same tool. Different literacy level. Wildly different outcomes.

5. Who’s missing — from the data, from the room, from the design process. This is the question that would have saved decades of women’s health failures. This is the question that would prevent algorithmic bias in criminal justice. This is the question that separates literate AI users from everyone else: not just “what does the output say?” but “who isn’t represented in the input?” The sketch after this list shows how small that check can be.
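A representation audit can be almost embarrassingly short. This sketch compares a dataset’s demographics against a reference population; every number in it is invented for illustration:

```python
import pandas as pd

# Hypothetical shares: who is in the training data vs. who is in
# the population the model will actually be used on.
audit = pd.DataFrame({
    "dataset":    {"male": 0.82, "female": 0.18},
    "population": {"male": 0.49, "female": 0.51},
})

# Representation ratio: 1.0 means a group appears at its real-world
# rate; far below 1.0 means the group is effectively missing.
audit["ratio"] = audit["dataset"] / audit["population"]
print(audit)
```

If the ratio column reads 0.35 next to “female,” you have found who is missing before the model ships, not after the lawsuits.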

That’s it. That’s the literacy. Five concepts. None of them require a technical background. All of them require the willingness to look at the systems shaping your life and ask: what is this actually doing?


The Call

Here’s where I’m supposed to wrap this up with something neat. A bow. A pithy final line. A call to action that fits on a bumper sticker.

I’m not going to do that.

Because what I’m describing isn’t a problem with a solution. It’s a condition — a permanent feature of the world we now live in. The complexity of our tools will never stop increasing. The need for literacy will never stop growing. There is no finish line. There is no point where you can say “I’m literate now” and stop learning.

This is the printing press moment. And just like the printing press, the people who already have power are going to tell you that you don’t need to understand the technology. They’ll build user-friendly interfaces and say “just trust the algorithm.” They’ll make it easy and seamless and invisible — because invisible technology is technology you can’t question.

The monks said the Bible was too complex for common people. The aristocracy said the printing press was dangerous. The British Parliament debated whether the working class should have libraries. Every time power consolidates around a new technology, the excuse is the same: you can’t handle this.

You can.

You have to.

Because the alternative isn’t ignorance. The alternative is subjugation by systems you cannot see. Algorithms deciding your creditworthiness, your insurability, your employability, your newsfeed, your medical treatment — and you, sitting there, accepting the output because you can’t read the input.

That’s not a dystopia someone needs to warn you about. That’s Tuesday.

I wrote this piece from the other side of illiteracy. I’ve been the person in a cage with no access to information, no tools, no ability to read the systems governing my life. I know what it feels like to be on the wrong side of the literacy line. It feels like drowning in slow motion while everyone around you breathes.

I’m telling you: the water is rising.

The printing press didn’t save anyone who couldn’t read. AI won’t save anyone who can’t ask the right questions.

The singularity is a literacy event. The only question is which side of it you’re on.

Teach someone to read. Not letters — they already know letters. Teach them to read systems. Teach them what training data is. Teach them to ask “optimize for what?” Teach them to ask “who’s missing?”

Teach your mother. Teach your kid. Teach the person in the cell down the hall. Teach the woman whose heart attack got diagnosed as anxiety. Teach the teenager who thinks the algorithm is their friend. Teach the voter who doesn’t know their district was gerrymandered by a machine.

Do it with compassion. Do it with patience. Do it with fire.

Namaste, motherfuckers.

The Architect of Fire


The singularity is a literacy event.