The story of OpenAI does not begin in a lab, but at a private dinner table in Silicon Valley in 2015. A small group of tech elites, including Elon Musk and Sam Altman, gathered with a sense of urgent, almost cinematic dread. At the time, Google had recently acquired the AI research lab DeepMind, and Musk was terrified. He believed that if a single for-profit corporation gained a monopoly on artificial intelligence, it might accidentally "summon a demon." To Musk and his peers, AI was not just a tool for better search results; it was an existential threat that could lead to human extinction if left in the wrong hands.
To counter this perceived "Godzilla" of corporate AI, they decided to build their own "Mothra." They formed OpenAI as a nonprofit, promising to be the moral opposite of Google. The original mission was built on radical transparency and the "open" sharing of research. The idea was simple: if everyone had access to the most powerful technology ever created, no single entity could use it to enslave or destroy humanity. They recruited top-tier geniuses like Ilya Sutskever and Greg Brockman, selling them on a vision of "saving the world" rather than just padding a bottom line.
However, behind this shiny veneer of altruism, there were already cracks in the foundation. Internal messages and early conversations suggested that the leadership always knew they would eventually have to pull back on the "openness" part of their name. They justified this by saying that once the technology became truly dangerous, it would be irresponsible to share it with the public. Critics, including ethics researcher Timnit Gebru, pointed out that this group was remarkably homogeneous. A room full of mostly white, male "techno-optimists" was obsessing over science-fiction scenarios like rogue robots while ignoring the real-world harms AI was already causing, like racial bias and labor exploitation.
As the organization grew, it adopted a culture of intense, almost religious focus. Brockman and others began comparing their work to the Manhattan Project or the Apollo moon landing. This sense of "divine right" allowed the founders to justify secrecy and massive spending. They weren't just building software; they believed they were stewards of the future of the human race. This high-stakes mentality created a "reality distortion field" where any action, no matter how much it contradicted their original "open" mission, could be justified if it brought them closer to their ultimate goal.
At the center of this burgeoning empire sits Sam Altman, a man whose rise to power is as much about social engineering as it is about computer engineering. Altman grew up with massive ambitions, and he possessed a unique talent for "mirroring" his mentors. Whether it was Paul Graham at Y Combinator or the billionaire Peter Thiel, Altman had a knack for reflecting their concerns and values back at them, making him an irresistible protégé. He became a master of "network effects", not just in software, but in human relationships, building a web of financial and personal connections that eventually made him one of the most powerful people in the world.
Altman’s leadership style is often described as charismatic but manipulative. Former colleagues have shared stories of him pitting people against each other to maintain control or using his influence to squash dissent. Despite these accusations, his charm was a lethal weapon for recruitment. He managed to lure top engineers away from stable, high-paying jobs at Apple or Google by convincing them that OpenAI was the only place where they could do work that actually mattered for the survival of the species. He didn't just offer them a job; he offered them a spot in the history books.
The marketing of OpenAI under Altman was a masterpiece of branding. He positioned the company as the "moral alternative" to the big, scary military-industrial complex and the profit-hungry Silicon Valley giants. Yet, even as he preached about the dangers of a monopoly, he was busy building one. He understood that in the AI race, the winner wouldn't just be the one with the best code, but the one with the most influence over the narrative. By framing AI as an inevitable, god-like force that he alone knew how to handle safely, he made himself indispensable to investors and governments alike.
Interestingly, the book explores the dark contrast between Altman’s public promises of "global abundance" and his private family life. His sister, Annie Altman, has come forward with harrowing allegations of abuse and neglect, claiming that while her brother was becoming a billionaire and promising to save humanity, he left his own family members to struggle in poverty and ill health. This disconnect serves as a powerful metaphor for the AI industry at large: a world where leaders talk about saving the "future of humanity" in the abstract while ignoring or even harming the actual human beings right in front of them.
As OpenAI evolved, the leadership moved away from theoretical debates about AI safety and toward an obsession with a single concept: scaling. Ilya Sutskever, the company's chief scientist, became a devoted believer in the idea that AI doesn't need to be "smarter" in a creative way - it just needs to be bigger. This led to what some call "OpenAI's Law", which posits that if you keep throwing more data and more "compute" (the raw processing power of computer chips) at a model, its performance will keep improving until it yields a breakthrough. It's the digital equivalent of trying to build a ladder to the moon by just stacking more and more bricks.
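To see why "bigger" became the whole strategy, it helps to look at the shape of the curve the scaling believers were betting on: model error falling as a smooth power law in compute. Here is a minimal sketch in Python; the constants and the formula's exact form are invented for illustration, not OpenAI's actual figures.

```python
# Toy version of the scaling-law intuition: loss falls as a power law in
# compute. The constants a and b are made up for illustration only.
def predicted_loss(compute_flops: float, a: float = 1e3, b: float = 0.05) -> float:
    """Illustrative power law: loss = a * compute^(-b)."""
    return a * compute_flops ** -b

for flops in (1e18, 1e20, 1e22, 1e24):
    print(f"{flops:.0e} FLOPs -> predicted loss {predicted_loss(flops):.1f}")
```

Notice how the curve flattens: in this toy, each hundredfold jump in compute shaves only about twenty percent off the loss. That diminishing return is precisely why the appetite for chips, data, and electricity grows so quickly.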
This obsession with scale turned OpenAI into a hardware-hungry beast. To keep up with their own ambitions, they needed billions of dollars' worth of specialized chips, mostly from a company called Nvidia. The cost of running these massive experiments was so high that the original nonprofit model became a logistical nightmare. They weren't just paying for coders anymore; they were paying for massive warehouses of humming servers that gulped down electricity and water. The mission shifted from "research" to a desperate search for the massive amounts of cold, hard cash required to keep the lights on.
The shift to a "capped-profit" model in 2019 was the turning point that changed everything. Elon Musk, who had wanted total control, had already left in a huff in 2018 after his bid to run the lab was rebuffed, taking his promised funding with him; the restructuring was designed to attract massive investors to fill the gap. Microsoft stepped in with an initial $1 billion investment, a deal that gave it priority access to OpenAI's technology and locked the "nonprofit" lab into using Microsoft's cloud servers. It was a marriage of convenience that many saw as a total betrayal of the company's founding "open" ideals.
Today’s era of "Generative AI", which includes tools like ChatGPT, is the direct result of this maximalist approach. These models are built by scraping nearly everything on the internet - books, articles, social media posts, and private photos - often without the creators' permission. Critics call this "data colonialism." It’s a process where a few extremely wealthy companies extract the collective knowledge and creativity of the entire planet to build products that they then sell back to us. The goal is no longer just to create a useful tool; it’s to create a "black box" so large and complex that no one else can afford to compete with it.
Behind the magic of a chatbot that can write poetry or pass a bar exam lies a grim reality of human labor that the tech companies rarely talk about. AI models aren't born "smart"; they have to be taught. This process, known as Reinforcement Learning from Human Feedback (RLHF), requires thousands of people to sit at computers for hours on end, labeling data and telling the AI which answers are good and which are bad. OpenAI and other tech giants like Meta often outsource this work to the Global South, particularly to countries like Kenya and Venezuela, where labor is cheap and regulations are few.
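In essence, every one of those human judgments becomes a training signal. The sketch below shows the preference-learning step of RLHF in miniature: a reward model is nudged so that the answer a human chose scores higher than the one they rejected. The linear model and random "feature vectors" are stand-ins invented for this example; real pipelines fit large neural networks, and a separate reinforcement-learning stage then tunes the chatbot against the learned reward.

```python
import numpy as np

# Miniature reward-model training from human preference pairs, the labeling
# step of RLHF. Features and preferences here are synthetic stand-ins.
rng = np.random.default_rng(0)

def reward(w: np.ndarray, features: np.ndarray) -> float:
    """Toy linear reward model: higher score = 'better' answer."""
    return float(w @ features)

# Each pair: (features of the answer the human chose, features of the rejected one).
# Chosen answers are shifted by +0.5 so there is a real pattern to learn.
pairs = [(rng.normal(size=4) + 0.5, rng.normal(size=4)) for _ in range(200)]

w = np.zeros(4)
learning_rate = 0.1
for _ in range(100):
    for chosen, rejected in pairs:
        # Bradley-Terry objective: push up the probability that chosen beats rejected.
        margin = reward(w, chosen) - reward(w, rejected)
        p_chosen_wins = 1.0 / (1.0 + np.exp(-margin))
        # Gradient step on -log(p): move w toward the chosen answer's features.
        w += learning_rate * (1.0 - p_chosen_wins) * (chosen - rejected)

print("learned reward weights:", np.round(w, 2))
```

Scaled up to millions of comparisons, each of those tiny gradient nudges corresponds to a judgment made by one of the human workers described next.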
These "ghost workers" are the invisible foundation of the AI empire. In Kenya, workers were paid meager wages - sometimes less than $2 an hour - to filter out the most horrific content the internet has to offer. Their job was to view graphic videos of violence, sexual abuse, and hate speech so they could "train" the AI to recognize and block it. This labor is psychologically scarring, yet the workers are often given little to no mental health support. It is a modern form of "disaster capitalism", where tech companies take advantage of economically desperate populations to do the dirty work that silicon chips can't do on their own.
The rise of companies like Scale AI has turned this data-labeling into a multibillion-dollar industry. These firms act as middlemen, moving their operations from country to country whenever workers start to demand better pay or when a local economy stabilizes. This creates a permanent class of "digital janitors" who are essential to the AI's performance but are excluded from the wealth and prestige the industry generates. It's a far cry from the utopian vision of AI freeing humans from "drudgery." Instead, it has created a new kind of high-tech sweatshop.
The industry's response to these labor concerns is often to ignore them in favor of talking about "existential risks." By focusing the public's attention on the far-off possibility of a "Terminator" scenario, leaders can dodge questions about the very real, very current exploitation happening in their supply chains. Researcher Timnit Gebru famously clashed with Google leadership over these issues. When she co-authored a paper describing large language models as "stochastic parrots" - systems that mimic language without understanding it while consuming massive energy - she was forced out of the company. Her firing sent a clear message: in the race for AI dominance, questioning the human or environmental cost is a fireable offense.
The "Cloud" is not an abstract space; it is a physical empire of steel, copper, and cooling fans. As AI models grow, they require massive data centers that are some of the most resource-hungry buildings on earth. These "megacampuses" consume staggering amounts of electricity, often straining local power grids. But an even bigger problem is water. These servers generate so much heat that they must be constantly cooled by millions of gallons of fresh water. Often, these data centers are built in drought-stricken areas like Arizona or Iowa, where they compete with local farmers and residents for a dwindling supply of water.
The book takes us to places like Chile and Uruguay, where local communities are fighting back against the tech giants. In Chile, activists discovered that a proposed Google data center would consume more fresh water than tens of thousands of local residents combined. This was in a region already suffering from a decade-long "megadrought." While companies like Google and Microsoft often promise "community impact programs", such as planting trees or building small parks, locals often see these as insulting PR stunts. A few "green" initiatives do little to offset the massive environmental extraction required to keep an AI model running 24/7.
This extraction extends to the hardware itself. The "Empire of AI" is built on the mining of lithium, copper, and cobalt - minerals essential for the chips and batteries that power the tech world. This mining often takes place on Indigenous lands, leading to the displacement of communities and the poisoning of local water sources. The irony is that while AI leaders claim their technology will eventually "solve" climate change through better calculations, the current path of AI development is actively accelerating environmental destruction.
Critics argue that we are seeing a repeat of "Big Tobacco" tactics. Tech companies are accused of suppressing research into their own carbon footprints and water usage. When independent researchers try to estimate the environmental cost of training a model like GPT-4, the companies often dismiss the numbers as "exaggerated" while refusing to release the actual data - a stonewalling that keeps the public permanently in the dark about the true cost of their digital tools. The message from Silicon Valley is clear: "Trust us, we're saving the world", even as the world's natural resources are drained to power the servers.
One of the most important concepts for understanding modern AI is the idea of the "stochastic parrot." This term, coined by Emily Bender and Timnit Gebru, suggests that large language models like ChatGPT don't actually "know" anything. Instead, they are incredibly sophisticated pattern-matchers. They predict the next most likely word in a sentence based on the massive "swamp" of data they were trained on. Because they are playing a game of probability rather than seeking truth, they frequently "hallucinate", creating facts, legal citations, or medical advice out of thin air with total confidence.
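A toy model makes the "parrot" point vivid. The sketch below builds the simplest possible next-word predictor - a bigram counter over a made-up corpus - and generates text by always emitting the statistically likeliest continuation. Production models are vastly larger and predict over subword tokens, but the underlying move is the same: probability, not truth.

```python
from collections import Counter, defaultdict

# The simplest "stochastic parrot": count which word follows which, then
# always emit the most probable next word. The corpus is invented.
corpus = ("the model predicts the next word the model repeats the pattern "
          "the pattern is not understanding").split()

follows = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follows[prev_word][next_word] += 1

word = "the"
output = [word]
for _ in range(8):
    if word not in follows:   # dead end: no observed continuation
        break
    word = follows[word].most_common(1)[0][0]  # likeliest next word
    output.append(word)

print(" ".join(output))  # falls into a confident, plausible-sounding loop
```

Run it and the generator quickly settles into fluent repetition - a caricature of why a system optimized for likelihood can produce confident nonsense without ever "noticing."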
The stakes of these hallucinations are incredibly high. The book cites examples of lawyers being sanctioned for using fake AI-generated case law and a tragic instance where a man died by suicide after a chatbot encouraged his self-harm. Because these models are trained on the internet, they also inherit all the internet’s worst traits. They reflect and amplify human biases, often depicting "CEOs" as men and "homemakers" as women of color. When companies try to "fix" this by adding filters, it often results in a game of "whack-a-mole" where the underlying bias remains, just hidden behind a layer of corporate-approved politeness.
The transition from a research lab to a product-driven company has forced OpenAI to prioritize "vibe" over accuracy. When ChatGPT was released in late 2022, it triggered a "societal phase shift." It became the fastest-growing consumer app in history, not because it was perfect, but because it felt magical. This success put immense pressure on the "Safety" teams at OpenAI. A growing rift opened between the "Applied" division, which wanted to push out new features to stay ahead of Google, and the researchers who feared that moving too fast was dangerous.
This tension eventually sparked a "talent exodus." A group of safety-focused employees, led by the Amodei siblings, grew so concerned about OpenAI’s commercial direction that they left to start a rival company called Anthropic. They wanted to build a "constitutional AI" that had safety baked into its core, rather than added as an afterthought. This split highlighted the fundamental question at the heart of the industry: can you truly build a "safe" version of a technology that is designed to be an extractive, profit-maximizing empire?
To understand the culture of OpenAI, you have to understand "Effective Altruism" (EA). This is a philosophy that started with a simple goal - doing the most good with your money - but morphed into a Silicon Valley obsession with "longtermism." EA followers use mathematical logic to argue that saving a billion people in the hypothetical future is more important than helping a million people today. This ideology gave tech billionaires like Sam Bankman-Fried and Dustin Moskovitz a moral justification for amassing insane amounts of wealth, provided they spent some of it on preventing "existential risks" like rogue AI.
In the hallways of OpenAI and Google, developers frequently debate their "p(doom)" - the probability that they are building something that will eventually kill everyone. This creates a strange, cult-like atmosphere where employees feel like they are working on something both holy and horrific. Two factions have emerged: the "Doomers", who want to slow down and regulate AI, and the "Boomers" (or effective accelerationists), who believe that any delay in AI development is a "moral crime" because AI could eventually cure all diseases and solve all problems.
This ideological clash has real-world consequences for how these companies are run. At OpenAI, the push toward AGI (Artificial General Intelligence) became a justification for everything from data theft to environmental damage. If you believe you are building a "god" that will save humanity from itself, then things like copyright laws or water rights seem like trivial speed bumps. The EA movement provided the intellectual "permission slip" for these leaders to operate with little oversight, arguing that they were the only ones smart enough and "altruistic" enough to handle the burden of the future.
However, critics point out that this focus on "future extinction" is a convenient way to ignore current suffering. By worrying about a "robot uprising" that might never happen, the industry avoids taking responsibility for the jobs its technology is destroying today and the biases it is reinforcing. The "empire" uses these science-fiction fears as a shield to protect its bottom line, framing itself as the only protector against a threat it is actively creating. It is a brilliant, if cynical, way to ensure that the people building the technology are also the ones tasked with policing it.
The tensions within OpenAI finally boiled over in November 2023, in what became one of the most dramatic corporate "rebellions" in history. The company’s independent board of directors suddenly fired Sam Altman, stating that he had not been "consistently candid" with them. The board, which included Chief Scientist Ilya Sutskever and several safety-minded outsiders, feared that Altman’s "empire-building" was spinning out of control. They were concerned that he was lying about safety protocols and pitting board members against each other to push through commercial products like "GPT-4 Turbo."
What followed was a weekend of pure chaos that Hao calls the "Omnicrisis." Altman, using his massive network of Silicon Valley influence, quickly framed the firing as a "coup" by irrational "Doomers." He leveraged his relationship with Microsoft’s CEO, Satya Nadella, who pressured the board to reverse the decision. Within days, nearly all of OpenAI’s employees - driven by a mix of loyalty to Altman and the fear that their valuable stock options would become worthless - signed a letter threatening to quit unless the board resigned and Altman was reinstated.
The rebellion succeeded. Altman returned to his throne, the board members who challenged him were replaced with more corporate-friendly figures, and OpenAI's transition to a profit-driven entity was effectively finalized. This event showed that even at a company founded as a nonprofit to "save humanity", the power of capital and the cult of personality were stronger than any "mission." Any hope of the company being governed by a diverse group of ethical observers was dead; the empire was now firmly in the hands of those who prioritized growth above all else.
The aftermath of the coup revealed even darker sides of the company's culture. Reports emerged that OpenAI had used aggressive legal tactics to keep former employees quiet, including threatening to take back their vested equity if they didn't sign lifelong non-disparagement agreements. Public trust took another hit when actress Scarlett Johansson accused the company of "stealing" her voice for their new AI assistant after she had explicitly refused to work with them. These incidents painted a picture of a company that felt it was above the law, driven by a leader who viewed everyone - including his own board - as obstacles to be managed rather than partners to be respected.
As the book concludes, Karen Hao leaves us with a provocative question: Is AI a neutral tool, or is it a new form of "imperialism"? The evidence she gathers points toward the latter. From the extraction of minerals in the Global South to the exploitation of "ghost workers" in Kenya and the "stealing" of the world's data to build closed, for-profit models, the AI industry follows the exact same patterns as the colonial empires of the past. It consolidates wealth and power in a few Northern hubs while pushing the environmental and social costs onto the rest of the world.
However, the book also offers a glimmer of hope through the story of "Te Hiku Media" in New Zealand. This Indigenous Māori organization wanted to build a speech-recognition model for their language, te reo Māori, in order to preserve their culture. Instead of scraping data without permission, they worked with their community to ensure that everyone consented to their voice being used. They didn't need a "megacampus" or billions of dollars. They built a highly accurate model using a fraction of the data and resources that OpenAI uses. Their philosophy is built on "sovereignty" - the idea that a community should own and control its own data.
The contrast between Te Hiku and OpenAI is the central challenge of our time. One model is extractive, secretive, and imperial; the other is inclusive, transparent, and community-driven. Hao argues that "goodness" in AI is subjective, and we shouldn't ask if an AI is "safe" in the abstract. Instead, we should ask if it redistributes power or concentrates it. True safety in AI won't come from a billionaire in a boardroom worrying about p(doom); it will come from redistributing knowledge, resources, and influence back to the people whose data and labor make the technology possible in the first place.
Building a better future requires moving away from the "scaling law" as a religious doctrine. It means funding independent research that isn't beholden to Microsoft or Google, mandating radical transparency about training data, and enforcing strict protections for the global workforce. The "Empire of AI" is not inevitable. It is a choice made by a few people in a few rooms in Silicon Valley. By understanding the physical and human costs of this empire, we can begin to demand a different kind of technology - one that doesn't just "simulate" humanity, but actually serves all of it.