The Infrastructure of Certainty

Here’s something nobody talks about when they discuss artificial intelligence: the buildings.

You’ve probably driven past one without knowing it: massive, windowless structures that hum with the sound of thousands of cooling fans, consuming as much electricity as a small city. They’re going up in Virginia, Arizona, Texas, Oregon. Local residents fight them. Utility companies scramble to provide power. And most people have no idea what they’re actually for.

They’re data centers. And they’re not just storing information; they’re running something far more interesting.

The Pattern Nobody Connected

Think about the last time you applied for a job, requested a loan, or got pulled over by police. In each case, a decision was made about you. But here’s what’s fascinating: increasingly, that decision was made before the human involved even looked at your face.

An algorithm scored your resume. Another calculated your creditworthiness. A third one flagged your license plate based on where you’ve been driving.

This isn’t science fiction. This is Tuesday.

Malcolm Gladwell once wrote about the “tipping point”: that magic moment when an idea crosses a threshold and spreads like wildfire. We’ve crossed a different kind of threshold, and most of us didn’t notice. We’ve moved from computers that help humans make decisions to computers that make decisions humans then execute.

The difference matters more than you think.

What the Theorists Saw Coming

In 1954, seventy years ago, the French philosopher and theologian Jacques Ellul published La Technique, known in English as The Technological Society. His argument was fascinating and disturbing: technology doesn’t just give us new tools. It reorganizes society around its own logic. It makes certain ways of living inevitable and others impossible.

Ellul warned that we’d reach a point where “technique” (his word for the whole apparatus of technological methods and systems) would stop serving human purposes and instead reshape humans to serve its purposes.

You’ve probably felt this. The way your phone trains you to check it. The way algorithms determine what news you see. The way entire industries now optimize not for human flourishing but for “engagement metrics.”

Twenty years later, Michel Foucault was examining prisons and hospitals, and he noticed something interesting: modern power doesn’t work by saying “no.” It works by watching. By collecting data. By creating systems where people modify their own behavior because they know they’re being observed.

He built his analysis around the “panopticon”: Jeremy Bentham’s prison design in which guards could potentially see every prisoner at any moment, so prisoners learn to police themselves.

Now here’s where it gets interesting.

The Company Nobody Wants to Talk About

There’s a software company called Palantir Technologies. It’s named after the seeing-stones in The Lord of the Rings: magical objects that let their users, Sauron among them, watch distant events across Middle-earth. The founders thought this was… appropriate.

Palantir builds software that does something remarkable and terrifying: it takes data from dozens or hundreds of different sources and finds patterns humans can’t see. It was originally funded by In-Q-Tel, the CIA’s venture capital arm. It’s now used by:

  • Immigration and Customs Enforcement to track and deport immigrants
  • The Los Angeles Police Department for “predictive policing”
  • The U.S. military for targeting operations
  • Multiple governments worldwide for surveillance operations
  • The Israel Defense Forces for operations in Gaza

Not everyone knows this, but enough journalists have documented it that we can say it with certainty: this isn’t speculation. These are contracts, public records, shareholder reports.

Here’s what makes Palantir different from a simple database: it doesn’t just store information. It predicts behavior. It creates profiles. It suggests interventions. It turns human beings into data points, then treats those data points as more real than the humans themselves.

Michael Pollan spent years investigating how food systems shape consciousness and society. He found that once you industrialize food production, you don’t just change what people eat; you change how they think about eating, how communities form around meals, what “healthy” even means.

The same logic applies here. Once you industrialize decision-making through predictive analytics, you don’t just change how decisions get made. You change what it means to be innocent until proven guilty. You change whether someone’s past determines their future. You change whether redemption is even possible.

The Feedback Loop Nobody Designed But Everyone Built

Here’s where the pattern becomes clear.

Those data centers aren’t just processing information. They’re training AI models that require massive computational power. The models learn to predict human behavior by analyzing billions of data points: your purchases, your locations, your relationships, your searches, your pauses while reading articles like this one.

The predictions get fed into systems like Palantir, which integrate them with government databases, corporate records, social media, public cameras, license plate readers, and hundreds of other sources.

Those integrated analyses then generate “risk scores” that determine:

  • Whether you get hired
  • Whether you get a loan
  • Whether you get insurance (and at what rate)
  • Whether police show up in your neighborhood
  • Whether you get approved for housing
  • Whether your parole gets denied
  • Whether your flight gets flagged for “additional screening”

And here’s the fascinating part that should make you lean forward, not recoil: these systems create self-fulfilling prophecies.

If the algorithm says you’re a credit risk, you can’t get a loan, which means you can’t build credit, which confirms you’re a credit risk. If predictive policing sends more officers to your neighborhood, more arrests happen there, which the algorithm interprets as “high crime area,” which sends more officers, which generates more arrests.

The machine creates the reality it claims to be discovering.
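
To make the loop concrete, here is a minimal sketch invented for this article; the numbers, the 70/30 patrol split, and the detection rule are all assumptions, not any department’s or vendor’s actual model. Two neighborhoods have identical true crime rates; the only difference is two recorded arrests at the start.

```python
# A toy simulation of the predictive-policing feedback loop described above.
# All parameters are invented for illustration -- no real system is implied.

true_crime_rate = {"A": 0.10, "B": 0.10}   # identical underlying behavior
recorded_arrests = {"A": 12, "B": 10}      # A starts 2 arrests "ahead" by chance

for year in range(1, 6):
    # "Predictive" deployment: the neighborhood with more recorded arrests
    # is ranked the hotspot and receives most of the patrols.
    hotspot = max(recorded_arrests, key=recorded_arrests.get)
    patrols = {h: (70 if h == hotspot else 30) for h in recorded_arrests}

    # Recorded arrests scale with patrol presence, not with actual crime
    # differences -- there are none here.
    for h in recorded_arrests:
        recorded_arrests[h] += round(true_crime_rate[h] * patrols[h])

    print(f"year {year}: hotspot={hotspot}, arrests={recorded_arrests}")
```

Run it and the gap in recorded arrests widens every year, even though the underlying behavior in both neighborhoods is identical. The model’s output is the only thing feeding its input.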

Jean Baudrillard, the French theorist whose Simulacra and Simulation inspired The Matrix, warned about exactly this. He said we’d reach a point where the map precedes the territory, where the simulation becomes more real than reality itself, where institutions respond to the model rather than to the actual human beings in front of them.

We’re there.

The Infrastructure Makes It Inevitable

This is why those data centers matter.

Tim Ferriss built his career on a simple insight: if you want to change outcomes, change systems. Don’t fight willpower. Change the environment that makes certain behaviors easy and others hard.

Every data center that goes up makes this system more powerful, more comprehensive, more inescapable. The infrastructure creates path dependency: it makes certain futures possible and others impossible.

Cities fighting these installations aren’t just worried about noise and water usage. They’re fighting the material foundation of a surveillance architecture that, once built, will reshape their communities whether they consent or not.

And here’s what’s not a conspiracy theory but documented fact: these facilities are being built with minimal public input, often through zoning loopholes, frequently with massive utility subsidies paid by taxpayers, to serve corporate AI projects that will then sell predictive analytics back to the same governments that subsidized the infrastructure.

You’re funding the construction of the machine that will score you.

What This Means For Your Actual Life

You’ve probably experienced this without naming it:

A job application that disappears into a void because resume-screening AI filtered you out for reasons you’ll never know. Health insurance that costs more because you live in a zip code the algorithm associates with risk factors. A loan denied because of correlations in your data that have nothing to do with your actual ability to repay.

In each case, there’s no human to appeal to. No reasoning to understand. No path to redemption. Just the score.

William Gibson, who coined the term “cyberspace” and essentially predicted our current moment in his 1984 novel Neuromancer, wrote: “The future is already here. It’s just not evenly distributed yet.”

Some communities are already living in the fully realized predictive-analytics surveillance state. Others are just starting to feel it at the edges. But the infrastructure buildout ensures everyone will get there eventually.

The Thing About Knowing

Here’s where this gets empowering rather than depressing.

These systems work best in darkness. They depend on people not understanding how decisions are made. They rely on the assumption that “the computer says so” is final.

But once you see the pattern, once you understand that these aren’t objective measurements but feedback loops, not discoveries but constructions, different strategies become possible.

Some cities are successfully blocking data center construction by organizing around water and energy concerns. Some states are passing laws requiring algorithmic transparency in hiring and lending. Some researchers are building tools to detect and challenge discriminatory AI systems. Some communities are choosing to opt out entirely, finding ways to transact and organize outside these systems.

Not everyone can do this. Not everyone wants to. But enough people are already experimenting with resistance that we know it’s not hopeless.

The question isn’t whether these systems exist; they do, and they’re growing. The question is what you’ll do now that you see them.

What You Can Actually Do

This isn’t about going off-grid or smashing computers. It’s about informed choice.

Immediate actions:

  • Request your data from credit bureaus and dispute inaccuracies (errors are common and consequential)
  • Check if your city is considering data center proposals (they’re often buried in planning commission agendas)
  • When denied services, ask for human review and specific reasons (many companies will provide this if pressed)
  • Support local legislation requiring algorithmic transparency and accountability

Longer-term awareness:

  • Understand that “personalized” often means “profiled”: the algorithm that recommends your shows is cousin to the one that scores your job application
  • Recognize that data you provide in one context (social media, shopping, location) feeds systems in completely different contexts (insurance, policing, employment)
  • Know that correlation and causation are different, but algorithmic systems increasingly treat them as the same (a toy illustration follows this list)
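
Here is that illustration, with invented data and a deliberately naive scorer; no real lender, bureau, or model is implied. Defaults in this history were driven by income, but income correlates with zip code, so a scorer trained on raw history punishes the zip.

```python
# A toy example of correlation treated as causation. The data and the
# "risk model" are invented for illustration only.

# Historical records: (zip_code, income_in_thousands, defaulted).
# Defaults were driven by low income, but zip "10001" was historically
# lower-income, so zip and default are correlated in the records.
history = [
    ("10001", 25, True), ("10001", 30, True), ("10001", 80, False),
    ("20002", 90, False), ("20002", 85, False), ("20002", 28, True),
]

def default_rate(zip_code: str) -> float:
    """Historical default rate for a zip code -- a pure correlation."""
    outcomes = [d for z, _, d in history if z == zip_code]
    return sum(outcomes) / len(outcomes)

# Naive scorer: rate applicants by their zip's historical default rate.
applicant = {"zip": "10001", "income": 95}   # high income, "wrong" zip
risk = default_rate(applicant["zip"])
print(f"risk score: {risk:.2f} -> {'DENY' if risk > 0.5 else 'APPROVE'}")
# Prints DENY: the zip-default correlation overrides the applicant's
# actual ability to repay.
```

Nothing in that history says zip code causes default; income does. But the correlation is baked into the records, so the scorer punishes the zip code, and the applicant’s actual ability to repay never enters the calculation.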

Community-level action:

  • Attend city council meetings about data center proposals
  • Support organizations challenging discriminatory algorithmic systems (the ACLU, EFF, and others track this)
  • Ask local police departments if they use predictive policing software (many do, few disclose)

The Choice That Remains

Guy Debord, writing in 1967, described the “society of the spectacle”: a condition in which lived experience gets replaced by mediated representations, where reality becomes whatever the screen shows you.

The predictive analytics infrastructure is the ultimate spectacle: a system that tells you who you are, what you’ll do, what you deserve, and calls it “data-driven decision making.”

But here’s what every theorist who warned us understood: these systems are built by humans, maintained by humans, and can be changed by humans. They’re not natural laws. They’re choices that got encoded and automated.

The infrastructure of certainty (the data centers, the algorithms, the integrated surveillance platforms) wants you to believe the future is already calculated. That you are your score.

You’re not.

Once you see how the machine works, you can’t unsee it. And once enough people see it, the machine starts to look less like destiny and more like something we built and could, if we choose, build differently.

The pattern is clear. The infrastructure is rising. The question is what we’ll do while we still have the power to choose.

The evidence discussed in this article comes from public records, shareholder reports, investigative journalism by outlets including The Intercept and The Guardian, academic research on algorithmic bias, and the published works of the theorists mentioned. This isn’t about hidden conspiracies; it’s about visible systems most people haven’t connected yet.
