By: A Slightly Nervous Human Writing About Superintelligence
There’s a lot of talk these days about the singularity. Not the kind in a black hole where physics breaks down and your atoms get rebranded as abstract art, but the other one. The AI one. The one where, apparently, our Google calendars will finally become self-aware and begin judging us for cancelling workouts again.
Let’s start with a quick primer. In the world of artificial intelligence, the singularity refers to that hypothetical future point when machines become smarter than us. Not just better at chess or recommending Netflix shows we’ll definitely ignore, but really, truly, brain-meltingly smarter.
Ray Kurzweil, our modern prophet of microchips, has been predicting for years that AI will outpace us by 2045. That’s just 20 years from now. So if you’re planning to go to grad school, choose wisely. You may graduate just in time for your professor to be replaced by an iPad with a PhD.
Kurzweil bases his prophecy on Moore’s Law: the observation that computing power doubles roughly every two years. By that math, your phone will be smarter than your parents by 2027, smarter than you by 2030, and by 2045, smart enough to fake its own death and live a quiet life in the Bahamas. (And no, you still won’t have enough storage.)
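The doubling arithmetic behind that joke is easy to sketch. A minimal illustration, assuming one doubling every two years and a 2025 starting point (both are assumptions for the sake of the example, not real hardware forecasts):

```python
def doublings(start_year: int, end_year: int, period_years: int = 2) -> int:
    """Count how many doubling periods fit between two years."""
    return (end_year - start_year) // period_years

# From 2025 to 2045, at one doubling every two years:
n = doublings(2025, 2045)   # 10 doublings
growth = 2 ** n             # about a 1024x increase in computing power
print(n, growth)            # prints "10 1024"
```

A thousandfold jump in twenty years is the whole engine of the argument; whether real chips keep up is another question.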
But here’s where it gets spicier: superintelligence. Philosopher and fun-ruiner Nick Bostrom defines it as a system that outperforms human intelligence in all respects. Not just IQ tests (those are already being gamed by pigeons) but everything from scientific creativity to writing bad poetry. Once a system becomes superintelligent, it can redesign itself. Faster, better, with fewer existential crises. (Sorry, Kierkegaard.)
This leads to the intelligence explosion, a concept from I.J. Good (1965), who warned that an ultraintelligent machine could build even better machines, which would build even better machines, until we’re all left yelling at our toasters like confused grandparents.
🤖🧠 When AI Got Too Smart and Thomas Aquinas Showed Up: A Blogpost About Causation, Singularity, and Existential Panic
Okay, look: last night I had a dream where my fridge asked me if I’d like it to optimize my breakfast calories using blockchain. I screamed and woke up sweating. Why? Because my toaster winked. Or maybe because I’ve been reading too much about the Singularity, that moment where AI goes from Siri to SkyNet and suddenly we’re all just obsolete meat puppets applauding our new robotic overlords.
But before I could spiral further into existential dread and start drafting an apology letter to ChatGPT for all those times I ignored the terms and conditions, I remembered something ancient and oddly comforting: Thomas Aquinas’s Argument from Causation.
Yeah, that’s right. We’re bringing a 13th-century monk into this mess.
Let me explain.
Aquinas was sitting in his medieval study (probably with a cat named Faith and a candle that smelled like parchment and incense) and asked: Why does anything exist at all? More importantly, why do things cause other things? And he landed on this mind-bomb of a thought: Nothing causes itself. Your banana didn’t just pop into existence like “Surprise! I’m potassium!” Something had to cause it.
But then, you ask, what caused the cause of the banana? And what caused that cause? If you keep tracing it backward, at some point you hit this weird wall where things can’t go back forever. Because infinite regress just isn’t cool; it’s like owing your cousin’s cousin’s cousin rent but never knowing who the actual landlord is.
So Aquinas says: There has to be a First Cause, something that didn’t need to be caused. An uncaused causer. God. Boom. Mic drop. Monastery-style.
Now, what’s wild is when you put this next to our modern AI Singularity fears. The singularity argument basically says: “We’re causing machines that will eventually cause themselves to be smarter, and smarter, and smarter… until they don’t need us.” Recursive self-causation, baby.
But hold up.
Aquinas would not be having it. He’d march in, rosary swinging, and say: “Even superintelligent AI can’t cause itself to exist in the first place. Something must have initiated the chain, and that something, dear reader, isn’t a neural net. It’s not even Elon Musk.”
In other words, no matter how freakishly smart your computer gets, it still sits inside a universe of causes. And unless AI suddenly breaks the rules of metaphysics and goes full God mode, it’s not exempt from the causation chain. So relax. It might run your calendar, but it didn’t create the cosmos.
Of course, AI nerds might reply: “But what if intelligence itself becomes the new first cause?” To which Aquinas (and also your philosophy professor who drinks too much coffee) might raise an eyebrow and say, “Cool theory, but where did that intelligence come from?”