Singularity entered the English language in the 14th century, without fanfare. It served as the noun form of singular, was rarely used, and had largely fallen out of circulation until Einstein's General Theory of Relativity revived it in 1915. In that theory, a singularity describes the center of a black hole: a point of infinite density and such immense gravity that neither light nor any object can ever escape from it.
Poll your friends and I'll bet they say that a singularity has something to do with black holes. And while black holes are very, very far away from us, they conjure in most minds a frightful scene; your poll will reveal that, too.
A singularity, metaphorically speaking, is a graspable idea, requiring no advanced degree in cosmology. One can imagine a ‘somewhere’ where the forces at work become so intense and distorted that even light disappears forever into a black hole. You would not want to venture close to the so-called event horizon at the edge of that black hole, any more than you'd want to visit one of the inner circles of Dante's Inferno, or tuck into a barrel whose destination is Niagara Falls.
We live in a culture of exaggeration. We're all batteries, endlessly charging and discharging our polarities. Given our hyper-energetic conditioning, it's surprising how infrequently we come across serious people using the metaphor of a black hole to describe some facet of our lives. That image, it seems, has been held in reserve, to be deployed only in exceptional, or exceptionally scary, situations.
Thus, we should pay singular attention to the recent statement by Sam Altman, the co-founder of OpenAI, who has described his vision of AI’s Singularity. He termed it "the hypothetical future point when artificial intelligence becomes so advanced that it triggers irreversible, exponential changes in society — beyond human control or understanding."
Altman's Singularity combines both the original 14th-century usage denoting a singular event (singular as in unique) and the image of a black hole as a zone of irreversible events, 'beyond our control or understanding.' Contradicting what he said just a week or so ago about AI's place on the continuum of technological inventions, Altman is now claiming that AI is not just another chapter in the book of technology's progress, but a unique variant. A unicorn, after all, is not just another kind of horse.
Irreversible changes...beyond human control or understanding.
Whatever other talents Altman possesses, he is, morally speaking, a child. For only a child would praise "irreversible, exponential changes in society, beyond human control or understanding" as a virtue. Beyond human control? How is that in any way a desired state of affairs? Human history is a chequered affair, but you can't seriously argue that a machine beyond our control is going to do a better job of it.
Technology's evangelists such as Altman need to read the great conservative observers of human nature, Edmund Burke chief among them. Burke lived during the time of the American and French Revolutions. He rejected the Enlightenment ideal that people are ultimately rational. Instead, he argued that we are both “imperfect and imperfectible,” meaning that no matter how much society progresses, human flaws and limitations will persist. And if we are flawed and limited, surely the machines we create will also have those characteristics.
Victor Frankenstein had a fascinating and difficult life. In the end, in that final scene in the Arctic, Victor dies, and the monster, mourning over his creator's body, departs to end his own life. That story is a cautionary tale about the cost of human inventiveness. I won't summarize Frankenstein; you can read it for free here. We should all send Sam Altman that link.
Quite early in the story, Victor Frankenstein recognized the dangers of what he was creating. Sam Altman is oblivious to them.
"If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second,” says AI scientist Sam Bowman. And he adds: “we just have no idea what any of it means.”
What follows is Perplexity's answer to the prompt, "Why do many software engineers say they don't really understand how AI works?" (Perplexity is a search engine built around a large language model.)
"Unlike traditional software, where logic is explicitly programmed and traceable, modern AI—especially large language models—consists of vast networks with millions or billions of parameters. These models learn from enormous datasets, and the internal workings become a “black box” that’s difficult to interpret, evenfor experts. While engineers can see the code and the data, understanding exactly why an AI generates a particular output is often impractical due to the sheer scale and complexity involved."
My son-in-law, a software engineer very knowledgeable about AI, takes a sanguine view of its future. He concludes that we are very far from the possibility of a rogue AI agent that could spread over a network and self-replicate, as he put it. Since we're paying the electricity bills that allow these machines to exist, he believes we would catch the behavior of any rogue agent and shut it off before much, if any, damage was done. If that's right, we'd be living in an inverted surveillance world, one in which humans are tasked with watching the machines they built to make sure those machines behave. Instead of machines surveilling us, knowing our location and eavesdropping on our conversations, we would need to keep a close eye on them to ensure they don't go rogue.
This is not to suggest that AI should be placed under house arrest out of concern for public safety threats that may arise sometime in the future. But it is to say that prudence requires we pay careful attention to the risks presented by a technology that its practitioners themselves admit they don't fully understand.