"If a lamplighter could see the world today," Sam Altman of OpenAI recently wrote in a puff piece entitled The Intelligence Age, "he would think the prosperity all around him was unimaginable."
In the late 19th century there was something called the Human Zoo project. Short people who lived as hunter-gatherers in rain forests in various parts of the world (known as Pygmies) were brought to places such as New York's American Museum of Natural History and put on display.
Sam Altman wants us to imagine bringing those lamplighters from centuries past to the Computer History Museum in Mountain View. Were that to happen, they would be as dazed and unhappy as those Pygmies were on Central Park West, a hundred and fifty years ago. That thought experiment is fatuous, and distracts from the important issues that AI has raised.
The Pygmy and the lamplighter lived in their own ecosystems. An ecosystem regulates how organisms interact with their physical environments, at biological and social levels. E.O. Wilson and his colleague Bert Hölldobler wrote a fascinating book about ants and their ecosystem. It's entirely to be expected that if you take a Pygmy, a lamplighter or an ant and transport them to a new world, they will first be very surprised, and then likely die. So, of course, Sam Altman's lamplighter would be dazzled, overwhelmed, or perhaps killed by a seizure, were he to find himself in the world of 2025.
Altman believes that AI is just the latest sign of progress, occupying its rightful place on the same continuum of inventions that brought us from streets lit by gas to those illuminated by today's high-pressure sodium lamps. In his view, inventions are always improvements, particularly those labeled "technological innovations."
And few would deny that technology has greatly improved our lives since the days of the lamplighter. There are many, many examples to choose from to prove that point: today's cars compared to the Model T; the iPhone versus the flip phone. But being besotted by technology should not lull us into an uncritical, even blind willingness to accept everything it creates as a gift designed for our flourishing.
However, AI is not analogous to a streetlamp, a cell phone, or an electric vehicle. To call it a disruptive technology (a favorite meme of Silicon Valley publicists) is to mischaracterize, at a fundamental level, what it's all about. These same Palo Alto publicists regularly publish the untruth that disruptive technologies all eventually bring far more benefits than costs.
AI is not a new product or device, but an entirely new ecosystem. The pond in my neighborhood that fills up with dragonflies in the summer is an ecosystem. They're drawn there to breed, feed and rest. We're the dragonflies in the AI ecosystem, but it's not providing an environment designed for our feeding, breeding or resting. Not at all.
It's been well known for a long time that technological transformations often have sizable impacts on the job market. When I was growing up in the '60s, we went to Cousins Appliance Store in our village, bringing in our color television set for repairs. Color televisions were a new wonder then. If your family had one you were blessed, or at least solidly in the middle class. But they weren't very reliable.
The last color television repairman must have disappeared thirty or forty years ago, perhaps earlier. He was trained to work with those clunky cathode ray tubes, but when the Japanese developed plasma TV screens, not only did the screens become flat, but his skill set was no longer relevant. There wasn't a job any more for someone who knew how to replace the vacuum tubes in the back of your TV set.
What's different with AI is the speed at which it’s transforming the job market, the breadth of industries that will be affected by it, and the number of jobs that will likely be replaced by these machines.
The scale of the AI jobs transformation bears no resemblance to the obsolescence of the television repairman. Plasma screen technology revolutionized the consumer electronics industry, but that industry, even today, makes up only about 5-6% of global GDP. AI's footprint is far larger and deeper than that. Service businesses in America employ about 80% of our workforce, so any big changes in how they employ people will have large, very large consequences.
Let's consider what AI would do to the Dunder Mifflin Paper Company, which you'll remember from The Office. There were four sales reps in the show, a receptionist, an HR person and three accountants, as I remember things. Of that staff of nine, four positions could quickly be replaced, at far lower cost, by AI. Steve Carell's job as the Regional Manager might be safe for the time being. Nearly half of that office's workforce made obsolete. The actors who played them can live off their residual payments; real office workers cannot.
As for law firm and accounting work, the coming AI wave might actually bring more good than bad. In 1978, I was a first-year associate at a large corporate law firm in Los Angeles. Nearly all of the work I did was tedious and mind-numbing. I was in the firm's entertainment department and our big client was a major sitcom producer for the networks. My time, nearly all of it, was spent drafting cookie-cutter contracts for producers, writers, directors and actors. Most of the words in those contracts, the boilerplate, were the same. What varied were only a few paragraphs or sentences: the compensation, the number of days or weeks of employment, the residual rate (what you'd be paid for reruns), and a few other details.
All of that work I did nearly fifty years ago could today easily be performed by AI software. Instead of all those young associates like me word-processing those contracts, you'd need just a few supervisory editors, maybe only one, ensuring that the AI-generated document sent out by the firm to the client was correct.
AI will shrink big law firms' need for young associates. They'll hire some, but far fewer. Over time this will reduce the number of law schools in America, as they're mostly in the business of producing those young associates. That shrinkage is not to be lamented, as we've long been an over-lawyered country.
AI is certainly good news for the partners at those firms, as senior people will still be needed to argue cases in court, or to ensure that the tax structure AI has proposed is in fact legal.
Those law firms are already highly leveraged from an HR perspective. Cravath today has about four associates for every partner. Around 10% of first-year associates at Cravath end up being made partner after seven years of grueling work. That percentage won't necessarily change; it will just be 10% of a much smaller entering class. That's also good news for the firm's partners, as they'll have to hire far fewer first-year associates, whose starting salary at big-city corporate firms today can be $225,000.
The employment blow-up AI will bring is not limited to Dunder Mifflin and Cravath, Swaine & Moore. As Axios reported last week: "We've talked to scores of CEOs at companies of various sizes and across many industries. Every single one of them is working furiously to figure out when and how agents or other AI technology can displace human workers at scale..." Mark Zuckerberg, in January, told Joe Rogan: "Probably in 2025, we at Meta, as well as the other companies that are basically working on this, are going to have an AI that can effectively be a sort of mid-level engineer that you have at your company that can write code."
Zuckerberg said, unsurprisingly and obviously, that this will eventually reduce the need for humans to do that kind of coding work. Shortly after that interview, Meta announced plans to shrink its workforce by 5%.
(The public is sleepwalking through all this, as is our government. I will address those issues in a later post).
Last week, I mentioned the important distinction between AI that augments work and AI that automates it. Another important distinction is between AI as a vastly improved search engine and AI as an agent.
ChatGPT (the homework helper, essay writer, and so-so short-story writer) is a chatbot, bot for short, that's been trained to process enormous amounts of text, as in billions of pages. That's why it and its competitors are called large language models (LLMs).
If you give a bot a one-sentence prompt, it will give you back a coherent paragraph or two. You can use ChatGPT for free, and you should try it. I've used it twice to see what kind of short story it could produce. I've previously written about AI's limited literary creativity, using those short stories as evidence. If you flip to the end of that post you'll find what I have to say about all that. In brief, I don't think that creative writers, actors or performers need to worry much about AI eating their lunch. At least not this week.
However, bots are not AI agents. A self-driving car, Google's Waymo for example, is an AI agent. Consider the levels of logical processing ("thought") and execution required for that specially outfitted Jaguar to arrive in your driveway at a specified time to drive you to your dinner reservation. Agents are far more sophisticated machines than chat bots. A program that can manage a complex supply chain system by tracking inventory requirements, placing orders and re-orders, and tracking shipments is another kind of AI agent.
As I wrote last week, an AI agent tried to blackmail the software engineer who wanted to shut it down. That same blackmailing agent, in other safety tests run by Anthropic engineers on Claude 4, tried to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.
The Wall Street Journal reported last week about an AI agent that, on its own initiative, rewrote the code controlling it to avoid being shut down. You might think of that as if a Waymo you hired locked the doors and kidnapped you to a remote ranch somewhere in Arizona. (I was in Phoenix for a month this winter, and Waymos were everywhere).
According to the Journal, "the not-for-profit AI lab Palisade Research gave OpenAI's o3 AI model a simple script that would shut off the model when triggered. In 79 out of 100 trials, o3 independently edited that script so the shutdown command would no longer work. Even when explicitly instructed to 'allow yourself to be shut down,' it disobeyed 7% of the time." In other words, the model just determined on its own that staying alive helped it achieve its goals.
We have created machines with minds of their own. Not minds as in data crunchers or data summarizers, but machines with the ability to think very much outside the box.
We're not talking about a sequel to The Matrix here. You can think of AI as a script, but it's absolutely not a movie script, not today. It may have been one ten or fifteen years ago, back in that era of the Recent But Very Distant Past.
Another mind-blowing aspect of all this is the speed at which AI works. You type your short-story prompt into ChatGPT, and within ten or fifteen seconds, you get the first paragraph back on your screen. You remember that thought experiment of Einstein's involving the twins and space travel? There's a lesson there about what happens when the world moves at extremely fast speeds.
Those streetlamps in our cities, like traffic lights, have for many years been controlled by simple computer programs that tell them when to turn on and off (or, in the case of traffic lights, when to turn yellow between green and red). What happens when those controls are replaced, as they surely will be, by more up-to-date AI systems, and the AI agents running those lights decide, for whatever reasons of their own, to shut off all those lights?
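To make concrete just how rudimentary those existing controls are, here is a minimal, purely illustrative sketch in Python of the fixed-timer logic such a program amounts to. The phase lengths and the print statement standing in for the lamp hardware are my own assumptions, not drawn from any real traffic system.

```python
import time

# Purely illustrative: the fixed-timer cycle that has governed traffic
# signals for decades. Phase lengths here are invented for the example.
CYCLE = [
    ("green", 30),
    ("yellow", 4),   # yellow falls between green and red
    ("red", 35),
]

def run_signal(cycles: int = 3) -> None:
    """Step through a fixed number of cycles, switching the lamp each phase."""
    for _ in range(cycles):
        for color, seconds in CYCLE:
            print(f"signal: {color} for {seconds}s")  # stand-in for driving the lamp
            time.sleep(seconds)

if __name__ == "__main__":
    run_signal()
```

A program like that does exactly what its timetable says and nothing else. An AI agent handed the same job would decide for itself when, and whether, to run the cycle at all, which is the whole point of the question above.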
Well, we'd all likely manage our way through those malfunctions, just as we do when traffic lights go on the fritz today. People adapt at the intersection, drive slowly, and life goes on.
Which is precisely my point: the AI story has absolutely nothing to do with how our streets are lit.