Earlier this year, I noted that the big question about artificial intelligence is this: Who decides? That is, who will control this technology that could (a) bring unimaginable benefits to humankind, (b) annihilate humanity, or (c) do something in between? We now seem to have an answer to that profound question, and it is not encouraging.
This past week, the New York Times published a long four-part series on the development of AI within Silicon Valley that was pegged to the recent fight at OpenAI, the company that developed the much-ballyhooed AI system called ChatGPT. As you might recall, a few weeks ago, the board of OpenAI booted its chief executive, Sam Altman, citing vague reasons for his dismissal. That sparked outrage in Big Tech-land, where Altman was widely regarded as the preeminent proselytizer for AI, and triggered a rebellion within OpenAI, with hundreds of employees threatening to bolt if he were not reinstalled. After much to-and-froing, Altman got his job back, and the OpenAI board was reconstituted; the members who had fretted about Altman’s full-speed-ahead stance on AI were ousted.
You don’t have to be immersed in all the details of this corporate drama to recognize the significance of the episode. As Kevin Roose, a tech columnist for the New York Times, pointed out when the dust settled, this battle had been a clash between two opposing visions of AI: a transformative tool that will lead to world-changing innovations and ginormous profits for the firms that quickly advance it, or a leviathan that will kill us all. As he wrote, “Team Capitalism won. Team Leviathan lost.”
The more recent series in the Times reinforces Roose’s observation, and it illuminates a larger point: deciding what to do about AI is in the hands of a small group of tech bros (it’s mainly men) who place their own corporate interests and personal preferences ahead of all else. The rest of us are at their mercy.
The series opens with a famous anecdote: In 2015, when Elon Musk was celebrating his 44th birthday at a three-day bash in Napa Valley, he got into an intense post-dinner conversation with Larry Page, then the chief executive of Google. Page predicted that humans and AI would eventually merge, and various sorts of intelligences would compete for dominance. For him, that was good news. Musk retorted that such a development would be the end of humans; the machines would wipe them out. Page called Musk a “speciesist”—which he meant as an insult. And Musk took it as such. Their friendship was over.
This was a prelude, the Times notes, to the debate that would rage at the highest levels of Big Tech in the following years. Later that year, Musk and others in that world established OpenAI as a nonprofit to chart a course of responsible AI development. Don’t think of Musk as a hero, though. He soon changed his tune and left OpenAI when he concluded it was not moving fast enough. And OpenAI itself, under Altman’s leadership, ended up rushing out ChatGPT. The series documents repeated instances at Big Tech companies, such as Google and Microsoft, in which safety concerns collided with the desire for speed and profits. Guess which won out.
For example, when OpenAI unveiled ChatGPT, the industry freaked out:
Once ChatGPT was unleashed, none of that mattered as much, according to interviews with more than 80 executives and researchers, as well as corporate documents and audio recordings. The instinct to be first or biggest or richest — or all three — took over. The leaders of Silicon Valley’s biggest companies set a new course and pulled their employees along with them.
Over 12 months, Silicon Valley was transformed. Turning artificial intelligence into actual products that individuals and companies could use became the priority. Worries about safety and whether machines would turn on their creators were not ignored, but they were shunted aside — at least for the moment.
It was a gold rush. Microsoft, Facebook, Google, and others no longer considered the need for guardrails. None could allow another to gain an AI edge. After all, an AI-backed Bing (Microsoft’s search engine) could put Google out of business. These firms deemed a fervent embrace of AI an existential priority.
What was largely missing from the picture: the public and its representatives, i.e., the government. A small number of companies mostly run by billionaires were in the driver’s seat. They would determine what to do with this powerful technology that could threaten human civilization. Mark Zuckerberg, Elon Musk, Sam Altman—do we want the new world to be determined only by these tech oligarchs? We were not being given a choice.
The Times series chronicles governmental attempts to oversee the AI frenzy. But it’s unclear how much muscle these endeavors have. The third installment reports:
Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
“We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
Humble? Big Tech certainly isn’t. Microsoft and Google combined have at least 169 lobbyists in DC working the refs.
The White House and legislators have taken several steps. The Biden administration met with the Big Tech execs and asked them to concoct a set of self-regulations. The Federal Trade Commission has been looking into how OpenAI handles user data. Three months ago, Senate Majority Leader Chuck Schumer hosted a private meeting with lawmakers and Musk, Zuckerberg, Altman, Sundar Pichai of Google, and Satya Nadella of Microsoft to discuss AI rules. Musk, according to the newspaper, referred to AI’s “civilizational risks” (though he had started his own AI company), and Altman hailed it as the solution to global poverty. Schumer’s position: The tech companies know best. In other words, let them work (or fight) it out.
On October 30, President Joe Biden did try to catch up. He signed an extensive 111-page executive order on AI. It sought to establish standards and guidelines for the development of this technology—for instance, stating that tech companies must not release AI products before they are fully tested and that the test results should be shared with government regulators. It also called for guidelines on labeling AI deepfakes as such. But the order did not create a vigorous enforcement regimen. And congressional action is likely required for effective regulation. Good first moves, but more is needed, and AI is galloping ahead.
On Friday, the European Union went further and agreed to a new law to regulate AI, aiming to limit the technology’s ability to spread disinformation, automate away jobs, and imperil national security. The use of AI-backed facial recognition would be restricted, and AI-generated images would have to be clearly labeled. Makers of AI models would be required to disclose information about how these systems operate and to evaluate them for possible risks. But questions were immediately raised about the EU measure’s effectiveness. Many provisions would not kick in for a year or two. And it was uncertain how all this would be enforced across the 27-nation bloc.
It’s not that various governments aren’t trying. But it’s evident that the tech firms—and the billionaires—have the advantage. (And I haven’t even mentioned what the Chinese and Russians are doing beyond the reach of any EU or US guidelines.) The bros have already shoved aside safety concerns in the first laps of the AI arms race. And the speed of AI development outpaces the ability of most lawmakers to fully understand the issues at hand and legislate accordingly. We may be left with Altman, Musk, and others deciding the fate of this technology—and possibly the fate of us all. If that doesn’t scare you, go watch the Terminator movies.
By the way, there’s a new Terminator coming out next year. Unless, of course, the smart machines take over before then.