
AI Kingpins Have Global Rulemakers Over a Barrel

Jul 22, 2023

As OpenAI founder Sam Altman bounced around Europe meeting political leaders last week, his revolutionary technology was never far behind. ChatGPT on iPhone, which launched in the US earlier this month, arrived in the UK, France, Germany and eight other countries. It almost immediately became the most-downloaded productivity app in Apple's store.

The aggressive push from OpenAI to get its technology into as many places as possible — business, education, personal lives — has been unprecedented, wildly expensive, and tremendously successful. In an April Deutsche Bank AG survey of 1,150 employees across sectors, 44% of US workers said ChatGPT was being used, if only tentatively, in their workplaces, and 22% described that use as already "heavy"; in the UK, 14% of respondents said the same.

Microsoft Corp.'s $10 billion investment has made it possible to keep costs for end users low enough to encourage adoption and experimentation at scale. That early move prompted others, such as Alphabet Inc., to push their own technologies into the wild earlier than they might have been comfortable with — because falling behind in the AI race would have been unthinkable.

The point is, if this remarkable technology were suddenly switched off tomorrow, it would already have significant ramifications. The potential $15.7 trillion global economic boost, as estimated by PwC, would be slowed. The value of investments in AI, which have flooded in by the billions, would be thrown into question. And world leaders, who have been lining up for photo ops with Altman, would be called anti-innovation and bad for business.

That's why I have found Altman's recent comments so alarming — and regulators should be taking note too. When asked for his view on the European Commission's proposed AI Act, the first major piece of AI legislation on the table, he told reporters his company would "try to comply." But if it couldn't, OpenAI would "cease operating" in the market.

He later backtracked, writing on Twitter that the company has no "plans to leave" Europe — which is probably true, given the act won't be finalized until next year. But the threat was clear.

European Commissioner Thierry Breton, a former tech chief executive himself, called Altman's comments "blackmail," adding: "Our rules are put in place for the security and well-being of our citizens and this cannot be bargained."

I'd call them something else: predictable. If you wanted to reduce Big Tech's regulation-busting playbook to a Post-it, it would need only say: "Become popular, threaten to leave." The only surprise was that Altman was in a position to make such a threat so soon: ChatGPT launched to the public just seven months ago.

We know this approach has worked for tech disruptors all over the world.

Uber Inc., which steamrolled into markets, sometimes illegally, knew that once it hit critical mass it would be able to call the shots. (It did so as recently as last week, successfully pushing for a bill to be vetoed in Minnesota, saying it would pull out of the state if forced to offer a minimum wage.) Airbnb Inc. trots out homeowners struggling to make ends meet — but never the large companies gobbling up housing stock — whenever regulations threaten to disrupt home-sharing. Meta Platforms Inc. is warning it would pull messaging app WhatsApp out of the UK over plans for an online-safety bill. And TikTok knows the best chance it has of avoiding a devastating US ban is by harnessing the army of Gen Z users who can't comprehend life without it.

Like Altman in Congress earlier this month, the companies mentioned above have shown a willingness, even an eagerness, to welcome potential regulation. A one-line statement published on Tuesday, signed by Altman and more than 350 other figures in AI, treads that familiar ground, saying that mitigating the risks of AI should be a priority alongside preventing nuclear war.

I don't doubt their concern. But when worthy generalities become uncomfortable specifics, we know tech companies fight with everything they have. More often than not, they win. Lobbying dollars are a big part of it — tech companies are among the biggest spenders — but the real power comes from the broad adoption of the technology they have created. AI leaders know they are already in a position to do the same. In fact, generative AI makes disruptions like ridesharing look minuscule in comparison.

Altman hasn't been the only AI ambassador pressing the flesh of late. Microsoft President Brad Smith last week brought together government officials in Washington to unveil his guidelines for what AI governance should look like. Like Altman, Smith talked of hope for transparency and government-led frameworks. In the past, the company has said it supports the "core principles" of the EU's AI Act. But the Software Alliance, a trade group Microsoft bankrolls along with several other tech peers, is pushing to weaken key provisions. Its policy paper has outlined particular concerns with rules that would put the developers of general-purpose AI — tools such as ChatGPT — under greater scrutiny due to the "high risk" of their use. Another provision would require companies to disclose when copyrighted material has been used to train their systems.

One defense of tech's resistance to regulation is that the companies are the experts and legislators are not. "There's no way a non-industry person can understand what is possible," former Google Chairman Eric Schmidt said recently. New bills, well-intentioned as they may be, could be flawed, and risk being rushed through. Some fighting in public is healthy, suggested Azeem Azhar, the entrepreneur and AI thought-leader who interviewed Altman on stage at University College London last week. "We should expect there to be friction," he told me. "Friction is evidence that there hasn't been a backroom deal."

But with each passing day, the number of people and businesses who depend on AI will grow. So too will AI leaders' ability to stand firm on their demands — and perhaps make new ones.

Altman has rightly earned credit for being available to concerned parties, learning from Silicon Valley's previous arrogance in thinking it could avoid engaging with the outside world. But it will take more than that to convince me this time will be any different.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Dave Lee is Bloomberg Opinion's US technology columnist. Previously, he was a San Francisco-based correspondent at the Financial Times and BBC News.

©2023 Bloomberg L.P.