The Davos elite embraced AI in 2023. Now they fear it

A spotlight is on concrete hazards borne out last year by a flood of AI-generated fakes and the automation of jobs in copywriting and customer service. Picture: Gerd Altmann/Pixabay.

Published Jan 19, 2024

ChatGPT was the breakout star of last year's World Economic Forum, as the nascent chatbot's ability to code, draft emails and write speeches captured the imaginations of the leaders gathered in this posh ski town.

But this year, tremendous excitement over the nearly limitless economic potential of the technology is coupled with a more clear-eyed assessment of its risks.

Heads of state, billionaires and CEOs appear aligned in their anxieties, as they warn that the burgeoning technology might supercharge misinformation, displace jobs and deepen the economic gap between wealthy and poor nations.

In contrast to far-off fears of the technology ending humanity, a spotlight is on concrete hazards borne out last year by a flood of AI-generated fakes and the automation of jobs in copywriting and customer service.

The debate has taken on new urgency amid global efforts to regulate the swiftly evolving technology.

"Last year, the conversation was 'gee whiz,'" Chris Padilla, IBM's vice president of government and regulatory affairs, said in an interview. "Now, it's what are the risks? What do we have to do to make AI trustworthy?"

The topic has taken over the confab: Panels with AI CEOs including Sam Altman are the hottest ticket in town, and tech giants including Salesforce and IBM have papered the snow-covered streets with ads for trustworthy AI.

But the mounting worries about the perils of AI are casting a pall over the tech industry's marketing blitz.

The event opened Tuesday with Swiss President Viola Amherd calling for "global governance of AI," raising concerns the technology might supercharge disinformation as a throng of countries head to the polls. At a sleek cafe Microsoft set up across the street, CEO Satya Nadella sought to assuage concerns the AI revolution would leave the world's poorest behind, following the release of an International Monetary Fund report this week that found the technology is likely to worsen inequality and stoke social tensions. And Irish Prime Minister Leo Varadkar said he was concerned about the rise of deepfake videos and audio, as AI-generated videos of him peddling cryptocurrency circulate the internet.

But the calls for a response have laid bare the limits of this annual summit, as efforts to coordinate a global strategy for the technology are hampered by economic tensions between the world's leading AI powers, the United States and China.

Meanwhile, countries hold competing geopolitical interests when it comes to regulating AI: Western governments are weighing rules that stand to benefit the companies within their borders while leaders in India, South America and other parts of the Global South see the technology as the key to unlocking economic prosperity.

The AI debate is a microcosm of a broader paradox looming over Davos, as attendees strap on their snow boots to sample pricey wine, go on sledding excursions and belt out classic rock hits in a piano lounge sponsored by the cybersecurity firm Cloudflare. The relevance of the conference founded more than 50 years ago to promote globalization during the Cold War is increasingly in question, amid raging wars in Ukraine and the Middle East, rising populism and climate threats.

In a speech Wednesday, U.N. Secretary General António Guterres raised the dual perils of climate chaos and generative AI, noting that they were "exhaustively discussed" by the Davos set.

"And yet, we have not yet an effective global strategy to deal with either," he said. "Geopolitical divides are preventing us from coming together around global solutions."

The forum's AI Governance Alliance - a coalition of tech executives, digital ministers and academics - released papers on AI policy Tuesday. But it was clear that global leaders are moving at different paces to address disparate priorities.

White House Office of Science and Technology Policy Director Arati Prabhakar touted the Biden administration's new AI executive order, saying in an interview that "everyone is looking to the U.S. for leadership" on how to address AI. Meanwhile, European leaders pointed to their recent deal on the E.U. AI Act as a sign of the bloc's global influence over the future of AI legislation.

Leaders from the Global South also sought to put their mark on the regulatory debate. Paula Ingabire, Rwanda's minister of information communication technology and innovation, announced plans for an international summit this year, where leaders from Africa can discuss the impact of AI on their economies.

It's clear tech companies are not waiting for governments to catch up, and legacy banks, media companies and accounting firms at Davos are weighing how to incorporate AI into their businesses.

Davos regulars say growing investment in AI is evident on the promenade, where companies take over storefronts to host meetings and events. In recent years, buzzwords like Web3, blockchain and crypto dominated those shops. But this year, the programming shifted to AI. Hewlett-Packard Enterprise and the Emirati firm G42 even sponsored an "AI House," which converted a chalet-style building into a gathering spot to listen to speakers including Meta chief AI scientist Yann LeCun, IBM CEO Arvind Krishna and MIT professor Max Tegmark.

The promenade effectively serves as "a focus group for the next emerging tech wave," said veteran WEF attendee Dante Disparte, chief strategy officer and head of global policy at Circle.

Executives signaled that AI will become an even more influential force in 2024, as companies build more advanced AI models and developers use those systems to power new products. At a panel hosted by Axios, Altman said the overall intelligence of OpenAI's models was "increasing across the board." Long-term, he predicted the technology would "vastly accelerate the rate of scientific discovery."

But even as the company powers ahead, he said he worries politicians or bad actors might abuse the technology to influence elections. He said OpenAI doesn't yet know what election threats will arise this year but that it will attempt to make changes quickly and work with outside partners. On Monday, as the conference was kicking off, the company rolled out a set of election protections, including a commitment to help people identify when images were created by its generator, DALL-E.

"I'm nervous about this, and I think it's good that we're nervous about this," he said.

OpenAI, which has fewer than 1,000 employees, has a significantly smaller team working on elections than large social media companies such as Meta and TikTok. Altman defended the company's commitment to election security, saying team size was not the best way to measure a company's work in this area. But The Washington Post found last year that the company does not enforce its existing policies on political targeting.

AI companies are trying to position themselves as responsible partners to government, making the case that they've learned lessons from the missteps of social media companies, which came under fire for allowing the spread of foreign influence operations, fomenting extremism, and fostering racism and toxicity.

"When you look at social media over the last decade, it's been kind of a f---ing s---show," Salesforce CEO Marc Benioff said. "We don't want that for the AI industry. We want a good healthy relationship with these regulators."

When Facebook, Google and other internet companies were in their infancy, they were widely viewed as political darlings among the Davos set. Yet their reputations dramatically faltered in the fallout of the 2016 presidential election, and governments around the world are still grappling with how to regulate them. Nick Clegg, president of global affairs at Facebook, said it is "much healthier" that governments are assessing the social implications of AI as it evolves, rather than waiting like they did with social media.

However, tech companies in the past have praised regulation only to oppose guidelines that could hamper their business interests.

At the World Economic Forum, the charm offensive appears to be working. Eva Maydell, a member of the European Parliament, said that in her meetings at Davos, AI companies have been more open to assessing the social effects of their products than their predecessors in social media were five years ago.

But she would like to see the companies do more - especially ahead of elections in the European Union and around the world.

"Being here, I understand why there's so much hype around the business arena," she said in an interview. "Where I would also like to see some hype is when we talk about values and defending democracy - especially this year."

The Washington Post