Davos has embraced artificial intelligence, but elites now see it as a threat


DAVOS, SWITZERLAND — ChatGPT was the breakout star at last year’s World Economic Forum, where the emerging chatbot’s ability to code, draft emails and write speeches captured the imagination of leaders gathered in this luxury ski town.

But this year, the enormous excitement over the technology’s almost limitless economic potential is being tempered by a sharper assessment of its risks. Heads of state, billionaires and CEOs appear united in their concerns, warning that the burgeoning technology may increase misinformation, displace jobs and deepen the economic gap between rich and poor countries.

In contrast to far-fetched fears that the technology will wipe out humanity, the focus is on tangible dangers underscored over the past year, including a flood of AI-generated fakes and the automation of jobs in copywriting and customer service. The debate has taken on new urgency amid global efforts to regulate the rapidly evolving technology.

“Last year, the conversation was ‘gee whiz,’” Chris Padilla, IBM’s vice president of government and regulatory affairs, said in an interview. “Now it’s: What are the risks? What do we need to do to make AI trustworthy?”

AI concerns are creeping into finance, business and law

The theme has taken over the conference: panels featuring AI executives, including OpenAI CEO Sam Altman, are the hottest ticket in town, and tech giants including Salesforce and IBM have filled the snow-covered streets with ads touting trustworthy AI.

But growing concerns about the risks of artificial intelligence are overshadowing the technology industry’s marketing campaign.

The event opened on Tuesday with Swiss President Viola Amherd calling for “global AI governance,” raising concerns that the technology could fuel disinformation as a slew of countries head to the polls. At a chic Microsoft café across the street, Microsoft CEO Satya Nadella sought to allay fears that the artificial intelligence revolution will leave the world’s poor behind, in the wake of an International Monetary Fund report this week that found the technology is likely to worsen inequality and could stoke social tensions. Over appetizers and cocktails at an Alpine inn down the street, Ruth Porat, Google’s chief financial officer, promised to work with policymakers to “develop responsible regulation” and touted the company’s investments in efforts to retrain workers.

But calls for a response have exposed the limits of this annual summit, as efforts to coordinate a global technology strategy are hampered by economic tensions between the world’s leading AI powers, the United States and China.

Meanwhile, countries have competing geopolitical interests when it comes to regulating AI: Western governments are considering rules that would benefit companies within their borders, while leaders in India, South America and other parts of the Global South see the technology as a key to unlocking economic prosperity.

The AI debate is a microcosm of a broader paradox looming over Davos, where attendees in snow boots taste expensive wines, go on ski outings and sing along to classic rock songs at a piano lounge sponsored by the cybersecurity company Cloudflare. The relevance of the conference, founded more than 50 years ago to promote globalization during the Cold War, is increasingly being questioned amid the wars raging in Ukraine and the Middle East, rising populism and climate threats.

In a speech on Wednesday, UN Secretary-General António Guterres raised the dual risks of climate chaos and generative artificial intelligence, noting that both were being “extensively discussed” at the Davos gathering.

“However, we do not yet have an effective global strategy to deal with either,” he added. “Geopolitical divisions prevent us from coming together on global solutions.”

Governments are used to driving innovation. In terms of artificial intelligence, they are lagging behind.

Clearly, technology companies are not waiting for governments to catch up, and legacy banks, media companies and accounting firms in Davos are also studying how to integrate AI into their businesses.

Davos regulars say the surge in AI investment is evident along the Promenade, where companies have snapped up storefronts to host meetings and events. In recent years, buzzwords like Web3, blockchain and crypto dominated those storefronts. But this year, the programming has shifted to artificial intelligence. Hewlett Packard Enterprise and the UAE firm G42 even sponsored “AI House,” transforming a chalet-style building into a gathering place to hear speakers including Meta’s chief AI scientist Yann LeCun, IBM CEO Arvind Krishna and MIT professor Max Tegmark.

The Promenade effectively serves as “a focus group for the next emerging technology wave,” said veteran World Economic Forum attendee Dante Disparte, chief strategy officer and head of global policy at Circle.

Cryptocurrencies are back – in Davos at least – as the redemption tour continues

Executives predicted that AI will become an even more influential force in 2024, as companies build more advanced AI models and developers use those systems to power new products. In a session hosted by Axios, Altman said that the general intelligence of OpenAI models is “increasing across the board.” In the long term, he predicted, the technology “will dramatically accelerate the rate of scientific discovery.”

But even as the company moves forward, he said he’s concerned that politicians or bad actors could misuse the technology to influence elections. He said OpenAI doesn’t yet know what electoral threats will arise this year but will try to make changes quickly and work with external partners. On Monday, as the conference kicked off, the company rolled out a set of election protections, including a commitment to help people identify when images were created by its DALL-E image generator.

“I’m nervous about this, and I think it’s good that we’re nervous about this,” he said.

OpenAI, which has fewer than 1,000 employees, has a much smaller team working on elections than large social media companies such as Meta and TikTok. Altman defended the company’s commitment to election security, saying that team size is not the best way to measure its work in this area. But The Washington Post found last year that the company was not enforcing its existing policies on political targeting.

Policymakers still fear that companies are not thinking enough about the social impacts of their products. At the same event, Eva Maydell, a member of the European Parliament, said she was working to develop recommendations for artificial intelligence companies ahead of this year’s global elections.

“The theme of this year’s annual meeting is rebuilding trust,” said Maydell, who worked on the EU’s AI Act, which is expected to become law this year after a political agreement was reached in December. “I very much hope that this is not the year we lose trust in our democratic processes because of misinformation, because of the inability to tell what is true.”
