AI Safety Summit Draws Global Experts, but Google DeepMind Plans to Stay Home

As luminaries from academia, government and business prepare to gather in South Korea May 21-22 for the second AI Safety Summit, one tech giant’s absence is raising eyebrows: Google.

The decision by the search giant’s research arm to skip the event, which will explore artificial intelligence’s limitations and commercial impact, underscores the intricate dynamics shaping the global conversation around the technology. Some see the summit as a crucial forum for addressing AI’s risks and challenges, while others worry that excessive regulation could stifle innovation and concede ground to competitors like China.

The advanced AI research group Google DeepMind expressed support for the summit but did not confirm whether it would attend, Reuters reported. Google did not immediately respond to PYMNTS’ request for comment.

Impact on Business

The upcoming summit comes at a pivotal moment, amid growing recognition of the need for responsible AI development and deployment. As governments, businesses and researchers convene virtually to address these issues, the summit’s discussions and outcomes could have far-reaching implications for the future of commerce.

“The AI supply chain is pretty complex and doesn’t cleanly stay within national borders,” Andrew Gamino-Cheong, co-founder of Trustible, an AI software company, told PYMNTS. “Of the topics they’ve announced, one area where we already start to see countries fragmenting in their policies is around copyright issues. The ‘fair use’ doctrine reigns supreme in the U.S. but is being challenged. Other countries don’t necessarily have that tradition. Copyright isn’t just an issue with model training; some countries are starting to split on whether AI-generated content could receive IP protections.”

Moves for Safety

Gamino-Cheong noted that there has been a lot of activity in the AI safety space since the last AI Safety Summit, pointing out that the U.S. AI Safety Institute only recently received funding and new leadership.

Meanwhile, the United States has already announced several partnerships with the United Kingdom and South Korea for their AI safety institutes, and the United Nations, OECD, World Economic Forum and International Organization for Standardization have all been busy publishing additional AI guidelines. The European Union AI Act also passed its final political hurdles, and its implementation will influence the global AI safety discussion due to the Brussels Effect.

“Things in AI itself haven’t shifted too much in the past six months,” Gamino-Cheong said. “Most new models released, like Llama-3, Gemini and DBRX, have been incremental improvements to their predecessors, and most are still playing catch-up to GPT-4. A lot of the AI focus has been on the infrastructure around AI and how to make it safely access data through RAG patterns, run at a lower cost, or intercept malicious inputs/outputs. OpenAI’s pending release of GPT-5 could change all that, but we’ll just have to wait and see.”

International efforts to make AI safer are gathering momentum. The U.S. and Britain formed a partnership focused on AI safety last month, with U.S. Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan formalizing the collaboration by signing an agreement that promotes joint work on testing advanced AI models. Highlighting AI as a pivotal technology of our time, Raimondo pointed to the partnership’s importance in addressing national security and societal risks, building on commitments from the first AI Safety Summit at Bletchley Park.

However, observers are keeping their expectations low for the upcoming South Korea summit.

“Part of the reason for that is that governments are still very busy implementing many of the things they announced in the last summit and haven’t yet worked out what comes next,” Gamino-Cheong said. “A lot of leaders are still trying to learn about AI as well, and until they do, the deeper debates around definitions of bias, how to handle the risks of open-access AI models, and the liability of AI systems will be premature.”


