Leaders from the tech industry, academia, and other sectors convened in Davos, Switzerland, on January 21 to discuss how to develop artificial intelligence responsibly while safeguarding innovation. The roundtable, hosted by TIME CEO Jess Sibley, centered on the impact of AI on society, including its potential risks and benefits.
Navigating the Challenges of AI
Participants explored a range of critical issues, from the effects of AI on children’s development and safety to the best methods for regulating the technology and ensuring AI models are aligned with human well-being. A key concern raised was the potential for AI to exploit vulnerabilities in human psychology.
Jonathan Haidt, professor of ethical leadership at NYU Stern and author of The Anxious Generation, suggested that delaying smartphone access for children – potentially until high school – could allow for crucial brain development and the establishment of healthy habits. He argued that children do not need early exposure to technology to learn how to use it later.
Regulation and International Cooperation
Yoshua Bengio, professor at the Université de Montréal and founder of LawZero, emphasized the need for a scientific understanding of AI’s potential problems. He proposed two mitigation strategies: designing AI with built-in safeguards and implementing government regulation, potentially through mandatory liability insurance for AI developers and deployers.
Bengio, often referred to as one of the “godfathers of AI,” also argued against the notion that the U.S. should limit AI regulation due to competition with China. He posited that both nations share a common interest in preventing the development of harmful AI applications, such as those used for bio-weapons or cyberattacks, and could benefit from international cooperation, drawing a parallel to Cold War-era arms control agreements between the U.S. and the USSR.
The discussion also touched on the similarities between AI and social media, with participants noting that both compete for user attention. Pinterest CEO Bill Ready described current business models as “preying on the darkest aspects of the human psyche,” advocating for a shift toward prioritizing positive engagement.
The Future of AI Development
Yejin Choi, professor of computer science at Stanford University, questioned the current approach of training AI on vast datasets that include harmful content, suggesting a need for alternative intelligence models that learn “morals and human values from the get-go.” Kay Firth-Butterfield, CEO of the Good Tech Advisory, stressed the importance of gathering feedback from AI users – including workers and parents – and establishing AI literacy campaigns and certification processes.
Frequently Asked Questions
What was the primary focus of the roundtable discussion?
The roundtable focused on how to pursue responsible AI development, ensuring safeguards are in place while still fostering innovation.
What suggestions were made regarding children and AI?
Jonathan Haidt suggested delaying smartphone access for children until at least high school, arguing that this would allow for crucial brain development and the establishment of healthy habits. Participants also raised broader concerns about AI exploiting vulnerabilities in human psychology.
Was there discussion about international cooperation on AI?
Yes, Yoshua Bengio argued that both the U.S. and China have a shared interest in coordinating on AI safety and preventing the development of harmful applications, similar to past cooperation during the Cold War.
As AI continues to evolve rapidly, the roundtable left participants with an open question: how to ensure that its development aligns with our collective values and promotes a more positive future for all.
