Saudi Arabia has made major strides in artificial intelligence (AI) adoption, regulation and safety in recent years, securing the 11th position globally and the top spot regionally in the Global Index on AI Safety (GIAIS) issued by the International Research Center for AI Ethics and Governance in Beijing and the Beijing Institute of AI Safety and Governance.
This recognition was given to Saudi Arabia during the AI, Science and Society conference, held alongside the AI Action Summit in Paris, where the Saudi Data & AI Authority (SDAIA) participated in a dialogue session to discuss the findings of the inaugural International AI Safety Report.
Kingdom contributes 8.3 percent of AI safety research globally
Saudi Arabia attributed this achievement to several factors. The Kingdom has made significant strides in AI safety research, contributing 8.3 percent of all such research published globally. It has also established a robust and comprehensive governance framework that provides a solid foundation for developing and implementing safe and responsible AI policies within the Kingdom.
In addition, SDAIA has actively engaged in shaping international AI governance frameworks, playing a crucial role in global summits such as the UK AI Safety Summit and the Seoul AI Summit. Saudi Arabia is dedicated to advancing AI safety through ongoing national efforts in scientific research and developing robust governance standards.
Global AI risk incidents surge
Since 2022, with the breakthroughs in generative AI technology and its deepening application across various fields, the total number of AI risk incidents has surged. According to the OECD AI Incidents Monitor (AIM), the number of risk incidents in 2024 increased roughly 21.8-fold compared to 2022, a rapid growth trend.
Of the AI risk incidents that occurred between 2019 and 2024, about 74 percent were directly related to AI safety issues. The number of incidents directly related to safety and security in 2024 grew by approximately 83.7 percent compared to 2023.
Regulating AI safety becomes increasingly important
Of the 40 countries surveyed, 17 have governance instruments related to AI safety. Among these, nine countries (Australia, China, Germany, Japan, South Korea, Saudi Arabia, New Zealand, the United Kingdom and the United States) have both national AI-related safety laws and technical frameworks in place.
This highlights a global trend towards regulating AI safety through such frameworks. However, most AI-related safety laws are still primarily focused on cybersecurity and information security, with laws specifically targeting AI safety remaining relatively scarce. The majority of technical and policy frameworks were released in 2024, reflecting the concerted efforts to tackle AI safety issues in the past year.
Top 10 countries in AI safety
According to the Global Index on AI Safety, the top 10 countries in AI safety are:
- United States
- United Kingdom
- China
- South Korea
- Japan
- Singapore
- Canada
- France
- Germany
- Australia
The index ranks countries on six main pillars: governance environment, national institutions, governance instruments, research status, international participation and existential safety preemption.
Overall, the index shows that developed countries generally score higher in AI safety governance, with stronger capabilities in research and development, more complete governance frameworks related to AI safety and more involvement in international cooperation.
In contrast, developing countries and emerging economies face more challenges and urgently need to intensify efforts in governance systems, policy support and global collaboration.