
China AI Safety Evolution
Mar 4, 2025
By:
Author Name
Timeline of Key Developments
| Date | Event | Significance |
| --- | --- | --- |
| July 2017 | State Council releases New Generation AI Development Plan | Earliest ministerial-level reference to AI safety; emphasized need for legal/ethical frameworks |
| October 2018 | Xi Jinping presides over Politburo study session on AI | First time Xi publicly addresses AI safety risks; legitimized AI safety advocacy in China |
| 2021 onwards | Implementation of AI regulations across multiple agencies | Shift from informal policy planning to actual regulatory implementation |
| January 2022 | Algorithmic recommendation guidelines released | Further development of incremental regulatory framework |
| December 2022 | Deep synthesis technology rules issued | Part of sector-specific regulatory approach |
| July 2023 | Generative AI regulations introduced | Response to the emergence of LLM technologies |
| October 2023 | Ethical guidelines for science and technology published | Broadened scope of AI governance to ethical considerations |
| July 2024 | Third Plenary Session includes AI safety in decisions | Elevated AI safety to highest level of political attention |
| September 2024 | AI Safety Governance Framework released | Introduction of risk-based regulatory principles similar to EU approach |
| January 2025 | Vice-Premier Ding Xuexiang speaks at Davos | Emphasized need for global AI governance under UN framework |
Key Themes in China's AI Safety Evolution:
Shift from general safety concerns to existential risks
Preference for "small, fast, flexible" sector-specific regulation over comprehensive frameworks
Growing alignment with global regulatory models while maintaining Chinese characteristics
Increasing high-level political attention to AI safety
AI Safety in China: What You Need to Know
“We will not blindly follow trends, nor will we engage in unrestrained international competition.”
At the 2025 World Economic Forum Annual Meeting in Davos, Chinese Vice Premier Ding Xuexiang presented an intriguing metaphor to the assembled political and business leaders:
"AI should be an 'Ali Baba's cave' of treasures, not a 'Pandora’s box' of risks."
Will the development of artificial intelligence become an Ali Baba’s cave brimming with wealth and opportunity, or an unleashed Pandora’s box, fraught with uncontrollable dangers?
Ding’s remarks quickly sparked discussions both inside and outside the forum. Unlike many other politicians around the world, who often focus on AI’s challenges regarding privacy, employment, or ethics, Ding emphasized the systemic risks AI might pose—and the necessity of installing a “braking system” for this technological race.
At a time when global AI regulation is still in its early exploratory phase, why has Beijing chosen this moment to deliver such a stark warning—and through a senior official who has rarely spoken publicly about AI?
Is this merely diplomatic rhetoric, or does it signal a shift in China’s approach to AI governance?
1. Who Is Ding Xuexiang, and Why Does He Matter?
In China’s political system, an official’s formal title does not always accurately reflect their true influence. Ding Xuexiang is a prime example of this dynamic.
A Political Role Beyond His Formal Title
On paper, Ding serves as a Vice Premier of the State Council, overseeing policy coordination in areas such as technology, industry, and environmental protection. However, his real role extends far beyond these administrative responsibilities.
A Member of the Political Core
At 62 years old, Ding Xuexiang is one of the seven members of the Politburo Standing Committee (PSC)—the highest decision-making body of the Chinese Communist Party. He is also the only Standing Committee member born in the 1960s, making him the youngest in the group.
Unlike many senior officials who rise through traditional Party affairs, Ding’s career began in science and administrative management before he transitioned into politics.
Engineering Background – Ding was originally trained in mechanical engineering and spent 17 years working in scientific research and management at the Shanghai Research Institute of Materials.
Political Ascent – In 2007, he became a key aide to Xi Jinping when Xi was Party Secretary of Shanghai. Since then, he has followed Xi’s rise and ascended to the Party’s highest ranks.
Policy Coordinator – Since 2013, Ding has been one of Xi’s closest aides, responsible for implementing top-level decisions and coordinating policies within the Party elite.
Neil Thomas, an analyst at Eurasia Group, notes that Ding has played a crucial role in Xi’s push to elevate technocrats within China’s leadership. Some even suggest that Ding may be the official who has spent the most time with Xi over the past five years.
A Key Figure in AI Governance
Ding’s influence is not just political—it extends deep into China’s technology policy.
In 2023, he was appointed Director of the newly established Central Science and Technology Commission—a powerful body designed to centralize Party control over China’s technological strategy.
This role places him at the core of China’s AI policymaking, particularly at the intersection of AI regulation, technological competition, and national security.
Ding’s remarks on AI safety at the 2025 Davos Forum should not be seen as just the opinion of a senior technocrat. Instead, they signal Beijing’s top-level stance on AI governance.
The message is clear: China is not just developing AI—it is actively shaping global AI governance rules.
https://www.reuters.com/world/china/ding-xuexiang-shanghai-party-cadre-xi-gatekeeper-2022-10-11/
http://www.bjb.cas.cn/kjcx/kcsd/202304/t20230403_6725986.html
https://www.mfa.gov.cn/web/wjdt_674879/gjldrhd_674881/202501/t20250122_11542793.shtml
https://www.chinadaily.com.cn/a/202501/22/WS6790bfd5a310a2ab06ea8a70.html
2. So, When Did the Chinese Government Officially Discuss AI Safety?
China’s focus on artificial intelligence (AI) safety is not a recent development. In fact, the earliest official mention of AI safety in Chinese government documents dates back to 2017.
On July 20, 2017, the State Council of China released the New Generation Artificial Intelligence Development Plan, marking the first time that the Chinese government systematically introduced the concept of AI safety in an official document.
In Chapter 3 of the plan, it specifically states:
"Strengthen AI safety assessment and control capabilities, establish and improve AI-related laws, regulations, ethical norms, and policy frameworks to ensure AI is safe and controllable."
At the time, AI safety was broadly categorized into two types of risks:
Short-term risks: Job displacement, algorithmic bias, and privacy concerns.
Long-term risks: Ethical challenges, social governance issues, and AI security and controllability.
Following this, in 2018, the Ministry of Science and Technology established the AI Governance Expert Committee, dedicated to researching AI ethics and regulation. The government also began addressing emerging risks, such as algorithmic recommendation systems and deepfake technologies.
However, under the 2017 policy framework, AI safety was still regarded as a subtopic within technological development rather than being fully elevated to a national security concern.
https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm
3. Who Regulates AI in China, and How Do They Do It?
AI Regulation: A Cross-Departmental Matrix of Governance
Unlike in the West, where AI regulation is often overseen by a single authority, China adopts a multi-agency, matrix-style governance system, with various government bodies jointly shaping AI oversight.
Cyberspace Administration of China (CAC): The content regulator, responsible for overseeing deepfake technologies, generative AI, and algorithmic recommendation systems.
Ministry of Industry and Information Technology (MIIT): The industrial policy maker, supervising the commercialization of AI applications in sectors like autonomous driving and healthcare.
Ministry of Public Security (MPS): The security watchdog, focusing on facial recognition, AI-driven surveillance, and cybersecurity risks.
State Administration for Market Regulation (SAMR): The fair competition enforcer, preventing algorithmic discrimination and anti-competitive practices like price manipulation through big data.
Ministry of Science and Technology (MOST): The technology strategist, responsible for setting AI ethical standards and guiding fundamental research.
Two Regulatory Pillars: Use-Case vs. Industry-Specific Regulation
China’s AI regulatory framework operates along two primary axes:
1. Use-Case-Based Regulation
This approach focuses on the specific applications of AI, ensuring they are not misused or pose risks to society. Key regulatory areas include:
Algorithmic Recommendation (January 2022) – Prevents information manipulation and price discrimination via big data.
Deep Synthesis (December 2022) – Mandates that AI-generated face swaps and voice synthesis be clearly labeled.
Generative AI (July 2023) – Introduces review mechanisms for ChatGPT-like models to ensure compliance with content regulations.
2. Industry-Specific Regulation
This framework categorizes AI regulations based on sector-specific risks and compliance requirements:
Autonomous Driving (July 2021)
Medical AI (July 2021)