
China AI Safety Evolution

March 4, 2025


Timeline of Key Developments

| Date | Event | Significance |
|------|-------|--------------|
| July 2017 | State Council releases New Generation AI Development Plan | Earliest ministerial-level reference to AI safety; emphasized need for legal/ethical frameworks |
| October 2018 | Xi Jinping presides over Politburo study session on AI | First time Xi publicly addresses AI safety risks; legitimized AI safety advocacy in China |
| 2021 onwards | Implementation of AI regulations across multiple agencies | Shift from informal policy planning to actual regulatory implementation |
| January 2022 | Algorithmic recommendation guidelines released | Further development of the incremental regulatory framework |
| December 2022 | Deep synthesis technology rules issued | Part of sector-specific regulatory approach |
| July 2023 | Generative AI regulations introduced | Response to the emergence of LLM technologies |
| October 2023 | Ethical guidelines for science and technology published | Broadened scope of AI governance to ethical considerations |
| July 2024 | Third Plenary Session includes AI safety in its decisions | Elevated AI safety to the highest level of political attention |
| September 2024 | AI Safety Governance Framework released | Introduction of risk-based regulatory principles similar to the EU approach |
| January 2025 | Vice-Premier Ding Xuexiang speaks at Davos | Emphasized need for global AI governance under a UN framework |

Key Themes in China's AI Safety Evolution:

  • Shift from general safety concerns to existential risks

  • Preference for "small, fast, flexible" sector-specific regulation over comprehensive frameworks

  • Growing alignment with global regulatory models while maintaining Chinese characteristics

  • Increasing high-level political attention to AI safety



AI Safety in China: What You Need to Know

“We will not blindly follow trends, nor will we engage in unrestrained international competition.”

At the 2025 World Economic Forum Annual Meeting in Davos, Chinese Vice Premier Ding Xuexiang presented an intriguing metaphor to the assembled political and business leaders:


"AI should be an 'Ali Baba's cave' of treasures, not a 'Pandora’s box' of risks."

Will the development of artificial intelligence become an Ali Baba’s cave brimming with wealth and opportunity, or an unleashed Pandora’s box, fraught with uncontrollable dangers?


Ding’s remarks quickly sparked discussions both inside and outside the forum. Unlike many other politicians around the world, who often focus on AI’s challenges to privacy, employment, or ethics, Ding emphasized the systemic risks AI might pose—and the necessity of installing a “braking system” for this technological race.


At a time when global AI regulation is still in its early exploratory phase, why has Beijing chosen this moment to deliver such a stark warning—and through a senior official who has rarely spoken publicly about AI?


Is this merely diplomatic rhetoric, or does it signal a shift in China’s approach to AI governance?


1. Who Is Ding Xuexiang, and Why Does He Matter?

In China’s political system, an official’s formal title does not always accurately reflect their true influence. Ding Xuexiang is a prime example of this dynamic.


A Political Role Beyond His Formal Title

On paper, Ding serves as a Vice Premier of the State Council, overseeing policy coordination in areas such as technology, industry, and environmental protection. However, his real role extends far beyond these administrative responsibilities.


A Member of the Political Core

At 62 years old, Ding Xuexiang is one of the seven members of the Politburo Standing Committee (PSC)—the highest decision-making body of the Chinese Communist Party. He is also the only Standing Committee member born in the 1960s, making him the youngest in the group.

Unlike many senior officials who rose through traditional Party affairs, Ding began his career in science and administrative management before transitioning into politics.

  • Engineering Background – Ding was originally trained in mechanical engineering and spent 17 years working in scientific research and management at the Shanghai Research Institute of Materials.

  • Political Ascent – In 2007, he became a key aide to Xi Jinping when Xi was Party Secretary of Shanghai. Since then, he has followed Xi’s rise and ascended to the Party’s highest ranks.

  • Policy Coordinator – Since 2013, Ding has been one of Xi’s closest aides, responsible for implementing top-level decisions and coordinating policies within the Party elite.

Neil Thomas, an analyst at Eurasia Group, notes that Ding has played a crucial role in Xi’s push to elevate technocrats within China’s leadership. Some even suggest that Ding may be the official who has spent the most time with Xi over the past five years.


A Key Figure in AI Governance

Ding’s influence is not just political—it extends deep into China’s technology policy.

  • In 2023, he was appointed Director of the newly established Central Science and Technology Commission—a powerful body designed to centralize Party control over China’s technological strategy.

  • This role places him at the core of China’s AI policymaking, particularly at the intersection of AI regulation, technological competition, and national security.

Ding’s remarks on AI safety at the 2025 Davos Forum should not be seen as just the opinion of a senior technocrat. Instead, they signal Beijing’s top-level stance on AI governance.

The message is clear: China is not just developing AI—it is actively shaping global AI governance rules.

https://www.reuters.com/world/china/ding-xuexiang-shanghai-party-cadre-xi-gatekeeper-2022-10-11/

http://www.bjb.cas.cn/kjcx/kcsd/202304/t20230403_6725986.html

https://www.mfa.gov.cn/web/wjdt_674879/gjldrhd_674881/202501/t20250122_11542793.shtml

https://www.chinadaily.com.cn/a/202501/22/WS6790bfd5a310a2ab06ea8a70.html


2. So, When Did the Chinese Government First Officially Discuss AI Safety?

China’s focus on artificial intelligence (AI) safety is not a recent development. In fact, the earliest official mention of AI safety in Chinese government documents dates back to 2017.

On July 20, 2017, the State Council of China released the New Generation Artificial Intelligence Development Plan, marking the first time that the Chinese government systematically introduced the concept of AI safety in an official document.

Chapter 3 of the plan specifically states:

"Strengthen AI safety assessment and control capabilities, establish and improve AI-related laws, regulations, ethical norms, and policy frameworks to ensure AI is safe and controllable."

At the time, AI safety was broadly categorized into two types of risks:

  • Short-term risks: Job displacement, algorithmic bias, and privacy concerns.

  • Long-term risks: Ethical challenges, social governance issues, and AI security and controllability.

Following this, in 2018, the Ministry of Science and Technology established the AI Governance Expert Committee, dedicated to researching AI ethics and regulation. The government also began addressing emerging risks, such as algorithmic recommendation systems and deepfake technologies.

However, under the 2017 policy framework, AI safety was still regarded as a subtopic within technological development rather than being fully elevated to a national security concern.

https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm

https://asiasociety.org/policy-institute/who-briefs-xi-jinping-how-politburo-study-sessions-help-decode-chinese-politics#section-ii-data-on-politburo-briefings-shows-beijings-changing-priorities-under-xi--19744

3. "Who Regulates AI in China, and How Do They Do It?"

AI Regulation: Cross-Departmental, Matrix-Style Governance

Unlike the EU, which is moving toward a single comprehensive framework under its AI Act, China has adopted a multi-agency, matrix-style governance system in which various government bodies jointly shape AI oversight.

  • Cyberspace Administration of China (CAC): The content regulator, responsible for overseeing deepfake technologies, generative AI, and algorithmic recommendation systems.

  • Ministry of Industry and Information Technology (MIIT): The industrial policy maker, supervising the commercialization of AI applications in sectors like autonomous driving and healthcare.

  • Ministry of Public Security (MPS): The security watchdog, focusing on facial recognition, AI-driven surveillance, and cybersecurity risks.

  • State Administration for Market Regulation (SAMR): The fair competition enforcer, preventing algorithmic discrimination and anti-competitive practices like price manipulation through big data.

  • Ministry of Science and Technology (MOST): The technology strategist, responsible for setting AI ethical standards and guiding fundamental research.


Two Regulatory Pillars: Use-Case vs. Industry-Specific Regulation

China’s AI regulatory framework operates along two primary axes:

1. Use-Case-Based Regulation

This approach focuses on specific applications of AI, ensuring they are not misused and do not pose risks to society. Key regulatory areas include the following (a brief illustrative sketch follows the list):

  • Algorithmic Recommendation (January 2022) – Prevents information manipulation and big-data-enabled price discrimination.

  • Deep Synthesis (December 2022) – Requires AI-generated face swaps and voice synthesis to be clearly labeled.

  • Generative AI (July 2023) – Introduces review mechanisms for ChatGPT-like models to ensure compliance with content regulations.
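
To make the shape of this use-case model concrete, here is a minimal, purely illustrative Python sketch of how the three rule sets map onto distinct obligations. The mapping and every name in it are hypothetical, distilled from the bullets above rather than from any official compliance tool.

```python
# Illustrative only: a hypothetical mapping from China's three use-case rule
# sets to the obligations described above. All names are invented for
# explanation and are not drawn from any official source.

OBLIGATIONS = {
    "algorithmic_recommendation": [  # January 2022 provisions
        "prevent information manipulation",
        "prevent big-data-enabled price discrimination",
    ],
    "deep_synthesis": [              # December 2022 rules
        "clearly label AI-generated face swaps and synthetic voices",
    ],
    "generative_ai": [               # July 2023 measures
        "pass a content-compliance review before public release",
    ],
}

def required_obligations(use_case: str) -> list[str]:
    """Return the illustrative obligations attached to a given use case."""
    return OBLIGATIONS.get(use_case, [])

for case in OBLIGATIONS:
    print(case, "->", "; ".join(required_obligations(case)))
```

The design point the sketch illustrates: under this pillar, obligations attach to the application, not to AI as a general technology.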

2. Industry-Specific Regulation

This framework categorizes AI rules according to sector-specific risks and compliance requirements in areas such as autonomous driving and healthcare (see the legislative guide linked below).

https://www.mondaq.com/china/technology/1582036/中国人工智能法立法指南

4. Why Doesn't China Introduce a Single AI Law Like the EU?

At first glance, China’s AI regulatory approach appears fragmented compared to the EU’s AI Act, which establishes a unified legal framework classifying AI into different risk categories. Instead of a sweeping, one-size-fits-all law, China has opted for a sector-specific, iterative regulatory model.

Why has Beijing chosen this gradual, decentralized approach over a comprehensive AI law?

One explanation lies in China’s broader regulatory philosophy—"small, fast, and flexible" (小、快、灵). Instead of locking itself into a rigid framework, China layers its regulations incrementally, adapting them as risks and technologies evolve. Unlike the EU’s top-down legislative model, which seeks to impose uniform compliance standards across all industries, China's strategy is more experimental, allowing it to regulate AI in real-time while preserving maneuverability for future adjustments.

A Pragmatic Approach, or Regulatory Uncertainty?

Beijing’s case-by-case approach ensures that AI regulation remains aligned with national security priorities and economic interests, rather than being constrained by a single legal framework. However, this model is not without its drawbacks.

  • Flexibility vs. Stability: While sectoral regulations allow for rapid policy adjustments, they can also lead to uncertainty for businesses, as compliance requirements may shift unpredictably.

  • Government Control vs. Market Innovation: China’s multi-agency regulatory approach ensures strict oversight, but the lack of a clear, centralized AI law means companies must navigate a patchwork of overlapping rules, potentially stifling innovation.

  • International Alignment vs. Sovereign Regulation: The EU’s AI Act aims to set a global standard for AI governance, whereas China’s decentralized model keeps regulations highly localized, making cross-border compliance more complex.

Despite these concerns, discussions around a single AI law have not been entirely dismissed. Since 2023, Chinese legal scholars and think tanks have debated the possibility of consolidating AI governance into a centralized legal framework. The Model AI Law 1.0 (Expert Recommendation Draft) suggests integrating various AI-related regulations under a single national body, mirroring aspects of the EU’s approach. However, the State Council, despite including AI legislation in its 2023 and 2024 Legislative Work Plans, has yet to set a clear timeline for enacting such a law.


Regulatory Convergence, or Divergence?

A step toward a more structured AI governance model came in September 2024, when China introduced its AI Safety Governance Framework. This document incorporates risk-based regulatory principles, resembling elements of the EU AI Act’s tiered classification system—suggesting that China may increasingly align with global regulatory models while retaining its adaptive approach.
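
As a rough illustration of what a tiered, risk-based scheme looks like in principle, the sketch below encodes the EU AI Act's four publicly described risk tiers (unacceptable, high, limited, minimal). China's 2024 framework defines its own categories, so treat this strictly as an analogy; the example systems and their placement are hypothetical.

```python
# A toy illustration of tiered, risk-based classification, using the EU AI
# Act's four public risk tiers as the analogy. The example systems and the
# mapping are hypothetical; this does not reproduce China's 2024 framework.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment required before deployment"
    LIMITED = "transparency obligations, e.g. disclosure labels"
    MINIMAL = "no additional obligations"

# Hypothetical placements, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "social-scoring system": RiskTier.UNACCEPTABLE,
    "AI-assisted medical diagnosis": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: {tier.name} ({tier.value})")
```

The contrast with the use-case sketch earlier is the point: a tiered scheme grades systems by severity of risk, while China's current rules attach obligations to particular applications.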

The key question remains: Will China eventually consolidate its AI regulations into a unified law, or will it continue its current sector-driven model? Given the rapid pace of AI development, decentralized legislation seems more practical. Comprehensive laws risk becoming obsolete before they are fully implemented, while sector-specific regulations offer greater flexibility to manage emerging risks and evolving technologies.

However, if the EU’s AI Act gains traction globally, China may be forced to adjust its approach—especially if major trading partners demand greater regulatory transparency for AI-driven technologies. For now, China’s model remains a work in progress, balancing regulatory control with the need for technological agility.

https://chinamediaproject.org/2024/09/19/how-china-thinks-about-ai-safety/

https://static.nfapp.southcn.com/content/202308/16/c8002535.html 

5. I knew that China cares about AI security, and now I see that they also care about AI safety. But do they actually care about X-risk?

Yes, China is concerned about AI existential risk (X-risk). While its AI governance has traditionally focused on short-term safety issues—such as misinformation, algorithmic bias, and cybersecurity—recent developments suggest a growing awareness of long-term AI risks, including those associated with advanced AI autonomy and control.


1. Safety vs. Security: How China Thinks About AI Risks

In Chinese, the term "Anquan" (安全) can mean both safety and security, shaping how the government approaches AI regulation.

  • Security (国家安全 / cybersecurity): AI governance has often been framed in terms of national security and political stability, focusing on issues like algorithmic control, censorship, and cyber defense.

  • Safety (技术安全 / AI risk management): Regulations have addressed misinformation, autonomous driving, and medical AI risks, ensuring AI systems do not cause unintended harm.

Historically, China’s AI safety concerns were practical and industry-specific, such as:

  • Ensuring AI-generated content aligns with state narratives.

  • Preventing self-driving cars from causing accidents.

  • Securing data from foreign AI models.

In the strictest sense of AI safety, however, scholars also focus on X-risk—the idea that AI could lead to human extinction or civilizational collapse.

For years, the Chinese government showed little interest in AI doomsday rhetoric. But this changed in July 2024.


2. The Third Plenary Session: AI Safety Becomes a National Priority

A turning point came in July 2024, when the Third Plenary Session of the 20th Central Committee of the Chinese Communist Party (CCP) included the "establishment of an AI safety regulatory system" in its policy resolutions—placing AI safety alongside natural disasters, industrial accidents, and biosecurity threats.

This shift is significant because:

  • The Third Plenary Session is China’s most important political meeting, shaping long-term national strategies. For example, the Third Plenary Session of the 11th Central Committee in 1978 marked the beginning of China’s economic reforms and opening up, transitioning from a planned economy to a market economy.

  • AI safety was elevated from an industry-specific concern to a systemic risk. The government acknowledged the unpredictability of AI development, calling for regulation to prevent “reckless growth at the expense of safety.”

This was the first time AI safety was explicitly categorized as a national safety issue, putting it in the same risk category as biosecurity and nuclear safety.


6. "That's interesting, I'm curious how Chinese leaders started to learn about AI safety."

Due to the opaque nature of Chinese political processes, much of the internal deliberation remains undisclosed. However, informal yet high-level engagements between scholars and policymakers provide insight into why AI safety began appearing in official Chinese policy documents by the early 2020s.

The 2018 Politburo Study Session: When AI Safety Entered China’s Policy Agenda

On October 31, 2018, Xi Jinping personally presided over a Politburo collective study session on AI, marking the first time China’s top leadership formally engaged with AI safety. This session came just months after OpenAI’s GPT-1 demonstrated the shift toward self-supervised learning, signaling AI’s broader implications beyond industrial automation.

Why was this session important?

  • Politburo study sessions shape China’s long-term strategy, and Xi personally approved the AI agenda.

  • Gao Wen, a Peking University AI expert, was the key speaker, emphasizing AI’s rapid evolution and risks.

  • Xi Jinping’s closing remarks explicitly mentioned AI safety, calling for risk prevention, legal and ethical research, and regulatory frameworks.

Even today, Gao Wen continues to reference the 2018 Politburo study session, underscoring its lasting impact on China’s AI governance. Beyond formal meetings, many AI safety advocates influence leadership through private networks.

  • Xue Lan, Dean of Tsinghua University’s Schwarzman College, has been invited three times to lecture at Politburo study sessions, a rare honor that underscores his influence in shaping China’s technology policy. He is also a strong proponent of AI safety, actively advocating for greater awareness and regulation of AI risks.

  • Andrew Yao (Yao Qizhi), a Turing Award-winning computer scientist and one of China’s most esteemed AI researchers, has significantly influenced the country’s AI landscape. Xi Jinping personally acknowledged his contributions in a letter, highlighting his role in advancing China’s scientific progress. Meanwhile, Yao has been an outspoken advocate for AI safety, frequently warning in public forums that the risks of uncontrolled AI development could be catastrophic.

https://www.gov.cn/xinwen/2018-10/31/content_5336251.htm

https://www.qiaobaous.com/static/content/SH/2024-07-21/1264831785977679872.html

https://www.12371.cn/2024/11/06/ARTI1730877621816756.shtml

https://www.economist.com/china/2024/08/25/is-xi-jinping-an-ai-doomer

https://idais.ai/dialogue/idais-venice/

https://www.ainavpro.com/3782.html



