top of page

Search Results

14 results found (blank search)

  • TNO Tech News Online

    China AI Frontiers: stay up to date on our AI policy updates.

Who we are: AI Governance Exchange is a Singapore-based nonprofit bridging Chinese and Western AI safety efforts to advance responsible development and cross-border standards.

Our services: Research, Consulting, and Contact.

Subscribe to our bilingual newsletter: join our mailing list for subscriber-only content.

  • Research | My Site

    China AI Safety Evolution
March 4, 2025. By: Author Name

Timeline of Key Developments

- July 2017: State Council releases the New Generation AI Development Plan. Significance: earliest ministerial-level reference to AI safety; emphasized the need for legal and ethical frameworks.
- October 2018: Xi Jinping presides over a Politburo study session on AI. Significance: first time Xi publicly addresses AI safety risks; legitimized AI safety advocacy in China.
- 2021 onwards: Implementation of AI regulations across multiple agencies. Significance: shift from informal policy planning to actual regulatory implementation.
- January 2022: Algorithmic recommendation guidelines released. Significance: further development of the incremental regulatory framework.
- December 2022: Deep synthesis technology rules issued. Significance: part of the sector-specific regulatory approach.
- July 2023: Generative AI regulations introduced. Significance: response to the emergence of LLM technologies.
- October 2023: Ethical guidelines for science and technology published. Significance: broadened the scope of AI governance to ethical considerations.
- July 2024: Third Plenary Session includes AI safety in its decisions. Significance: elevated AI safety to the highest level of political attention.
- September 2024: AI Safety Governance Framework released. Significance: introduction of risk-based regulatory principles similar to the EU approach.
- January 2025: Vice-Premier Ding Xuexiang speaks at Davos. Significance: emphasized the need for global AI governance under a UN framework.

Key Themes in China's AI Safety Evolution:

- Shift from general safety concerns to existential risks
- Preference for "small, fast, flexible" sector-specific regulation over comprehensive frameworks
- Growing alignment with global regulatory models while maintaining Chinese characteristics
- Increasing high-level political attention to AI safety

AI Safety in China: What You Need to Know

"We will not blindly follow trends, nor will we engage in unrestrained international competition."

At the 2025 World Economic Forum Annual Meeting in Davos, Chinese Vice Premier Ding Xuexiang presented an intriguing metaphor to the
assembled political and business leaders: "AI should be an 'Ali Baba's cave' of treasures, not a 'Pandora's box' of risks." Will the development of artificial intelligence become an Ali Baba's cave brimming with wealth and opportunity, or an unleashed Pandora's box fraught with uncontrollable dangers? Ding's remarks quickly sparked discussion both inside and outside the forum.

Unlike many politicians around the world, who often focus on AI's challenges regarding privacy, employment, or ethics, Ding emphasized the systemic risks AI might pose, and the necessity of installing a "braking system" for this technological race. At a time when global AI regulation is still in its early exploratory phase, why has Beijing chosen this moment to deliver such a stark warning, and through a senior official who has rarely spoken publicly about AI? Is this merely diplomatic rhetoric, or does it signal a shift in China's approach to AI governance?

1. Who Is Ding Xuexiang, and Why Does He Matter?

In China's political system, an official's formal title does not always accurately reflect their true influence. Ding Xuexiang is a prime example of this dynamic.

A Political Role Beyond His Formal Title

On paper, Ding serves as a Vice Premier of the State Council, overseeing policy coordination in areas such as technology, industry, and environmental protection. However, his real role extends far beyond these administrative responsibilities.

A Member of the Political Core

At 62 years old, Ding Xuexiang is one of the seven members of the Politburo Standing Committee (PSC), the highest decision-making body of the Chinese Communist Party. He is also the only Standing Committee member born in the 1960s, making him the youngest in the group. Unlike many senior officials who rise through traditional Party affairs, Ding's career began in science and administrative management before he transitioned into politics.
Engineering Background: Ding was originally trained in mechanical engineering and spent 17 years in scientific research and management at the Shanghai Research Institute of Materials.

Political Ascent: In 2007, he became a key aide to Xi Jinping when Xi was Party Secretary of Shanghai. Since then, he has followed Xi's rise and ascended to the Party's highest ranks.

Policy Coordinator: Since 2013, Ding has been one of Xi's closest aides, responsible for implementing top-level decisions and coordinating policies within the Party elite.

Neil Thomas, an analyst at Eurasia Group, notes that Ding has played a crucial role in Xi's push to elevate technocrats within China's leadership. Some even suggest that Ding may be the official who has spent the most time with Xi over the past five years.

A Key Figure in AI Governance

Ding's influence is not just political; it extends deep into China's technology policy. In 2023, he was appointed Director of the newly established Central Science and Technology Commission, a powerful body designed to centralize Party control over China's technological strategy. This role places him at the core of China's AI policymaking, particularly at the intersection of AI regulation, technological competition, and national security.

Ding's remarks on AI safety at the 2025 Davos Forum should not be seen as just the opinion of a senior technocrat. Instead, they signal Beijing's top-level stance on AI governance. The message is clear: China is not just developing AI; it is actively shaping global AI governance rules.

https://www.reuters.com/world/china/ding-xuexiang-shanghai-party-cadre-xi-gatekeeper-2022-10-11/
http://www.bjb.cas.cn/kjcx/kcsd/202304/t20230403_6725986.html
https://www.mfa.gov.cn/web/wjdt_674879/gjldrhd_674881/202501/t20250122_11542793.shtml
https://www.chinadaily.com.cn/a/202501/22/WS6790bfd5a310a2ab06ea8a70.html

2. So, When Did the Chinese Government Officially Discuss AI Safety?

China's focus on artificial intelligence (AI) safety is not a recent development. In fact, the earliest official mention of AI safety in Chinese government documents dates back to 2017. On July 20, 2017, the State Council of China released the New Generation Artificial Intelligence Development Plan, marking the first time the Chinese government systematically introduced the concept of AI safety in an official document. Chapter 3 of the plan specifically states: "Strengthen AI safety assessment and control capabilities, establish and improve AI-related laws, regulations, ethical norms, and policy frameworks to ensure AI is safe and controllable."

At the time, AI safety was broadly categorized into two types of risks:

- Short-term risks: job displacement, algorithmic bias, and privacy concerns.
- Long-term risks: ethical challenges, social governance issues, and AI security and controllability.

Following this, in 2018, the Ministry of Science and Technology established the AI Governance Expert Committee, dedicated to researching AI ethics and regulation. The government also began addressing emerging risks, such as algorithmic recommendation systems and deepfake technologies. However, under the 2017 policy framework, AI safety was still regarded as a subtopic within technological development rather than being fully elevated to a national security concern.

https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm
https://asiasociety.org/policy-institute/who-briefs-xi-jinping-how-politburo-study-sessions-help-decode-chinese-politics#section-ii-data-on-politburo-briefings-shows-beijings-changing-priorities-under-xi--19744

3. Who Regulates AI in China, and How Do They Do It?
AI Regulation: Cross-Departmental Matrix Governance

Unlike in the West, where AI regulation is often overseen by a single authority, China adopts a multi-agency, matrix-style governance system, with various government bodies jointly shaping AI oversight.

- Cyberspace Administration of China (CAC): the content regulator, responsible for overseeing deepfake technologies, generative AI, and algorithmic recommendation systems.
- Ministry of Industry and Information Technology (MIIT): the industrial policymaker, supervising the commercialization of AI applications in sectors like autonomous driving and healthcare.
- Ministry of Public Security (MPS): the security watchdog, focusing on facial recognition, AI-driven surveillance, and cybersecurity risks.
- State Administration for Market Regulation (SAMR): the fair-competition enforcer, preventing algorithmic discrimination and anti-competitive practices like price manipulation through big data.
- Ministry of Science and Technology (MOST): the technology strategist, responsible for setting AI ethical standards and guiding fundamental research.

Two Regulatory Pillars: Use-Case vs. Industry-Specific Regulation

China's AI regulatory framework operates along two primary axes:

1. Use-Case-Based Regulation

This approach focuses on specific applications of AI, ensuring they are not misused and do not pose risks to society. Key regulatory areas include:

- Algorithmic Recommendation (January 2022): prevents information manipulation and price discrimination via big data.
- Deep Synthesis (December 2022): mandates that AI-generated face swaps and voice synthesis be clearly labeled.
- Generative AI (July 2023): introduces review mechanisms for ChatGPT-like models to ensure compliance with content regulations.

2. Industry-Specific Regulation

This framework categorizes AI regulations based on sector-specific risks and compliance requirements:

- Autonomous Driving (July 2021)
- Medical AI (July 2021)

https://www.mondaq.com/china/technology/1582036/中国人工智能法立法指南

4. Why Doesn't China Introduce a Single AI Law Like the EU?

At first glance, China's AI regulatory approach appears fragmented compared to the EU's AI Act, which establishes a unified legal framework classifying AI into different risk categories. Instead of a sweeping, one-size-fits-all law, China has opted for a sector-specific, iterative regulatory model. Why has Beijing chosen this gradual, decentralized approach over a comprehensive AI law?

One explanation lies in China's broader regulatory philosophy: "small, fast, and flexible" (小、快、灵). Instead of locking itself into a rigid framework, China layers its regulations incrementally, adapting them as risks and technologies evolve. Unlike the EU's top-down legislative model, which seeks to impose uniform compliance standards across all industries, China's strategy is more experimental, allowing it to regulate AI in real time while preserving maneuverability for future adjustments.

A Pragmatic Approach, or Regulatory Uncertainty?

Beijing's case-by-case approach ensures that AI regulation remains aligned with national security priorities and economic interests, rather than being constrained by a single legal framework. However, this model is not without drawbacks.

- Flexibility vs. Stability: while sectoral regulations allow for rapid policy adjustments, they can also create uncertainty for businesses, as compliance requirements may shift unpredictably.
- Government Control vs. Market Innovation: China's multi-agency regulatory approach ensures strict oversight, but the lack of a clear, centralized AI law means companies must navigate a patchwork of overlapping rules, potentially stifling innovation.
- International Alignment vs. Sovereign Regulation: the EU's AI Act aims to set a global standard for AI governance, whereas China's decentralized model keeps regulations highly localized, making cross-border compliance more complex.

Despite these concerns, discussions around a single AI law have not been entirely dismissed. Since 2023, Chinese legal scholars and think tanks have debated the possibility of consolidating AI governance into a centralized legal framework. The Model AI Law 1.0 (Expert Recommendation Draft) suggests integrating various AI-related regulations under a single national body, mirroring aspects of the EU's approach. However, the State Council, despite including AI legislation in its 2023 and 2024 Legislative Work Plans, has yet to set a clear timeline for enacting such a law.

Regulatory Convergence, or Divergence?

A step toward a more structured AI governance model came in September 2024, when China introduced its AI Safety Governance Framework. This document incorporates risk-based regulatory principles resembling elements of the EU AI Act's tiered classification system, suggesting that China may increasingly align with global regulatory models while retaining its adaptive approach.

The key question remains: will China eventually consolidate its AI regulations into a unified law, or will it continue its current sector-driven model? Given the rapid pace of AI development, decentralized legislation seems more practical. Comprehensive laws risk becoming obsolete before they are fully implemented, while sector-specific regulations offer greater flexibility to manage emerging risks and evolving technologies. However, if the EU's AI Act gains traction globally, China may be forced to adjust its approach, especially if major trading partners demand greater regulatory transparency for AI-driven technologies. For now, China's model remains a work in progress, balancing regulatory control with the need for technological agility.
https://chinamediaproject.org/2024/09/19/how-china-thinks-about-ai-safety/
https://static.nfapp.southcn.com/content/202308/16/c8002535.html

5. I Knew That China Cares About AI Security, and Now I See That They Also Care About AI Safety. But Do They Actually Care About X-Risk?

Yes, China is concerned about AI existential risk (x-risk). While its AI governance has traditionally focused on short-term safety issues, such as misinformation, algorithmic bias, and cybersecurity, recent developments suggest a growing awareness of long-term AI risks, including those associated with advanced AI autonomy and control.

1. Safety vs. Security: How China Thinks About AI Risks

In Chinese, the term anquan (安全) can mean both safety and security, shaping how the government approaches AI regulation.

- Security (国家安全 / cybersecurity): AI governance has often been framed in terms of national security and political stability, focusing on issues like algorithmic control, censorship, and cyber defense.
- Safety (技术安全 / AI risk management): regulations have addressed misinformation, autonomous driving, and medical AI risks, ensuring AI systems do not cause unintended harm.

Historically, China's AI safety concerns were practical and industry-specific, such as:

- Ensuring AI-generated content aligns with state narratives.
- Preventing self-driving cars from causing accidents.
- Securing data from foreign AI models.

In the strictest sense of AI safety, however, scholars focus on x-risk: the idea that AI could lead to human extinction or civilizational collapse. For years, the Chinese government showed little interest in AI doomsday rhetoric. But this changed in July 2024.

2. The Third Plenary Session: AI Safety Becomes a National Priority

A turning point came in July 2024, when the Third Plenary Session of the Chinese Communist Party (CCP) included the "establishment of an AI safety regulatory system" in its policy resolutions, placing AI safety alongside natural disasters, industrial accidents, and biosecurity threats. This shift is significant because:

- The Third Plenary Session is one of China's most important political meetings, shaping long-term national strategies. For example, the Third Plenary Session of the 11th Central Committee in 1978 marked the beginning of China's reform and opening up, transitioning from a planned economy toward a market economy.
- AI safety was elevated from an industry-specific concern to a systemic risk. The government acknowledged the unpredictability of AI development, calling for regulation to prevent "reckless growth at the expense of safety."
- This is the first time AI safety was explicitly categorized as a national safety issue, putting it in the same risk category as biosecurity and nuclear safety.

6. "That's Interesting. I'm Curious How Chinese Leaders Started to Learn About AI Safety."

Due to the opaque nature of Chinese political processes, much of the internal deliberation remains undisclosed. However, informal yet high-level engagements between scholars and policymakers provide insight into why AI safety began appearing in official Chinese policy documents by the early 2020s.

The 2018 Politburo Study Session: When AI Safety Entered China's Policy Agenda

On October 31, 2018, Xi Jinping personally presided over a Politburo collective study session on AI, marking the first time China's top leadership formally engaged with AI safety. This session came just months after OpenAI's GPT-1 demonstrated the shift toward self-supervised learning, signaling AI's broader implications beyond industrial automation. Why was this session important?
- Politburo study sessions shape China's long-term strategy, and Xi personally approved the AI agenda.
- Gao Wen, a Peking University AI expert, was the key speaker, emphasizing AI's rapid evolution and risks.
- Xi Jinping's closing remarks explicitly mentioned AI safety, calling for risk prevention, legal and ethical research, and regulatory frameworks.

Even today, Gao Wen continues to reference the 2018 Politburo study session, underscoring its lasting impact on China's AI governance.

Beyond formal meetings, many AI safety advocates influence leadership through private networks. Xue Lan, Dean of Tsinghua University's Schwarzman College, has been invited three times to lecture at Politburo study sessions, a rare honor that underscores his influence in shaping China's technology policy. He is also a strong proponent of AI safety, actively advocating for greater awareness and regulation of AI risks. Andrew Yao (Yao Qizhi), a Turing Award-winning computer scientist and one of China's most esteemed AI researchers, has significantly influenced the country's AI landscape. Xi Jinping personally acknowledged his contributions in a letter, highlighting his role in advancing China's scientific progress. Meanwhile, Yao has been an outspoken advocate for AI safety, frequently warning in public forums that the risks of uncontrolled AI development could be catastrophic.

https://www.gov.cn/xinwen/2018-10/31/content_5336251.htm
https://www.qiaobaous.com/static/content/SH/2024-07-21/1264831785977679872.html
https://www.12371.cn/2024/11/06/ARTI1730877621816756.shtml
https://www.economist.com/china/2024/08/25/is-xi-jinping-an-ai-doomer
https://idais.ai/dialogue/idais-venice/
https://www.ainavpro.com/3782.html

  • FAQ | My Site

    Frequently Asked Questions

  • Communication | My Site

    Communication

Need help organizing a visit or meeting? Get in touch to learn how we can support you. Contact us.

  • About | My Site

    About

Join Our Team: Passionate about AI policy and governance? Join our dedicated team to build a responsible, inclusive AI future together. Explore career opportunities at China AI Frontiers.

  • Consulting | My Site

    Consulting

Navigating China's AI technology and policy landscape can be confusing; we craft a personalized action plan tailored to your unique needs.

We offer a 30-minute advisory call at no cost to explore how we can support your engagement with Chinese stakeholders in AI governance. Our services are provided free of charge to organizations dedicated to mitigating risks associated with catastrophic AI outcomes. Reach out to us at info@ai-gx.org

  • Accessibility | My Site

    Accessibility Statement

China AI Frontiers is committed to making its website accessible to people with disabilities. Our goal is to promote exchange between China and the rest of the world by providing equal access to information and services, in step with advances in frontier AI technology and AI regulation and mindful of the political environment surrounding AI. We strive to conform to WCAG 2.1 Level AA, including adapting the site to assistive technologies such as screen readers and keyboard navigation, and implementing clear heading structures, alternative text for images, and accessible video, audio, and files. We continually use accessibility tooling to identify and fix potential issues.

This statement reflects our ongoing commitment to making the China AI Frontiers website accessible to everyone, including people with disabilities. We are committed to ensuring that our website at [site name and address] is easy for all visitors to access and navigate.

Our Approach to Web Accessibility: We believe in an inclusive online experience for all users. Web accessibility enables people with disabilities to perceive, understand, navigate, and interact with the web, ensuring they have the same opportunity as everyone else to access, use, and contribute to web content.

Accessibility Features of Our Site: The site was designed and developed according to WCAG 2.1 Level AA guidelines. We have implemented a range of accessibility adaptations, including using an accessibility wizard to identify and resolve potential issues, setting a clear heading structure, providing alternative text for images, and ensuring that all video, audio, and files are accessible. We also pay close attention to color contrast and reduce motion across the site, and our commitment extends to addressing any third-party content that may affect the site's overall accessibility.

Statement of Partial Compliance Due to Third-Party Content [include only if relevant]: The accessibility of certain pages may be affected by third-party content. We are committed to full compliance and continually work to resolve any accessibility issues related to such content.

Accessibility in Our Organization: We recognize the importance of accessibility not only on our website but also in our physical offices and branches. We have implemented accommodations, from the entrances to our premises through the services we provide, so that people with disabilities can use our facilities and services with ease.

Contact Us for Accessibility Support: If you encounter any accessibility issue on our site or need further assistance, please contact our accessibility coordinator, [coordinator name], at [coordinator phone number] or [coordinator email address]. We value your feedback and are committed to resolving any accessibility issues promptly.


  • Terms & Conditions | My Site

    Terms & Conditions

Legal Disclaimer: China AI Frontiers understands the dynamic and complex nature of government policy, especially in frontier AI technology and regulation. The information on this page is intended as general insight and explanation; it should not be treated as legal advice or specific recommendations. Because the AI policy and regulatory landscape is constantly evolving, organizations should seek professional legal counsel to tailor terms and conditions to the specific requirements and nuances of their engagement with AI technologies and government policy.

Understanding AI Policy and Regulation: In the expanding field of AI policy and regulation, clear and accurate terms and conditions are essential for governing interactions between your organization, its stakeholders, and the evolving AI policy landscape. Such terms form the foundational framework delineating the legal relationship between your organization and the entities engaged with AI technologies and government policy. Their tailored nature ensures they stay aligned with the distinctive dynamics of AI technology and government regulation while meeting your organization's particular needs.

Comprehensive Coverage in a Terms-and-Conditions Document: Because AI policy and regulation span many dimensions, a terms-and-conditions document should cover them comprehensively: from delineating permitted uses of AI technologies to addressing evolving governance and compliance questions, including ethical considerations, data privacy protocols, intellectual property, and procedural frameworks. Keeping the document relevant and adaptable to the dynamic AI landscape is essential, and our expertise in these complex areas can offer valuable insight for building a robust AI policy framework.



  • Resources | My Site

    Resources

China's Core AI Governance Institutions

Policymakers

Cyberspace Administration of China (CAC; 网信办)
The dominant force in China's AI safety governance. This body holds dual identities: as a Party organ (Central Cyberspace Affairs Commission) and as a state administrative body (Cyberspace Administration of China). It is the core institution regulating generative AI, algorithmic recommendation systems, and content compliance. Its authority stems not only from technical capabilities but also from its direct connection to the Communist Party's central leadership. In the realm of generative AI, the CAC spearheaded the "Interim Measures for the Management of Generative Artificial Intelligence Services" and is responsible for implementing the algorithmic registration system. It exemplifies China's emphasis on "controllability" and "guidance," standing at the intersection of ideological control and technical regulation.

Ministry of Industry and Information Technology (MIIT; 工信部)
MIIT is the main body promoting the industrialization and standardization of AI technologies in China. Its focus lies not in political sensitivity or ethics, but in integrating AI into the national industrial system. It has led the release of strategies on intelligent manufacturing, the Internet of Vehicles, domestic chip development, and AI-empowered industries. Recently, MIIT has also joined regulatory discussions regarding model compliance, particularly in areas such as model evaluation and computing infrastructure.

National Development and Reform Commission (NDRC; 发改委)
NDRC serves as China's top-level macroeconomic planner. It regards AI as part of the country's "new productive forces" and a key strategic emerging industry. It is responsible for large-scale development programs and regional innovation pilots, such as the establishment of AI innovation zones. In cross-ministry policy coordination, the NDRC often acts as a resource allocator but is less involved in the technical or safety aspects of AI regulation.

Ministry of Science and Technology (MOST; 科技部)
MOST is the central authority on AI ethics and scientific research governance. It established the National Governance Committee for the New Generation Artificial Intelligence and led the release of China's AI ethical guidelines. MOST serves as a provider of "public legitimacy," offering internationally presentable frameworks for AI ethics. Additionally, it oversees the distribution of national research funds for basic AI science and major technology initiatives.

Ministry of Education (教育部)
Though not directly involved in regulation, the Ministry of Education shapes the AI governance landscape through its influence on talent cultivation. In recent years, it has promoted AI curricula in higher education, supported the founding of AI colleges at top universities, and encouraged integration between academia and industry. Given that many of China's leading model labs (e.g., Tsinghua AIR, Peking University, Zhipu) are university-based, the ministry's policies profoundly affect the R&D ecosystem.

Example: Joint Issuance of the Generative AI Regulation (2023)

The "Interim Measures for the Management of Generative Artificial Intelligence Services," implemented in July 2023, exemplify China's multi-agency approach to AI regulation. The policy was jointly issued by seven central government bodies. The Cyberspace Administration of China (CAC) led the drafting to ensure political alignment and content safety. The National Development and Reform Commission (NDRC) aligned the policy with national digital infrastructure and innovation goals. The Ministry of Industry and Information Technology (MIIT) contributed technical norms concerning training data, model development, and computing resources. The Ministry of Science and Technology (MOST) incorporated ethical governance frameworks.
The Ministry of Education focused on AI talent and educational integration. The Ministry of Public Security addressed security risks such as deepfakes and fraud. The National Radio and Television Administration ensured generated content aligned with national media and cultural standards.

Standardization & Compliance

State Administration for Market Regulation (SAMR; 市场监管总局)
SAMR is China's top market regulator and plays a foundational role in AI governance. Though it does not directly regulate algorithms or models, it serves as the institutional hub for China's national standardization system, overseeing quality compliance, product testing, and certification systems. This provides an administrative foundation for embedding technical standards into regulation.

Standardization Administration of China (SAC; 国家标准委)
SAC, under SAMR, is responsible for organizing and issuing national (GB) and industry standards. It does not create standards directly but operates through technical committees (TCs). In AI, the most relevant committee is TC28.

National Information Technology Standardization Technical Committee (TC28; 信标委)
China's official body for developing national IT standards. It mirrors ISO/IEC JTC 1 and covers key areas like AI, cloud computing, and cybersecurity.

AI Subcommittee (TC28/SC42; 人工智能分技术委员会)
SC42, established in 2020, mirrors the international ISO/IEC SC 42 group. It is responsible for drafting standards on model evaluation, training data quality, explainability, and security. Its members come from government labs, universities, tech firms, and certification agencies. Draft standards must be reviewed by SAC before becoming official national or recommended standards.

Cyberspace Administration of China (CAC; 网信办)
Though the CAC does not draft technical standards, it enforces regulatory requirements related to algorithmic filing, safety assessment, and content compliance. In practice, it often relies on standards produced by MIIT and SAC to determine whether a model is "compliant."

Ministry of Industry and Information Technology (MIIT; 工信部)
As the technical lead, MIIT not only co-drafts standards but also issues recommended technical norms and industry reference documents. Through pilot programs and test platforms, MIIT promotes the implementation of model evaluation, data audits, and security safeguards. It is central in translating policy into technical rules.

AI Standards Committee (MIIT/TC1; 人工智能标准化技术委员会)
In January 2025, MIIT launched a new dedicated AI Standards Committee, potentially referred to as MIIT/TC1. This committee is separate from TC28 and may take a more engineering-oriented approach, focusing on the practical implementation of safety testing, deployment protocols, and compliance tools. SC42 may retain its role in international-facing standards alignment, while TC1 covers domestic enforcement and technical operations.

Ministry of Public Security (公安部)
Though not a standards-maker, the Ministry of Public Security plays an important role in AI enforcement. Its focus lies in misuse cases such as fraud, identity theft, deepfake videos, and network manipulation. In some regions, the ministry has worked with the CAC to investigate AI-related criminal behavior. Its involvement in biometric and surveillance use cases is expected to deepen.

Legislative Foundation

National People's Congress (全国人大)
AI-specific legislation in China falls under the authority of the National People's Congress (NPC) and its Standing Committee's Legislative Affairs Commission.
Though no AI Act exists yet, the NPC has passed three foundational laws for digital governance:

- Cybersecurity Law (2017): defines basic boundaries for data flows and infrastructure protection.
- Data Security Law (2021): establishes classification and graded protection of data.
- Personal Information Protection Law (2021): addresses automated decision-making, data minimization, and user rights.

Together, these form a legal shield for AI governance, especially around platform liability and user rights.

Research Network & Expert Community

China AI Development and Safety Network (China AISI; 中国人工智能发展与安全研究网络)
A strategic alliance of leading Chinese AI research institutions. It is not a formal government agency and lacks a board or administrative hierarchy, but it includes many of the most influential government-affiliated think tanks and laboratories. It functions as China's de facto hub for AI safety discourse and international engagement.

Member Institutes

China Academy of Information and Communications Technology (CAICT; 信通院)
A think tank under MIIT, CAICT leads algorithmic registration, infrastructure security, and national policy drafting. It frequently serves as China's representative in international standardization bodies.

China Center for Information Industry Development (CCID; 赛迪)
Also under MIIT, CCID focuses on AI development roadmaps, technology maturity assessments, and industry compliance. It connects policy design with industry implementation.

Beijing Academy of Artificial Intelligence (BAAI; 智源研究院)
BAAI promotes academic-industry collaboration and long-term foundational AI research. It hosts annual international conferences and focuses on the long-term societal impact of AI.

Shanghai AI Laboratory (SAIL; 上海人工智能实验室)
A national-level lab focused on frontier AI deployment. It hosts projects on ethics and alignment and has a research team working on AGI-related risks.

Institute of Automation, Chinese Academy of Sciences (CASIA; 自动化所)
Founded in 1956, CASIA is one of China's earliest institutions in intelligent systems research. As part of the Chinese Academy of Sciences, it contributes foundational work in safe and explainable AI.

QiZhi Institute (期智研究院)
Founded by Turing Award winner Andrew Yao, the QiZhi Institute works on cutting-edge quantum computing and AI safety research.

Peking University (北京大学)
One of China's most prestigious universities, Peking University has strong AI research teams working on ethics, governance, and societal impacts.

Tsinghua University (清华大学)
One of China's top universities, Tsinghua hosts leading AI research groups, including the Institute for AI Industry Research (AIR), and scholars such as Xue Lan who have shaped China's AI governance debates.

bottom of page