Executive Summary
Multinational companies (MNCs) are increasingly looking to deploy global AI-based recruitment tools (the “AI Tool”) in China. These tools help HR departments screen resumes and conduct preliminary interviews using bot functionalities. Some AI Tools use generative AI to interact with candidates, generating content with minimal human intervention, while others do not. Typically, these tools rely on global solutions hosted on SaaS vendors’ infrastructure outside China. MNCs’ headquarters legal departments establish global control points for anti-bias, transparency, and data privacy impact assessments (DPIA). However, deploying these tools in China requires additional considerations due to local regulations on data protection and cybersecurity.
Existing PRC laws and regulations impose stringent localization requirements on algorithms and personal information. The Cyberspace Administration of China (CAC) and its provincial branches enforce these regulations. To deploy the AI Tool in China in compliance with these requirements, an MNC must:
- Complete algorithm filings with the CAC.
- If a volume threshold is met, comply with a cross-border data transfer mechanism for personal information (a “CBDT Mechanism”), which may require a separate filing.
Given these legal hurdles, MNCs often:
- Disable the AI Tool for public use and replace it with a local solution; or
- Deploy the AI Tool for public use on a risk-based approach, taking risk mitigation measures and complying with the CBDT Mechanism.
Analysis
We will analyze the issues in the following order: (i) Chinese algorithm regulations; (ii) Chinese CBDT regulations; and (iii) how to use a global AI Tool on a risk-based approach.
I. Chinese Algorithm Regulations
a. Summary of the Regulation
On December 31, 2021, the CAC, together with other authorities, issued the Administrative Provisions on Algorithm Recommendation of Internet Information Services, effective March 1, 2022. The regulation requires algorithm recommendation service providers to complete a filing with the CAC (the “Algorithm Filing”) within ten working days of beginning to provide the services.
b. Applicability to MNCs Deploying an AI Tool
A company qualifies as an algorithm recommendation service provider if:
- The services are algorithm-based and fall within categories such as generation and synthesis, personalized push, information sorting and selection, search and filtering, and scheduling decision-making.
- The services are provided to the public in China (services are not considered “public” if end users are corporate clients’ employees).
- The services have attributes of public opinions or social mobilization, typically referring to consumer/individual-facing services.
AI Tools with functionalities such as “personalized push” and “search and filtering” that serve non-employee users are likely to be considered algorithm recommendation services. Even if the AI Tool targets global recruitment, its accessibility to applicants in China may trigger the Algorithm Filing requirement.
c. Localization Requirement on Algorithm
As part of the Algorithm Filing, service providers must disclose technical parameters and the physical location of the software and hardware underlying the algorithm. If the CAC determines that the algorithm is deployed outside China, it may reject the filing on the ground that it cannot control and regulate the algorithm provider. In addition, the network on which the algorithm resides must pass China’s Multi-Level Protection Scheme (MLPS) security assessment, which in practice requires the IT system to be deployed in China. Global AI Tools hosted outside China are therefore unlikely to pass the filing, and migrating the AI Tool onshore can incur significant costs.
d. Special Considerations for Generative AI
Foreign-based generative AI models like ChatGPT and Copilot are unavailable in China due to data security and other concerns. MNCs deploying generative AI in public-facing use cases must either:
- Fine-tune/Extension Approach: Work with domestic foundational models (e.g., ERNIE, Kimi, or SenseTime’s models), fine-tune the algorithm, training data, or security measures, and complete a “dual filing”: a deep-synthesis algorithm filing with the central CAC and an LLM “go-live” filing with the local CAC. The key to the dual filing is localization, ensuring localized content data, algorithms, and foundational models.
- API/Embedding Approach: Directly apply the foundational model without fine-tuning, shifting liabilities to service providers to avoid filing. Some provincial CACs may require LLM registration, though this is currently ad hoc and voluntary.
II. Transfer Mechanisms for CBDT
a. Summary of the Regulation
In March 2024, the CAC issued the Provisions on Promoting and Regulating Cross-border Data Flows, in furtherance of its previous rules on CBDT. The regulation requires data handlers (similar to data controllers under the GDPR) either to enter into a standard contract for the cross-border transfer of personal information with the overseas data recipient and file it with the CAC, or to obtain a personal information protection certification (each, a “Transfer Mechanism”). A Transfer Mechanism is mandatory if, within one calendar year, the data handler transfers out of China the personal information of more than 100,000 but fewer than one million people, or the sensitive personal information of fewer than 10,000 people (the “Volume Threshold”). Beyond one million people (or 10,000 people in the case of sensitive personal information), a more stringent Transfer Mechanism, a CAC-led data security assessment, is triggered.
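The tiered thresholds above can be summarized as simple decision logic. The sketch below is illustrative only, not legal advice: the function name and return strings are our own, the counts are assumed to be measured per calendar year as described above, and edge cases (e.g., exemptions for certain HR or contract-performance data) are deliberately omitted.

```python
def transfer_mechanism(pi_count: int, sensitive_pi_count: int) -> str:
    """Classify which CBDT Transfer Mechanism applies for one calendar year.

    pi_count           -- individuals whose (non-sensitive) personal
                          information is transferred out of China
    sensitive_pi_count -- individuals whose sensitive personal
                          information is transferred out of China
    """
    # Most stringent tier: CAC-led data security assessment for
    # non-sensitive PI of 1,000,000+ people or sensitive PI of 10,000+.
    if pi_count >= 1_000_000 or sensitive_pi_count >= 10_000:
        return "data security assessment"
    # Middle tier (the Volume Threshold): standard contract filing or
    # certification for more than 100,000 people's non-sensitive PI,
    # or any sensitive PI below 10,000 people.
    if pi_count > 100_000 or sensitive_pi_count > 0:
        return "standard contract filing or certification"
    # Below all thresholds: no filing-based mechanism is triggered
    # (other obligations, such as consent and a DPIA, may still apply).
    return "no filing-based mechanism"
```

Note how sensitive personal information dominates the analysis: even a small number of sensitive records pulls the transfer into a filing-based mechanism.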
b. Applicability to an MNC
Since the AI Tool is based on infrastructure located outside of China, its processing of personal data may constitute a cross-border data transfer (CBDT). If deploying the AI Tool significantly increases the volume of personal data processed, causing the MNC’s China operations to meet the Volume Threshold (considering other data export scenarios), a Transfer Mechanism may be triggered.
c. Required Actions
An MNC must assess whether deploying the AI Tool will significantly increase the number of individuals whose personal data is processed, potentially meeting the Volume Threshold and triggering a Transfer Mechanism. If a desensitization solution can completely anonymize personal information, this concern may be mitigated, as anonymized data is not considered personal information for CBDT purposes.
III. Conclusion: Using a Global AI Tool on a Risk-Based Approach
Given the localization requirements, and because migrating the AI Tool onshore is often not feasible, MNCs typically adopt one of the following approaches to comply with Chinese regulations:
- Disable the AI Tool and Replace with a Local Solution: Some MNCs opt for local online recruitment platforms that offer AI recruitment tools on local infrastructure. While China lacks AI-specific anti-bias regulations comparable to the New York City Automated Employment Decision Tools (AEDT) Law, there are basic transparency and disclosure requirements, as well as rules against unfair, deceptive, or abusive practices. MNCs can replicate their global compliance guardrails to ensure the local tool complies with these requirements.
- Use the AI Tool on a Risk-Based Approach: Chinese laws and regulations on AI and cross-border data transfer are rapidly evolving, and enforcement against foreign-based AI Tools is ad hoc. There are no published enforcement actions against companies using foreign-based AI Tools for recruitment and HR management. If the authorities notice the AI Tool, they are likely to issue a warning first and request that the company complete the filing or shut the tool down, rather than immediately imposing fines; complying with a shut-down request typically helps avoid or reduce penalties. Some AI Tool providers offer services leveraging anonymized learning and other anonymization techniques, so if the provider can anonymize or desensitize the information fed into the AI Tool, enforcement risks can be further mitigated, though not eliminated. In addition, the deployer must have a mechanism to shut down the AI Tool on short notice, with a recommended shut-down service level agreement (SLA) of 24 hours.