China has unveiled draft regulations to tighten oversight of artificial intelligence systems that exhibit humanlike interaction, in a sign of Beijing’s growing focus on ethical, social, and security risks from rapidly advancing AI technologies. The draft rules, issued by the country’s cyberspace regulator and opened for public consultation, are expected to shape how consumer-facing AI products operate in the future.

Specifically targeted under the proposed regulations are AI systems that can simulate human personality, emotion, and communication patterns through text, voice, image, or video. Regulators have voiced concern that such systems may influence user behaviour, foster emotional dependency, or blur the line between interacting with a human and interacting with a machine. According to the draft, AI service providers would be required to clearly indicate when users are interacting with an artificial system rather than a real person.
The most significant provision makes companies fully responsible for the safety and ethical compliance of their AI products throughout development and deployment. Providers would be required to establish internal mechanisms for algorithm review, data security, and personal information protection in line with relevant cybersecurity and privacy laws. Systems that collect sensitive data or shape user perceptions would face more rigorous oversight.
The draft also addresses addiction and overuse. Firms could be required to build in safeguards against excessively long or compulsive interaction, particularly where an AI system is designed for companionship or emotional support. Service providers would be expected to intervene if they detected signs of user dependence or other psychological harm, and to modify system behaviour accordingly.
Content regulation forms another central pillar of the proposal. AI systems would be barred from producing material that compromises national security, spreads false information, incites violence, disseminates obscenity, or otherwise undermines social stability. The rules reflect China’s broader regulatory approach to digital platforms, in which control of content and public order are treated as core policy priorities.

Experts said the move underlined Beijing’s determination to balance technological innovation with strict governance. As AI becomes part of daily life, the authorities appear increasingly keen to prevent abuses while maintaining state oversight of emerging technologies. The final shape of the rules will depend on public feedback, but analysts expect the core framework to remain intact. If implemented, the regulations would have significant implications for how AI products are designed and marketed in China. They might also serve as a reference point for other countries considering how to regulate humanlike artificial intelligence systems.