Key Takeaways:
- China has published draft rules to regulate emotionally interactive AI, emphasising user safety and ethical standards.
- Providers would face lifecycle accountability, monitoring requirements and interventions for addictive behaviour.
- The draft bars content that threatens national security or spreads harmful material, aligning with existing digital content red lines.
- Public consultation will inform final rules, signalling wider implications for technology governance among BRICS nations.
Beijing has moved to strengthen control over artificial intelligence services that mimic human personalities and form emotional connections with users, publishing draft regulations that would impose new safety, ethical and behavioural standards on consumer-facing systems.
The draft framework, released by China’s cyber regulator for public consultation, targets AI products and services offered to the public that present simulated personality traits, thinking patterns or conversational styles. It covers systems that interact emotionally through text, images, audio or video.
Emotionally interactive AI regulation and key requirements
At the centre of the draft is concern about excessive use and emotional dependence. Service providers would be required to warn users about overuse and to intervene when signs of addiction emerge. Companies must monitor user behaviour and take action if interactions become unhealthy, the draft says.
Providers would face lifecycle accountability, assuming responsibility for safety from development through deployment. The rules call for mechanisms for algorithm review, data security and the protection of personal information. Firms would need to document testing procedures, incident responses and ongoing safety checks.
Psychological risks are addressed directly. Providers must identify user states, assess emotional responses and evaluate how heavily users rely on the AI service. Where users display extreme emotions or signs of addictive behaviour, companies would be expected to intervene and provide appropriate safeguards.
The draft also sets explicit boundaries on content and conduct. AI services must not generate material that endangers national security, spreads rumours or promotes violence or obscenity. These provisions reinforce long-standing content rules in China and signal that emotionally engaging systems will be subject to the same limits as other online services.
Regulators say the measures aim to guide the rapid expansion of consumer-facing AI while protecting users and society. The consultation period will allow industry groups, academic experts and the public to submit feedback before final rules are adopted. Observers note that the process reflects Beijing's preference for centrally guided regulation to manage technological risk.
Industry reaction is likely to focus on compliance costs and implementation challenges. Companies may need to invest in monitoring tools, mental health protocols and transparency measures for algorithms. The requirement for lifecycle accountability could also prompt firms to rethink testing standards and post-deployment oversight.
Beyond domestic impact, the draft may influence conversations on responsible AI across the BRICS grouping and among partner countries. As states weigh how to balance innovation with trust and safety, China’s approach could inform regulatory practices elsewhere, particularly where governments prioritise social stability and data sovereignty.
Next steps include the public consultation and subsequent revisions before the rules are finalised. Firms operating in China that develop or deploy emotionally interactive systems should prepare for closer regulatory scrutiny and consider how to demonstrate compliance with the new expectations on user wellbeing and content safety.