As artificial intelligence weaves deeper into public services, commerce and education, six persistent questions will shape policy debates in 2026. The conversation matters to China and other BRICS+ nations because decisions on transparency, measurement and regulation will determine who controls the benefits and who bears the risks.
Why does transparency in AI regulation matter?
Companies that build large models have kept their training data secret, citing trade secrets and legal risk. Yet as these systems move into hospitals, schools and hiring platforms, governments will demand to know what the models were trained on. Clearer disclosures could reveal the presence of copyrighted material, biased or Eurocentric content and, in the worst cases, abusive imagery. For countries such as China, India and Brazil, rules on data disclosure will affect domestic developers and foreign suppliers alike. Policymakers will need to balance intellectual property rights, individual privacy and public safety.
Can we agree on a test for general intelligence?
Artificial general intelligence has become a focal point for both investment and public fear, yet experts still lack a common definition. Without measurable criteria, claims about reaching AGI remain vague and open to hype. A practical framework that focuses on specific capabilities and risks would help regulators in BRICS+ states assess threats to labour markets, critical infrastructure and national security.
Where will regulation emerge first?
Europe’s forthcoming rules will set a precedent, but regulatory approaches will vary. Some countries will prioritise rapid adoption and champion homegrown firms. Others will insist on strict oversight to protect citizens. For BRICS+ members with large tech sectors, such as China and India, the challenge will be to foster innovation while ensuring accountability. Cross-border cooperation will be important for setting standards on safety and provenance.
What could burst the market bubble?
High valuations have persisted despite uncertain revenue paths for many AI start-ups. A shift could come from slowing customer uptake, the arrival of powerful free models, or a reassessment of how much automation buyers are willing to pay for. Investors across BRICS+ markets are likely to become more cautious and demand clearer routes to profitability.
How will AI pay for itself?
Hardware suppliers have seen strong returns, but model developers face a tougher road. Monetisation efforts may include subscription services, advertising or enterprise licensing. In price-sensitive markets, particularly in parts of Asia and Africa, consumer reluctance to pay for software could push companies to pursue alternative business models or focus on government and enterprise contracts.
What happens to jobs and skills?
Workers across sectors worry about displacement. Governments and businesses must consider reskilling, social safety nets and policies that steer work towards tasks where humans retain a comparative advantage. Early signs suggest a renewed premium on original thinking and human-centred services, but deliberate planning is essential to avoid disruptive labour shifts.
None of these questions will be settled in a single year. Still, 2026 is likely to be a turning point as regulators press for more transparency and investors test business models. For BRICS+ countries, the choices made now will shape who benefits from AI and how its risks are managed.
Key Takeaways:
- Calls for greater transparency about training data, including risks of copyrighted and abusive content.
- Researchers and policymakers seek measurable standards for AGI and clearer AI regulation across jurisdictions.
- Investors and companies must show viable revenue models as competition and open-source models reshape markets.
- Wider social risks, from jobs to education, will test how BRICS+ countries balance innovation and public interest under AI regulation.