Natalia Kaspersky, president of the InfoWatch group and chair of the Association of Software Product Developers "Domestic Software" (ARPP), has urged caution over the use of autonomous artificial intelligence systems in cybersecurity. Speaking to Agent of the Future, she said that while AI can assist users in non-critical roles, it should not be allowed to take final, legally significant decisions about people or business processes.
Kaspersky said the suitability of an AI agent depends on its role. “If it is used as a hint or a user assistant in non-critical applications, that is reasonable,” she said. “But giving an agent the unquestioned right to prepare final documents, commercial proposals, contracts or to make autonomous legal decisions about people is dangerous.”
She highlighted a core technical concern: large language models are probabilistic. They do not guarantee correctness and can produce mistakes or so-called hallucinations, fluent but fabricated output. Over the past year, some specialists have noted a decline in the quality of output from these models. One contributing factor is that models are increasingly fine-tuned on texts they themselves have generated, a feedback loop that can reinforce errors.
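A toy sketch can show what "probabilistic" means here: a language model chooses its next words by sampling from a probability distribution, so repeated runs can disagree, and the most likely continuation is not guaranteed to be true. The vocabulary and probabilities below are invented for illustration and do not describe any specific model.

```python
import random

# Toy next-token distribution for the prompt "The capital of Australia is".
# The probabilities are invented: the model assigns the most mass to a
# plausible-sounding but wrong answer, as often happens in training text.
next_token_probs = {
    "Sydney": 0.55,    # frequent in training data, but incorrect
    "Canberra": 0.40,  # correct
    "Melbourne": 0.05,
}

def sample_token(probs: dict[str, float]) -> str:
    """Sample one token from a categorical distribution."""
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding

# Repeated runs return different answers; none comes with a guarantee.
for _ in range(5):
    print("The capital of Australia is", sample_token(next_token_probs))
```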
Kaspersky drew a parallel with the blockchain boom, when the technology was promoted indiscriminately and embedded in programmes despite limited practical benefit. “We are at the peak of hype,” she said. “That should be a signal to invest very cautiously. Better to let others experiment and make mistakes, and then implement solutions that are proven to work.”
InfoWatch does employ AI in its products but in a deliberately constrained manner. The company uses a closed internal agent within one product to help users understand features and answer questions about specific options. Traditional linguistic tools are combined with the language model to produce more accurate replies. Crucially, the agent does not make decisions for the user and any errors remain within limits a human operator can detect and correct.
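A minimal sketch of that human-in-the-loop pattern is below, under stated assumptions: the function names, the rule-based gate, and the suggestion wrapper are hypothetical illustrations of the general approach, not InfoWatch's actual code.

```python
# Hypothetical constrained-assistant pattern: a rule-based (linguistic)
# filter gates what reaches the model, the model only drafts a reply, and
# the output is always marked as a suggestion for a human to verify.

ALLOWED_TOPICS = {"feature", "option", "setting", "report"}

def draft_reply(question: str) -> str:
    """Stand-in for a language-model call; returns a draft answer."""
    return f"Draft answer about: {question}"

def passes_linguistic_check(question: str) -> bool:
    """Rule-based gate: only in-scope product questions get a draft."""
    return any(topic in question.lower() for topic in ALLOWED_TOPICS)

def assist(question: str) -> str:
    if not passes_linguistic_check(question):
        return "Out of scope: please contact a human operator."
    draft = draft_reply(question)
    # The agent never finalises anything: its output is a suggestion that
    # a human operator can inspect and correct before acting on it.
    return f"[SUGGESTION, verify before use] {draft}"

print(assist("How do I enable the audit report option?"))
print(assist("Should we fire this employee?"))  # out of scope: routed to a human
```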
AI agents in cybersecurity must operate under human supervision
Kaspersky emphasised the need for human oversight. She warned against scenarios where AI systems automatically issue fines, make hiring or firing decisions, or otherwise carry out actions with legal consequences. Such use would be inappropriate until developers find reliable methods to control systems that currently tend to answer even when data are insufficient.
One major remaining challenge, she said, is teaching models to admit uncertainty. “The current architecture of large language models forces them to produce an answer even when they lack data,” she explained. “This creates a problem of confident wrongness. Until developers learn to manage this behaviour we cannot trust these systems to take critical decisions.”
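One commonly discussed mitigation is selective answering: refuse to respond when confidence is low. The sketch below is illustrative only; the threshold and confidence values are invented, and real systems estimate confidence very differently (token probabilities, ensembles, calibration), none of which is shown here.

```python
# Illustrative selective answering: abstain when confidence falls below a
# threshold instead of producing a confidently wrong reply. The confidence
# scores passed in are made up for the example.

CONFIDENCE_THRESHOLD = 0.85

def answer_with_abstention(answer: str, confidence: float) -> str:
    """Return the answer only if the system is confident enough."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return "I do not know."  # explicit admission of uncertainty

print(answer_with_abstention("Canberra", confidence=0.92))  # confident: answers
print(answer_with_abstention("Sydney", confidence=0.55))    # unsure: abstains
```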
Kaspersky called for measured adoption and robust governance. Her stance is that governments and companies should avoid blanket mandates for generative AI and instead prioritise pilot projects and safety controls. For regions where cybersecurity and digital policy are a focus, including BRICS members, her message is clear: use AI agents where they assist humans, not where they replace them.

Key Takeaways:
- Natalia Kaspersky, head of InfoWatch and chair of the ARPP, cautions that AI agents must not make legally or commercially significant decisions without human oversight.
- She warns that large language models are probabilistic and can produce errors or hallucinations, especially when retrained on machine-generated text.
- InfoWatch uses a closed internal agent for user assistance, emphasising that it does not make final decisions and can be corrected by humans.
- Kaspersky calls for restraint while experimenting with generative AI and stresses the need to teach models to say “I do not know”.