CFOtech Australia - Technology news for CFOs & financial decision-makers

Agents reshape work as guardian AI addresses new risks


The rapid evolution of artificial intelligence (AI) is bringing agentic AI—autonomous digital agents capable of making decisions and undertaking tasks independently—squarely into focus for industry leaders. As organisations look to leverage the benefits of AI while navigating new risks, both experts and analysts are identifying significant impacts on personal productivity, cybersecurity, and the future of work.

Sumir Bhatia, President of Asia Pacific at Lenovo's Infrastructure Solutions Group, describes a paradigm shift in the adoption of so-called agentic AI. "Agentic AI, or AI agents, capable of independent action and decision-making, are set to make waves over the next year and drive not just personalisation, but complete individualisation. For the first time, AI is no longer just a generative knowledge base or chat interface, it is both reactive and proactive—a true partner," he notes.

Bhatia points to forecasts such as Gartner's estimate that by 2028, nearly 15% of daily work decisions will be handled autonomously via agentic AI. Central to this shift are local large language models (LLMs) that can interact with an individual's personal data in real time, often without relying on external cloud infrastructure. This not only increases productivity by automating tasks ranging from document management to meeting summary generation but also offers enhanced data privacy since interactions remain device-bound.

Looking beyond single-purpose automation, Bhatia envisions the emergence of "personal digital twins" — clusters of collaborative AI agents that collectively address various aspects of users' lives. For instance, one's personal digital twin may integrate a grocery-buying agent, a travel planner, and a language translator, among others, all functioning in concert to address diverse and often complex needs.

This transformation is already taking hold in sectors where efficiency, accuracy, and risk mitigation are paramount. In the legal industry, David Fischl, Partner in Corporate and Commercial at Hicksons Lawyers, highlights how in-house AI is reshaping claim processing and fundamentally altering how legal teams operate. Fischl explains, "Traditionally, junior lawyers and paralegals have spent countless hours reviewing claim files to create chronologies and extract key information—a necessary but time-consuming first step before senior lawyers can develop case strategy."

Hicksons' proprietary AI agent now processes thousands of pages a day, extracting and organising essential data with a high degree of accuracy and speed. This not only reduces operational costs and turnaround times but enables junior lawyers to pivot more quickly to strategic, high-value work, accelerating their professional development and enhancing client outcomes. "By leveraging AI in this way, we are able to deliver outstanding, cost-effective legal services for our clients while accelerating the professional growth of our junior lawyers," notes Fischl.
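Hicksons' system is proprietary, but the core task it automates, pulling dated events out of claim documents and ordering them into a chronology, can be illustrated with a toy sketch. Everything below (the regex, the date format, the function name) is a simplified assumption for illustration, not a description of the firm's actual pipeline.

```python
import re
from datetime import datetime

# Illustrative only: real claim files are long, messy, and multi-format;
# production systems use ML extraction rather than a single regex.
DATE_RE = re.compile(r"(\d{1,2} \w+ \d{4})")  # e.g. "15 January 2022"

def build_chronology(document: str) -> list[tuple[str, str]]:
    """Extract (date, sentence) pairs and sort them chronologically."""
    events = []
    for sentence in document.split("."):
        match = DATE_RE.search(sentence)
        if match:
            parsed = datetime.strptime(match.group(1), "%d %B %Y")
            events.append((parsed, sentence.strip()))
    events.sort(key=lambda e: e[0])  # order events by date, not by position
    return [(d.strftime("%Y-%m-%d"), s) for d, s in events]
```

Even in this simplified form, the value is clear: the mechanical first pass is done in seconds, leaving lawyers to interpret the resulting timeline rather than assemble it.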

While these advancements offer significant upside, experts are also warning of new and expanding risks associated with increased agent autonomy. Guardian agents—AI-based technologies specifically designed to maintain trustworthy and secure AI interactions—are emerging as critical components. According to a recent Gartner forecast, guardian agent technologies are on track to account for 10 to 15% of the agentic AI market by 2030.

Guardians function by continuously monitoring, reviewing, and, when necessary, blocking or adjusting the actions of other AI systems to ensure adherence to predefined goals and safety protocols. "Agentic AI will lead to unwanted outcomes if it is not controlled with the right guardrails," says Avivah Litan, a VP Distinguished Analyst at Gartner. "The rapid acceleration and increasing agency of AI agents necessitates a shift beyond traditional human oversight."

The risks confronting AI agents include input manipulation, data poisoning, and agent deviation—issues that could expose organisations to vulnerabilities ranging from credential theft to reputational damage. This is especially relevant as Gartner's recent poll of IT executives reported that over half of respondents are already deploying AI agents for internal administrative purposes, with nearly a quarter using them in client-facing roles.

To mitigate these risks, Gartner recommends focusing on three primary categories of guardian agents: reviewers (ensuring output validity), monitors (tracking actions), and protectors (blocking inappropriate actions in real time). Strategic deployment of these guardians is expected to become crucial as multi-agent systems, in which multiple AI agents interact at high speed, become the norm; Gartner predicts such systems will comprise 70% of AI applications by 2028.
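The division of labour between the three guardian categories can be sketched in simplified form. The class names, risk threshold, and scoring scheme below are illustrative assumptions, not part of Gartner's framework, which describes roles rather than implementations.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: risk_score and the 0.8 threshold are invented
# for illustration; real guardians evaluate policies, not a single number.

@dataclass
class AgentAction:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (dangerous)

class Reviewer:
    """Checks that an action's output is valid before release."""
    def review(self, action: AgentAction) -> bool:
        return action.risk_score < 0.8

@dataclass
class Monitor:
    """Records every action for audit, whether or not it proceeds."""
    log: list = field(default_factory=list)
    def track(self, action: AgentAction) -> None:
        self.log.append(action.description)

class Protector:
    """Blocks inappropriate actions in real time."""
    def allow(self, action: AgentAction) -> bool:
        return action.risk_score < 0.8

def run_with_guardians(action, reviewer, monitor, protector):
    monitor.track(action)  # monitors see everything
    if not protector.allow(action) or not reviewer.review(action):
        return "blocked"
    return "allowed"
```

The key design point the taxonomy captures is separation of concerns: monitoring is unconditional, while review and protection each hold independent veto power over the action.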

As AI agents move from niche automation tools to partners and safeguards in the modern workplace, ongoing innovation and vigilant oversight are set to define the next era of digital transformation.
