Nurturing agentic AI beyond the toddler stage

The Accountability Challenge: It’s Not Them, It’s You

Until now, governance in artificial intelligence has revolved around monitoring model outputs, with humans firmly in the decision-making loop—especially for high-stakes choices like loan approvals or job applications. The focus was on controlling model behavior, including drift, alignment issues, data exfiltration, and poisoning. The rhythm of interaction was dictated by humans prompting models in chatbot-style exchanges, allowing for plenty of back-and-forth before any consequential action was taken.

But today, the landscape has shifted dramatically. Autonomous agents are now operating within complex workflows, and the promise of applied AI hinges on drastically reducing human involvement. The aim is to run businesses at machine speed by automating manual tasks with clear architectures and decision rules. From a liability perspective, the goal is to ensure that the enterprise or business risk remains unchanged whether a machine or a human is operating the workflow. As CX Today succinctly puts it: “AI does the work, humans own the risk.” California state law (AB 316), which took effect January 1, 2026, reinforces this principle by eliminating the “AI did it; I didn’t approve it” defense—much like how parents are held accountable for a child’s actions that negatively impact the community.

The challenge now is that without embedding operational governance directly into code—aligned to varying levels of risk and liability throughout the entire workflow—the benefits of autonomous AI agents are effectively nullified. In the past, governance was static, matching the pace of chatbot-style interactions. But autonomous AI, by design, removes humans from many decisions, which can undermine traditional governance structures.

Considering Permissions

Imagine handing a three-year-old a video game controller that remotely operates an Abrams tank or an armed drone. That is the kind of risk posed by leaving a probabilistic system that can alter critical enterprise data operating without real-time guardrails. For example, agents that integrate and chain actions across multiple corporate systems can drift beyond the privileges granted to any single human user. To move forward successfully, governance must evolve from committee-set policies into operational code built into workflows from the outset.
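A minimal sketch of what "governance as operational code" can look like: a per-agent policy object that is checked before any action touches enterprise data, with oversized writes escalated to a human. All names here (`AgentPolicy`, `guarded`, the `invoice-agent` example) are hypothetical illustrations, not a reference to any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical per-agent grant: allowed actions plus a write budget."""
    agent_id: str
    allowed_actions: set = field(default_factory=set)
    max_rows_per_write: int = 100

class GuardrailViolation(Exception):
    """Raised before a disallowed or oversized action reaches live data."""

def guarded(policy: AgentPolicy, action: str, rows_affected: int = 0):
    """Enforce the policy in-line, at machine speed, on every agent action."""
    if action not in policy.allowed_actions:
        raise GuardrailViolation(
            f"{policy.agent_id}: action '{action}' was never granted")
    if rows_affected > policy.max_rows_per_write:
        raise GuardrailViolation(
            f"{policy.agent_id}: write of {rows_affected} rows exceeds the "
            f"budget of {policy.max_rows_per_write}; escalate to a human approver")

policy = AgentPolicy("invoice-agent", {"read_invoice", "update_status"})
guarded(policy, "update_status", rows_affected=3)  # within policy: no exception
try:
    guarded(policy, "delete_customer")  # never granted: blocked before it runs
except GuardrailViolation as err:
    print(err)
```

The point of the sketch is placement, not sophistication: the check runs inside the workflow on every action, rather than in a quarterly policy review after the damage is done.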

A well-worn meme about toddlers and toys starts with all the reasons why whatever toy you have is now mine and ends with a broken toy that is definitely yours. OpenClaw, for instance, delivered a user experience closer to working with a human assistant, but excitement quickly gave way to concern as security experts realized how easily inexperienced users could be compromised while using it. For decades, enterprise IT has grappled with shadow IT: the reality that skilled technical teams must take over and clean up assets they didn't architect or install, much like the toddler handing back a broken toy. With autonomous agents, the risks are magnified: persistent service account credentials, long-lived API tokens, and permission to make decisions over core file systems. Meeting this challenge requires allocating appropriate IT budget and labor up front to sustain central discovery, oversight, and remediation for the thousands of employee- and department-created agents.
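Central discovery can start as something very simple: a recurring audit over an agent inventory that flags credentials older than the rotation policy allows. The inventory shape, the 90-day policy, and the agent IDs below are all assumptions for illustration; in practice the data would come from the IAM system and whatever agent platforms employees are actually using.

```python
from datetime import datetime, timedelta, timezone

# Assumed rotation policy: long-lived API tokens older than this are flagged.
MAX_TOKEN_AGE = timedelta(days=90)

# Hypothetical inventory of discovered agents and their token issue dates.
agents = [
    {"id": "a-101", "owner": "alice",
     "token_issued": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"id": "a-102", "owner": "bob",
     "token_issued": datetime(2024, 12, 1, tzinfo=timezone.utc)},
]

def stale_agents(inventory, now):
    """Return IDs of agents whose tokens have outlived the rotation policy."""
    return [a["id"] for a in inventory if now - a["token_issued"] > MAX_TOKEN_AGE]

audit_time = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(stale_agents(agents, audit_time))  # ['a-101']
```

A flagged agent then enters remediation: rotate or revoke the token, identify the owner, and decide whether the agent should exist at all.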

Having a Retirement Plan

Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and shutting down a "zombie project": a neglected, failed AI pilot left running on a GPU cloud instance. Potentially thousands of agents risk becoming a zombie fleet inside a business. Today, many executives encourage employees to use AI, or else, and employees are told to create their own AI-first workflows or AI assistants. Given the utility of a tool like OpenClaw and these top-down directives, it's easy to project that the number of build-my-own agents coming to the office with their human employees will explode. Because an AI agent is a program that falls under the definition of company-owned IP, those agents may be orphaned when an employee changes departments or companies. Proactive policy and governance are needed to decommission and retire any agents linked to a specific employee ID and its permissions.
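The retirement step above can be wired into existing offboarding: when an employee ID is deactivated, every agent they own is retired in the same pass. The registry shape, the `offboard_employee` helper, and the agent names are hypothetical; real decommissioning would also revoke tokens and shut down the compute behind each agent.

```python
# Hypothetical registry mapping agents to the employee IDs that created them.
agent_registry = {
    "expense-bot":  {"owner_employee_id": "E1001", "status": "active"},
    "triage-agent": {"owner_employee_id": "E2002", "status": "active"},
}

def offboard_employee(registry, employee_id):
    """Retire every active agent owned by a departing employee.

    Returns the retired agent IDs so downstream steps can revoke
    credentials and reassign or terminate the agents' workloads.
    """
    retired = []
    for agent_id, record in registry.items():
        if record["owner_employee_id"] == employee_id and record["status"] == "active":
            record["status"] = "retired"  # in practice: revoke tokens, stop compute
            retired.append(agent_id)
    return retired

print(offboard_employee(agent_registry, "E1001"))  # ['expense-bot']
```

Tying retirement to the employee ID, rather than to each agent individually, is what prevents orphans: no agent outlives the credentials and accountability of the person who built it.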

Financial Optimization Is Governance Out of the Gate

While autonomous AI sounds to some executives like a way to improve operating margins by limiting human capital, many are finding that ROI framed as human labor replacement is the wrong angle to take. Adding AI capabilities to the enterprise is not like purchasing a new software tool with predictable per-instance-hour or per-seat pricing. A December 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those implementing agentic AI reported costs that were higher or much higher than expected.

