The point of agentic AI is for it to act: writing code, running scripts, executing commands, or calling APIs. All of that is more effective if those actions are discoverable, documented, consistent, and designed to be easy for agents to use in the first place.

Agents need slightly different information from people. Both benefit from complete, accurate, and up-to-date documentation, or error messages that include the correct commands to use, but, for example, DevOps agents need the full context of the entire CI/CD pipeline, and developers usually don’t.

Generally, though, better management of business logic, code, data, documentation, API design, policies, and best practices will help your human users as well. And if you don’t do the work, you risk a new level of shadow IT as members of staff turn to services their agents can work well with.

“People are starting to believe and buy into agentic AI, but there’s so much work to be done to actually achieve this,” says Charles Betz, Forrester VP and principal analyst of enterprise architecture. Agents need precise, structured, and accessible information, and AI will amplify both strengths and weaknesses. “If you have weak system understanding, you’re going to have some awfully confused AI agents,” he adds.

Optimizing the agent experience

Getting your existing enterprise systems ready for agents is part of what Netlify CEO Mathias Biilmann calls agent experience (AX): what an agent encounters as the user of a product or platform.

Every product and tool already has an agent experience, though, because AI agents already try to use them.
“It’s just a question of does it have a good agent experience or a bad one,” he says.

If you expect an agent to work with a product or service you rely on, it’s important to know whether it has the structured, predictable interface, the comprehensive error handling, the session persistence for multi-step workflows, and the real-time feedback that it needs.

“How does an agent discover your product?” asks Biilmann. “Is it the user that tells them, or does it figure out on its own that this might be a good solution? How do you help it understand what it can do with your product, and how do you make it easy for it to get access to the product, ideally with as little asking the user for permission as possible? Then, in that agent loop, how do you give context to the agent to make sure it can get the best possible experience of that product and solve the tasks for its user in the most efficient way?”

The term has already caught on with both AI coding agents and developer tool providers, and Biilmann expects it to become more widely relevant. “As lessons from coding agents, and especially Claude Code, start making their way to agents more oriented at businesses and consumers,” he says, “we’ll start seeing the next layer of tools and services understand that if we don’t work well with those agents, we’re going to start getting routed around.” Akamai’s latest research shows that shift has begun.

In the enterprise, AX is about doing the foundational work to effectively integrate agentic AI with existing systems. Do you bolt agents on as an afterthought, like RPA that scrapes data out of legacy applications, or do you make sure workflows and tools are set up for MCP to work against, so agents are more effective?

Investors are starting to talk about these principles as part of how they do valuations and predict agent-led growth.
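As a rough illustration of the kind of comprehensive error handling and real-time feedback described above, here is a minimal Python sketch of a structured error response that tells an agent exactly how to recover. The field names and the mycli command are hypothetical, not from any particular product:

```python
import json

def deploy_error(reason: str, suggested_command: str) -> str:
    """Return a structured, machine-readable error an agent can act on.

    Instead of a bare "deploy failed" string, include what went wrong
    and the exact command that fixes it, so an agent can recover
    without guessing.
    """
    return json.dumps({
        "status": "error",
        "reason": reason,
        "suggested_action": {
            "type": "run_command",
            "command": suggested_command,
        },
        "retryable": True,
    })

# An agent parsing this knows the next step rather than retrying blindly.
payload = json.loads(deploy_error(
    "site not linked to a project",
    "mycli link --project <project-id>",
))
print(payload["suggested_action"]["command"])  # → mycli link --project <project-id>
```

The difference between a good and a bad agent experience is often exactly this: whether the failure path hands the agent its next step or leaves it to guess.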
Anthropic has guidelines for how to write tools so they work well with agents, and Microsoft is building new accounts with access limitations into Windows just for agents, so they run securely.

One AI code tool provider even ripped out the CMS it had just paid to set up because its Cursor coding agent couldn’t access it easily, and the marketing team found asking a chatbot to add their content to the website as Markdown easier than a traditional CMS interface.

Of course, as a company building AI coding tools, even Cursor’s non-technical employees are fairly technical, and soon after that provider gave up on the CMS, the vendor released an MCP server designed to give agents access to create, update, and manage a website without such a drastic change. But it’s an example of the impact AI agents can have on traditional software usage.

Beyond MCP

Biilmann suggests four principles for good AX: agents can reach a system with the right permissions; the LLM can get the right context to use the system effectively; tools like APIs, SDKs, or the command line interface are built so agents can act as an interface to use them; and systems make it easy to trigger and coordinate a favorite agent.

“There’s a whole field of context engineering emerging that involves MCP, skills, context files, and tweaking tool responses to make sure agents have the right context when they’re using your product,” he says, adding that a single line of output added to the error message for a Netlify CLI command fundamentally transformed the AX, from not being able to use the tool to deploying it in one go.

But it’s not enough to just make an MCP server that wraps an API. “Think about the MCP as a UI for an agent that gives it the right context to not just do API calls but accomplish tasks efficiently with your API,” he says.
“Give it context, structure, and how API endpoints are typically used together.”

Sean Blanchfield, CEO of AI infrastructure company Jentic, agrees that AI needs context, not more integration glue. “If you give an LLM clean documentation for a well-designed API, it can already interface with it directly,” he says. “That makes high-quality API management the real foundation of enterprise AI capabilities. Existing investments in OpenAPI, gateways, authentication, and governance will pay serious AI dividends.”

Agents also need APIs that match their specifications, which many don’t. Jentic’s free AI Readiness Scorecard tool is one way to check that. Common mistakes are broken references, malformed schemas in OpenAPI specs, APIs that don’t specify the server that hosts them, and authentication information that’s written only for human developers rather than available through the API. Human developers might be able to laboriously work around those, but agents will struggle.

API specifications alone have no concept of operational sequence, says David O’Neill, COO of SaaS platform APIContext, so you need to use OpenAPI’s Arazzo workflow standard to encode those. “Suddenly OpenAPI specifications and Arazzo workflows are important, because those are the things MCPs and agentic systems use to validate if something’s going to work,” he says.

Forrester’s Betz refers to this kind of work as generative engine optimization to help agents get details about services. “Documenting APIs, getting robust business definitions of your data and information, understanding unambiguously where that data and information live, and what system is authoritative are absolutely critical,” he adds.
“All the things your data and enterprise architects have been trying to build for the last 20 years, AI needs all that stuff to do what it does.”

Testing AX ground rules

Most organizations will use older code and architectures they aren’t ready to get rid of but don’t want to replicate, says Michael Coté, staff technologist at Broadcom. “Make sure to go through and mark up what you think is a well-behaved database and data architecture versus a functional one you don’t like,” he says.

Some of this is basic IT hygiene, but it’s still critical, adds Martin Heyder, director of IT at data analytics platform KNIME, especially if you’re automating workloads that have relied on staff manually collating data from multiple sources that haven’t had to be kept up to date or available before. “If inventories, logs, or documentation are unreliable, any AI system will simply automate bad information,” he says.

Also, agents will often initially expose where tacit knowledge exists, so establishing standards across code, enforcing code reviews, and keeping a single reference for system definitions and documentation gives agents an easy framework to work within.

Microsoft CVP for apps and agents Amanda Silver refers to that as an agentic constitution: a natural-language way to specify requirements or common expectations across the organization or code base. “You make sure the agent always has that in its context,” she says. This can apply much more broadly and be used as a meta prompt that can include specific ways an agent should do things when it needs to manage identity, create a human-in-the-loop interface, or build something that allows you to observe the agent’s operations.

Context and connection

Identity, access management, and permissions for existing infrastructure are fundamental.
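As a minimal sketch of what least-privilege identity and permissions for agents might look like in practice, consider checking every tool call against explicit grants. The AgentIdentity type and scope names here are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A hypothetical per-agent identity carrying least-privilege scopes."""
    name: str
    scopes: set = field(default_factory=set)

def invoke_tool(agent: AgentIdentity, tool: str, required_scope: str) -> dict:
    """Refuse the call unless the agent's identity carries the scope.

    Treat agents like any other service principal: each tool call is
    checked against explicit grants, and denials come back as structured
    results an orchestrator can log and audit.
    """
    if required_scope not in agent.scopes:
        return {"allowed": False,
                "reason": f"{agent.name} lacks scope '{required_scope}'"}
    return {"allowed": True, "tool": tool}

# A reporting agent granted read access only.
reporter = AgentIdentity("report-agent", scopes={"read:orders"})
print(invoke_tool(reporter, "orders.list", "read:orders")["allowed"])    # True
print(invoke_tool(reporter, "orders.refund", "write:orders")["allowed"])  # False
```

The design choice is that the deny path returns data rather than raising: the agent gets a machine-readable reason it can report back, instead of an opaque failure.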
Most organizations likely need to start with an inventory, to make sure they know about everything they want agents to interact with, and that they can update it if necessary. That may mean adding new APIs, or even replacing legacy apps whose core logic and user interface are so tightly coupled that they’re hard for agents to use, with SaaS that has APIs built in or with composable, headless architectures. Almost 30% of organizations IDC talks to, for instance, are working through how to modernize their entire software portfolio to take better advantage of AI and agents.

“The highest-value ROI task agents are capable of is backend workflow processing,” Silver says. “You might use a protocol like MCP to expose data to agents, but that doesn’t mean they can take actions. Think about what the actions are you want to automate, that you can expose as tools to agents, that they can invoke to go not just from intent to planning, but to actual execution and action.”

If you want agents to be useful in resolving an outage, they need access to an API that lets them add information to the dashboard where you collate incident details. Workflows will also need to include agents so they can subscribe to events, like whether an order has shipped or an invoice has been disputed, and they need to be able to trigger the right actions in response. Plus, policies may be embedded in code rather than written down, and will need to be extracted into natural-language policy documents agents can access.

As organizations start to use more agents, Silver believes they’ll need not just sequential orchestration to cover an entire workflow, but adversarial evaluation, where a workflow needs to follow multiple policies, all trying to deliver specific outcomes.
“For that, you might implement multi-agent orchestration with policies, then fan out where you consult multiple expert agents, and then those expert agents come back and make a judgment call, solving for multiple policies codified in natural language.”

FinOps for agents

Early adopters are already finding that agents can be relentless and prone to issuing multiple queries. “They keep trying; they’ll do anything,” Biilmann says. “They won’t stop, and they can overwhelm a service.” If you can’t redesign an API to return hints because you don’t own it, you may need to implement quotas or prioritize queries.

“I think there’s going to be a lot of manually created guardrails for scenarios, because agentic stuff is really bad at policing itself,” O’Neill says. “It’s designed to deliver solutions, not turn itself off when it’s failed to deliver a solution.”

Enforce idempotence, retries, quotas, and limits to prevent runaway or repeated API consumption, Betz adds. “Agents aren’t an unlimited resource, and if they’re not getting the right answer, they need to know when to abandon the attempt,” he says.

Part of governance, Silver says, is understanding which MCPs are exposed to agents. “You could revoke access to them, govern token consumption, and enforce governance policies that make sense for you and your organization,” she says.

AX is a similar discipline to UX in that it’s iterative, and it’s about understanding what users or agents are trying to accomplish, says Biilmann. “You have to staff a team that does the research and figures out the problems and opportunities,” he says. That means looking at session replays, identifying failing agent behaviors, and building responses and interfaces to make them successful.

As agent models have improved, Netlify doesn’t have to rely as much on adding context files that guide agents to the most recent API, but that just means agents are moving targets.
“There’s a new audience of agents you constantly have to work with and iterate on,” he says.

APIContext users already using MCPs find that the number of transactions, the APIs, and even the specific services used can change unexpectedly, O’Neill says. “The services returned by the MCP have changed, and it’s trying a different way to create a record, and that doesn’t work anymore,” he says. “It’s not like a gateway where you’re defining everything. It’s more of a black box providing you with a list of tools you can use.”

A new MCP server performance monitoring tool lets you check and set alerts, and create governance rules based on performance thresholds, specifications, network functionality, or data transfer rules for data sovereignty. Agents mean paying more attention than ever to API management. “It’ll force people to take this API governance seriously,” O’Neill adds.

Also, lean on your API gateways in addition to governance tools, monitoring, and observability, says Betz. “An agent is a service that happens to have an LLM involved, and maybe some chain of thought, intention management, and goal-seeking behaviors,” he says. “But it’s all still just software.” That means your logging infrastructure will be placed under load. “If you fully embrace agentic, and you’re tracking all the agents, this traffic is going to bounce around in somewhat nondeterministic ways,” he says.

In the end, measuring whether agents are successful is about outcomes as well as observability, Silver says. “Over time, you’re going to need to evaluate whether the agent is actually executing the thing you want it to execute, including not going off the rails,” she says.

Microsoft is already using agents for site reliability engineering to reduce the cost of responding to live site incidents and improve remediation time. “To be able to do that, you need labeled, timestamped, traceable data you can look over,” she adds.
“You also need to have rich, structured signals, and a clear sense of what’s good and bad from an outcomes perspective.”
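One way to picture that kind of outcome evaluation is a minimal sketch over labeled, timestamped agent traces. The record fields and outcome labels here are hypothetical, not from any particular monitoring product:

```python
from datetime import datetime, timezone

# Hypothetical labeled trace records: each agent run is timestamped,
# traceable via a trace_id, and labeled with the outcome we wanted
# versus the outcome we observed.
runs = [
    {"trace_id": "t1", "ts": datetime(2025, 5, 1, tzinfo=timezone.utc),
     "expected": "incident_resolved", "observed": "incident_resolved"},
    {"trace_id": "t2", "ts": datetime(2025, 5, 2, tzinfo=timezone.utc),
     "expected": "incident_resolved", "observed": "escalated_to_human"},
    {"trace_id": "t3", "ts": datetime(2025, 5, 3, tzinfo=timezone.utc),
     "expected": "incident_resolved", "observed": "incident_resolved"},
]

def outcome_rate(records) -> float:
    """Fraction of runs where the agent produced the intended outcome."""
    hits = sum(r["expected"] == r["observed"] for r in records)
    return hits / len(records)

print(f"success rate: {outcome_rate(runs):.0%}")  # success rate: 67%
```

Comparing expected against observed outcomes over time, per trace, is what turns raw observability data into an answer to "is the agent actually doing what we want?"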
This article was originally published on cio_it.