
Introduction

Agentic AI – autonomous, goal-driven systems that can carry out multi-step tasks with minimal human prompting – has gone from proof-of-concept demos to a flood of vendor product launches and enterprise pilots. But hype has outpaced demonstrated value: recent analyst research warns that supply of agentic solutions exceeds demand and that a large share of current projects will be canceled before delivering sustained impact.

This post explains why many agent projects stumble, which types of use cases are most likely to survive the coming shakeout, and practical steps product and engineering leaders should take to build resilient, valuable agentic systems.

Why so many agent projects fail

There are three recurring failure modes I’ve seen across enterprises and vendors:

  • Misaligned expectations: Agents are framed as magical productivity multipliers but are often introduced without clear goals, success metrics, or executive sponsorship.
  • Underestimating systems work: Effective agents need reliable data, connectors, monitoring, and safety guardrails – the real effort is systems engineering, not model selection.
  • Risk and compliance gaps: Agents acting autonomously can surface legal, privacy, or brand risks. Without strong policies and observability, organizations pause or cancel projects.

Analyst signals reflect this reality. Gartner estimates that many agentic AI projects will be canceled in the next few years; at the same time, a meaningful minority of enterprise applications will include agents by the decade’s end. That divergence points to a selective future: not every agent will survive, but the ones that solve well-scoped, high-value problems will thrive.

Use cases that are likely to survive

Focus on areas where agents can reduce clear operational cost, speed decision loops, or unlock new revenue without creating outsized risk. Examples:

  • Repetitive knowledge work with structured inputs: invoice processing, triaging standard support tickets, or summarizing compliance documents where outcomes are verifiable.
  • Assistant layers that stitch existing systems: agents that orchestrate CRM, ERP, and marketing systems to automate common workflows (e.g., opportunity-to-quote) while keeping humans in the approval loop.
  • Compliance-first automation: monitored agents that surface exceptions and provide audit trails, rather than fully autonomous decision-makers where legal liabilities are high.
  • Developer and ops assistants: curated agentic tooling that accelerates coding, testing, or incident remediation with guardrails and rollbacks.

These winners share three properties: measurable ROI, constrained action space, and easy-to-verify outputs.

How to design agentic solutions that last

If you’re evaluating or building an agentic product, center your approach on systems, not models. Key principles:

  1. Define the business metric first

Start with the metric you care about (handle time, time-to-revenue, cost-per-case) and frame the agent as an experiment to move that metric. Avoid launching agents as feature demos.

  2. Constrain the agent’s action surface

Limit the APIs, data, and write privileges an agent can access. A smaller action surface reduces risk and makes behavior predictable.
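A constrained action surface can be made concrete with an allowlist. The sketch below is illustrative, not any particular framework’s API: tools must be explicitly registered, and write-capable tools are blocked unless the caller grants approval. The tool names and `PermissionError` policy are assumptions for the example.

```python
class ToolRegistry:
    """Expose only explicitly registered, write-scoped tools to the agent."""

    def __init__(self):
        self._tools = {}

    def register(self, name, fn, writes=False):
        # Every tool is opt-in; unregistered calls fail closed.
        self._tools[name] = (fn, writes)

    def call(self, name, *args, allow_writes=False, **kwargs):
        if name not in self._tools:
            raise PermissionError(f"tool not allowlisted: {name}")
        fn, writes = self._tools[name]
        if writes and not allow_writes:
            raise PermissionError(f"write tool requires approval: {name}")
        return fn(*args, **kwargs)


registry = ToolRegistry()
registry.register("lookup_invoice", lambda invoice_id: {"id": invoice_id, "status": "open"})
registry.register("mark_paid", lambda invoice_id: True, writes=True)

print(registry.call("lookup_invoice", "INV-42"))  # read-only: allowed
try:
    registry.call("mark_paid", "INV-42")  # write without approval: blocked
except PermissionError as err:
    print(err)
```

Keeping the registry small also makes behavior testable: the full set of things the agent can do is enumerable.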

  3. Build observability and audit trails

Log agent decisions, inputs, and downstream effects. Observability enables debugging, compliance checks, and continuous improvement.
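One common shape for such a trail is append-only JSON lines, one self-describing event per agent step. The schema below (timestamp, step, inputs, decision, effect) is an assumption for illustration; the point is that every decision is replayable for debugging and compliance review.

```python
import io
import json
import time


class AuditLog:
    """Append one JSON line per agent step to any writable stream."""

    def __init__(self, stream):
        self.stream = stream

    def record(self, step, inputs, decision, effect):
        entry = {
            "ts": time.time(),       # when the step ran
            "step": step,            # which stage of the workflow
            "inputs": inputs,        # what the agent saw
            "decision": decision,    # what it chose to do
            "effect": effect,        # what happened downstream
        }
        self.stream.write(json.dumps(entry) + "\n")
        return entry


buf = io.StringIO()
log = AuditLog(buf)
log.record(step="triage", inputs={"ticket": "T-101"},
           decision="route_to_billing", effect="queued")
print(buf.getvalue())
```

In production the stream would be a durable log sink rather than an in-memory buffer, but the event-per-decision discipline is the same.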

  4. Invest in data plumbing and integration

Reliable connectors, canonical data views, and retry semantics are what make agents robust in production. This is often the majority of the engineering work.
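Retry semantics, for instance, are easy to state and easy to forget. Here is a minimal retry-with-exponential-backoff wrapper around a flaky connector call; the attempt count, delay, and the simulated `flaky_fetch` connector are all illustrative assumptions.

```python
import time


def call_with_retry(fn, attempts=3, base_delay=0.01):
    """Retry transient failures with exponential backoff; re-raise on exhaustion."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))


# Simulated connector that fails twice, then succeeds.
calls = {"n": 0}

def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream error")
    return {"record": "ok"}


print(call_with_retry(flaky_fetch))  # {'record': 'ok'}
```

Real connectors also need idempotency keys so that a retried write does not apply twice; that is part of the "majority of the engineering work" mentioned above.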

  5. Layer guardrails and human oversight

Design for human-in-the-loop escalation on exceptions and approvals for high-risk actions. Automated rollback and “safe mode” are critical deployment features.
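A human-in-the-loop gate can be as simple as a predicate that decides which actions need sign-off. The risk categories, dollar threshold, and approver stub below are made-up assumptions; the pattern is what matters: low-risk actions run automatically, everything else escalates.

```python
# Actions that always require human approval, regardless of amount.
HIGH_RISK_ACTIONS = {"issue_refund", "delete_account"}


def execute(action, amount, approve_fn):
    """Auto-run low-risk actions; escalate high-risk or high-value ones."""
    if action in HIGH_RISK_ACTIONS or amount > 500:
        if not approve_fn(action, amount):
            return "rejected"
    return f"executed:{action}"


# A stand-in human approver that only signs off on refunds up to $1000.
approver = lambda action, amount: action == "issue_refund" and amount <= 1000

print(execute("send_receipt", 0, approver))    # executed:send_receipt
print(execute("issue_refund", 800, approver))  # executed:issue_refund
print(execute("delete_account", 0, approver))  # rejected
```

The "safe mode" mentioned above fits naturally here: swap `approve_fn` for one that rejects everything, and the agent degrades to read-only behavior without a redeploy.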

  6. Measure the cost to operate

Track not just model inference cost but human review time, integration maintenance, and incident handling. A low headline automation rate can still be valuable if it reduces specialized labor costs.
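A blended cost-per-case calculation makes this concrete. All of the rates below (inference cost, review rate, hourly labor rate, amortized maintenance) are invented for illustration; plug in your own numbers.

```python
def cost_per_case(inference_cost, review_rate, review_minutes, hourly_rate,
                  maintenance_per_case):
    """Blend model, human-review, and integration-maintenance costs per case."""
    # Expected human cost: fraction of cases reviewed x review time x labor rate.
    human_cost = review_rate * (review_minutes / 60) * hourly_rate
    return inference_cost + human_cost + maintenance_per_case


# Example: $0.08 inference per case, 20% of cases get a 5-minute human review
# at $60/hr, plus $0.05/case of amortized integration maintenance.
total = cost_per_case(0.08, 0.20, 5, 60, 0.05)
print(round(total, 2))  # 1.13
```

Note how human review dominates in this example: shrinking the review rate moves the total far more than cheaper inference does, which is often where the real ROI lever sits.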

  7. Plan for governance and lifecycle management

Define versioning, access controls, performance SLAs, and a deprecation strategy – agentic features will be subject to consolidation and regulation.

Product and GTM implications

  • Sales teams: sell results, not agents. Position agentic features as workflow improvements with clear KPIs.
  • Engineering: prioritize integration, monitoring, and SRE practices for agents; automate testing that covers end-to-end task flows.
  • Legal/Compliance: involve privacy and risk teams early. Build templates for approvals and incident response.
  • Vendors: focus on composability – customers will prefer modular “skills” they can assemble safely rather than monolithic autonomous agents.

Conclusion

Agentic AI is real and will reshape parts of enterprise software, but the early market is noisy. Many projects will be canceled not because agents are inherently flawed, but because initiatives lacked clear metrics, system-level engineering, or governance. The winners will be the organizations and vendors that treat agents as integrated systems: constrained, observable, and designed around measurable business outcomes.

Key Takeaways
– Agentic AI supply currently outpaces real-world demand; prioritize high-value, low-risk use cases.
– Design for data, guardrails, and composability – success depends on systems, not just models.