One of the most overlooked questions in an AI SaaS evaluation is not “what features are included.” It is “what happens when we need this vendor to stop?”

That question sounds defensive until you watch a buying team realize what the demo is really showing. A customer-service platform is no longer only summarizing tickets or drafting replies. It is checking order status, updating CRM fields, issuing refunds under a policy threshold, escalating exceptions, and reporting resolution metrics. The feature checklist looks good, the integration plan looks manageable, and the ROI model looks plausible. But the real decision sits underneath the checklist. The buyer is deciding which part of the operation to hand over for the vendor to improve, coordinate, execute, and quietly become responsible for.

In my experience, AI SaaS buyers need a new buying lens. They should evaluate vendors on two axes at the same time: the operational role the vendor plays and the amount of work the vendor absorbs. The first determines where value appears. The second determines how hard the vendor will be to govern, pause, replace, or exit. AI SaaS does not make old lock-in disappear. In some deployments, it extends lock-in into the operating model. That is workflow absorption.

The risk is the delegation, not the agent

Traditional SaaS buying treated the application as the unit of value. The buyer asked familiar questions: does the product have the features we need? Does it integrate with our stack? What does it cost per seat? Will users adopt it? Can we export our data if we leave? Those questions still matter, but they are incomplete when the vendor starts acting inside the workflow.

AI is pushing SaaS value away from manually used applications and toward workflow-level outcomes. Applications move behind the scenes. The workflow becomes the front door. Agents coordinate work, trigger actions, reconcile data, manage exceptions, and sometimes participate in decisions. A customer-service platform may no longer be just a better ticketing tool if its agents resolve requests across systems. A finance platform may no longer be just an expense dashboard if it reviews transactions, enforces policy, flags fraud, approves low-risk spend, and logs the decision. A healthcare revenue-cycle platform may no longer be just administrative software if it checks claim status, works payer queues, and updates the EHR workflow.

This is why “AI SaaS” is too broad a category to guide buying decisions. A dashboard, a routing layer, a task-performing agent, and an expert-assisted decision system create different kinds of value. They also create different kinds of dependency.

Absorption occurs when a vendor becomes responsible for a repeatable operational function, rather than merely supporting the people who perform it. The threshold is easy to miss. A customer-service AI starts by summarizing cases. The vendor improves visibility. Then it recommends the next-best action, drafts a response, and routes the case to the right queue. The vendor now coordinates work. Then it updates a CRM record, changes an order, issues a refund, or closes the case. The vendor executes work. Then leaders rely on its resolution metrics, exception queues, escalation rules, and performance dashboards to manage the function. The vendor is no longer just a tool inside the workflow. It is part of how the workflow runs.

Public examples point in this direction, with an important caveat: the evidence supports bounded execution more than full autonomy. Salesforce Agentforce materials and customer-service analysis describe agents that can handle order status, refunds, troubleshooting, appointment scheduling, case updates, and profile updates. Intercom’s Fin has moved its pricing language from resolutions toward outcomes, including workflows where the agent gathers context, executes configured procedures, and hands off when policy requires a person. Workflow platforms like ServiceNow increasingly sit above ERP, CRM, and ITSM systems, turning the AI layer into the place where work is assigned and measured. Banking and payments examples now include agents involved in re-underwriting documents, payment dispute resolution, and transaction execution.

Agents have not taken over whole enterprises. What is changing is that agents are beginning to execute bounded tasks inside live workflows. Once that happens, the buying decision changes. The buyer has moved from software adoption to operational delegation.

The vendor absorption grid

Buyers need a simple way to name what they are delegating. The Vendor Absorption Grid starts with the vendor’s role.

| Vendor role | What the vendor does | Main dependency | Buyer’s minimum control |
| --- | --- | --- | --- |
| Visibility | Shows what is happening | Reporting dependency | Data definitions, source access, metric lineage |
| Coordination | Routes and prioritizes work | Orchestration dependency | Workflow documentation, escalation rules, override paths |
| Execution | Performs tasks in systems | Continuity dependency | Action logs, rollback, failure recovery, service levels |
| Judgment | Recommends or participates in decisions | Accountability dependency | Validation rules, approval thresholds, liability terms, decision logs |

Then the grid asks how much work the vendor absorbs into its own operating model.

| Absorption level | Meaning | Buyer question |
| --- | --- | --- |
| Assist | Helps humans do the work | What productivity gain does it create? |
| Orchestrate | Coordinates how work moves | Can we see and change the workflow logic? |
| Execute | Performs defined tasks | Can we audit, reverse, and recover actions? |
| Absorb | Becomes responsible for continuity of the function | Could we still operate if the vendor stopped? |

The grid is a diagnostic, not a maturity model. Not every buyer should race toward absorption.

A visibility vendor may help leaders see bottlenecks faster. The dependency is usually manageable. A coordination vendor may improve prioritization and handoffs, but it also starts to shape who gets work, what gets escalated, and which cases receive attention. An execution vendor may expand capacity, but it creates continuity risk when the system updates records, triggers payments, closes tickets, or approves work.

The scrutiny changes most when a vendor participates in judgment. If the agent recommends eligibility decisions, refund approvals, dispute handling, risk scoring, claims handling, or pricing changes, the buyer needs to know what evidence a human sees, when a human can intervene, and who is accountable when the outcome is disputed. That does not make absorption wrong. In some workflows, a deeply embedded vendor is the rational choice. The vendor may be more capable, more consistent, and more accountable than a fragmented stack of tools and internal handoffs. But the buyer should make that trade-off explicitly.

What changes from classic lock-in

Enterprise buyers already know lock-in. ERP, CRM, ITSM, cloud, billing, procurement systems, RPA, BPO, and managed-services providers have always shaped processes, created switching costs, and given vendors control. AI SaaS does not invent dependency.

What changes is the compression of software, workflow orchestration, decision support, and managed execution into the same vendor surface. Some vendors are beginning to demonstrate pieces of this capability in bounded workflows: interpreting intent, selecting tools, routing work, updating records, escalating exceptions, and measuring outcomes. That compression matters because the system of record becomes less visible to the user. ERP, CRM, ITSM, billing, and support systems may remain underneath. But the AI workflow layer becomes the place where work is assigned, executed, reviewed, and measured.

Classic SaaS lock-in often centered on data, contracts, user adoption, and configuration. AI-era dependency can also span models, orchestration frameworks, runtime environments, prompts, policies, workflow configurations, agent behavior, action history, escalation patterns, and human operating routines.

Connection is not portability. A protocol may help one agent call another tool. It does not automatically make agent memory portable, preserve workflow configuration, transfer escalation habits, recreate behavioral calibration, or tell a new vendor why the old system routed a sensitive case a certain way.

The practical test is simple, even if the answer is difficult: could another vendor operate the workflow with the exported data, configuration, logs, and process documentation, or could it merely connect to the same systems? For many current AI SaaS deployments, full operational portability will be aspirational. That is still useful. It tells the buyer which dependency is being accepted.

Concentration risk versus fragmentation risk

Many CIOs are not trying to add more composability. They are trying to reduce tool sprawl. That matters because the optionality argument has a serious counterargument: one accountable vendor can be less risky than a scattered stack of tools with fragmented ownership.

Incumbents know this. SAP, Microsoft, Salesforce, ServiceNow, and others pitch unified AI platforms as risk reducers. They are not always wrong. A single vendor can reduce coordination cost, simplify procurement, standardize identity and permissions, unify data models and support, and give the buyer one accountable party when outcomes fall short. But consolidation changes the risk. It does not erase it.

The trade-off is concentration risk versus fragmentation risk. Fragmentation risk asks: can we make all these tools work together, and who owns the outcome when they do not? Concentration risk asks: what happens when one vendor controls the workflow layer, the data context, the agent behavior, the pricing meter, and the roadmap? A buyer can rationally choose concentration. But it should be a conscious choice, not the accidental result of accepting the easiest AI bundle from an incumbent.

Pricing becomes a control surface

Traditional SaaS pricing was often frustrating, but it was legible. Seat counts, modules, contract tiers, and renewal uplifts gave buyers something to model. Agentic SaaS puts pressure on that model. If agents execute work that used to require named users, vendors will price less around access and more around usage, actions, resolutions, outcomes, or premium AI capacity.

That can align incentives. A vendor paid for resolved cases has a reason to improve resolution, just as a vendor paid for recovered revenue has a reason to increase yield. But the same model can create financial risk. If cost scales with agent activity, a successful deployment can create a budget problem. If pricing depends on outcomes, the buyer needs a clean way to define attribution. When the agent acts across several systems, who gets credit for the result? Who carries the cost when it makes an error? Outcome pricing becomes risky when the vendor’s commercial meter scales with the buyer’s operational dependence.

Pricing terms should become part of the governance negotiation, not a separate conversation:

| Pricing question | Why it matters |
| --- | --- |
| What exactly counts as an action, resolution, or outcome? | The meter must be measurable and disputable. |
| How does cost scale with automation volume? | More successful automation can mean more variable spend. |
| Are there caps, bands, or approval thresholds? | Production workflows need budget control. |
| What triggers renegotiation? | New use cases can change economics fast. |
| Can the buyer audit the meter? | Outcome pricing fails when measurement is opaque. |
| Does price protection survive deeper adoption? | Embedded vendors gain renewal power. |

Outcome-based pricing can be useful when the vendor truly owns part of the result and the contract makes accountability clear. It becomes risky when the vendor gets variable upside without accepting operational responsibility.
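The budget dynamic is easy to sketch in miniature. The function below is a hypothetical per-resolution meter, not any vendor’s actual pricing model: every name and number is illustrative. It shows why caps and approval thresholds belong in the contract, since spend scales linearly with automation volume.

```python
def agent_spend(resolutions: int, unit_price: float, budget_cap: float):
    """Hypothetical usage meter: variable cost scales with agent activity.

    Returns (cost, needs_approval), where needs_approval flags spend
    that crosses the negotiated cap and should route to a human.
    """
    cost = resolutions * unit_price
    needs_approval = cost > budget_cap  # the cap/approval threshold from the pricing questions
    return cost, needs_approval

# A deployment that doubles its automation volume doubles its variable spend.
print(agent_spend(5_000, 0.80, 6_000))   # (4000.0, False): within budget
print(agent_spend(10_000, 0.80, 6_000))  # (8000.0, True): triggers approval
```

The point of the sketch is the second call: the deployment succeeded, resolutions grew, and the meter turned that success into an unplanned budget event.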

Contracting should match delegation

Procurement teams already know how to negotiate uptime, data export, termination rights, and renewal terms. Those still matter. But they miss the action-level concerns that appear when agents execute work. The contract needs to answer what the vendor can do, under whose authority, in which systems, up to what threshold, with what record, and with what recovery path.

| Vendor role | Buying focus | Contract focus |
| --- | --- | --- |
| Visibility | Can we trust the operational picture? | Metric definitions, data lineage, source access, audit rights |
| Coordination | Can we see and override workflow logic? | Routing rules, escalation rights, change notice, configuration export |
| Execution | Can we audit, reverse, and recover actions? | Action logs, rollback, kill-switch rights, recovery SLAs, failure drills |
| Judgment | Who remains accountable? | Approval thresholds, decision records, validation rules, liability allocation |

Before giving an agent execution rights, the buyer should ask a more specific set of questions:

  • What actions can the agent take?
  • In which systems can it act?
  • Does it act under a user identity, service identity, or vendor identity?
  • What dollar, customer, legal, or operational thresholds limit its authority?
  • What actions are prohibited?
  • Where are action logs stored?
  • Can the buyer pause the agent immediately?
  • Can the buyer reverse an action?
  • What happens if the vendor fails mid-workflow?
  • Can the team run manually for 30 days?
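The questions above amount to a machine-checkable authority policy. The sketch below is one hypothetical way to express it; the class name, fields, and thresholds are all invented for illustration, and a real deployment would enforce this in the vendor’s platform or an identity layer, not in application code.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAuthorityPolicy:
    """Hypothetical delegation policy: scope, thresholds, and a kill switch."""
    allowed_actions: set          # e.g. {"issue_refund", "update_case"}
    allowed_systems: set          # systems the agent may act in
    dollar_threshold: float       # actions above this escalate to a human
    prohibited_actions: set = field(default_factory=set)
    paused: bool = False          # the buyer-side pause / kill switch

    def authorize(self, action: str, system: str, amount: float = 0.0):
        """Return (allowed, reason); anything not explicitly delegated escalates."""
        if self.paused:
            return (False, "agent paused by buyer")
        if action in self.prohibited_actions:
            return (False, "action prohibited")
        if action not in self.allowed_actions or system not in self.allowed_systems:
            return (False, "outside delegated scope")
        if amount > self.dollar_threshold:
            return (False, "over threshold: escalate to human")
        return (True, "within delegated authority")

policy = AgentAuthorityPolicy(
    allowed_actions={"issue_refund", "update_case"},
    allowed_systems={"crm"},
    dollar_threshold=100.0,
    prohibited_actions={"close_account"},
)
print(policy.authorize("issue_refund", "crm", 50.0))   # allowed: within authority
print(policy.authorize("issue_refund", "crm", 500.0))  # denied: over threshold
```

The design choice worth noting is the default: every branch denies unless the action is explicitly inside the delegated scope, which is the contractual posture the section argues for.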

These are not policy questions in disguise. They are operating requirements. An agent that can issue a refund needs refund thresholds, audit logs, exception rules, reversal paths, and dispute rights. One that approves expenses needs policy logic, fraud flags, approval records, human escalation, and liability allocation. One working a healthcare claim queue needs traceability, payer-rule updates, compliance controls, human review points, and recovery plans when automation fails. The deeper the delegation, the more the contract must move from software access to operational control.

Minimum governance before the vendor acts

Governance is often treated as a brake on AI adoption. In agentic SaaS, governance is what lets the buyer safely move from assistance to operational value.

Not every buyer can run an enterprise-grade AI governance program on day one. Many midmarket teams lack the architecture staff, procurement maturity, and data governance required for a full optionality review. That does not mean they should ignore the problem. It means they should tier it.

Minimum viable governance should cover the first-order risks before anything goes live. Most teams can start with these seven checks:

| Area | Minimum question |
| --- | --- |
| Workflow scope | What exact tasks can the agent perform? |
| Human control | Where can a person pause, override, or reverse the action? |
| Escalation | What cases must go to a human? |
| Audit trail | Can we see what the agent did and why? |
| Cost control | Can spending exceed plan without approval? |
| Data export | Can we retrieve operational data and decision history? |
| Failure mode | What happens if the vendor is down? |

The buyer does not need to solve every future problem before signing. But the buyer does need to know which problems are being deferred. Use the grid before the contract is signed, not after implementation.

Start by placing the vendor in one cell based on what it will do on day one. Then place it in the cell the roadmap points toward. The gap between those two cells is where dependency grows unnoticed. A vendor may begin by orchestrating (improving prioritization and routing cases across teams), but the roadmap may point toward executing (updating records, issuing refunds, resolving cases, and reporting outcomes). That gap should trigger a different review. The buyer should negotiate for tomorrow’s operating role, not only today’s feature set.

The four questions for the next demo

The answer is not to avoid AI SaaS that absorbs work. Many buyers should use it. The value can be real: faster resolution, fewer manual updates, cleaner handoffs, better exception handling, and one accountable vendor.

The answer is to name the delegation before it becomes invisible. For the next AI SaaS demo, the test is not only “can the agent do the work?” It is these four questions:

  • Show us the kill switch.
  • Show us the action log.
  • Show us the manual fallback plan.
  • Show us the export path for workflow logic, not just data.

Until the vendor can answer those questions, do not let the agent execute. The test for AI SaaS buyers is simple: if the vendor stopped tomorrow, could the organization still perform the workflow, explain past decisions, control costs, and migrate the operating logic? If the answer is no, the buyer may still choose the vendor. But it should price, govern, and contract for that dependency before the vendor becomes the workflow.