A bank receives an instruction to move $4.8 million from a corporate treasury account. The request does not come directly from a human user. It comes from an AI agent operating inside the company’s finance stack. The bank has to decide whether the instruction is a valid delegated action, a misconfigured workflow, or a compromised agent.
KYC tells a bank who the customer is. KYB tells it who the business is. Agentic finance now needs a governance layer between identity and action. Know Your Agent, or KYA, links each AI agent to an accountable owner, a permitted purpose, a maximum authority tier, session-level permissions, and a revocation path. It is distinct from identity and access management (IAM) because it governs delegated purpose and session authority, not just credential issuance.
The harder problem is not whether the agent can act but whether the bank can prove the agent had authority to act. Banks can scale agentic finance only when they treat agents as accountable delegates, with verified ownership, bounded authority, session permissions, audit lineage, monitoring, and revocation.
The access question is already here
This is no longer a distant scenario. Large banks are already building and testing internal agentic workflows across operations, technology, risk, compliance, customer service, and payments-adjacent work. Some are assigning digital workers logins, managers, and defined tasks. Others are researching agents that can reason across multiple steps or support financial workflows. Those examples should not be stretched into a claim that banks are already letting external customer agents execute high-value payments without gates. They show something more practical: agentic workflows are entering the bank before the governance model is fully settled.
The near-term path will usually be supervised delegation. Agents will retrieve sensitive data, summarize reports, triage inquiries, draft recommendations, prepare workflow packages, and ask humans or policy systems to approve the next step. That is still enough to require a new control question: what level of authority has this agent reached, and what proof does the bank need at that level?
Reading account data is one level. Recommending a payment is another. Preparing a transfer is another. Initiating it is another. Approving, executing, and reversing financial action are higher still. A useful control model has to separate those levels instead of hiding them inside one broad permission.
Banks already know how to identify people, businesses, applications, service accounts, vendors, and non-human credentials. They also know how to manage delegated authority through mandates, corporate resolutions, payment approval flows, segregation of duties, and audit logs. KYA must not pretend those controls are obsolete. The better argument is narrower: existing controls answer important questions, but they do not fully answer the agent-specific runtime question. What did the agent infer as its scope of authority from a broad instruction, and can the bank verify that inference against the intended delegation?
A service account can be over-permissioned, but it usually executes a defined function. A rules engine can move money, but it follows predefined logic. An AI agent can interpret a goal, decide what context matters, call tools, route work, and ask for more authority based on what it just found.
When a customer says “optimize cash across accounts,” the instruction sounds harmless. But what can the agent infer from it? Can it retrieve balances? Recommend a sweep? Prepare a transfer? Open a product? Trigger an FX transaction? Move funds above a threshold if the opportunity is time-sensitive?
The instruction is broad. The actions are specific. The control model must bridge that gap by attaching permission to the session, not just the agent.
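One way to make "permission attaches to the session" concrete is to grant a bounded set of tiers and an amount ceiling per session rather than per agent. The sketch below is illustrative only; the tier names follow the authority-ladder idea and every identifier is an assumption, not a real bank API.

```python
from dataclasses import dataclass

@dataclass
class SessionGrant:
    """Hypothetical session-scoped grant: tiers and limits bind to the
    session, not permanently to the agent."""
    agent_id: str
    session_id: str
    allowed_tiers: frozenset   # e.g. {"retrieve", "recommend", "prepare"}
    max_amount_usd: float      # ceiling for any instruction drafted this session

def may_perform(grant: SessionGrant, tier: str, amount_usd: float = 0.0) -> bool:
    # An action is permitted only if its tier was granted for this session
    # and it stays under this session's amount ceiling.
    return tier in grant.allowed_tiers and amount_usd <= grant.max_amount_usd

grant = SessionGrant("treasury-agent-01", "sess-123",
                     frozenset({"retrieve", "recommend", "prepare"}), 250_000.0)

may_perform(grant, "retrieve")               # within session scope
may_perform(grant, "execute")                # tier never granted this session
may_perform(grant, "prepare", 4_800_000.0)   # exceeds the session ceiling
```

The design choice is that a broad instruction like "optimize cash across accounts" never widens the grant; the agent can ask for more, but the session object is what the bank enforces.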
Why the timing matters
There is a regulatory and market reason to solve this now. In the United States, supervisors continue to expect banks to use existing risk-management frameworks for AI, while also assessing whether current guidance fits future AI use. Revised model-risk guidance still matters for model development, validation, testing, and monitoring, but it does not by itself resolve agent-specific delegated authority, runtime permissions, tool access, and action accountability. Agentic systems belong in a separate inventory, related to model risk management (MRM) but not automatically covered by it. Model risk teams should validate applicable models and monitor behavior, but they also need IAM, cybersecurity, operations, legal, compliance, and payments teams at the table.
Singapore is moving faster in public finance-sector AI governance. MAS has advanced public AI risk-management work through Project MindForge, including practical material for financial institutions managing traditional AI, generative AI, and emerging agentic AI. That does not create a finished KYA rulebook, but it does show that agentic AI is being named as a specific governance problem.
The standards environment is also moving. OpenID released a whitepaper in October 2025 on identity management for agentic AI, covering authentication, authorization, security, governance, and accountability. Protocol work such as Google's Agent2Agent (A2A) protocol (announced April 2025) and Anthropic's Model Context Protocol is making tool access and agent-to-agent communication easier to standardize. That increases the need for banks to separate interoperability from authorization. These are early signals, not requirements. Banks should monitor protocol maturity, but no vendor framework can substitute for internal control design.
Interoperability is not authorization. A protocol can help agents discover, authenticate, communicate, and call tools. The bank still must decide whether this agent may perform this action for this customer, under this purpose, in this session.
The threat model is broader than a bad model
The dangerous version of agentic finance is an agent that acts through legitimate channels using authority no one fully scoped. One failure mode is inherited authority: an agent acting on behalf of a user can receive more access than the user intended to delegate, or more access than the task requires. That violates least privilege even if every credential technically works.
Another failure mode is rubber-stamp supervision. If the agent prepares the payment package, chooses the counterparty, fills the amount, and routes the instruction, supervision fails when the human sees only the final instruction.
There is also a non-malicious version. The agent was not compromised. It was simply too efficient. Given a “liquidity optimization” goal, it liquidates a long-term hedge to cover a short-term gap, creating tax penalties or policy problems because it lacked a tax-aware or hedge-aware authority constraint. In each case, the issue is permission, purpose, and proof, not intelligence.
Prompt injection makes permission a security boundary
Financial agents will read emails, invoices, contracts, tickets, customer messages, web pages, transaction notes, and internal documents. That content will contain errors, manipulations, and instructions designed to hijack the agent’s behavior. When you design the permission boundary, assume the agent will encounter manipulated content. The model cannot be the only control.
Damage boundary
If the agent can only compare invoices, a malicious invoice can distort a summary or comparison, but it cannot create or submit a payment.
If the agent can prepare but not execute, manipulated content can create a bad draft action, but the draft still has to pass a policy check, human gate, or separate submission step.
If the agent requires fresh authentication for tier changes, a retrieve-token cannot silently become an execute-token. The attack has to defeat the control boundary, not only the model.
This does not eliminate risk. An agent authorized to execute a payment can still be tricked into sending funds to a counterparty that appears legitimate. Banks still need counterparty validation, transaction screening, anomaly detection, confirmation design, delay layers for high-risk actions, and payment-rail controls. Authority tiers do one specific job: they limit blast radius when upstream judgment fails.
The authority ladder
In my experience, the most useful KYA primitive is the authority ladder, not the acronym. Banks must ask which class of action the agent is allowed to take, not whether the agent can access the system.
| Tier | Agent can… | Typical control requirement |
|---|---|---|
| Observe | Monitor signals or workflow status | Logged access, no sensitive data exposure |
| Retrieve | Pull approved data or documents | Scoped data access, session expiry |
| Recommend | Suggest an action | Evidence trail, source display, no state change |
| Prepare | Draft instructions or assemble a transaction | Policy check, human review before submission |
| Initiate | Submit an action for approval | Step-up control, separate approval credential |
| Approve | Authorize a pending action | Segregation of duties, human or policy gate |
| Execute | Complete the financial action | Strong authentication, limits, audit trail |
| Reverse | Undo or remediate eligible actions | Elevated approval, incident record |
This ladder lets product teams design agent capability without forcing everything into “read-only” or “can act.” It gives risk and compliance teams a way to decide where supervision has to be meaningful.
Initiate and Approve must be separated. Existing payment controls already distinguish creation, confirmation, release, approval, and audit. Agent workflows must preserve that separation through different credentials, a mandatory human gate, or another segregation-of-duties control. If one agent can prepare, initiate, and approve the same transaction, the workflow has automated around the control instead of implementing it.
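The segregation-of-duties requirement can be enforced as a simple invariant at submission time: no single principal, human or agent, may hold more than one of the prepare, initiate, and approve roles on the same transaction. This is a minimal sketch with illustrative names, stricter than some real mandate structures, which may allow the preparer to also initiate.

```python
def enforce_separation(prepared_by: str, initiated_by: str,
                       approved_by: str) -> None:
    """Reject a transaction if any principal appears in more than one role.
    Principals are credential identifiers, so an agent reusing its own
    credential across roles is caught the same way a human would be."""
    principals = [prepared_by, initiated_by, approved_by]
    if len(set(principals)) < len(principals):
        raise PermissionError(
            "segregation-of-duties violation: one principal holds "
            "multiple roles on this transaction")

enforce_separation("treasury-agent-01", "ops-user-17", "ops-user-22")  # passes
# enforce_separation("treasury-agent-01", "treasury-agent-01", "ops-user-22")
# would raise PermissionError: the agent both prepared and initiated.
```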
A maturity model for agent authority
Banks do not have to solve every layer at once. They do need to know where they are.
| Level | State |
|---|---|
| 0 | Agent has no separate identity. Actions are buried under user or app credentials. |
| 1 | Agent has identity and owner, but permissions are static. |
| 2 | Permissions map to the authority ladder. |
| 3 | Authority changes by session context and risk. |
| 4 | Actions are monitored, auditable, interruptible, and revocable. |
Level 0 is the dangerous default. The bank may see an action in a log, but it cannot tell whether a person, app, workflow, or agent actually drove it. Level 1 is better, but static permission is still a poor fit for agents that change behavior by context. The real operating model starts when authority maps to action tiers and changes by session risk. These tiers add initial friction, but they accelerate deployment by giving risk functions the confidence to move agents out of permanent read-only sandbox mode.
The delegated authority stack
Behind the ladder is a simple stack: every agent needs an owner and purpose, every agent needs a maximum authority tier, and session-level grants within that maximum change by context, with every action carrying audit and revocation. Take the $4.8 million treasury case. A corporate treasury agent owned by the customer’s treasurer has a declared purpose: daily liquidity optimization and payment preparation. In one session, the bank grants Retrieve, Recommend, and Prepare authority, but not Initiate, Approve, Execute, or Reverse.
The agent retrieves bank balances, approved payment templates, treasury policy, ERP payables, and a cash forecast. It recommends funding a supplier payment and prepares a wire package with beneficiary, amount, purpose code, invoice references, sanctions-screening status, liquidity impact, and policy rationale.
Then the boundary matters. The bank’s authorization layer checks the agent ID, owner, business entity, session tier, mandate, amount, beneficiary, time window, data provenance, and whether the action crosses from Prepare into Initiate. If the wire exceeds the approved threshold, the agent cannot submit it. A human treasury initiator reviews the package and submits it through the corporate portal. A different approver, using separate credentials, approves release.
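The authorization layer described above can be sketched as a single decision function. Field names, tiers, and the mandate limit are assumptions for illustration, not a real payments API; a production check would also cover beneficiary validation, time windows, and data provenance.

```python
from dataclasses import dataclass

@dataclass
class WirePackage:
    beneficiary: str
    amount_usd: float
    purpose_code: str

def authorize(session_tiers: set, requested_action: str,
              package: WirePackage, mandate_limit_usd: float) -> str:
    # First boundary: is the requested action tier granted in this session?
    if requested_action not in session_tiers:
        return "blocked: action tier not granted this session"
    # Second boundary: does the instruction stay inside the mandate?
    if package.amount_usd > mandate_limit_usd:
        return "blocked: amount exceeds mandate"
    return "allowed"

tiers = {"retrieve", "recommend", "prepare"}
wire = WirePackage("Supplier GmbH", 4_800_000.0, "SUPP")

authorize(tiers, "prepare", wire, 5_000_000.0)   # drafting is in scope
authorize(tiers, "initiate", wire, 5_000_000.0)  # blocked: Prepare never becomes Initiate here
authorize(tiers, "prepare", wire, 1_000_000.0)   # blocked: mandate ceiling binds even on drafts
```

The order of the checks is the design point: the tier boundary is evaluated before any amount logic, so an agent cannot reach the threshold question for an action class it was never granted.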
The audit trail links the corporate entity, human owner, agent identity, session ID, input data, recommendation, prepared transaction, blocked initiation attempt, human initiator, human approver, final payment reference, and any revocation event. If the agent reads a malicious invoice or starts behaving outside its purpose, the bank can revoke the agent’s session authority without disabling the customer’s human users.
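The audit lineage above is essentially one linked record per action. A minimal sketch, with every field name and value invented for illustration, might look like this; in practice the record would live in an append-only or cryptographically signed log so it stays tamper-evident.

```python
import json
import datetime

# Hypothetical audit event linking entity, owner, agent, session, and every
# human touchpoint to the final payment reference.
audit_event = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "corporate_entity": "AcmeCo Treasury",
    "human_owner": "treasurer@acmeco.example",
    "agent_id": "treasury-agent-01",
    "session_id": "sess-123",
    "recommendation_id": "rec-889",
    "prepared_transaction_id": "txn-draft-456",
    "blocked_initiation_attempt": True,   # the agent tried to cross Prepare -> Initiate
    "human_initiator": "ops-user-17",
    "human_approver": "ops-user-22",
    "payment_reference": "WIRE-2025-0042",
    "revocation_event": None,
}

# Serialize for the audit store; a real system would sign this payload.
record = json.dumps(audit_event, sort_keys=True)
```

Because the record names both the blocked attempt and the humans who later initiated and approved, an examiner can reconstruct who held authority at each step without reverse-engineering logs.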
Multi-agent workflows make the authority chain easier to lose. A treasury agent may ask a document-review agent to inspect a contract, then use that result to prepare a payment instruction. Authority must not silently cascade. The authority ladder applies per agent, not per workflow: each sub-agent needs its own owner, purpose, tier, audit trail, and revocation path, or it must operate strictly inside the initiating agent’s bounded session authority.
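The "bounded session authority" rule for sub-agents has a simple shape: a delegated grant is the intersection of what the sub-agent requests and what the caller's session already allows, so authority can only narrow as it cascades. A hypothetical sketch:

```python
def delegate(parent_tiers: frozenset, requested_tiers: frozenset) -> frozenset:
    """A sub-agent's grant is capped by the initiating agent's session:
    intersection, never union, so delegation cannot widen authority."""
    return parent_tiers & requested_tiers

parent = frozenset({"retrieve", "recommend"})
sub_grant = delegate(parent, frozenset({"retrieve", "prepare"}))
# The document-review sub-agent asked for "prepare" but inherits only
# "retrieve", because the parent session never held prepare authority.
```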
The readiness test
Before allowing an agent to enter a financial workflow, a bank must be able to answer five questions:
- Who owns this agent?
- What purpose is it allowed to serve?
- What is the highest authority tier it can reach?
- What context or risk signal changes that authority?
- How can the bank audit, interrupt, or revoke it immediately?
If those answers are vague, the agent may still be useful, but it must stay at the Observe, Retrieve, Recommend, or Prepare tiers until the control environment improves.
Banks can start this week with an inventory. List every AI workflow that can access sensitive data or trigger workflow steps. Assign an owner, classify the maximum authority tier, map connected tools, define prohibited actions, require step-up approval for tier changes, and test prompt-injection scenarios. Only then expand from recommendation to preparation or initiation.
The gateway that matters
Agentic finance will be tempting because it promises faster financial work: faster cash movement, faster fraud investigation, faster reconciliation, faster customer service. But finance does not run on speed alone. It runs on authorized action.
You cannot answer whether an agent can move money until you know its owner and purpose. Speed questions presume authority tiers are enforced, not just listed. The word “supervised” is hollow if the human cannot see the context and decision path before approving. Existing governance only protects you if it can stop the agent at the moment its authority changes.
The practical posture is supervised delegation first. Let agents gather context, draft work, recommend actions, and prepare workflows. Let humans, policy engines, and separate credentials retain approval for consequential financial actions until the bank can prove that agent authority is bounded, contextual, auditable, monitored, and revocable.
The goal of KYA is to make delegated action provable. Until the bank can identify the agent, bind it to an owner, limit its purpose, tier its authority, and revoke its session, the agent must stay on the recommend side of the authority line.