In 2017, an algorithmic trading system executed a series of cryptocurrency trades at roughly 250 times the prevailing market rate. No human approved the transactions before they settled. The platform later reversed them. The trading firm whose orders were filled sued.
The case, Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02, forced the Singapore Court of Appeal to confront a question that now matters far beyond crypto:
What happens when software forms a contract, the outcome is obviously wrong, and no human intervenes at the point of execution?
That is not just a contract-law question anymore. It is one of the clearest judicial parallels to the problem agentic AI systems create today.
The Facts Matter Less Than the Structure
Quoine operated a cryptocurrency exchange. B2C2 was an algorithmic market maker. A disruption in Quoine's pricing inputs led the exchange to execute trades at wildly distorted rates. B2C2's software filled those orders exactly as programmed.
The result was one no reasonable human trader would have approved in ordinary conditions. But the system did not pause. It did not escalate. It did not ask for review. It executed.
That is the structural point that matters for legal AI.
The Hard Question Was Whose Knowledge Counts
In a normal unilateral-mistake case, the court asks what the parties knew at the time of the transaction. If the buyer knows the seller is making a fundamental pricing mistake, that can matter a great deal.
But in Quoine, no human made the decision at the point of trade.
So the court had to answer a different question:
When software acts, whose knowledge is legally relevant?
The court's answer was that the relevant inquiry centered on the programmer's knowledge and intent at the time the system was written, not some fictional mental state at the moment the software later executed.
That is a major idea.
It means responsibility does not disappear because the action was automated. The legal system still looks backward to the people who designed the system and asks what they knew, what they intended, and what risks were foreseeable.
Why This Matters for Legal AI
Replace the trading algorithm with a legal AI workflow.
Now imagine a system that:
- classifies a matter
- selects a jurisdiction
- drafts a document
- routes a communication
- prepares something for filing or delivery
If the system reaches an outcome no reasonable lawyer would have approved, the same question returns:
Who is responsible?
The model ran. The workflow executed. The output moved. No one stopped it.
That does not make the responsibility disappear. It makes system design the center of the analysis.
The Approval Gate Problem
The most useful lesson from Quoine is not about cryptocurrency.
It is about the absence of checkpoints.
The trades happened because the system was allowed to produce externally effective outcomes without a human review layer between the internal logic and the final action. The catastrophic result was not a separate category from the system design. It was the system design.
That is exactly why agentic legal systems need approval gates.
If a workflow can create legal effect (by filing, sending, routing, or authorizing something externally significant), the system needs a clear human checkpoint before that effect occurs.
Without that checkpoint, the strongest question a court may eventually ask is not "did the system run correctly?" but "why was this system allowed to do that without review?"
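What that checkpoint looks like in code is a design decision, but the structural idea is simple. Here is a minimal Python sketch, with hypothetical names (`ProposedAction`, `Effect`, `execute`) that are illustrations rather than any real system's API: externally effective actions fail closed unless a human approver signs off, and every decision leaves an audit trace.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Callable, Optional


class Effect(Enum):
    INTERNAL = "internal"   # drafts, classifications, internal routing
    EXTERNAL = "external"   # filings, sends, anything legally effective


@dataclass
class ProposedAction:
    description: str
    effect: Effect
    audit_log: list = field(default_factory=list)

    def record(self, event: str) -> None:
        # Every decision point leaves a timestamped trace.
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), event))


def execute(action: ProposedAction,
            approver: Optional[Callable[[ProposedAction], bool]] = None) -> None:
    """Refuse externally effective actions that lack explicit human sign-off."""
    action.record("proposed")
    if action.effect is Effect.EXTERNAL:
        if approver is None or not approver(action):
            action.record("blocked: no human approval")
            raise PermissionError(f"External action needs approval: {action.description}")
        action.record("approved by human reviewer")
    # ... perform the underlying action here ...
    action.record("executed")
```

The point of the sketch is its shape: the gate sits between the workflow's internal logic and anything that leaves the system, which is exactly the layer that was missing in Quoine.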
The Dissent Matters Too
Lord Mance's dissent is especially useful here.
His position was that equity should intervene where the outcome was so obviously disconnected from what reason and justice would expect that the law should not simply let the result stand.
That idea maps well onto agentic systems.
There will be cases where a workflow executes exactly as designed and still produces an outcome no reasonable professional would endorse. Those are the cases where "the software did what it was told" will not feel like much of a defense.
That is why correct execution is not the same thing as an acceptable outcome.
What Legal AI Builders Should Take From This
The lesson is not that software should never act.
It is that action without checkpoints changes the liability surface.
The design questions become much more important:
- what can the system do on its own
- what requires explicit human approval
- what gets logged
- what can create external legal effect
- what happens when inputs are wrong but the workflow still runs
Those are not implementation details. They are the real governance layer.
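One way to treat those questions as governance rather than implementation detail is to force the answers into an explicit, reviewable artifact. Here is a sketch of that idea in Python, with hypothetical action names; the specific table is illustrative, not a recommended policy.

```python
# Hypothetical policy table: the governance answers written down as data,
# not left implicit in workflow code.
POLICY = {
    "classify_matter":     {"autonomous": True,  "external_effect": False},
    "select_jurisdiction": {"autonomous": True,  "external_effect": False},
    "draft_document":      {"autonomous": True,  "external_effect": False},
    "route_communication": {"autonomous": False, "external_effect": True},
    "file_with_court":     {"autonomous": False, "external_effect": True},
}


def is_permitted(action_type: str, human_approved: bool) -> bool:
    """Fail closed: unknown actions and unapproved external effects are refused."""
    rule = POLICY.get(action_type)
    if rule is None:
        # An action missing from the policy is a design gap, not a default-allow.
        return False
    if rule["external_effect"] and not rule["autonomous"]:
        return human_approved
    return True
```

Writing the policy as data means the answer to "what can the system do on its own" is something a reviewer, or eventually a court, can read directly, rather than behavior reconstructed from workflow code after the fact.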
The Deeper Point
Quoine is one of the clearest reminders that once a system acts, the law will still look for a human source of responsibility.
If there was no checkpoint, that absence will itself be treated as a design choice.
That is the part agentic legal systems should take seriously now, before the cases arrive in their own domain.
The question is not whether the workflow executed. The question is whether the system was designed so that no unreasonable outcome could take effect without a human being able to stop it.
That is where governance becomes architecture.
FlowCounsel is the AI-native operating system for legal teams. FlowLawyers is the consumer-facing legal help platform with attorney discovery, legal aid routing, state-specific legal information, and document tools. Neither provides legal advice. Attorney supervision of all AI output is required.