
xFlow vs Mastra

Mastra is the most direct TypeScript-native peer to xFlow. It owns workflow composition, agent orchestration, tool execution, memory, and evals as a single batteries-included framework. This page covers Mastra's workflow runtime, agent runtime, tool runtime, memory, and suspend/resume model — and where xFlow's IR-first model goes structurally beyond it.

Mastra is excellent at what it does. Its DX, observability, and TS-first ergonomics are best-in-class. This page is a structural comparison so teams choosing between the two understand which axis they're optimizing for.

Setup

The question this page answers.

Mastra is a TS-native batteries-included agent + workflow framework. xFlow is a workflow substrate with the workflow itself as data and the runtime as a substrate-pluggable consumer. They overlap on workflow composition; they diverge on registry, multi-substrate, signing, and the optimization story.

What is Mastra?

A workflow composer (`.then` / `.parallel` / `.branch` / `.dountil` / `.foreach`), an Agent class with model + tools + memory, an evals system, and a deployable runtime — all in one TS framework.

What's outside its model?

Definition distribution as data, statechart richness (parallel-region join, hierarchical, history), multi-substrate runtime, multi-writer claims, browser placement, signing, optimization, and provability.

When is Mastra alone enough?

TS-first product team, server-side execution, sequential/branchy workflows, trusted authors, single-runtime deployment.

Baseline

A minimal Mastra workflow + agent.

The canonical shape: an Agent with a model and tools, a step that calls it, a workflow that chains steps with .then / .branch / .parallel, and a Mastra instance that registers them. Storage and evals plug in on the Mastra instance.

hello-mastra.ts
// Mastra — minimal workflow + agent.
import { Mastra } from "@mastra/core"
import { Agent } from "@mastra/core/agent"
import { createWorkflow, createStep } from "@mastra/core/workflows"
import { z } from "zod"
import { openai } from "@ai-sdk/openai"

const triage = new Agent({
  name: "triage",
  model: openai("gpt-4o"),
  instructions: "Classify the issue.",
  tools: { /* ... */ },
})

const classify = createStep({
  id: "classify",
  inputSchema:  z.object({ issue: z.string() }),
  outputSchema: z.object({ label: z.enum(["bug", "feature", "support"]) }),
  execute: async ({ inputData }) => {
    const r = await triage.generate([{ role: "user", content: inputData.issue }])
    return { label: r.text as "bug" | "feature" | "support" }
  },
})

const bugStep     = createStep({ /* ... */ })
const featureStep = createStep({ /* ... */ })
const supportStep = createStep({ /* ... */ })

const wf = createWorkflow({
  id: "issue.triage",
  inputSchema:  z.object({ issue: z.string() }),
  outputSchema: z.object({ ticketId: z.string() }),
})
  .then(classify)
  .branch([
    [async ({ inputData }) => inputData.label === "bug",     bugStep],
    [async ({ inputData }) => inputData.label === "feature", featureStep],
    [async ({ inputData }) => inputData.label === "support", supportStep],
  ])
  .commit()

new Mastra({ agents: { triage }, workflows: { issueTriage: wf } })

Workflow runtime

Composition primitives over async steps.

Mastra's workflow runtime is a chain of composition combinators. Each step is a typed async function. The runtime resolves the chain, persists state at suspend points, and emits telemetry. Schemas (zod) flow through inputs and outputs.

workflow primitives
// Mastra workflow composition primitives.

createWorkflow({ id, inputSchema, outputSchema })
  .then(step)                                  // sequence
  .parallel([step1, step2, step3])             // concurrent fan-out
  .branch([[condFn, stepA], [condFn, stepB]])  // exclusive routing
  .dountil(step, condFn)                       // loop until cond
  .dowhile(step, condFn)                       // loop while cond
  .foreach(step, { concurrency: 2 })           // iterate over array input
  .map(mapFn)                                  // transform data between steps
  .sleep(durationMs)                           // pause
  .waitForEvent(eventName)                     // pause for external signal
  .commit()                                    // finalize the workflow

// Steps are async functions:
createStep({
  id, inputSchema, outputSchema,
  execute: async ({ inputData, getStepResult, suspend, resumeData }) => { ... }
})

Sequence + branching

.then chains; .branch takes [cond, step] tuples; every branch whose condition returns true runs, and matching branches execute concurrently.

Parallel + iteration

.parallel([s1, s2]) fans out; .foreach iterates over an array; .dountil / .dowhile loop with predicates.

Time + signals

.sleep(ms) pauses; .waitForEvent(name) blocks until external signal; both persisted across restarts.

Schemas

Each step has zod input + output schemas. Mastra typechecks the chain at compile time and validates at runtime.

Telemetry

OpenTelemetry-compatible traces are emitted by every step. Mastra Cloud has a UI for inspecting runs; self-hosted ships traces wherever you point them.

Step results

Each step receives getStepResult(otherStep) to read prior outputs by id — no global state, just typed lookups.

Agent runtime

Agent class — model + tools + memory + evals.

The Agent is Mastra's core orchestration object. Pass a model, instructions, tools, and memory; call .generate / .stream / .streamObject. Mastra handles tool-call orchestration, message routing, and telemetry. Tools execute in-process.

agent runtime
// Mastra agent runtime — Agent class.

const agent = new Agent({
  name:         "support-bot",
  description:  "answers product questions",
  instructions: "You are a helpful assistant...",
  model:        openai("gpt-4o"),
  tools:        { search, lookupOrder, escalate },
  memory:       new Memory({ storage: pg, vector: pg, embedder: openai("text-embedding-3-small") }),
  evals:        { /* metric registrations */ },
  voice:        new OpenAIVoice({ /* ... */ }),
})

// Three calling shapes:
const r  = await agent.generate(messages, { runId, threadId })
const s  = await agent.stream(messages,   { runId, threadId })
const so = await agent.streamObject(...)

// Tools are local LangChain-style functions (or AI SDK 'tool()'):
const search = createTool({
  id:           "search",
  description:  "search docs",
  inputSchema:  z.object({ q: z.string() }),
  outputSchema: z.object({ results: z.array(z.string()) }),
  execute:      async ({ context: { q } }) => fetchDocs(q),
})

// All tool execution is in-process. No id@version, no placement,
// no claim model, no signing, no multi-writer.

Memory

Threads, semantic recall, and working memory.

Mastra's memory is per-agent, per-thread. Storage holds messages verbatim; vector store powers semantic recall; working memory lets the agent maintain an evolving JSON document. Memory is a separate persistence concept from workflow run state.

memory configuration
// Mastra memory — thread-based.

new Memory({
  storage:  pg,                   // libsql / pg / upstash / d1 / ...
  vector:   pg,                   // pgvector / pinecone / qdrant / ...
  embedder: openai("text-embedding-3-small"),
  options: {
    lastMessages: 20,
    semanticRecall: { topK: 5, messageRange: 4 },
    workingMemory: { enabled: true, template: "..." },
  },
})

// Threads = conversation containers; messages stored verbatim.
// Working memory = an evolving JSON document the agent maintains.
// Semantic recall = vector lookup over message history.

// Memory is per-agent, not per-flow. Workflow runs and agent threads
// are different persistence concepts in Mastra.

Human-in-the-loop

suspend() inside a step, resume() from outside.

Mastra's HITL primitive is suspend() inside a step body. The runtime persists state and exits. An external caller resumes the run with resumeData; the step is re-executed and reads resumeData on the second pass.

suspend/resume
// Mastra suspend/resume — built-in HITL.

const approve = createStep({
  id: "approve",
  inputSchema:  z.object({ amount: z.number() }),
  outputSchema: z.object({ approved: z.boolean() }),
  resumeSchema: z.object({ decision: z.enum(["yes", "no"]) }),
  execute: async ({ inputData, suspend, resumeData }) => {
    if (!resumeData) {
      await suspend({ ask: "approve charge?", amount: inputData.amount })
      return { approved: false }              // unreached on suspend
    }
    return { approved: resumeData.decision === "yes" }
  },
})

// Resume from outside:
await mastra.getWorkflow("checkout").createRun({ runId }).resume({
  step: approve,
  resumeData: { decision: "yes" },
})

// State is persisted to the storage layer between suspend and resume.

Gap analysis

Where Mastra's model bottoms out — nine axes.

Mastra's structural composition is closer to xFlow than LangGraph is. Both have parallel, branching, looping, iteration. Both separate definition from execution. The deltas below are about the *kind* of definition (code vs IR), the action layer (in-process vs registry-resolved), and the execution model (single runtime vs multi-substrate).

Axis

Graph-is-code vs graph-is-data

Mastra: A Mastra workflow is built imperatively in TypeScript via .then / .parallel / .branch / .dountil. Steps are async functions. The workflow lives in code; the runtime materializes it at process start.

Consequence: You can't inspect, sign, diff, version, content-address, or distribute a Mastra workflow as data. Sharing means sharing source. Cross-language consumption is impossible. Static analysis or optimization passes have to be re-implemented in code each time.

What xFlow adds

Graph-is-code vs graph-is-data

Flow definitions are canonical JSON (RFC 8785) and SCXML. Each registry entry has a sha256 content hash and an ed25519 signature. The workflow can be inspected, diffed, validated, and authorized without executing any of its bodies.
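As an illustration of what content-addressing buys, here is a minimal sketch. It uses recursive key sorting as a stand-in for full RFC 8785 canonicalization (which additionally pins number and string serialization), and `contentHash` is an illustrative helper, not an xFlow API:

```typescript
import { createHash } from "node:crypto"

// Stand-in canonicalizer: recursively sort object keys so the same
// definition always serializes to the same bytes.
function canonicalize(v: unknown): string {
  if (Array.isArray(v)) return `[${v.map(canonicalize).join(",")}]`
  if (v !== null && typeof v === "object") {
    const entries = Object.keys(v as object).sort()
      .map((k) => `${JSON.stringify(k)}:${canonicalize((v as any)[k])}`)
    return `{${entries.join(",")}}`
  }
  return JSON.stringify(v)
}

// Content hash: key order in the authoring source never changes the id.
function contentHash(def: unknown): string {
  return createHash("sha256").update(canonicalize(def)).digest("hex")
}

const a = contentHash({ id: "issue.triage", version: "1.0.0" })
const b = contentHash({ version: "1.0.0", id: "issue.triage" })
// a === b — two authors producing the same definition get the same id
```

This is what lets a registry dedupe, diff, and authorize definitions without executing them.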

Axis

Statechart richness

Mastra: Mastra has good structural composition: sequence, parallel, branch, dountil, dowhile, foreach, sleep, waitForEvent. The model is a directed step graph with control-flow combinators — closer to a workflow DSL than a statechart.

Consequence: There's no parallel-region with onDone-join (you compose .parallel in a chain instead). No hierarchical macro-states. No history states for resume-where-you-were-after-a-transition. Patterns like 'fork branches, then re-enter where you were once the macro-state finishes' require hand-rolled state.

What xFlow adds

Statechart richness

XState as the canonical language. Statecharts give you parallel regions with formal join semantics, hierarchical state nesting, history (shallow + deep), guards, actions, and SCXML serialization for W3C interop. Mastra's combinators map cleanly into the XState shape — Mastra workflows can be compiled into xFlow definitions.

xFlow — parallel region with onDone
states: {
  review: {
    type: "parallel",
    states: {
      humanApproval: { /* ... */ },
      fraudCheck:    { /* ... */ },
    },
    onDone: "charge",                    // joins on all-done
  },
}

Axis

Tool / action runtime

Mastra: Tools are LangChain-style or AI-SDK `tool()` functions registered on an Agent. Agents call tools in-process. Workflows call tools indirectly via Agent steps. There is no notion of where a tool runs, who's authorized, or how to resolve a tool across repositories.

Consequence: Running a tool on a different machine, in a sandbox, in a browser tab, or remotely-but-signed requires a custom step that proxies. Tenant-supplied tools require a separate sandboxing product. There's no portable id@version contract for sharing tools.

What xFlow adds

Tool / action runtime

Actions are first-class registry entries (`action:<id>@<version>`). They resolve via content-addressed registry, can declare placement (`server` / `browser` / `gpu`), claim mode (authority / lease / deterministic-election / optimistic-idempotent), retries, idempotency keys, and side-effect kind. The runtime resolves actions at run-time.

xFlow — action with placement and claim
defineAction({
  id: "search",
  version: "1.0.0",
  placement: { kind: "server", capabilities: ["network.public"] },
  claim: { mode: "lease", ttlMs: 30_000 },
  sideEffects: { kind: "pure", idempotencyRequired: false },
  retry: { maxAttempts: 3, backoffMs: 2_000 },
})

Axis

Persistence model

Mastra: Storage is pluggable (libsql / pg / upstash / d1). Workflow runs persist run state to the storage layer at suspend points; agent threads persist messages, working memory, and semantic recall. Two distinct persistence concepts that don't share an event log.

Consequence: There's no single signed event log per run. Audit reconstruction means joining workflow run state with agent thread tables. No replay onto a different substrate from individual events. No tamper-evident chain of custody.

What xFlow adds

Persistence model

An xSync actor log per run: each lifecycle transition is a signed (ed25519) event with causal predecessors. Current state is a deterministic reducer over the log. Storage is pluggable down to plain S3 (WORM) or IndexedDB. The log itself is the audit artifact.
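To make 'current state is a deterministic reducer over the log' concrete, a minimal sketch — the event shape here is illustrative, not the actual xSync schema:

```typescript
// Hypothetical event shape — not the actual xSync schema.
type RunEvent =
  | { kind: "transition"; to: string; seq: number }
  | { kind: "output"; step: string; value: unknown; seq: number }

interface RunState { current: string; outputs: Record<string, unknown> }

// Deterministic reducer: same log in, same state out — so any peer
// (or auditor) can rebuild run state from the signed events alone.
function reduce(log: RunEvent[]): RunState {
  return [...log]
    .sort((a, b) => a.seq - b.seq)            // causal order
    .reduce<RunState>(
      (state, ev) =>
        ev.kind === "transition"
          ? { ...state, current: ev.to }
          : { ...state, outputs: { ...state.outputs, [ev.step]: ev.value } },
      { current: "initial", outputs: {} },
    )
}

const state = reduce([
  { kind: "transition", to: "classify", seq: 1 },
  { kind: "output", step: "classify", value: { label: "bug" }, seq: 2 },
  { kind: "transition", to: "handle", seq: 3 },
])
// state.current === "handle"
```

Because the reducer is pure, replaying the same events on any substrate yields the same state.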

Axis

Single executor vs multi-writer

Mastra: One process owns a runId at a time. Concurrent operations on the same run serialize through storage. The execution model assumes a single host driving the workflow.

Consequence: Browsers, edge workers, and long-running backend jobs can't be peer participants in the same run. Human-approval steps are funneled through the suspend/resume API rather than participating as peer writers. Multi-region failover means hot-standby plumbing.

What xFlow adds

Single executor vs multi-writer

Multiple peers can be writers on the same run. Claim modes describe who's allowed to advance which step: authority (single designated actor), lease (TTL-bounded ownership), deterministic-election (peers derive the same winner), optimistic-idempotent (first valid output wins).
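Deterministic election is the subtlest of the four modes. One way peers could derive the same winner with no coordination round — illustrative only, not the actual xFlow algorithm:

```typescript
import { createHash } from "node:crypto"

// Each peer scores (runId, stepId, peerId) with a hash; lowest digest wins.
// Every peer evaluates identical inputs, so all derive the same winner
// without exchanging any messages.
function electWinner(runId: string, stepId: string, peerIds: string[]): string {
  const score = (peer: string) =>
    createHash("sha256").update(`${runId}:${stepId}:${peer}`).digest("hex")
  return [...peerIds].sort((a, b) => score(a).localeCompare(score(b)))[0]
}

const peers = ["browser-tab", "edge-worker", "backend-job"]
const w1 = electWinner("run-42", "charge", peers)
const w2 = electWinner("run-42", "charge", [...peers].reverse())
// w1 === w2 — enumeration order doesn't affect the outcome
```

The hash keys the election to the specific run and step, so the same peer set can elect different winners for different steps.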

Axis

Browser participation

Mastra: Mastra is server-first. The framework runs in Node.js (and Cloudflare Workers / Vercel Functions). Browser-side code is a client of an HTTP API, not a peer participant in the workflow.

Consequence: Steps that should run client-side — local file IO, OS keychain access, offline-first capture, human approval in a tab — need a separate orchestration mechanism. There's no single audit trail across server and client work.

What xFlow adds

Browser participation

A flow can have steps with `placement: { kind: 'browser' }` that execute in a tab as typed peers, append events to the same substrate, and join the same run. xSync transports (BroadcastChannel, WebSocket, SSE) make this real today.
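The BroadcastChannel leg can be sketched directly — two in-page peers sharing a run's events, with no server hop. BroadcastChannel is a web API that Node 18+ also exposes globally (scoped to one process); the event shape below is illustrative:

```typescript
// Two peers bound to the same run's channel.
const a = new BroadcastChannel("run-42")
const b = new BroadcastChannel("run-42")

// Peer B waits for the next event on the channel.
const received = new Promise<{ kind: string; to: string }>((resolve) => {
  b.onmessage = (ev) => resolve((ev as MessageEvent).data)
})

// Peer A appends a transition event; peer B observes it directly.
a.postMessage({ kind: "transition", to: "charge" })

const ev = await received
a.close()
b.close()
// ev.to === "charge"
```

WebSocket and SSE transports follow the same publish/observe shape across machines instead of across tabs.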

Axis

Signing and trust

Mastra: Workflows are TS code. Tools are TS functions. Agents are TS objects. There's no signing of definitions or events. Trust = 'whoever can deploy the code.'

Consequence: Loading a community-contributed workflow or tool means executing untrusted code. Multi-tenant platforms need a sandboxed compute layer. There's no tamper-evident history of what a run actually did.

What xFlow adds

Signing and trust

Flow defs, action defs, and event log entries are ed25519-signed. A trust list scopes which signers can run inside a substrate. Runtime can require signature verification before resolving an `id@version`.
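Node's built-in crypto covers the ed25519 primitive. A sketch of signing and verifying a definition's bytes — key management and xFlow's trust-list format are out of scope here:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto"

// In practice the private key stays with the signer; the public key is
// what a trust list would enumerate.
const { publicKey, privateKey } = generateKeyPairSync("ed25519")

const defBytes = Buffer.from(
  JSON.stringify({ id: "issue.triage", version: "1.0.0" }),
)

// ed25519 takes no separate digest algorithm, hence the null.
const signature = sign(null, defBytes, privateKey)

const ok = verify(null, defBytes, publicKey, signature)
const tampered = verify(
  null,
  Buffer.from(JSON.stringify({ id: "issue.triage", version: "9.9.9" })),
  publicKey,
  signature,
)
// ok === true, tampered === false
```

The runtime-side check is the `verify` call: refuse to resolve an `id@version` whose signature doesn't verify against a trusted public key.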

Axis

Deployment model

Mastra: Mastra Cloud (managed), or self-host as a Node.js / Cloudflare Workers / Vercel Function deployment. The runtime + workflows + agents bundle into a deployment unit. Telemetry, evals, and tracing are first-class.

Consequence: To get production-grade durability + queueing + UI, you adopt Mastra (self-host or cloud). The runtime is the deployable unit. Workflows themselves are bundled into that deployment.

What xFlow adds

Deployment model

A flow definition is JSON in a bucket. A Next route, a Fly Docker worker, a Bitlaunch worker, and a browser tab all resolve the same `id@version` and execute the same definition. The deployment unit is the substrate + worker pool, not the workflow.

Axis

Optimization and provability

Mastra: Workflow steps and agent tools are opaque host-language functions. The framework provides telemetry, evals, and tracing — excellent observability — but no whole-flow optimization, no portable IR, no path to formal verification or ZK execution.

Consequence: There's no equivalent of DSPy for Mastra workflows. Optimization stops at human-tuned prompts and hand-edited graphs. There's no compiler that takes a workflow + a metric and emits an optimized variant. There's no path to verifiable execution because the input language is 'arbitrary TypeScript.'

What xFlow adds

Optimization and provability

A flow def is a typed, statechart-shaped IR — guards, actions, and invoked actors are symbolic references, not opaque functions. That makes whole-flow optimization tractable: prompt-rewriting passes (DSPy-style), branch reordering, dead-state elimination, action substitution, parallel-region inference, target-substrate compilation. See `/docs/optimization` for the full thesis.

Coexistence

Mastra ↔ xFlow federation is the realistic shape.

Mastra's batteries-included DX is a real win. xFlow's IR-first model is a real win. The teams that get the most from xFlow probably keep Mastra for agent + tool + memory + evals ergonomics and use xFlow for portable workflow definitions, multi-substrate execution, and the optimization / provability story.

Mastra step calls into xFlow

A Mastra step execute() can resolve and run an xFlow flow by id@version — getting the registry, multi-substrate, and signed-log story without rewriting the surrounding agent ergonomics.
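The shape of that bridge, sketched. `xflowClient.runFlow` is a hypothetical API, stubbed here so the example is self-contained; a real client would resolve the content-addressed definition, verify its signature, and execute on a substrate:

```typescript
type FlowRef = `${string}@${string}`

// Hypothetical xFlow client — stubbed for illustration.
const xflowClient = {
  async runFlow(ref: FlowRef, input: unknown): Promise<{ ticketId: string }> {
    // real impl: resolve id@version from the registry, verify the
    // signature, run on a substrate, append signed events to the log
    return { ticketId: `ticket-${ref}` }
  },
}

// A Mastra-shaped step body (same { inputData } contract as createStep's
// execute) that delegates the actual work to a flow definition:
const execute = async ({ inputData }: { inputData: { issue: string } }) =>
  xflowClient.runFlow("issue.triage@1.0.0", inputData)

const out = await execute({ inputData: { issue: "app crashes on load" } })
// out.ticketId comes from the flow run, not from local step code
```

The Mastra agent keeps its memory, evals, and tracing; only the step body's work moves behind the registry boundary.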

xFlow action wraps a Mastra Agent

Conversely, an xFlow action body can call a Mastra Agent — taking advantage of Mastra's memory, evals, and voice while the surrounding flow definition stays as portable IR.

Honest accounting

When Mastra alone is sufficient.

If all five conditions hold, xFlow is overkill for the workflow layer specifically. Keep Mastra. The value xFlow adds is real — registry, multi-substrate, signing, optimization — but it's not free, and Mastra's TS-first DX is a real win you don't want to give up if the gaps don't bite.

TS-first product team

You're building agents in TypeScript and want strong DX with built-in evals, tracing, memory, voice, RAG. Mastra's batteries-included story is exactly the right shape.

Server-side execution

Your agents and workflows run on Node.js, Cloudflare Workers, or Vercel Functions. Nothing needs to run in a browser tab as a workflow peer.

Sequential / branchy workflows

Your workflows fit naturally into .then / .parallel / .branch / .dountil / .foreach. You don't need parallel-region join semantics, hierarchical macro-states, or history states.

Trusted authors

Workflows and tools are written and deployed by your team. You don't need signed definitions, tenant-supplied flows, or cross-org distribution.

Mastra Cloud or single-runtime self-host

You're happy adopting Mastra (managed or self-hosted) as the runtime. You don't need the same workflow definition to run on multiple substrates.

Why this tranche

When xFlow is justified.

The decision isn't 'Mastra or xFlow' in the abstract — and often isn't 'or' at all (see federation above). It's whether the gaps in the previous section describe your situation.

Multi-product family

xFlow + Switchboard.WTF + xCoder + xAgent.WTF all need to read the same workflow definitions. A registry is mandatory.

Cross-substrate flows

The same definition needs to run in a Next route, a Docker worker, a CLI, and a browser tab. Mastra's single-runtime execution model can't carry this.

Tenant-supplied flows

Customers upload workflow definitions. They must be signed, validated, and policy-checked before running — and the action surface must be sandboxable.

Statechart semantics actually used

Parallel regions with onDone, hierarchical macro-states with sub-flows, history states for resume. Mastra's composition primitives don't reach this; xFlow does.

Optimization and provability

You want flows that can be optimized by a compiler, compiled to alternative substrate targets, or executed with verifiable / zero-knowledge guarantees. That requires a typed IR — not arbitrary TS. See `/docs/optimization`.

Signed audit trail

Compliance or trust requires a tamper-evident log of every transition. xSync's signed event log gives you this for free; Mastra's storage layer doesn't.

Recap

One sentence.

Mastra is a batteries-included TS-native agent + workflow framework with excellent DX, composition primitives, memory, evals, and a single-runtime deployment story.

xFlow trades the single-runtime model for a graph-is-data model: statechart-shaped flow definitions resolved by id@version from a content-addressed registry, signed actions with placement and claim semantics, multi-substrate runtime, and a typed IR that makes optimization, target compilation, and verifiable execution tractable.

For projects in Mastra's center — TS-first, server-side, sequential/branchy, single-runtime — Mastra is the right call. For projects in the xFlow tranche — cross-substrate, multi-writer, signed-and-portable, optimization-curious — the additions pay for themselves. Federation is the most realistic shape for many teams.