xFlow documentation
xFlow is a distributed TypeScript workflow runtime built on xSync. A workflow run is one actor. Every lifecycle transition is a signed event in that actor log. The reducer gives you the current run state; the runtime decides what to do next.
Quick start
Install the workflow layer and point it at xSync.
All xFlow packages publish under the @decoperations scope. Add the GitHub Packages registry entry to your .npmrc and authenticate with a token that can read packages.
Install packages
Core workflow primitives, runtime, local provider, and xSync client.
pnpm add \
@decoperations/xflow-core \
@decoperations/xflow-runtime \
@decoperations/xflow-provider-local \
@decoperations/xsync-client
Hello, flow
Start with one pure step and watch the reducer update as the run advances.
import { defineWorkflow, defineStep, link, flowRunView } from "@decoperations/xflow-core"
import { createXFlowRuntime } from "@decoperations/xflow-runtime"
import { detectLocalNode } from "@decoperations/xflow-provider-local"
import { createXSync } from "@decoperations/xsync-client"
const workflow = defineWorkflow({
id: "demo.uppercase",
version: "1.0.0",
steps: {
upper: defineStep({
id: "upper",
type: "text.uppercase",
sideEffects: { kind: "pure", idempotencyRequired: false },
}),
},
links: [],
})
const client = await createXSync({ views: { flowRun: flowRunView } })
const runtime = createXFlowRuntime({ node: detectLocalNode({ id: "local" }) })
runtime.register("text.uppercase", async (_ctx, input: string) => input.toUpperCase())
const run = await runtime.start({ client, workflow, input: "hello" })
for await (const snap of run.observe()) {
console.log(snap.status, snap.steps)
}
Core model
Workflow definitions are declarative. Execution is event-driven.
xFlow owns workflow grammar, scheduling, claims, and placement. xSync owns signing, causal ordering, replication, views, and persistence.
Workflow
A graph of typed steps connected by links. Serializable by design.
Run
One xSync actor per execution. Lifecycle events live on that actor log.
Host
Observes FlowRun state, schedules next work, resolves claims, dispatches executors.
Executor
A registered function keyed by step type.
Claim
A policy for who may execute and what duplicate work means.
Placement
A contract describing where a step may run and what capabilities it needs.
Workflow primitives
Keep the public API compact and explicit.
defineWorkflow()
Collects step definitions and links into a single typed workflow declaration.
type WorkflowDefinition = {
id: string
version: string
initial?: StepId | StepId[]
steps: Record<StepId, StepDefinition>
links: LinkDefinition[]
policies?: WorkflowPolicies
}
defineStep()
Step definitions describe effect kind, placement, claim mode, retries, and timeout behavior.
defineStep({
id: "publish",
type: "social.post",
requires: ["secret.twitter", "network.public"],
placement: { forbidden: ["browser-tab"] },
claim: { mode: "authority", authorityActorId: "server" },
sideEffects: { kind: "irreversible", idempotencyRequired: true },
retry: { maxAttempts: 3, backoffMs: 2_000 },
timeoutMs: 30_000,
})
link()
Convenience builder for success, failure, timer, and external signal transitions.
link("generate", "review", { when: { type: "step.succeeded" } })
link("review", "publish", { when: { type: "step.succeeded" } })
link("payment", "remind", { when: { type: "timer", afterMs: 86_400_000 } })
link("draft", "publish", { when: { type: "external-signal", signal: "approved" } })Runtime
The runtime is where definitions meet an actual node.
Create a runtime for a specific environment, register executors by step type, then start or attach to runs. The host observes state and drives progress by emitting signed events.
createXFlowRuntime()
const runtime = createXFlowRuntime({
node: detectLocalNode({ id: "browser-tab-1" }),
uploadArtifact: async (data) => putArtifact(data),
log: (message, details) => console.debug(message, details),
})
runtime.register("llm.generate", async (ctx, input) => {
await ctx.progress({ phase: "drafting" })
return generateDraft(input)
})
Executor context
runId
The xSync actor id for this workflow execution.
stepId
The current step being executed.
attempt
Retry attempt counter for the step.
claimId
Stable attempt identifier used across claim events.
runtimeId
The node id processing the step.
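The context fields above can be combined in an executor, for example to derive an idempotency key for irreversible work. A minimal sketch — the `ExecutorContext` shape mirrors the documented fields, but the key scheme and the `sendInvoice` executor are illustrative assumptions, not xFlow's confirmed API:

```typescript
// Sketch: deriving an idempotency key from the executor context fields.
interface ExecutorContext {
  runId: string     // xSync actor id for this workflow execution
  stepId: string    // current step being executed
  attempt: number   // retry attempt counter for the step
  claimId: string   // stable attempt identifier across claim events
  runtimeId: string // node id processing the step
}

// claimId stays stable across retries of the same claimed attempt, which
// makes it a natural idempotency key for irreversible side effects.
const idempotencyKey = (ctx: ExecutorContext): string =>
  `${ctx.runId}:${ctx.stepId}:${ctx.claimId}`

// Hypothetical executor: a real payment client would receive `key`
// so duplicate dispatches of the same attempt dedupe server-side.
async function sendInvoice(ctx: ExecutorContext, input: { amount: number }) {
  const key = idempotencyKey(ctx)
  return { charged: input.amount, key }
}
```

Note that the key deliberately excludes `attempt`: retries of the same claim should hit the same dedupe key, while a re-resolved claim gets a fresh one.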
Claims and placement
Blast radius and execution environment are first-class workflow concerns.
authority
Single designated actor claims the step. Use this for billing, email send, content publish, and other irreversible effects.
deterministic-election
Peers independently derive the same winner for duplicate-tolerant compute.
optimistic-idempotent
First valid output wins. Useful for cacheable generation and content-addressed work.
lease
Ownership lasts for a TTL and can expire back into the queue if the worker dies.
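The deterministic-election idea — every peer derives the same winner with no coordination — can be sketched with rendezvous hashing. This is an illustration of the technique, not xFlow's actual election code; the function name and inputs are assumptions:

```typescript
import { createHash } from "node:crypto"

// Sketch: deterministic election via rendezvous (highest-random-weight)
// hashing. Every peer runs this same pure function over the same inputs
// and independently derives the same winner.
function electWinner(runId: string, stepId: string, peerIds: string[]): string {
  // Score each peer by hashing (run, step, peer); the highest score wins.
  const score = (peer: string): string =>
    createHash("sha256").update(`${runId}:${stepId}:${peer}`).digest("hex")
  // Sort first so the result is independent of input order.
  return [...peerIds].sort().reduce((best, peer) =>
    score(peer) > score(best) ? peer : best
  )
}

// Any peer evaluating this gets the same answer for the same run and step,
// so duplicate-tolerant compute needs no claim round-trip.
const winner = electWinner("run-1", "upper", ["peer-a", "peer-b", "peer-c"])
```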
Comparisons
xFlow sits in the durable-workflow landscape, but its center of gravity is different.
As of May 2026, Vercel documents Workflow as a managed platform on top of the open-source Workflow Development Kit. Temporal, Inngest, Trigger.dev, and Cloudflare Workflows are all credible modern comparators, but they primarily optimize for durable server-side execution. xFlow is trying to preserve durable workflow semantics while also treating replicated actor state, browser participation, and pluggable storage/transport as first-class.
Fast read
xFlow.WTF
Open source
Workflow graph on signed xSync actor logs
Best fit
Browser/server/worker workflows that need replication, local-first behavior, and auditable event history.
How it differs from xFlow
Each run is an actor; claims and placement are first-class; storage and transports are swappable.
Workflow SDK
Open source
Durable async functions via `use workflow` / `use step` directives
Best fit
Server-side long-running jobs and AI agents that want resumability without building queue orchestration manually.
How it differs from xFlow
Open-source, TypeScript-first, portable, and self-hostable. More function-centric than replicated-state-centric.
Vercel Workflow
Open source
Managed platform for Workflow SDK on Vercel
Best fit
Teams that want the Workflow SDK model with Vercel-managed execution, queues, persistence, and observability.
How it differs from xFlow
Vercel-hosted layer, currently documented as Beta, built on top of the open-source kit.
Temporal
Open source
Event-sourced durable execution engine with workflow history and activities
Best fit
High-scale backend orchestration, long-running business processes, and teams comfortable operating or buying into a dedicated workflow service.
How it differs from xFlow
Temporal is the heavyweight benchmark. It is extremely strong on backend durability, but it is not optimized for local-first replication or browser-native execution.
Inngest
Open source
Event-driven durable execution with SDK-defined functions and retriable steps
Best fit
Background jobs and workflows triggered by events, cron, or webhooks across common app stacks.
How it differs from xFlow
More event-platform-oriented than xFlow. Portable across hosting targets, but the run model remains service-coordinated instead of replicated actor state.
Trigger.dev
Open source
Reliable background jobs and workflows in plain async code
Best fit
TypeScript teams building long-running jobs, AI tasks, and operational pipelines with strong DX and observability.
How it differs from xFlow
Very strong developer experience for server-side jobs. Compared with xFlow, it is job-centric rather than multi-peer replicated workflow state.
Cloudflare Workflows
Open source
Durable multi-step execution on top of Cloudflare Workers
Best fit
Cloudflare-native applications that want managed retries, waits, and long-running orchestration at the edge.
How it differs from xFlow
A newer durable-execution entrant. Strong fit if your world already lives on Workers; less portable than xFlow's xSync-based substrate.
workflow-builder-starter
Open source
Visual workflow product starter built with Next.js and Vercel's workflow stack
Best fit
Building customer-facing workflow builders or internal automation UIs with drag-and-drop editing.
How it differs from xFlow
Template/application, not a competing runtime primitive. Useful reference for workflow-product UX.
Architecture survey
A workflow system is ten architectural choices, not just durability.
Face-value analysis of how the workflow industry is actually built. Durability is one axis of many — runtime, scheduler, dispatch, claims, and external IO matter just as much. Mechanism categories below are referenced in the system table further down.
The ten axes
Every credible workflow engine answers each of these questions, even if implicitly. Pick a system and you can place it on every row.
Programming model
How is a workflow authored?
Directives ("use workflow" / "use step" — WDK), graph builder (defineWorkflow + link — xFlow), TypeScript class with decorators (Temporal), BPMN XML (Camunda), ASL JSON (Step Functions), DAG YAML (Argo), Python decorators (Airflow / Prefect / Dagster).
Compiler / transform
What happens at build time?
SWC plugin that rewrites step boundaries (WDK), interceptor chain (Temporal SDK), CRD validation (Argo), AST analysis to discover workflows (DBOS), no-op for most engines.
Execution runtime
Where does workflow code actually run?
Sandboxed V8 isolate (WDK, Inngest), JVM actor (Akka), CLR grain (Orleans), Firecracker microVM (Trigger.dev), edge worker (Cloudflare), browser tab or Web Worker (xFlow).
Scheduler
What decides the next step?
Pure function over current state (xFlow nextReadySteps), centralized polling scheduler (Airflow), partition leader (Zeebe), single-activation actor (Durable Objects), distributed claim against a queue (Inngest).
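The "pure function over current state" option can be sketched concretely. The names (`RunState`, `nextReadySteps`) echo the docs, but the shapes below are illustrative assumptions, not the real xFlow API:

```typescript
// Sketch: the scheduler as a pure function over reduced run state.
type StepStatus = "pending" | "running" | "succeeded" | "failed"

interface RunState {
  steps: Record<string, StepStatus>
  links: { from: string; to: string }[]
  initial: string[] // steps with no inbound links that may start first
}

// A step is ready when it has not started and every inbound link's source
// has succeeded; steps with no inbound links are ready if they are initial.
function nextReadySteps(state: RunState): string[] {
  return Object.keys(state.steps).filter((id) => {
    if (state.steps[id] !== "pending") return false
    const inbound = state.links.filter((l) => l.to === id)
    if (inbound.length === 0) return state.initial.includes(id)
    return inbound.every((l) => state.steps[l.from] === "succeeded")
  })
}

const state: RunState = {
  steps: { generate: "succeeded", review: "pending", publish: "pending" },
  links: [
    { from: "generate", to: "review" },
    { from: "review", to: "publish" },
  ],
  initial: ["generate"],
}
console.log(nextReadySteps(state)) // → ["review"]
```

Because the function is pure, any peer holding the same reduced state computes the same frontier — there is no scheduler process to crash or fail over.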
Dispatch / queue
How does work reach a runtime?
RabbitMQ / SQS / NATS, Postgres LISTEN/NOTIFY (Hatchet), Vercel Queues (WDK), in-memory (DBOS), claim-on-log (xFlow), BroadcastChannel for browser peers.
Persistence / event log
Where does state live on disk?
Cassandra / Postgres (Temporal), embedded RocksDB + Raft log (Zeebe, Restate), DynamoDB (AWS Step Functions), append-only event store (WDK Worlds), S3WORM + IndexedDB + FS + SQLite (xFlow via xSync stores).
State model
How is current state reconstructed from storage?
Deterministic replay (Temporal, WDK, Azure Durable Functions), step memoization without replay (Inngest, Mastra, Cloudflare Workflows), oplog continuation (Golem), process snapshot (Trigger.dev v3 via CRIU), materialized view over signed log (xFlow).
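The "materialized view over signed log" model reduces to a deterministic fold over lifecycle events. A minimal sketch — the event and state shapes are assumptions for illustration, not the real `flowRunView` types:

```typescript
// Sketch: current run state as a deterministic fold over lifecycle events.
type LifecycleEvent =
  | { type: "step.started"; stepId: string }
  | { type: "step.succeeded"; stepId: string; output: unknown }
  | { type: "step.failed"; stepId: string; error: string }

interface RunView {
  steps: Record<string, { status: string; output?: unknown }>
}

// The reducer must be pure: same events in causal order → same view,
// no matter which peer runs it.
function reduce(view: RunView, event: LifecycleEvent): RunView {
  const steps = { ...view.steps }
  switch (event.type) {
    case "step.started":
      steps[event.stepId] = { status: "running" }
      break
    case "step.succeeded":
      steps[event.stepId] = { status: "succeeded", output: event.output }
      break
    case "step.failed":
      steps[event.stepId] = { status: "failed" }
      break
  }
  return { steps }
}

const events: LifecycleEvent[] = [
  { type: "step.started", stepId: "upper" },
  { type: "step.succeeded", stepId: "upper", output: "HELLO" },
]
const view = events.reduce(reduce, { steps: {} })
console.log(view.steps["upper"]) // → { status: "succeeded", output: "HELLO" }
```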
Concurrency / claims
Who is allowed to advance a step right now?
Sticky workers (Temporal), single-activation actor (Durable Objects, Orleans), queue-as-mutex (Inngest), explicit claim modes — authority, deterministic-election, optimistic-idempotent, lease (xFlow).
Time / timers
How are sleeps and deadlines durable?
External timer wheel (Temporal, Restate), sleep events on the log + scheduler tick (xFlow), fluid-compute hibernation (WDK), DAG schedule with cron (Airflow).
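The "sleep events on the log + scheduler tick" approach can be sketched as timer state that lives in data, not in an in-memory `setTimeout`. The names below are assumptions for illustration:

```typescript
// Sketch: a durable timer is just a predicate over logged timestamps.
interface TimerLink {
  from: string
  to: string
  afterMs: number // matches the timer link shape in the link() examples
}

// Any peer that ticks can evaluate this against the log and emit the
// transition event, so the timer survives process death and restarts.
function timerDue(link: TimerLink, succeededAtMs: number, nowMs: number): boolean {
  return nowMs >= succeededAtMs + link.afterMs
}

// The 24h payment reminder from the link() examples:
const remind: TimerLink = { from: "payment", to: "remind", afterMs: 86_400_000 }
timerDue(remind, 0, 3_600_000)   // one hour in → false
timerDue(remind, 0, 86_400_000)  // 24 hours in → true
```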
External IO
How does the outside world talk to a running run?
Signals / queries / updates (Temporal), webhooks + hooks (WDK), external-signal links (xFlow), pause-for-event (Inngest), wait-for-condition (Zeebe).
Cross-cutting concerns
The five concerns that cut across every axis above and end up shaping production-readiness more than any single primitive.
Identity / signing
ed25519-signed events with actor authorship (xFlow). Most engines treat the orchestrator service as the trust root and do not sign individual events.
Versioning / migration
Patch APIs (Temporal), Skew Protection pinning runs to old deploys (WDK), event-shape evolution on the reducer (xFlow). Most DAG engines redeploy and break in-flight runs.
Observability
Built-in dashboard with replay viewer (Temporal, WDK on Vercel, Airflow). xSync views as the primary observability surface: anything subscribed sees the same state (xFlow).
Failure handling
Retry policies, sagas, compensation (Temporal). Step retries with backoff (most). Claim re-resolution + retry (xFlow).
Multi-tenancy
Namespaces (Temporal), projects (Inngest, Trigger.dev), per-bucket roots (xFlow).
Durability mechanism taxonomy
Eight mechanisms cover essentially every production workflow engine. Categories A–H are referenced in the system table below.
A. Event-sourced deterministic replay
Invariant. Step inputs/outputs journaled before any new external call.
Determinism. Strong — sandboxed body, no wall clock or random.
Temporal · Cadence · Azure Durable Functions · Vercel WDK · Restate · DBOS
B. Step memoization (no replay)
Invariant. Step output is committed to a store before its return value is exposed.
Determinism. Weak — only step identity must be stable.
Inngest · Mastra · Cloudflare Workflows · Hatchet (durable mode)
C. Continuation snapshot
Invariant. Process or VM state captured at suspend points and restored on resume.
Determinism. None at source level; idempotency keys gate side effects.
Trigger.dev v3 (CRIU) · Golem (WASM oplog)
D. Persistent actor with event journal
Invariant. Single addressable actor; events appended to journal, periodic snapshots compact replay.
Determinism. Reducer must be pure.
Akka Persistence · Microsoft Orleans · Cloudflare Durable Objects
E. Managed-DB state machine
Invariant. State transition is a transactional row write in a managed datastore.
Determinism. None at user-code level; steps are external services.
AWS Step Functions · Azure Logic Apps · Conductor · Cloudflare Workflows (atop DO)
F. DAG metadata DB
Invariant. Each task instance is a row; scheduler polls the DB to advance the DAG.
Determinism. None.
Apache Airflow · Prefect · Dagster · Argo Workflows · Windmill · n8n
G. Stream processor on RocksDB + Raft
Invariant. Commands appended to a Raft-replicated partitioned log; deterministic processor updates embedded RocksDB.
Determinism. Stream processor must be deterministic.
Camunda 8 / Zeebe · Restate (hybrid with A)
H. Materialized view over signed multi-writer log
Invariant. Multiple peers append signed events to a per-actor causal DAG; current state is a deterministic reducer.
Determinism. Reducer must be deterministic; events carry causal predecessors.
xFlow.WTF
System-to-mechanism mapping
Sixteen systems classified by mechanism, replay model, and structural strength versus structural limit. The Mech column references categories A–H above.
| System | Mech | Replay model | Storage | Strength | Structural limit |
|---|---|---|---|---|---|
| xFlow.WTF | H | Reducer over causal DAG | S3WORM, IndexedDB, FS, SQLite (xSync stores) | Multi-writer + offline + signed audit | Strong exactly-once is an explicit v1 non-goal |
| Temporal | A | Deterministic replay | Cassandra, PostgreSQL, MySQL | Benchmark for long-running backend workflows | Determinism trap on code change; history-size cap |
| Restate | A + G | Journal + replay | Embedded RocksDB on each node | Lightweight self-host; HTTP service handlers | Newer; smaller ecosystem |
| Vercel WDK | A | Deterministic replay | Pluggable Worlds (Vercel, Postgres, Local) | Best-in-class Next.js DX; SWC compile-time transform | 240s replay cap, 25k events/run, iad1-only Vercel World |
| Inngest | B | Re-invoke + memoize on stepId hash | Inngest-managed | Runs on stock serverless | Function code re-runs from top each step |
| Trigger.dev v3 | C | Snapshot + resume (CRIU) | Trigger.dev-managed checkpoints | Native long-running JS; no rewrite required | Container-bound; checkpoint format opaque |
| Hatchet | F (B in durable mode) | Step memoization on durable tasks | PostgreSQL | Self-host on Postgres alone | Throughput tied to one Postgres primary |
| AWS Step Functions | E | No replay — engine drives transitions | AWS-managed (opaque) | Zero ops; deep AWS integration | 25k history events; 1-year max; per-transition pricing |
| Azure Durable Functions | A | Deterministic replay | Azure Storage / MSSQL / Netherite | Strongest replay engine outside Temporal | Storage-provider tradeoffs are real |
| Cloudflare Workflows | B atop D | Re-invoke + memoize | Durable Object SQLite | Edge-native; weeks-long sleeps free | Newer; smaller ecosystem |
| Cloudflare Durable Objects | D | No journal exposed | SQLite per object | Strong consistency at the edge | Single-writer per object is the model |
| Camunda 8 / Zeebe | G | Stream processor replays log into RocksDB | Embedded RocksDB; replicated log | Highest-throughput BPMN engine | BPMN-only authoring model |
| Akka Persistence | D | Deterministic replay into reducer | Cassandra / JDBC / R2DBC / DynamoDB plugins | Reference event-sourced actor library | JVM-only; framework-heavy |
| Microsoft Orleans | D | Snapshot + resume (event-sourced add-on available) | Azure Tables / Cosmos / ADO.NET / DynamoDB / Redis | Battle-tested at Halo / Xbox scale | .NET-only |
| Apache Airflow | F | None — DAG re-runs from failed task | PostgreSQL or MySQL metadata DB | Decades-deep batch ecosystem | Built for hourly / daily, not sub-second |
| Golem Cloud | C | Replay oplog through WASM component | Golem-managed oplog | Source-language agnostic via WASM Component Model | Newer; ecosystem still forming |
Tradeoff matrix
How each mechanism handles the workloads workflow teams actually care about. ✅ structural strength · ⚠️ partial or qualified · ❌ structurally limited.
| Mech | Long runs | High fan-out | Multi-writer | Local-first / browser | Signed audit | Multi-region HA | In-flight versioning | Operability |
|---|---|---|---|---|---|---|---|---|
| A | ✅ | ⚠️ | ❌ | ❌ | ⚠️ | ✅ | ⚠️ | ⚠️ |
| B | ✅ | ✅ | ❌ | ❌ | ⚠️ | ⚠️ | ✅ | ✅ |
| C | ✅ | ⚠️ | ❌ | ❌ | ❌ | ⚠️ | ❌ | ⚠️ |
| D | ✅ | ⚠️ | ❌ | ⚠️ | ⚠️ | ✅ | ⚠️ | ⚠️ |
| E | ⚠️ | ⚠️ | ❌ | ❌ | ✅ | ⚠️ | ⚠️ | ✅ |
| F | ⚠️ | ⚠️ | ❌ | ❌ | ⚠️ | ⚠️ | ⚠️ | ✅ |
| G | ✅ | ✅ | ❌ | ❌ | ⚠️ | ✅ | ⚠️ | ❌ |
| H | ✅ | ⚠️ | ✅ | ✅ | ✅ | ⚠️ | ✅ | ✅ |
Where xFlow sits
xFlow is the only entry in mechanism category H. A workflow run is one xSync actor whose log is a signed, content-addressed, multi-writer DAG, and flowRunView is the reducer.
What this buys. Browser, edge, server, and worker peers can all be writers on the same run. The history is a signed audit log out of the box. The store is pluggable down to plain S3WORM rather than a managed DB or Raft cluster. The closest cousins are Akka Persistence (D, single-writer JVM-only) and the Vercel Workflow Development Kit (A, server-only and unsigned).
What it does not yet match. Temporal-class deterministic-replay tooling for long-running server workflows — versioning APIs, signal / query / update primitives, multi-cluster failover. Zeebe-class throughput per partition. The strong exactly-once side-effect guarantees A and G systems give. The SPEC explicitly lists strong exactly-once semantics as a v1 non-goal. Claim policies — authority, deterministic-election, optimistic-idempotent, lease — cover the realistic spectrum, but operators choosing xFlow today are buying multi-writer + offline + signed audit, not a Temporal replacement at server scale.
What we are doubling down on. Browser participation and offline durability, signed audit history as a built-in compliance artifact, and storage portability all the way to plain S3 buckets. These are the axes where the rest of the field is structurally weak, and they are the axes where xFlow does not have to compete with Temporal on its home turf.
Packages
Phase 1 is a focused package set, not a giant platform blob.
@decoperations/xflow-core
Definitions, reducer, view model, event types.
@decoperations/xflow-runtime
Scheduler, host, lifecycle dispatch, claim handling.
@decoperations/xflow-provider-local
In-process runtime adapter for browser or Node.
@decoperations/xflow-react
Hooks such as useFlowRun() and useStep().
@decoperations/xflow-next
Helpers for Next App Router route handlers.
@decoperations/xflow-xstate
Bridge from XState machine config into workflow shape.
Storage layout
Run metadata and event history stay separate on purpose.
Bucket layout
Run event logs live under the xSync hierarchy because the run actor is still an xSync actor.
s3://<bucket>/<root>/
manifest.json
xsync/
logs/<actorId>/<writerId>/<seq>-<eventId>.json
heads/<actorId>/<writerId>.json
snapshots/<actorId>/<viewId>.json
index/events/<eventId>.json
xflow/
workflows/<workflowId>@<version>/definition.json
runs/<runId>/manifest.json
artifacts/<sha256>
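The layout above maps directly onto key-building helpers. A sketch — the paths follow the tree shown here, but the helper functions and the sample `root` value are illustrative assumptions:

```typescript
// Sketch: building object keys for the bucket layout above.
// `root` is whatever prefix the deployment chose; "prod" is a placeholder.
const root = "prod"

// Run event logs live under the xSync hierarchy, keyed by actor and writer.
const xsyncLogKey = (actorId: string, writerId: string, seq: number, eventId: string) =>
  `${root}/xsync/logs/${actorId}/${writerId}/${seq}-${eventId}.json`

// Run metadata lives under the separate xflow/ hierarchy.
const xflowRunManifestKey = (runId: string) =>
  `${root}/xflow/runs/${runId}/manifest.json`

const workflowDefinitionKey = (workflowId: string, version: string) =>
  `${root}/xflow/workflows/${workflowId}@${version}/definition.json`

console.log(xsyncLogKey("run-1", "browser-tab-1", 7, "abc123"))
// → "prod/xsync/logs/run-1/browser-tab-1/7-abc123.json"
```

Keeping run metadata under `xflow/` while the event log stays under `xsync/` means generic xSync tooling can replicate and verify the log without knowing anything about workflows.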