HotMesh

beta release

Run durable workflows on Postgres. No servers, no queues, just your database.

npm install @hotmeshio/hotmesh

Use HotMesh for

  • Durable pipelines — Orchestrate long-running, multi-step pipelines transactionally.
  • Crash-safe execution — Every step is a committed row. If the process dies, it picks up where it left off.
  • Distributed state machines — Build stateful applications where every component can fail and recover.
  • AI and training pipelines — Multi-step AI workloads where each stage is expensive and must not be repeated on failure. A crashed pipeline resumes from the last committed step, not from the beginning.

How it works in 30 seconds

  1. You write workflow functions. Plain TypeScript — branching, loops, error handling. HotMesh also supports a YAML syntax for declarative, functional workflows.
  2. HotMesh compiles them into a transactional execution plan. Each step becomes a committed database row backed by Postgres ACID guarantees and monotonic integers for ordering. If the process crashes mid-workflow, it resumes from the last committed step.
  3. Your Postgres database is the engine. It stores state, coordinates retries, and delivers messages. Every connected client participates in execution — there is no central server.

Quickstart

Install the package:

npm install @hotmeshio/hotmesh

The repo includes a docker-compose.yml that starts Postgres, NATS, and a development container:

docker compose up -d

Then follow the Quick Start guide for a progressive walkthrough — from a single trigger to conditional, parallel, and compositional workflows.

Two ways to write workflows

Both approaches reuse your activity functions:

// activities.ts (shared between both approaches)
export async function checkInventory(itemId: string): Promise<number> {
  return getInventoryCount(itemId);
}

export async function reserveItem(itemId: string, quantity: number): Promise<string> {
  return createReservation(itemId, quantity);
}

export async function notifyBackorder(itemId: string): Promise<void> {
  await sendBackorderEmail(itemId);
}

Option 1: Code

// workflows.ts
import { Durable } from '@hotmeshio/hotmesh';
import type * as activities from './activities';

export async function orderWorkflow(itemId: string, qty: number) {
  const { checkInventory, reserveItem, notifyBackorder } =
    Durable.workflow.proxyActivities<typeof activities>();

  const available = await checkInventory(itemId);

  if (available >= qty) {
    return await reserveItem(itemId, qty);
  } else {
    await notifyBackorder(itemId);
    return 'backordered';
  }
}

// main.ts
import { Client as Postgres } from 'pg';        // Postgres driver class used by the connection config
import { Durable } from '@hotmeshio/hotmesh';
import { orderWorkflow } from './workflows';
import * as activities from './activities';

const connection = {
  class: Postgres,
  options: { connectionString: 'postgresql://localhost:5432/mydb' }
};

await Durable.Worker.create({
  connection,
  taskQueue: 'orders',
  workflow: orderWorkflow,
  activities,
});

const client = new Durable.Client({ connection });
const handle = await client.workflow.start({
  args: ['item-123', 5],
  taskQueue: 'orders',
  workflowName: 'orderWorkflow',
  workflowId: 'order-456'
});

const result = await handle.result();
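
Here result resolves to either the reservation id returned by reserveItem or the string 'backordered', depending on the inventory check.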

Option 2: YAML (functional approach)

# order.yaml
activities:
  trigger:
    type: trigger

  checkInventory:
    type: worker
    topic: inventory.check

  reserveItem:
    type: worker
    topic: inventory.reserve

  notifyBackorder:
    type: worker
    topic: inventory.backorder.notify

transitions:
  trigger:
    - to: checkInventory

  checkInventory:
    - to: reserveItem
      conditions:
        match:
          - expected: true
            actual:
              '@pipe':
                - ['{checkInventory.output.data.availableQty}', '{trigger.output.data.requestedQty}']
                - ['{@conditional.gte}']

    - to: notifyBackorder
      conditions:
        match:
          - expected: false
            actual:
              '@pipe':
                - ['{checkInventory.output.data.availableQty}', '{trigger.output.data.requestedQty}']
                - ['{@conditional.gte}']
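
The '@pipe' expressions feed the two mapped values into '{@conditional.gte}', a greater-than-or-equal check, so the workflow transitions to reserveItem when availableQty >= requestedQty and to notifyBackorder otherwise, mirroring the branch in the code version above.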

Deploy and run as follows:

// main.ts (reuses same activities.ts)
import { HotMesh } from '@hotmeshio/hotmesh';
import * as activities from './activities';

// `connection` is the same Postgres connection object defined in Option 1
const hotMesh = await HotMesh.init({
  appId: 'orders',
  engine: { connection },
  workers: [
    {
      topic: 'inventory.check',
      connection,
      callback: async (data) => {
        const availableQty = await activities.checkInventory(data.data.itemId);
        return { metadata: { ...data.metadata }, data: { availableQty } };
      }
    },
    {
      topic: 'inventory.reserve',
      connection,
      callback: async (data) => {
        const reservationId = await activities.reserveItem(data.data.itemId, data.data.quantity);
        return { metadata: { ...data.metadata }, data: { reservationId } };
      }
    },
    {
      topic: 'inventory.backorder.notify',
      connection,
      callback: async (data) => {
        await activities.notifyBackorder(data.data.itemId);
        return { metadata: { ...data.metadata } };
      }
    }
  ]
});

await hotMesh.deploy('./order.yaml');
await hotMesh.activate('1');

const result = await hotMesh.pubsub('order.requested', {
  itemId: 'item-123',
  requestedQty: 5
});
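
pubsub publishes the payload that starts the workflow and resolves once the workflow completes, so result holds the final job output.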

Both compile to the same distributed execution model.

Common patterns

All snippets below run inside a workflow function (like orderWorkflow above). Durable methods are available as static imports:

import { Durable } from '@hotmeshio/hotmesh';

Long-running workflows — sleep is durable. The process can restart; the timer survives.

// sendFollowUp is a proxied activity from proxyActivities()
await Durable.workflow.sleep('30 days');
await sendFollowUp();

Parallel execution — fan out to multiple activities and wait for all results.

// proxied activities run as durable, retryable steps
const [payment, inventory, shipment] = await Promise.all([
  processPayment(orderId),
  updateInventory(orderId),
  notifyWarehouse(orderId)
]);

Child workflows — compose workflows from other workflows.

const result = await Durable.workflow.executeChild({
  args: [orderId],
  taskQueue: 'validation',
  workflowName: 'validateOrder',
  workflowId: `validate-${orderId}`,
});

Signals — pause a workflow until an external event arrives.

const approval = await Durable.workflow.condition<{ approved: boolean }>('manager-approval');
if (!approval.approved) return 'rejected';

Retries and error handling

Activities retry automatically on failure. Configure the policy per activity or per worker:

// Durable: per-activity retry policy (activities registered at Worker.create)
const { reserveItem } = Durable.workflow.proxyActivities<typeof activities>({
  retry: {
    maximumAttempts: 5,
    backoffCoefficient: 2,
    maximumInterval: '60s'
  }
});

// HotMesh: worker-level retry policy
const hotMesh = await HotMesh.init({
  appId: 'orders',
  engine: { connection },
  workers: [{
    topic: 'inventory.reserve',
    connection,
    retry: {
      maximumAttempts: 5,
      backoffCoefficient: 2,
      maximumInterval: '60s'
    },
    callback: async (data) => { /* ... */ }
  }]
});

Defaults: 3 attempts, coefficient 10, 120s cap. Delay formula: min(coefficient ^ attempt, maximumInterval). Duration strings like '5 seconds', '2 minutes', and '1 hour' are supported.
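
For example, assuming the attempt counter starts at 1, the defaults yield a delay of min(10^1, 120) = 10s before the first retry and min(10^2, 120) = 100s before the second; any later retries would be capped at 120s.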

If all retries are exhausted, the activity fails and the error propagates to the workflow function — handle it with a standard try/catch.
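
For example, a minimal sketch inside orderWorkflow, reusing the proxied activities from the earlier example, could treat an exhausted reservation as a backorder:

// inside the workflow function; reserveItem and notifyBackorder are the proxied activities
try {
  return await reserveItem(itemId, qty);
} catch (err) {
  // retries are exhausted; fall back to the backorder path
  await notifyBackorder(itemId);
  return 'backordered';
}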

Workflow state is queryable data

Workflow state lives in your database as ordinary rows — jobs and jobs_attributes. Query it directly, back it up with pg_dump, replicate it, join it against your application tables.

SELECT
  j.key          AS job_key,
  j.status       AS semaphore,
  j.entity       AS workflow,
  a.symbol       AS attribute,
  a.dimension    AS dimension,
  a.value        AS value,
  j.created_at,
  j.updated_at
FROM
  jobs j
  JOIN jobs_attributes a ON a.job_id = j.id
WHERE
  j.key = 'order-456'
ORDER BY
  a.symbol, a.dimension;

What happened? Consult the database. What's still running? Query the semaphore. What failed? Read the row. The execution state isn't reconstructed from a log — it was committed transactionally as each step ran.

const handle = client.workflow.getHandle('orders', 'orderWorkflow', 'order-456');

const result = await handle.result();           // final output
const status = await handle.status();           // semaphore (0 = complete)
const state  = await handle.state(true);        // full state with metadata
const exported = await handle.export({          // selective export
  allow: ['data', 'state', 'status', 'timeline']
});

Observability

There is no proprietary dashboard. Workflow state lives in Postgres, so use whatever tools you already have:

  • Direct SQL — query jobs and jobs_attributes to inspect state, as shown above.
  • Handle API — handle.status(), handle.state(true), and handle.export() give programmatic access to any running or completed workflow.
  • Logging — set HMSH_LOGLEVEL (debug, info, warn, error, silent) to control log verbosity.
  • OpenTelemetry — set HMSH_TELEMETRY=true to emit spans and metrics. Plug in any OTel-compatible collector.
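
For example, both variables can be set when launching the worker process (the entry file name here is illustrative):

HMSH_LOGLEVEL=debug HMSH_TELEMETRY=true node main.js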

Architecture

For a deep dive into the transactional execution model — how every step is crash-safe, how the monotonic collation ledger guarantees exactly-once delivery, and how cycles and retries remain correct under arbitrary failure — see the Collation Design Document. The symbolic system (how to design workflows) and lifecycle details (how to deploy workflows) are covered in the Architectural Overview.

Running tests

Tests run inside Docker. Start the services and run the full suite:

docker compose up -d
docker compose exec hotmesh npm test

Run a specific test group:

docker compose exec hotmesh npm run test:durable         # all Durable tests
docker compose exec hotmesh npm run test:durable:hello   # single Durable test (hello world)
docker compose exec hotmesh npm run test:virtual         # all Virtual network function (VNF) tests

License

HotMesh is source-available under the HotMesh Source Available License.
