emit.run

Introduction

Core concepts of the emit.run system

emit.run is a background job processing system with reliable, stateful job execution — built-in retries, timeouts, and real-time progress tracking.

Core Concepts

Spaces

Jobs are organized into spaces. A space belongs to an organization and acts as an isolated queue — it has its own jobs, API tokens, and WebSocket event stream.

Common patterns:

  • One space per environment: production, staging
  • One space per job type: emails, video-processing, exports
  • One space per team or customer
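As a minimal sketch of the one-space-per-environment pattern, a producer might pick the target space from configuration. The space IDs and mapping below are made up for illustration; they are not part of the emit.run API:

```python
# Hypothetical space IDs following the one-space-per-environment pattern.
# Each space would also have its own API token and WebSocket stream.
SPACES = {
    "production": "space_prod_a1b2",
    "staging": "space_stg_c3d4",
}

def space_for(environment: str) -> str:
    """Return the space ID a producer should enqueue into."""
    try:
        return SPACES[environment]
    except KeyError:
        raise ValueError(f"no space configured for environment {environment!r}")
```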

Jobs

A job represents a unit of work. Each job has:

Field            Description
name             String identifier like send-email or generate-report
payload          Optional JSON data your worker receives
status           scheduled, pending, delivered, running, completed, dead, or killed
maxRetries       How many times to retry on failure (default: 3)
timeoutSeconds   Max execution time per attempt (default: 300)
scheduledFor     Optional future timestamp — delays the job until this time
callbackUrl      Optional webhook URL — receives a POST when the job reaches a terminal state
callbackHeaders  Optional custom headers sent with the callback (e.g., auth tokens)
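Putting the field set together, a job definition might be assembled like this. This is only a sketch of the fields and defaults in the table above; the creation endpoint and exact wire format are not covered in this section:

```python
def make_job(name, payload=None, max_retries=3, timeout_seconds=300,
             scheduled_for=None, callback_url=None, callback_headers=None):
    """Assemble a job definition using the fields and defaults listed above."""
    job = {
        "name": name,                      # e.g. "send-email"
        "maxRetries": max_retries,         # default: 3
        "timeoutSeconds": timeout_seconds, # default: 300
    }
    if payload is not None:
        job["payload"] = payload           # arbitrary JSON for the worker
    if scheduled_for is not None:
        job["scheduledFor"] = scheduled_for
    if callback_url is not None:
        job["callbackUrl"] = callback_url
        if callback_headers:
            job["callbackHeaders"] = callback_headers
    return job

job = make_job("send-email",
               payload={"to": "user@example.com"},
               callback_url="https://example.com/hooks/jobs")
```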

Job Lifecycle

[State diagram: pending → delivered (poll) → running (ack) → completed / dead / killed. Delivery timeout returns delivered jobs to pending; fail retries back to pending or ends dead; kill ends any non-terminal state as killed; terminal states trigger the webhook to the callback URL.]
  1. Created — Job enters pending state and is queued in the space's dispatch queue. If scheduledFor is set, the job enters scheduled state instead and waits until the specified time.
  2. Scheduled (optional) — Job waits until scheduledFor time, then automatically transitions to pending and enters the dispatch queue.
  3. Delivered — Worker polls and receives the job; status moves to delivered and a delivery timeout starts. If the worker never acks, the job reverts to pending automatically.
  4. Running — Worker calls ack; status moves to running and the execution timeout starts.
  5. In progress — Worker sends structured progress updates (percent + a user-facing message), checkpoint events, and optional custom events.
  6. Completed, Dead, or Killed — Worker calls complete (→ completed) or fail. On fail: if retries remain, the job is re-queued as pending; if not, it transitions to dead. An operator or control plane can call kill at any point before completion to force-stop the job as killed.
  7. Callback — If callbackUrl was set, the system POSTs the result to that URL on completed, dead, or killed.
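The lifecycle above can be sketched as a small state machine. This models only the status transitions described in this section, not delivery, timing, or network behavior:

```python
TERMINAL = {"completed", "dead", "killed"}

class Job:
    def __init__(self, max_retries=3, scheduled_for=None):
        # With scheduledFor set, the job waits in "scheduled" first.
        self.status = "scheduled" if scheduled_for else "pending"
        self.retries_left = max_retries

    def deliver(self):            # worker polled and received the job
        assert self.status == "pending"
        self.status = "delivered"

    def ack(self):                # worker confirmed receipt
        assert self.status == "delivered"
        self.status = "running"

    def complete(self):
        assert self.status == "running"
        self.status = "completed"

    def fail(self):
        assert self.status == "running"
        if self.retries_left > 0:
            self.retries_left -= 1
            self.status = "pending"   # re-queued for another attempt
        else:
            self.status = "dead"      # attempts exhausted

    def kill(self):               # operator/control-plane stop
        if self.status not in TERMINAL:
            self.status = "killed"
```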

Worker Flow

Workers are simple HTTP clients that poll for work:

Worker Loop

  1. POST /spaces/:id/jobs/poll
     Claim up to 10 jobs. If the queue is empty, back off and poll again.
  2. POST /jobs/:id/ack
     Confirm receipt. This moves the job to running and starts execution timeout tracking.
  3. POST /jobs/:id/progress (optional)
     Send { percent, message } updates for user-facing progress.
  4. POST /jobs/:id/keepalive (optional)
     Reset the timeout window during long-running work.
  5. POST /jobs/:id/complete
     Report success with an optional result payload.
     or POST /jobs/:id/fail
     Report failure. The job retries or transitions to dead when attempts are exhausted.

External control signal

POST /jobs/:id/kill
External operator/control-plane stop. After kill, worker mutations return 409 with code JOB_KILLED so in-flight work can exit quickly.
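One pass through this loop might look like the sketch below. The endpoint paths come from the steps above; the injected `post` transport, auth, and backoff are left out to keep the control flow visible, so treat this as a shape rather than a client library:

```python
def endpoint(step, job_id=None, space_id=None):
    """Map a worker-loop step to its path (paths as documented above)."""
    paths = {
        "poll":      f"/spaces/{space_id}/jobs/poll",
        "ack":       f"/jobs/{job_id}/ack",
        "progress":  f"/jobs/{job_id}/progress",
        "keepalive": f"/jobs/{job_id}/keepalive",
        "complete":  f"/jobs/{job_id}/complete",
        "fail":      f"/jobs/{job_id}/fail",
    }
    return paths[step]

def work_one(post, space_id, handler):
    """Run one poll cycle. post(path, body) -> (status_code, response_body)."""
    _, jobs = post(endpoint("poll", space_id=space_id), {"max": 10})
    for job in jobs:
        jid = job["id"]
        status, _ = post(endpoint("ack", job_id=jid), {})
        if status == 409:        # JOB_KILLED: exit this job quickly
            continue
        try:
            result = handler(job)
        except Exception as exc:
            post(endpoint("fail", job_id=jid), {"error": str(exc)})
        else:
            post(endpoint("complete", job_id=jid), {"result": result})
```

A real worker would wrap this in a loop with backoff on empty polls, and send progress/keepalive calls from inside `handler` for long-running work.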

Scopes

API keys use granular scopes so you can give each client exactly the permissions it needs.

Granular scopes:

Scope               What it allows
jobs:create         Create new jobs
jobs:read           List jobs, get job details, and use space/job streams
jobs:poll           Claim pending jobs from the queue
jobs:ack            Acknowledge receipt of a polled job
jobs:progress       Send progress updates
jobs:event          Publish checkpoint and custom events
jobs:complete       Mark jobs as completed
jobs:fail           Fail jobs (retry or mark dead)
jobs:kill           Force-stop jobs and mark them killed
jobs:keepalive      Extend a running job's timeout
jobs:read:progress  Read job progress via API or single-job WebSocket stream only

Shorthand scopes (expand to multiple granular scopes):

Shorthand    Expands to
jobs:worker  jobs:read + jobs:poll + jobs:ack + jobs:progress + jobs:event + jobs:complete + jobs:fail + jobs:kill + jobs:keepalive + jobs:read:progress (the full worker set)
jobs:write   jobs:ack + jobs:progress + jobs:event + jobs:complete + jobs:fail + jobs:kill + jobs:keepalive
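The expansion rules above amount to a simple lookup before a permission check. A sketch of that check (the expansion sets mirror the table; the helper names are illustrative):

```python
# Shorthand-to-granular expansion, mirroring the table above.
WORKER = ["jobs:read", "jobs:poll", "jobs:ack", "jobs:progress",
          "jobs:event", "jobs:complete", "jobs:fail", "jobs:kill",
          "jobs:keepalive", "jobs:read:progress"]
WRITE = ["jobs:ack", "jobs:progress", "jobs:event", "jobs:complete",
         "jobs:fail", "jobs:kill", "jobs:keepalive"]
SHORTHANDS = {"jobs:worker": WORKER, "jobs:write": WRITE}

def expand(scopes):
    """Expand shorthand scopes into the granular set they grant."""
    out = set()
    for s in scopes:
        out.update(SHORTHANDS.get(s, [s]))  # granular scopes pass through
    return out

def allowed(token_scopes, required):
    """Would a token with these scopes be permitted the required action?"""
    return required in expand(token_scopes)
```

Note that jobs:write omits jobs:read and jobs:poll, so a token with only jobs:write can report on jobs it is handed but cannot claim or inspect them.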

Typical token setups:

Use case                                    Scopes
Producer (submit jobs)                      jobs:create
Worker (process jobs)                       jobs:worker or select individual scopes
Dashboard / monitoring                      jobs:read
Producer + monitor                          jobs:create, jobs:read
Third-party client (live progress updates)  jobs:read:progress
