Testing your workflow
Run a draft workflow end-to-end with sample trigger data, then read the execution logs task by task to see exactly what happened. This page is for anyone building a workflow in the editor, whether you're preparing to enable it for production or iterating on changes.
Before you run
Heads up: Test runs use your real integrations. A Slack action posts a real message; a Hue action really switches lights; a webhook really fires. Run tests against safe targets (a test Slack channel, a desk lamp instead of the boardroom lights) when you can — there's no dry-run mode.
You don't have to enable the workflow first. Testing works on whatever is currently in the editor, saved or not.
Run the whole workflow
- Open the workflow in the editor.
- Click the green play icon in the top toolbar (tooltip: Run workflow).
- If the workflow has more than one trigger, pick which one to run from the menu that appears.
- The trigger node opens with a JSON editor. This is the "event" your workflow will see. Either edit the JSON by hand (a sample payload follows the screenshots below), or click Import from execution to copy the payload from a past real run.
- Press Run inside the trigger dialog. The toolbar's play icon switches to Stop execution while things are in flight — click it to cancel.
[SCREENSHOT] The workflow editor's top toolbar. The green "Run workflow" play button is highlighted, with a small dropdown menu below it listing two trigger names the author can pick between.
[SCREENSHOT] The trigger input dialog, showing a Monaco JSON editor populated with a sample event payload. The "Import from execution" button sits above the editor, and a Run button is visible at the bottom right.
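For example, a hand-written event for a form-submission trigger might look like this. The field names here are invented for illustration; if you're unsure of the exact shape your trigger produces, Import from execution shows you a real one.

```json
{
  "event": "form.submitted",
  "receivedAt": "2024-05-14T08:30:00Z",
  "data": {
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "message": "The projector in room 3 is flickering."
  }
}
```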
Read the logs
The Logs panel at the bottom of the editor opens automatically when a run starts. Its header shows the execution's status, total duration, and task count. Expand it to see three side-by-side panes:
- TASKS — every node the run touched, with a status icon and a per-task duration. Click a task to focus it.
- INPUT — the data that flowed into the focused task, with templates already resolved (see the example after the screenshot below). This is where to look when a Slack message came out wrong or a condition behaved unexpectedly.
- OUTPUT — whatever the task produced, or the error message if it failed.
Tip: A red status icon means the run stopped at that task. The OUTPUT pane will show the error.
[SCREENSHOT] The Logs panel expanded across the bottom of the editor. The TASKS pane on the left shows five nodes with the fourth one red. The INPUT pane shows the document that entered that node; the OUTPUT pane on the right shows the error that stopped the run.
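To make "templates already resolved" concrete: suppose a Slack node's message is configured with a template (the `{{ }}` syntax here is illustrative, not necessarily this product's):

```json
{
  "channel": "#maintenance",
  "text": "New issue from {{trigger.data.name}}: {{trigger.data.message}}"
}
```

The INPUT pane shows the values the task actually received, so a wrong message is easy to trace to either the template or the upstream data:

```json
{
  "channel": "#maintenance",
  "text": "New issue from Ada Lovelace: The projector in room 3 is flickering."
}
```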
Test a single node or a branch
Running from the trigger is sometimes overkill — you just want to check that one Slack message renders correctly, or that one condition evaluates the way you expect.
- Hover a node for its inline toolbar, or right-click for the context menu.
- Test this node runs only that node, using data already in the current execution's document.
- Run from here runs that node plus everything downstream of it.
Both options are disabled until the upstream nodes have produced output in the current run, because a node can't be tested in isolation without the data its inputs reference. Run the trigger once to populate the document, then iterate on any node further along the DAG (the sketch below shows what that shared document might look like).
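Assuming each node's output accumulates under its name, the execution's document might look like this (the structure and field names are illustrative, not the product's exact schema):

```json
{
  "trigger": {
    "event": "form.submitted",
    "data": { "name": "Ada Lovelace", "message": "The projector in room 3 is flickering." }
  },
  "Create ticket": {
    "output": { "ticketId": "T-1042", "status": "open" }
  }
}
```

Test this node on a downstream Slack node would read from this document; Run from here would re-run that node and everything after it against it.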
Look back at past runs
Every test (and every real execution) is stored.
- Switch the top toolbar from Editor to Executions, or click the chevron on the left-hand EXECUTIONS strip to expand it.
- Filter by status with the chips across the top: all, running, success, error, canceled.
- Click any row to replay that execution in the canvas, with its Logs panel fully populated.
- On a trigger node, Import from execution pulls a past run's input back into the JSON editor — the fastest way to reproduce a reported issue.
What testing won't catch
- Schedule triggers don't fire on their own during testing. A "daily at 08:30" trigger only runs on its real schedule; to test everything downstream, run it manually with a sample document (see the sketch at the end of this list).
- External event timing. Triggers like booking no-shows or occupancy changes depend on the source system emitting the event — manual runs use synthetic input.
- Rate limits and quotas. Repeated test runs consume the same external quotas real runs do.
- No dry-run. Actions always call their integrations for real. Pick your test targets with that in mind.
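For the schedule case above, a minimal synthetic event is usually enough to exercise everything downstream (field names invented for illustration):

```json
{
  "firedAt": "2024-05-14T08:30:00Z",
  "schedule": "daily at 08:30"
}
```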