# Watchers
A watcher is a saved condition that Nimbus re-evaluates every time a connector sync completes. When the condition is satisfied, you receive a notification in the system tray and an audit-log entry is written. Optionally, a watcher can trigger a saved workflow automatically.
Because watchers evaluate against the local SQLite index — not by polling a cloud API — each check typically completes in under 10 ms. The evaluation loop runs after every sync cycle rather than on a fixed wall-clock interval, so a watcher that targets a high-frequency connector will be re-evaluated more often than one that targets a slow connector.
## Concept

Each watcher has three parts:
| Part | Purpose |
|---|---|
| `condition` | A query against the local index — what to watch for |
| `action` | What to do when the condition fires (notify, run workflow) |
| graph predicate | Optional V22 graph-layer filter applied after the index query |
A watcher fires at most once per sync cycle. After firing, it records
`last_fired_at` and waits for the next evaluation pass before it can fire
again. The full event record — including the matching item snapshot and the
action result — is written to the `watcher_event` table in the local index.
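The per-cycle gating can be sketched as follows. This is a simplified Python model, not the real engine: `condition_matches` and `fire_action` are hypothetical stand-ins for the index query and the notify/workflow dispatch, and the `watcher` table layout is inferred from the fields named above.

```python
import sqlite3
import time

def evaluate_watchers(db, condition_matches, fire_action, now_ms=None):
    """One post-sync evaluation pass; returns ids of watchers that fired.

    condition_matches(db, watcher_id, last_checked_at) and
    fire_action(db, watcher_id, matches) are stand-ins for the real
    index query and action dispatch; the schema here is illustrative.
    """
    now_ms = now_ms if now_ms is not None else int(time.time() * 1000)
    fired = []
    for wid, last_checked in db.execute(
        "SELECT id, last_checked_at FROM watcher WHERE enabled = 1"
    ).fetchall():
        matches = condition_matches(db, wid, last_checked)
        if matches:
            fire_action(db, wid, matches)
            db.execute(
                "UPDATE watcher SET last_fired_at = ? WHERE id = ?", (now_ms, wid)
            )
            fired.append(wid)
        # The check timestamp advances whether or not the watcher fired,
        # so each watcher fires at most once per cycle.
        db.execute(
            "UPDATE watcher SET last_checked_at = ? WHERE id = ?", (now_ms, wid)
        )
    return fired
```

Disabled watchers are skipped entirely, which is what makes `nimbus watch pause` cheap: the row stays in place but never enters the loop.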
## Worked example — alert notification

The current condition type is `alert_fired`. A watcher with this type fires
when an item of `type = 'alert'` appears in the index with a `modified_at`
timestamp newer than the watcher's last check.
Creating a watcher via IPC (a CLI `nimbus watch add` subcommand is not yet
implemented — use the IPC method directly or the Tauri Watchers panel):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "watcher.create",
  "params": {
    "name": "pagerduty-critical",
    "condition_type": "alert_fired",
    "condition_json": "{\"filter\":{\"service\":\"pagerduty\"}}",
    "action_type": "notify",
    "action_json": "{}"
  }
}
```

The resulting watcher record stored in the index looks like this:

```json
{
  "id": "3fa2c1d0-...",
  "name": "pagerduty-critical",
  "enabled": 1,
  "condition_type": "alert_fired",
  "condition_json": "{\"filter\":{\"service\":\"pagerduty\"}}",
  "action_type": "notify",
  "action_json": "{}",
  "created_at": 1746400000000,
  "last_checked_at": null,
  "last_fired_at": null,
  "graph_predicate_json": null
}
```

What happens when it fires:
- The watcher engine detects a new `alert` item from PagerDuty whose `modified_at` is newer than `last_checked_at`.
- A tray notification pops: `"Nimbus watcher — pagerduty-critical: pagerduty: <alert title>"`.
- A row is inserted into `watcher_event` containing the condition snapshot (the matching alert ids + the raw condition JSON) and the action result.
- `last_fired_at` is updated on the watcher row.
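For scripting, the `watcher.create` payload can be assembled programmatically. One subtlety worth encoding explicitly: `condition_json` and `action_json` are JSON *strings* inside the params object, so they are serialized twice. A minimal Python sketch:

```python
import json

def watcher_create_request(name, service, req_id=1):
    """Build a watcher.create JSON-RPC payload for an alert_fired watcher.

    Note the double encoding: condition_json and action_json are JSON
    strings embedded in the (JSON) params object.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "watcher.create",
        "params": {
            "name": name,
            "condition_type": "alert_fired",
            "condition_json": json.dumps({"filter": {"service": service}}),
            "action_type": "notify",
            "action_json": "{}",
        },
    })
```

The transport (how this payload reaches the Gateway) is not shown here; send it however your IPC client connects.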
## Managing watchers from the CLI

The `nimbus watch` command covers listing, pausing, and resuming:

```sh
# List all watchers (enabled and disabled)
nimbus watch list

# Pause a watcher by its UUID
nimbus watch pause <id>

# Re-enable a paused watcher
nimbus watch resume <id>
```

All three subcommands talk to the running Gateway over IPC; the Gateway must
be running (`nimbus start`) for them to work.
## Watcher conditions

### Field predicates

The `condition_json` object carries a `filter` map that is evaluated against
index columns. Supported filter keys:
| Key | Matches |
|---|---|
| `service` | The `service` column on `item` (e.g. `"pagerduty"`, `"github"`, `"linear"`) |
The engine queries for `item` rows of `type = 'alert'` where `modified_at`
is newer than the last check timestamp, optionally filtered by `service`. Up
to five matching items are returned; the first becomes the notification summary.
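A sketch of that query in Python over SQLite, using the table and column names from the text (the real engine's SQL may differ):

```python
import sqlite3

def alert_candidates(db, last_checked_ms, service=None, limit=5):
    """New 'alert' items since the last check, optionally filtered by
    service, capped at five. Schema names follow the docs, not a
    confirmed implementation."""
    sql = "SELECT id, title FROM item WHERE type = 'alert' AND modified_at > ?"
    args = [last_checked_ms or 0]
    if service is not None:
        sql += " AND service = ?"
        args.append(service)
    sql += " ORDER BY modified_at DESC LIMIT ?"
    args.append(limit)
    return db.execute(sql, args).fetchall()
```

Because `modified_at` is compared against the watcher's own last-check timestamp, two watchers with different filters can see different candidate sets from the same sync cycle.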
### Graph predicates

Graph predicates are an opt-in layer of filtering applied after the index
query. They are enabled by setting `graph_conditions = true` under
`[automation]` in your `nimbus.toml`:

```toml
[automation]
graph_conditions = true
```

A graph predicate is stored as JSON in the `graph_predicate_json` column of
the watcher row. It filters candidate items by their relationships in the
people / cross-service graph (schema version 22+).
#### Graph predicate example

Suppose you want to be notified only about alerts that originate from items owned by a specific person in your graph — for example, alerts related to a repository owned by a team lead. The graph predicate JSON looks like this:

```json
{
  "relation": "owned_by",
  "target": {
    "type": "person",
    "externalId": "github:user:alice"
  }
}
```

The three supported `relation` values are:
| Value | Meaning |
|---|---|
| `owned_by` | The alert item was authored, opened, or posted by the target person |
| `upstream_of` | The alert item has a direct outgoing edge to the target entity |
| `downstream_of` | The target entity has a direct outgoing edge to the alert item |
If the graph predicate fails to parse, the watcher fails closed (no notification) rather than firing without the graph filter: an invalid predicate suppresses fires rather than producing spurious ones.
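That fail-closed behavior can be sketched like this. `graph_match` is a hypothetical stand-in for the real V22 graph lookup; the point of the sketch is the error path, where any parse or validation failure drops every candidate instead of bypassing the filter:

```python
import json

VALID_RELATIONS = {"owned_by", "upstream_of", "downstream_of"}

def apply_graph_predicate(predicate_json, candidates, graph_match):
    """Fail-closed application of a graph predicate.

    graph_match(item, relation, target) -> bool stands in for the real
    graph query. Invalid JSON, a missing key, or an unknown relation
    all return [] (nothing fires).
    """
    if predicate_json is None:
        return candidates  # no predicate attached: index matches pass through
    try:
        pred = json.loads(predicate_json)
        relation, target = pred["relation"], pred["target"]
        if relation not in VALID_RELATIONS:
            raise ValueError(f"unknown relation: {relation}")
    except (ValueError, KeyError, TypeError):
        return []  # fail closed: an invalid predicate suppresses the fire
    return [item for item in candidates if graph_match(item, relation, target)]
```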
To attach a graph predicate to an existing watcher, call the IPC method
`watcher.setGraphPredicate` with the watcher id and the predicate JSON
string. The Tauri Watchers panel exposes a visual editor for this.
For a deeper explanation of the graph schema and relation types, see the Architecture overview.
## Fire history

Every watcher keeps a history of fires in the `watcher_event` table. Each row
records:

- `watcher_id` — which watcher fired
- `fired_at` — unix millisecond timestamp
- `condition_snapshot` — JSON of the matching items and the condition
- `action_result` — JSON result of the action (e.g. `{"ok":true}`)
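A hypothetical reconstruction of that table and the insert performed on each fire (column layout inferred from the four fields above, not a confirmed schema):

```python
import json
import sqlite3
import time

# Illustrative DDL; the real table may carry extra columns or constraints.
WATCHER_EVENT_DDL = """
CREATE TABLE IF NOT EXISTS watcher_event (
    watcher_id         TEXT    NOT NULL,
    fired_at           INTEGER NOT NULL,  -- unix milliseconds
    condition_snapshot TEXT    NOT NULL,  -- JSON: matching items + condition
    action_result      TEXT    NOT NULL   -- JSON, e.g. {"ok": true}
)
"""

def record_fire(db, watcher_id, matching_ids, condition, action_result):
    """Append one fire-history row for a watcher."""
    db.execute(
        "INSERT INTO watcher_event VALUES (?, ?, ?, ?)",
        (
            watcher_id,
            int(time.time() * 1000),
            json.dumps({"items": matching_ids, "condition": condition}),
            json.dumps(action_result),
        ),
    )
```

Storing the snapshot and result as JSON text keeps the history self-describing: each row can be inspected later without joining back to `item` rows that may since have changed.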
The Tauri Watchers panel shows a history drawer per watcher with the last N
fires including the full payload. The `nimbus connector history` CLI command
surfaces connector-level sync history; a dedicated `nimbus watch history`
subcommand is forthcoming.
## Pausing and removing a watcher

```sh
# Pause (disables evaluation without deleting)
nimbus watch pause <id>

# Re-enable
nimbus watch resume <id>
```

Removing a watcher is done through the Tauri Watchers panel or the IPC
method `watcher.delete`. Because deleting a watcher is a configuration write,
it is logged to the audit chain.
## Limits

Each watcher poll runs a SQL query against the local SQLite index. Because the index is on-device, polls complete in under 10 ms in typical conditions.
The evaluation loop is intended for tens of watchers per profile, not thousands. With a very large number of watchers, each sync cycle will spend proportionally longer in the evaluation loop. There is no hard cap enforced by the engine, but keeping your active watcher count in the single or double digits ensures the post-sync loop stays imperceptible.