Watchers

A watcher is a saved condition that Nimbus re-evaluates every time a connector sync completes. When the condition is satisfied, you receive a notification in the system tray and an audit-log entry is written. Optionally, a watcher can trigger a saved workflow automatically.

Because watchers evaluate against the local SQLite index — not by polling a cloud API — each check typically completes in under 10 ms. The evaluation loop runs after every sync cycle rather than on a fixed wall-clock interval, so a watcher that targets a high-frequency connector will be re-evaluated more often than one that targets a slow connector.

Anatomy of a watcher

Each watcher has three parts:

| Part | Purpose |
| --- | --- |
| condition | A query against the local index — what to watch for |
| action | What to do when the condition fires (notify, run workflow) |
| graph predicate | Optional V22 graph-layer filter applied after the index query |

A watcher fires at most once per sync cycle. After firing, it records last_fired_at and waits for the next evaluation pass before it can fire again. The full event record — including the matching item snapshot and the action result — is written to the watcher_event table in the local index.

Creating and managing watchers

The only condition type currently supported is alert_fired. A watcher of this type fires when an item of type = 'alert' appears in the index with a modified_at timestamp newer than the watcher’s last check.

Creating a watcher via IPC (a CLI nimbus watch add subcommand is not yet implemented — use the IPC method directly or the Tauri Watchers panel):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "watcher.create",
  "params": {
    "name": "pagerduty-critical",
    "condition_type": "alert_fired",
    "condition_json": "{\"filter\":{\"service\":\"pagerduty\"}}",
    "action_type": "notify",
    "action_json": "{}"
  }
}
```
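
The Gateway answers with a standard JSON-RPC 2.0 response. The result shape is not documented here; a plausible reply, assuming the Gateway echoes back the new watcher's id, would be:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": { "id": "3fa2c1d0-..." }
}
```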

The resulting watcher record stored in the index looks like this:

```json
{
  "id": "3fa2c1d0-...",
  "name": "pagerduty-critical",
  "enabled": 1,
  "condition_type": "alert_fired",
  "condition_json": "{\"filter\":{\"service\":\"pagerduty\"}}",
  "action_type": "notify",
  "action_json": "{}",
  "created_at": 1746400000000,
  "last_checked_at": null,
  "last_fired_at": null,
  "graph_predicate_json": null
}
```
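
For reference, a plausible shape for the underlying watcher table, inferred from the record above (illustrative only; the shipped schema may differ):

```sql
-- Inferred from the watcher record above; not the shipped schema.
CREATE TABLE watcher (
  id                   TEXT PRIMARY KEY,  -- UUID
  name                 TEXT NOT NULL,
  enabled              INTEGER NOT NULL DEFAULT 1,
  condition_type       TEXT NOT NULL,     -- e.g. 'alert_fired'
  condition_json       TEXT NOT NULL,     -- serialized condition object
  action_type          TEXT NOT NULL,     -- e.g. 'notify'
  action_json          TEXT NOT NULL,
  created_at           INTEGER NOT NULL,  -- unix milliseconds
  last_checked_at      INTEGER,           -- null until first evaluation
  last_fired_at        INTEGER,           -- null until first fire
  graph_predicate_json TEXT               -- optional V22 graph predicate
);
```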

What happens when it fires:

  1. The watcher engine detects a new alert item from PagerDuty whose modified_at is newer than last_checked_at.
  2. A tray notification pops: "Nimbus watcher — pagerduty-critical: pagerduty: <alert title>".
  3. A row is inserted into watcher_event containing the condition snapshot (the matching alert ids + the raw condition JSON) and the action result.
  4. last_fired_at is updated on the watcher row.

The nimbus watch command covers listing, pausing, and resuming:

```sh
# List all watchers (enabled and disabled)
nimbus watch list

# Pause a watcher by its UUID
nimbus watch pause <id>

# Re-enable a paused watcher
nimbus watch resume <id>
```

All three subcommands talk to the running Gateway over IPC; the Gateway must be running (nimbus start) for them to work.

Condition filters

The condition_json object carries a filter map that is evaluated against index columns. Supported filter keys:

| Key | Matches |
| --- | --- |
| service | The service column on item (e.g. "pagerduty", "github", "linear") |

The engine queries for item rows of type = 'alert' where modified_at is newer than the last check timestamp, optionally filtered by service. Up to five matching items are returned; the first becomes the notification summary.
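
The exact SQL is internal to the engine, but based on the description above the evaluation query is roughly equivalent to this sketch (the selected columns and parameter names are illustrative):

```sql
-- Roughly the alert_fired evaluation query; illustrative, not the engine's
-- literal SQL. :last_checked_at and :service are bound at evaluation time.
SELECT id, title, service, modified_at
FROM item
WHERE type = 'alert'
  AND modified_at > :last_checked_at
  AND (:service IS NULL OR service = :service)  -- optional condition_json filter
ORDER BY modified_at DESC
LIMIT 5;  -- up to five matches; the first becomes the notification summary
```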

Graph predicates are an opt-in layer of filtering applied after the index query. They are enabled by setting graph_conditions = true under [automation] in your nimbus.toml:

```toml
[automation]
graph_conditions = true
```

A graph predicate is stored as JSON in the graph_predicate_json column of the watcher row. It filters candidate items by their relationships in the people / cross-service graph (schema version 22+).

Graph predicate example

Suppose you want to be notified only about alerts that originate from items owned by a specific person in your graph — for example, alerts related to a repository owned by a team lead. The graph predicate JSON looks like this:

```json
{
  "relation": "owned_by",
  "target": {
    "type": "person",
    "externalId": "github:user:alice"
  }
}
```

The three supported relation values are:

| Value | Meaning |
| --- | --- |
| owned_by | The alert item was authored, opened, or posted by the target person |
| upstream_of | The alert item has a direct outgoing edge to the target entity |
| downstream_of | The target entity has a direct outgoing edge to the alert item |
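
For instance, a predicate that fires only for alerts with a direct outgoing edge to a specific person would use upstream_of with the same target shape as above (the externalId here is illustrative):

```json
{
  "relation": "upstream_of",
  "target": {
    "type": "person",
    "externalId": "github:user:alice"
  }
}
```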

If the graph predicate fails to parse, the watcher fails closed (no notification) rather than firing without the graph filter. This is a fail-closed design: an invalid predicate suppresses fires rather than producing spurious ones.

To attach a graph predicate to an existing watcher, call the IPC method watcher.setGraphPredicate with the watcher id and the predicate JSON string. The Tauri Watchers panel exposes a visual editor for this.
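
As a sketch, such a call might look like the following; the method name is documented above, but the parameter names are assumptions (here mirroring the column name):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "watcher.setGraphPredicate",
  "params": {
    "id": "3fa2c1d0-...",
    "graph_predicate_json": "{\"relation\":\"owned_by\",\"target\":{\"type\":\"person\",\"externalId\":\"github:user:alice\"}}"
  }
}
```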

For a deeper explanation of the graph schema and relation types, see the Architecture overview.

Event history

Every watcher keeps a history of fires in the watcher_event table. Each row records:

  • watcher_id — which watcher fired
  • fired_at — unix millisecond timestamp
  • condition_snapshot — JSON of the matching items and the condition
  • action_result — JSON result of the action (e.g. {"ok":true})
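
Putting those fields together, a fire of the pagerduty-critical example above might be recorded like this (the values, and the key names inside condition_snapshot, are illustrative):

```json
{
  "watcher_id": "3fa2c1d0-...",
  "fired_at": 1746400012345,
  "condition_snapshot": "{\"matched_ids\":[\"...\"],\"condition\":{\"filter\":{\"service\":\"pagerduty\"}}}",
  "action_result": "{\"ok\":true}"
}
```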

The Tauri Watchers panel shows a history drawer per watcher with the last N fires including the full payload. The nimbus connector history CLI command surfaces connector-level sync history; a dedicated nimbus watch history subcommand is forthcoming.

Pausing and deleting

```sh
# Pause (disables evaluation without deleting)
nimbus watch pause <id>

# Re-enable
nimbus watch resume <id>
```

Removing a watcher is done through the Tauri Watchers panel or the IPC method watcher.delete. Because deleting a watcher is a configuration write, it is logged to the audit chain.
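
For the IPC route, a minimal sketch of the delete call, assuming the watcher is addressed by its id (the parameter name is an assumption):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "watcher.delete",
  "params": { "id": "3fa2c1d0-..." }
}
```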

Performance

Each watcher poll runs a SQL query against the local SQLite index. Because the index is on-device, polls complete in under 10 ms in typical conditions.

The evaluation loop is intended for tens of watchers per profile, not thousands. With a very large number of watchers, each sync cycle will spend proportionally longer in the evaluation loop. There is no hard cap enforced by the engine, but keeping your active watcher count in the single or double digits ensures the post-sync loop stays imperceptible.