Pipeline Event Trigger Node
Overview
The Pipeline Event Trigger Node automatically initiates a MaestroHub pipeline when another pipeline finishes execution with a matching status. This enables chaining workflows—for example, running a cleanup pipeline after a data-import pipeline completes, or triggering an alert pipeline when a critical workflow fails. Built-in chain-depth tracking and self-trigger prevention ensure that cascading pipelines cannot create infinite loops.
Core Functionality
What It Does
1. Cross-Pipeline Automation
Start pipelines automatically when one or more watched pipelines reach a specified execution status—without manual intervention, polling, or external schedulers.
2. Status-Based Filtering
Subscribe to specific execution outcomes: completed, failed, completed_with_errors, cancelled, or the wildcard any to match all terminal statuses.
3. Chain-Depth Tracking
Every execution triggered by a Pipeline Event Trigger carries a chain_depth counter. When a triggered pipeline itself has a Pipeline Event Trigger, the depth increments. The configurable Max Chain Depth (1--3) caps the cascade length and prevents runaway loops.
4. Self-Trigger Prevention
A pipeline cannot watch itself. This is enforced at the domain level during pipeline validation, ensuring that a pipeline's own execution events never re-trigger the same pipeline.
5. Multi-Pipeline Watching
A single trigger can monitor multiple source pipelines. When any of the watched pipelines finishes with a matching status, the trigger fires.
Chain Depth & Loop Prevention
How Chain Depth Works
Chain depth tracks how many Pipeline Event Triggers have fired in sequence:
- Depth 0: A pipeline runs via manual, schedule, webhook, or any non-pipeline-event trigger
- Depth 1: A Pipeline Event Trigger fires in response to the depth-0 execution
- Depth 2: Another Pipeline Event Trigger fires in response to the depth-1 execution
- Depth N: Continues until Max Chain Depth is reached
When the current depth equals or exceeds the trigger's maxChainDepth, the event is silently dropped—the trigger does not fire.
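The drop rule can be sketched as a small predicate. This is a hypothetical helper, not the actual MaestroHub implementation; the field names (maxChainDepth, watchedPipelineIds, watchedStatuses, chain_depth) mirror this document's configuration tables.

```javascript
// Hypothetical sketch of the event-drop rule described above.
function shouldFire(trigger, event) {
  if (!trigger.enabled) return false;
  // Missing depth metadata defaults to depth 0
  const currentDepth = (event._metadata && event._metadata.chain_depth) || 0;
  // At or beyond maxChainDepth, the event is silently dropped
  if (currentDepth >= trigger.maxChainDepth) return false;
  if (!trigger.watchedPipelineIds.includes(event.pipelineId)) return false;
  return trigger.watchedStatuses.includes("any") ||
         trigger.watchedStatuses.includes(event.status);
}
```

Note that the comparison is `>=`, not `>`: a trigger with maxChainDepth of 2 will not fire for an execution that already ran at depth 2.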
Example
Pipeline A (manual run, depth 0)
└─ triggers Pipeline B (depth 1)
└─ triggers Pipeline C (depth 2)
            └─ a trigger watching Pipeline C with maxChainDepth=2 → would be depth 3 → BLOCKED
Safety Guarantees
| Protection | Mechanism |
|---|---|
| Self-trigger | Domain validation rejects a pipeline that watches itself |
| Infinite chains | Chain depth counter compared against maxChainDepth (1--3) |
| Missing depth metadata | Defaults to depth 0 if metadata is absent |
The maximum allowed chain depth is 3. Design your pipeline chains to complete meaningful work within this limit. If you need deeper orchestration, consider using a single coordinator pipeline that triggers each step sequentially.
Configuration Options
Basic Information
| Field | Type | Description |
|---|---|---|
| Node Label | String (Required) | Display name for the node on the pipeline canvas. Must be non-empty (trimmed). |
| Description | String (Optional) | Explains what this trigger monitors and initiates. |
Parameters
The trigger configuration is organized across two tabs in the UI: Parameters and Settings.
| Parameter | Type | Default | Required | Constraints | Description |
|---|---|---|---|---|---|
| Watched Pipelines | string[] | [] | Yes | At least 1 pipeline; cannot include the current pipeline | Pipelines whose execution events trigger this pipeline. |
| Watched Statuses | string[] | ["completed", "failed"] | Yes | At least 1 status from the allowed set | Execution statuses that trigger this pipeline. |
| Max Chain Depth | number | 3 | No | 1--3 | Maximum cascade depth for chained Pipeline Event Triggers. |
| Enabled | boolean | true | No | -- | Enable/disable the trigger. |
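Taken together, a node's parameters might look like the following. This is a hypothetical sketch using the key names from the table above; the actual stored configuration shape is not specified in this document.

```javascript
// Hypothetical parameter block for a Pipeline Event Trigger node.
const triggerConfig = {
  watchedPipelineIds: ["bf29be94-fc0a-4dc4-8e5c-092f1b74eb4b"],
  watchedStatuses: ["completed", "failed"], // the documented default
  maxChainDepth: 3,                         // allowed range: 1-3
  enabled: true,
};
```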
Watched Statuses
| Status | Description |
|---|---|
| completed | Pipeline finished successfully—all nodes passed. |
| failed | Pipeline stopped due to a node failure (after retries exhausted). |
| completed_with_errors | Pipeline finished but some nodes encountered errors (with On Error set to continueExecution). |
| cancelled | Pipeline execution was manually or programmatically cancelled. |
| any | Wildcard—matches all terminal statuses above. |
The pipeline selector in the UI automatically excludes the current pipeline from the list. At the backend, domain validation also rejects a pipeline that includes its own ID in watchedPipelineIds.
Settings
Description
A free-text area for documenting the node's purpose and behavior. Notes entered here are saved with the pipeline and visible to all team members.
Execution Settings
| Setting | Options | Default | Description |
|---|---|---|---|
| Timeout (seconds) | number | Pipeline default | Maximum execution time for this node (1--600). Leave empty for pipeline default. |
| Retry on Timeout | Pipeline Default / Enabled / Disabled | Pipeline Default | Whether to retry the node if it times out. |
| Retry on Fail | Pipeline Default / Enabled / Disabled | Pipeline Default | Whether to retry on failure. When Enabled, shows Advanced Retry Configuration. |
| On Error | Pipeline Default / Stop Pipeline / Continue Execution | Pipeline Default | Behavior when node fails after all retries. |
Advanced Retry Configuration (visible when Retry on Fail = Enabled)
| Field | Type | Default | Range | Description |
|---|---|---|---|---|
| Max Attempts | number | 3 | 1--10 | Maximum retry attempts. |
| Initial Delay (ms) | number | 1000 | 100--30,000 | Wait before first retry. |
| Max Delay (ms) | number | 120000 | 1,000--300,000 | Upper bound for backoff delay. |
| Multiplier | number | 2.0 | 1.0--5.0 | Exponential backoff multiplier. |
| Jitter Factor | number | 0.1 | 0--0.5 | Random jitter (± this fraction of the delay). |
Output Data Structure
When a watched pipeline finishes with a matching status, the trigger produces a structured output with two top-level keys: _metadata and payload.
Output Format
{
"_metadata": {
"type": "pipeline_event_trigger",
"source_pipeline_id": "bf29be94-fc0a-4dc4-8e5c-092f1b74eb4b",
"source_pipeline_name": "Daily Data Import",
"source_execution_id": "aef374c3-aa2b-454e-aabc-5657faac5950",
"source_status": "completed",
"source_duration": "2m15.3s",
"chain_depth": 1
},
"payload": {
"executionId": "aef374c3-aa2b-454e-aabc-5657faac5950",
"pipelineId": "bf29be94-fc0a-4dc4-8e5c-092f1b74eb4b",
"pipelineName": "Daily Data Import",
"status": "completed",
"duration": "2m15.3s",
"nodeCount": 8,
"successCount": 8,
"failureCount": 0,
"skippedCount": 0,
"triggerType": "schedule",
"errors": []
}
}
_metadata Fields
| Field | Type | Description |
|---|---|---|
| type | string | Always "pipeline_event_trigger". |
| source_pipeline_id | string | UUID of the watched pipeline that completed. |
| source_pipeline_name | string | Display name of the source pipeline. |
| source_execution_id | string | UUID of the specific execution that triggered this event. |
| source_status | string | Terminal status of the source execution ("completed", "failed", "completed_with_errors", "cancelled"). |
| source_duration | string | Total execution duration as a Go duration string (e.g., "2m15.3s", "500ms"). |
| chain_depth | number | Current depth in the pipeline event chain (starts at 1 for the first triggered pipeline). |
payload Fields
| Field | Type | Description |
|---|---|---|
| executionId | string | UUID of the source execution (same as _metadata.source_execution_id). |
| pipelineId | string | UUID of the source pipeline (same as _metadata.source_pipeline_id). |
| pipelineName | string | Display name of the source pipeline. |
| status | string | Terminal status of the source execution. |
| duration | string | Total execution duration as a Go duration string. |
| nodeCount | number | Total number of nodes in the source pipeline. |
| successCount | number | Number of nodes that completed successfully. |
| failureCount | number | Number of nodes that failed. |
| skippedCount | number | Number of nodes that were skipped. |
| triggerType | string | How the source pipeline was originally triggered (e.g., "schedule", "webhook", "manual"). |
| errors | array | Array of error messages from failed nodes (empty if no errors). |
Referencing in Downstream Nodes
Use expressions to access pipeline event data in subsequent nodes:
- $trigger._metadata.type -- always "pipeline_event_trigger"
- $trigger._metadata.source_pipeline_id -- UUID of the source pipeline
- $trigger._metadata.source_pipeline_name -- name of the source pipeline
- $trigger._metadata.source_execution_id -- UUID of the source execution
- $trigger._metadata.source_status -- terminal status ("completed", "failed", etc.)
- $trigger._metadata.source_duration -- execution duration string
- $trigger._metadata.chain_depth -- current chain depth
- $trigger.payload.nodeCount -- total nodes in the source pipeline
- $trigger.payload.successCount -- nodes that succeeded
- $trigger.payload.failureCount -- nodes that failed
- $trigger.payload.skippedCount -- nodes that were skipped
- $trigger.payload.triggerType -- how the source pipeline was triggered
- $trigger.payload.errors -- array of error messages
Use a Code node to inspect the source pipeline's errors:
if ($trigger.payload.failureCount > 0) {
const errors = $trigger.payload.errors;
// Log or process error details
}
Validation Rules
Parameter Validation
Node Label
- Must not be empty
- Must not consist only of whitespace
- Error: "Node name is required"
Watched Pipeline IDs
- Must be provided and non-empty
- Must contain at least one pipeline ID
- Cannot include the current pipeline's own ID (self-trigger prevention)
- Each ID must be a valid UUID referencing an existing pipeline
- Error: "At least one pipeline must be selected"
- Error: "Pipeline cannot watch itself"
Watched Statuses
- Must be provided and non-empty
- Must contain at least one status
- Each status must be one of: completed, failed, completed_with_errors, cancelled, any
- Error: "At least one status must be selected"
- Error: "Invalid status: must be one of completed, failed, completed_with_errors, cancelled, any"
Max Chain Depth
- Must be a number between 1 and 3 (inclusive)
- Error: "Max chain depth must be between 1 and 3"
Enabled Flag
- Must be a boolean if provided
- Error: "Enabled must be a boolean value"
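The rules above could be expressed as a single validation pass. This is a sketch assuming the field names used elsewhere in this document; the error strings are taken verbatim from the rules above.

```javascript
const ALLOWED_STATUSES = ["completed", "failed", "completed_with_errors", "cancelled", "any"];

// Hypothetical validator mirroring the rules above; returns a list of error messages.
function validateTrigger(params, currentPipelineId) {
  const errors = [];
  if (!params.label || params.label.trim() === "") {
    errors.push("Node name is required");
  }
  if (!params.watchedPipelineIds || params.watchedPipelineIds.length === 0) {
    errors.push("At least one pipeline must be selected");
  } else if (params.watchedPipelineIds.includes(currentPipelineId)) {
    errors.push("Pipeline cannot watch itself");
  }
  if (!params.watchedStatuses || params.watchedStatuses.length === 0) {
    errors.push("At least one status must be selected");
  } else if (params.watchedStatuses.some((s) => !ALLOWED_STATUSES.includes(s))) {
    errors.push("Invalid status: must be one of completed, failed, completed_with_errors, cancelled, any");
  }
  if (params.maxChainDepth !== undefined && (params.maxChainDepth < 1 || params.maxChainDepth > 3)) {
    errors.push("Max chain depth must be between 1 and 3");
  }
  if (params.enabled !== undefined && typeof params.enabled !== "boolean") {
    errors.push("Enabled must be a boolean value");
  }
  return errors;
}
```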
Usage Examples
Post-Import Cleanup
Key configuration
- Label: Post-Import Cleanup
- Watched Pipelines: Daily Data Import
- Watched Statuses: completed
- Max Chain Depth: 1
- Enabled: true
- Settings: retry disabled, on error stop
Downstream usage: $trigger._metadata.source_execution_id to correlate with the import execution, $trigger.payload.nodeCount and $trigger.payload.successCount to verify all import steps ran.
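That verification step might be written in a Code node along these lines. A sketch only: the helper name is hypothetical, and the trigger argument stands in for $trigger.

```javascript
// Hypothetical Code-node helper: confirm every import step succeeded
// before cleanup proceeds.
function assertImportComplete(trigger) {
  const p = trigger.payload;
  if (p.successCount !== p.nodeCount) {
    throw new Error(
      `Import ${trigger._metadata.source_execution_id} incomplete: ` +
      `${p.successCount}/${p.nodeCount} nodes succeeded`
    );
  }
  return true;
}
```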
Failure Alert Pipeline
Key configuration
- Label: Critical Pipeline Failure Alert
- Watched Pipelines: Payment Processor, Order Fulfillment, Inventory Sync
- Watched Statuses: failed, completed_with_errors
- Max Chain Depth: 1
- Enabled: true
- Settings: retry enabled, on error continue
Downstream usage: $trigger._metadata.source_pipeline_name to identify which pipeline failed, $trigger.payload.errors to include error details in the alert, $trigger.payload.triggerType to understand the original trigger context.
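A Code node in the alert pipeline might assemble the message from the documented output fields. The formatter below is a sketch (the function name is hypothetical; the trigger argument stands in for $trigger).

```javascript
// Hypothetical alert formatter using the _metadata and payload fields
// documented above.
function formatFailureAlert(trigger) {
  const m = trigger._metadata;
  const p = trigger.payload;
  const lines = [
    `Pipeline "${m.source_pipeline_name}" finished with status ${m.source_status}`,
    `Originally triggered by: ${p.triggerType}`,
    `Failed nodes: ${p.failureCount}/${p.nodeCount}`,
  ];
  if (p.errors.length > 0) {
    lines.push("Errors:", ...p.errors.map((e) => `  - ${e}`));
  }
  return lines.join("\n");
}
```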
Cascading ETL Pipeline
Key configuration
- Label: Transform After Extract
- Watched Pipelines: Data Extraction Pipeline
- Watched Statuses: completed
- Max Chain Depth: 2
- Enabled: true
- Settings: retry disabled, on error stop
Downstream usage: $trigger._metadata.chain_depth to track position in the ETL chain, $trigger.payload.duration to log extraction time, $trigger._metadata.source_status to confirm successful extraction before transforming.
Audit Logging
Key configuration
- Label: Execution Audit Logger
- Watched Pipelines: All critical business pipelines
- Watched Statuses: any
- Max Chain Depth: 1
- Enabled: true
- Settings: retry enabled, on error continue
Downstream usage: Log $trigger.payload.pipelineName, $trigger.payload.status, $trigger.payload.duration, $trigger.payload.nodeCount, $trigger.payload.successCount, $trigger.payload.failureCount, and $trigger.payload.skippedCount to an audit database for compliance tracking.
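An audit-logging Code node could flatten the payload into a record like the one below before writing it out. A sketch under stated assumptions: the mapping function is hypothetical, and the database write itself is out of scope here.

```javascript
// Hypothetical mapping from trigger output to an audit record; input
// field names follow the payload table above.
function toAuditRecord(trigger) {
  const p = trigger.payload;
  return {
    pipeline: p.pipelineName,
    executionId: p.executionId,
    status: p.status,
    duration: p.duration,
    nodes: { total: p.nodeCount, succeeded: p.successCount, failed: p.failureCount, skipped: p.skippedCount },
    recordedAt: new Date().toISOString(),
  };
}
```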
Best Practices
Designing Pipeline Chains
| Practice | Rationale |
|---|---|
| Keep chains short | Use max chain depth of 1--2 for most workflows; reserve 3 for complex ETL sequences |
| Use a coordinator pattern | For deep orchestration, have one pipeline that triggers each step via webhooks or schedules |
| Avoid circular dependencies | Even though self-trigger is blocked, A → B → A is possible; use chain depth limits |
| Document chain relationships | Use the Settings description to note which pipelines watch this one |
Status Selection
- Use completed for success-dependent workflows (ETL chains, post-processing)
- Use failed or completed_with_errors for alerting and remediation pipelines
- Use any sparingly—primarily for audit logging or monitoring dashboards
- Combine statuses when a single response pipeline handles multiple outcomes
Chain Depth Configuration
| Max Depth | Use Case |
|---|---|
| 1 | Simple two-pipeline chains (source → handler). Best for alerting, cleanup, and logging. |
| 2 | Three-pipeline chains (extract → transform → load). Suitable for staged ETL workflows. |
| 3 | Four-pipeline chains. Use only when each stage performs distinct, necessary work. |
Begin with maxChainDepth: 1 and increase only when you need deeper chaining. Lower depth limits are easier to reason about and debug.
Error Handling Strategies
For Critical Chains:
- Set On Error to stopPipeline on the triggered pipeline
- Watch for failed status on the triggered pipeline with a separate alert pipeline
- Use $trigger.payload.errors to propagate error context through the chain
For Best-Effort Workflows:
- Set On Error to continueExecution
- Use $trigger.payload.failureCount to detect partial failures
- Log results for later analysis without blocking the chain
Performance Considerations
| Scenario | Recommendation |
|---|---|
| High-frequency source pipelines | Ensure downstream pipelines complete before the next trigger fires to avoid queue buildup |
| Many watchers on one pipeline | Each watcher creates an independent execution; monitor total throughput |
| Long-running triggered pipelines | Set appropriate timeouts; consider whether the source pipeline's next run will re-trigger |
| Chain depth 3 with parallel watchers | Can create exponential fan-out; map the full chain before deploying |
Enable vs. Disable
- Use the trigger's Enabled parameter to temporarily pause event-driven chaining without modifying pipeline configuration
- Disable triggers during maintenance on source pipelines to prevent cascading failures
- Document the reason for disabled triggers in the node's Description field
- Re-enable triggers only after verifying source pipelines are stable