Version: 2.2-dev

Pipeline Event Trigger Node

Overview

The Pipeline Event Trigger Node automatically initiates a MaestroHub pipeline when another pipeline finishes execution with a matching status. This enables chaining workflows—for example, running a cleanup pipeline after a data-import pipeline completes, or triggering an alert pipeline when a critical workflow fails. Built-in chain-depth tracking and self-trigger prevention ensure that cascading pipelines cannot create infinite loops.


Core Functionality

What It Does

1. Cross-Pipeline Automation: Start pipelines automatically when one or more watched pipelines reach a specified execution status, without manual intervention, polling, or external schedulers.

2. Status-Based Filtering: Subscribe to specific execution outcomes (completed, failed, completed_with_errors, cancelled), or use the wildcard any to match all terminal statuses.

3. Chain-Depth Tracking: Every execution triggered by a Pipeline Event Trigger carries a chain_depth counter. When a triggered pipeline itself has a Pipeline Event Trigger, the depth increments. The configurable Max Chain Depth (1–3) caps the cascade length and prevents runaway loops.

4. Self-Trigger Prevention: A pipeline cannot watch itself. This is enforced at the domain level during pipeline validation, ensuring that a pipeline's own execution events never re-trigger the same pipeline.

5. Multi-Pipeline Watching: A single trigger can monitor multiple source pipelines. When any of the watched pipelines finishes with a matching status, the trigger fires.
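The status filter described in point 2 can be sketched as follows (the statusMatches helper and event shape are illustrative assumptions, not the actual MaestroHub internals):

```javascript
// Illustrative sketch of status-based filtering (hypothetical helper, not MaestroHub code).
// "any" matches every terminal status; otherwise the status must be listed explicitly.
const TERMINAL = ["completed", "failed", "completed_with_errors", "cancelled"];

function statusMatches(watchedStatuses, status) {
  if (!TERMINAL.includes(status)) return false; // only terminal statuses fire triggers
  return watchedStatuses.includes("any") || watchedStatuses.includes(status);
}

statusMatches(["completed", "failed"], "failed");  // true: listed explicitly
statusMatches(["completed"], "cancelled");         // false: not watched
statusMatches(["any"], "completed_with_errors");   // true: wildcard
```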


Chain Depth & Loop Prevention

How Chain Depth Works

Chain depth tracks how many Pipeline Event Triggers have fired in sequence:

  1. Depth 0: A pipeline runs via manual, schedule, webhook, or any non-pipeline-event trigger
  2. Depth 1: A Pipeline Event Trigger fires in response to the depth-0 execution
  3. Depth 2: Another Pipeline Event Trigger fires in response to the depth-1 execution
  4. Depth N: Continues until Max Chain Depth is reached

When the current depth equals or exceeds the trigger's maxChainDepth, the event is silently dropped—the trigger does not fire.

Example

Pipeline A (manual run, depth 0)
└─ triggers Pipeline B (depth 1)
   └─ triggers Pipeline C (depth 2)
      └─ Pipeline C's trigger has maxChainDepth=2 → would be depth 3 → BLOCKED
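The depth check in this example can be sketched as follows (the shouldFire helper and event shape are illustrative assumptions, not the actual MaestroHub internals):

```javascript
// Illustrative sketch of the chain-depth guard (hypothetical helper, not MaestroHub code).
// An execution-finished event carries the depth of the execution that produced it;
// the trigger fires only while that depth is below its maxChainDepth.
function shouldFire(event, maxChainDepth) {
  const depth = event.chain_depth ?? 0; // missing depth metadata defaults to 0
  return depth < maxChainDepth;
}

// Pipeline C's trigger with maxChainDepth = 2:
shouldFire({ chain_depth: 1 }, 2); // fires: the new execution runs at depth 2
shouldFire({ chain_depth: 2 }, 2); // dropped: depth 2 >= maxChainDepth
```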

Safety Guarantees

| Protection | Mechanism |
|---|---|
| Self-trigger | Domain validation rejects a pipeline that watches itself |
| Infinite chains | Chain depth counter compared against maxChainDepth (1–3) |
| Missing depth metadata | Defaults to depth 0 if metadata is absent |

Chain Depth Limits

The maximum allowed chain depth is 3. Design your pipeline chains to complete meaningful work within this limit. If you need deeper orchestration, consider using a single coordinator pipeline that triggers each step sequentially.


Configuration Options

Basic Information

| Field | Type | Description |
|---|---|---|
| Node Label | String (Required) | Display name for the node on the pipeline canvas. Must be non-empty (trimmed). |
| Description | String (Optional) | Explains what this trigger monitors and initiates. |

Parameters

The trigger configuration is organized across two tabs in the UI: Parameters and Settings.

| Parameter | Type | Default | Required | Constraints | Description |
|---|---|---|---|---|---|
| Watched Pipelines | string[] | [] | Yes | At least 1 pipeline; cannot include the current pipeline | Pipelines whose execution events trigger this pipeline. |
| Watched Statuses | string[] | ["completed", "failed"] | Yes | At least 1 status from the allowed set | Execution statuses that trigger this pipeline. |
| Max Chain Depth | number | 3 | No | 1–3 | Maximum cascade depth for chained Pipeline Event Triggers. |
| Enabled | boolean | true | No | – | Enable/disable the trigger. |

Watched Statuses

| Status | Description |
|---|---|
| completed | Pipeline finished successfully; all nodes passed. |
| failed | Pipeline stopped due to a node failure (after retries were exhausted). |
| completed_with_errors | Pipeline finished but some nodes encountered errors (with On Error set to continueExecution). |
| cancelled | Pipeline execution was manually or programmatically cancelled. |
| any | Wildcard; matches all terminal statuses above. |

Self-Trigger Prevention

The pipeline selector in the UI automatically excludes the current pipeline from the list. At the backend, domain validation also rejects a pipeline that includes its own ID in watchedPipelineIds.


Settings

Description

A free-text area for documenting the node's purpose and behavior. Notes entered here are saved with the pipeline and visible to all team members.

Execution Settings

| Setting | Options | Default | Description |
|---|---|---|---|
| Timeout (seconds) | number | Pipeline default | Maximum execution time for this node (1–600). Leave empty for the pipeline default. |
| Retry on Timeout | Pipeline Default / Enabled / Disabled | Pipeline Default | Whether to retry the node if it times out. |
| Retry on Fail | Pipeline Default / Enabled / Disabled | Pipeline Default | Whether to retry on failure. When Enabled, shows Advanced Retry Configuration. |
| On Error | Pipeline Default / Stop Pipeline / Continue Execution | Pipeline Default | Behavior when the node fails after all retries. |

Advanced Retry Configuration (visible when Retry on Fail = Enabled)

| Field | Type | Default | Range | Description |
|---|---|---|---|---|
| Max Attempts | number | 3 | 1–10 | Maximum retry attempts. |
| Initial Delay (ms) | number | 1000 | 100–30,000 | Wait before the first retry. |
| Max Delay (ms) | number | 120,000 | 1,000–300,000 | Upper bound for the backoff delay. |
| Multiplier | number | 2.0 | 1.0–5.0 | Exponential backoff multiplier. |
| Jitter Factor | number | 0.1 | 0–0.5 | Random jitter (± percentage). |

Output Data Structure

When a watched pipeline finishes with a matching status, the trigger produces a structured output with two top-level keys: _metadata and payload.

Output Format

```json
{
  "_metadata": {
    "type": "pipeline_event_trigger",
    "source_pipeline_id": "bf29be94-fc0a-4dc4-8e5c-092f1b74eb4b",
    "source_pipeline_name": "Daily Data Import",
    "source_execution_id": "aef374c3-aa2b-454e-aabc-5657faac5950",
    "source_status": "completed",
    "source_duration": "2m15.3s",
    "chain_depth": 1
  },
  "payload": {
    "executionId": "aef374c3-aa2b-454e-aabc-5657faac5950",
    "pipelineId": "bf29be94-fc0a-4dc4-8e5c-092f1b74eb4b",
    "pipelineName": "Daily Data Import",
    "status": "completed",
    "duration": "2m15.3s",
    "nodeCount": 8,
    "successCount": 8,
    "failureCount": 0,
    "skippedCount": 0,
    "triggerType": "schedule",
    "errors": []
  }
}
```

_metadata Fields

| Field | Type | Description |
|---|---|---|
| type | string | Always "pipeline_event_trigger". |
| source_pipeline_id | string | UUID of the watched pipeline that completed. |
| source_pipeline_name | string | Display name of the source pipeline. |
| source_execution_id | string | UUID of the specific execution that triggered this event. |
| source_status | string | Terminal status of the source execution ("completed", "failed", "completed_with_errors", "cancelled"). |
| source_duration | string | Total execution duration as a Go duration string (e.g., "2m15.3s", "500ms"). |
| chain_depth | number | Current depth in the pipeline event chain (starts at 1 for the first triggered pipeline). |

payload Fields

| Field | Type | Description |
|---|---|---|
| executionId | string | UUID of the source execution (same as _metadata.source_execution_id). |
| pipelineId | string | UUID of the source pipeline (same as _metadata.source_pipeline_id). |
| pipelineName | string | Display name of the source pipeline. |
| status | string | Terminal status of the source execution. |
| duration | string | Total execution duration as a Go duration string. |
| nodeCount | number | Total number of nodes in the source pipeline. |
| successCount | number | Number of nodes that completed successfully. |
| failureCount | number | Number of nodes that failed. |
| skippedCount | number | Number of nodes that were skipped. |
| triggerType | string | How the source pipeline was originally triggered (e.g., "schedule", "webhook", "manual"). |
| errors | array | Array of error messages from failed nodes (empty if no errors). |

Referencing in Downstream Nodes

Use expressions to access pipeline event data in subsequent nodes:

  • $trigger._metadata.type -- always "pipeline_event_trigger"
  • $trigger._metadata.source_pipeline_id -- UUID of the source pipeline
  • $trigger._metadata.source_pipeline_name -- name of the source pipeline
  • $trigger._metadata.source_execution_id -- UUID of the source execution
  • $trigger._metadata.source_status -- terminal status ("completed", "failed", etc.)
  • $trigger._metadata.source_duration -- execution duration string
  • $trigger._metadata.chain_depth -- current chain depth
  • $trigger.payload.nodeCount -- total nodes in the source pipeline
  • $trigger.payload.successCount -- nodes that succeeded
  • $trigger.payload.failureCount -- nodes that failed
  • $trigger.payload.skippedCount -- nodes that were skipped
  • $trigger.payload.triggerType -- how the source pipeline was triggered
  • $trigger.payload.errors -- array of error messages

Checking for Errors

Use a Code node to inspect the source pipeline's errors:

```javascript
if ($trigger.payload.failureCount > 0) {
  const errors = $trigger.payload.errors;
  // Log or process error details
}
```

Validation Rules

Parameter Validation

Node Label

  • Must not be empty
  • Must not consist only of whitespace
  • Error: "Node name is required"

Watched Pipeline IDs

  • Must be provided and non-empty
  • Must contain at least one pipeline ID
  • Cannot include the current pipeline's own ID (self-trigger prevention)
  • Each ID must be a valid UUID referencing an existing pipeline
  • Error: "At least one pipeline must be selected"
  • Error: "Pipeline cannot watch itself"

Watched Statuses

  • Must be provided and non-empty
  • Must contain at least one status
  • Each status must be one of: completed, failed, completed_with_errors, cancelled, any
  • Error: "At least one status must be selected"
  • Error: "Invalid status: must be one of completed, failed, completed_with_errors, cancelled, any"

Max Chain Depth

  • Must be a number between 1 and 3 (inclusive)
  • Error: "Max chain depth must be between 1 and 3"

Enabled Flag

  • Must be a boolean if provided
  • Error: "Enabled must be a boolean value"
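The parameter rules above can be sketched as a single validation pass (an illustrative helper, not the actual backend code; it omits the Node Label and UUID-existence checks, and the error strings mirror those listed above):

```javascript
// Illustrative sketch of trigger-parameter validation (hypothetical helper).
// Returns the first matching error message, or null when the config is valid.
const ALLOWED = ["completed", "failed", "completed_with_errors", "cancelled", "any"];

function validateTrigger(params, currentPipelineId) {
  const { watchedPipelineIds = [], watchedStatuses = [], maxChainDepth = 3 } = params;
  if (watchedPipelineIds.length === 0) return "At least one pipeline must be selected";
  if (watchedPipelineIds.includes(currentPipelineId)) return "Pipeline cannot watch itself";
  if (watchedStatuses.length === 0) return "At least one status must be selected";
  if (!watchedStatuses.every((s) => ALLOWED.includes(s)))
    return "Invalid status: must be one of completed, failed, completed_with_errors, cancelled, any";
  if (maxChainDepth < 1 || maxChainDepth > 3) return "Max chain depth must be between 1 and 3";
  return null;
}
```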

Usage Examples

Post-Import Cleanup

Key configuration

  • Label: Post-Import Cleanup
  • Watched Pipelines: Daily Data Import
  • Watched Statuses: completed
  • Max Chain Depth: 1
  • Enabled: true
  • Settings: retry disabled, on error stop

Downstream usage: $trigger._metadata.source_execution_id to correlate with the import execution, $trigger.payload.nodeCount and $trigger.payload.successCount to verify all import steps ran.
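A possible Code-node check for this scenario (at runtime MaestroHub supplies $trigger; the values below are sample data for illustration):

```javascript
// Sample $trigger data; in a real run this object is provided by the platform.
const $trigger = {
  _metadata: { source_execution_id: "aef374c3-aa2b-454e-aabc-5657faac5950" },
  payload: { nodeCount: 8, successCount: 8 },
};

// Verify every import step ran before doing any cleanup work.
const allStepsRan = $trigger.payload.successCount === $trigger.payload.nodeCount;
const correlationId = $trigger._metadata.source_execution_id;
if (!allStepsRan) {
  throw new Error(`Import ${correlationId} incomplete; skipping cleanup`);
}
```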

Failure Alert Pipeline

Key configuration

  • Label: Critical Pipeline Failure Alert
  • Watched Pipelines: Payment Processor, Order Fulfillment, Inventory Sync
  • Watched Statuses: failed, completed_with_errors
  • Max Chain Depth: 1
  • Enabled: true
  • Settings: retry enabled, on error continue

Downstream usage: $trigger._metadata.source_pipeline_name to identify which pipeline failed, $trigger.payload.errors to include error details in the alert, $trigger.payload.triggerType to understand the original trigger context.
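A possible Code-node step that formats the alert (sample $trigger data; at runtime the object is supplied by MaestroHub):

```javascript
// Sample $trigger data for a failed Payment Processor run.
const $trigger = {
  _metadata: { source_pipeline_name: "Payment Processor", source_status: "failed" },
  payload: { triggerType: "schedule", errors: ["Charge node: gateway timeout"] },
};

// Build a human-readable alert from the failure context.
const alertMessage = [
  `Pipeline "${$trigger._metadata.source_pipeline_name}" finished with status ` +
    `"${$trigger._metadata.source_status}" (original trigger: ${$trigger.payload.triggerType}).`,
  ...$trigger.payload.errors.map((e) => `- ${e}`),
].join("\n");
```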

Cascading ETL Pipeline

Key configuration

  • Label: Transform After Extract
  • Watched Pipelines: Data Extraction Pipeline
  • Watched Statuses: completed
  • Max Chain Depth: 2
  • Enabled: true
  • Settings: retry disabled, on error stop

Downstream usage: $trigger._metadata.chain_depth to track position in the ETL chain, $trigger.payload.duration to log extraction time, $trigger._metadata.source_status to confirm successful extraction before transforming.
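A possible guard at the start of the transform pipeline (sample $trigger data; supplied by MaestroHub at runtime):

```javascript
// Sample $trigger data from the extraction pipeline's completion event.
const $trigger = {
  _metadata: { chain_depth: 1, source_status: "completed", source_duration: "4m2.1s" },
};

// Only transform when extraction actually succeeded; log timing and chain position.
const safeToTransform = $trigger._metadata.source_status === "completed";
const logLine = `Extract took ${$trigger._metadata.source_duration} ` +
  `(chain depth ${$trigger._metadata.chain_depth})`;
```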

Audit Logging

Key configuration

  • Label: Execution Audit Logger
  • Watched Pipelines: All critical business pipelines
  • Watched Statuses: any
  • Max Chain Depth: 1
  • Enabled: true
  • Settings: retry enabled, on error continue

Downstream usage: Log $trigger.payload.pipelineName, $trigger.payload.status, $trigger.payload.duration, $trigger.payload.nodeCount, $trigger.payload.successCount, $trigger.payload.failureCount, and $trigger.payload.skippedCount to an audit database for compliance tracking.
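A possible Code node assembling an audit record from those fields (sample $trigger data; the auditRecord shape and loggedAt field are illustrative):

```javascript
// Sample $trigger data; provided by MaestroHub at runtime.
const $trigger = {
  payload: {
    pipelineName: "Order Fulfillment", status: "completed", duration: "1m3.2s",
    nodeCount: 5, successCount: 5, failureCount: 0, skippedCount: 0,
  },
};

// Collect the audit fields into one record, stamped with the logging time.
const { pipelineName, status, duration, nodeCount, successCount, failureCount, skippedCount } =
  $trigger.payload;
const auditRecord = {
  pipelineName, status, duration, nodeCount, successCount, failureCount, skippedCount,
  loggedAt: new Date().toISOString(), // hypothetical extra field for the audit database
};
```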


Best Practices

Designing Pipeline Chains

| Practice | Rationale |
|---|---|
| Keep chains short | Use a max chain depth of 1–2 for most workflows; reserve 3 for complex ETL sequences |
| Use a coordinator pattern | For deep orchestration, have one pipeline trigger each step via webhooks or schedules |
| Avoid circular dependencies | Even though self-trigger is blocked, A → B → A is possible; use chain depth limits |
| Document chain relationships | Use the Settings description to note which pipelines watch this one |

Status Selection

  • Use completed for success-dependent workflows (ETL chains, post-processing)
  • Use failed or completed_with_errors for alerting and remediation pipelines
  • Use any sparingly—primarily for audit logging or monitoring dashboards
  • Combine statuses when a single response pipeline handles multiple outcomes

Chain Depth Configuration

| Max Depth | Use Case |
|---|---|
| 1 | Simple two-pipeline chains (source → handler). Best for alerting, cleanup, and logging. |
| 2 | Three-pipeline chains (extract → transform → load). Suitable for staged ETL workflows. |
| 3 | Four-pipeline chains. Use only when each stage performs distinct, necessary work. |

Start Conservative

Begin with maxChainDepth: 1 and increase only when you need deeper chaining. Lower depth limits are easier to reason about and debug.

Error Handling Strategies

For Critical Chains:

  • Set On Error to stopPipeline on the triggered pipeline
  • Watch for failed status on the triggered pipeline with a separate alert pipeline
  • Use $trigger.payload.errors to propagate error context through the chain

For Best-Effort Workflows:

  • Set On Error to continueExecution
  • Use $trigger.payload.failureCount to detect partial failures
  • Log results for later analysis without blocking the chain

Performance Considerations

| Scenario | Recommendation |
|---|---|
| High-frequency source pipelines | Ensure downstream pipelines complete before the next trigger fires to avoid queue buildup |
| Many watchers on one pipeline | Each watcher creates an independent execution; monitor total throughput |
| Long-running triggered pipelines | Set appropriate timeouts; consider whether the source pipeline's next run will re-trigger |
| Chain depth 3 with parallel watchers | Can create exponential fan-out; map the full chain before deploying |

Enable vs. Disable

  • Use the trigger's Enabled parameter to temporarily pause event-driven chaining without modifying pipeline configuration
  • Disable triggers during maintenance on source pipelines to prevent cascading failures
  • Document the reason for disabled triggers in the node's Description field
  • Re-enable triggers only after verifying source pipelines are stable