Version: 2.2-dev

RabbitMQ Trigger Node

Overview

The RabbitMQ Trigger Node automatically initiates MaestroHub pipelines when messages arrive on subscribed RabbitMQ queues. Unlike the RabbitMQ Consume connector node which receives messages within an already-running pipeline, the RabbitMQ Trigger starts new pipeline executions in response to incoming messages—enabling fully event-driven automation.


Core Functionality

What It Does

RabbitMQ Trigger enables real-time, event-driven pipeline execution by:

1. Event-Driven Pipeline Execution: Start pipelines automatically when RabbitMQ messages arrive, without manual intervention or polling. Perfect for task processing, event handling, and real-time data flows.

2. Automatic Subscription Management: Queue subscriptions are created and cleaned up automatically; no manual subscription management required.

3. Message Payload Passthrough: Incoming RabbitMQ message payloads are passed directly to the pipeline, along with message metadata (routing key, exchange, headers), making them available to all downstream nodes via the $trigger variable.


Reconnection Handling

MaestroHub automatically handles connection disruptions to ensure reliable message delivery.

Automatic Recovery

When a RabbitMQ connection is lost and restored:

  1. Connection Lost: The system detects the disconnection automatically
  2. Channel Recovery: If only the AMQP channel fails (not the connection), it is automatically recreated without a full reconnect
  3. Connection Restored: Queue subscriptions are automatically re-established within seconds
  4. Transparent Recovery: Pipelines continue to receive messages once the connection is restored—no manual intervention required

What This Means for Your Workflows

| Scenario | Behavior |
| --- | --- |
| Brief network interruption | Automatic resubscription after reconnection |
| AMQP channel error | Channel automatically recreated from the existing connection |
| Broker restart | Subscriptions automatically restored when the broker comes back online |
| MaestroHub restart | All triggers for enabled pipelines are restored on startup |

Message Durability

To minimize message loss during brief outages, configure your RabbitMQ queues as durable and enable message persistence (delivery mode 2) on the publisher side. With Auto Acknowledge disabled, unacknowledged messages are returned to the queue when the consumer disconnects.
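The publisher-side setup described above can be sketched with the pika client. The URL and queue name are placeholders; MaestroHub manages its own connections, so this only illustrates what the publishing application would do:

```python
# Sketch of the durability setup described above, using the pika client.
# The broker URL and queue name are placeholders; adapt to your environment.
import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost/"))
channel = connection.channel()

# Durable queue: the queue definition survives a broker restart.
channel.queue_declare(queue="task-queue", durable=True)

# delivery_mode=2 marks the message persistent, so on a durable queue it
# is written to disk and survives a broker restart.
channel.basic_publish(
    exchange="",
    routing_key="task-queue",
    body=b'{"taskId": "abc-123", "action": "process"}',
    properties=pika.BasicProperties(delivery_mode=2,
                                    content_type="application/json"),
)
connection.close()
```

Declaring the queue durable and publishing with delivery mode 2 must both be done; either one alone does not protect messages across a broker restart.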


Configuration Options

Basic Information

| Field | Type | Description |
| --- | --- | --- |
| Node Label | String (Required) | Display name for the node on the pipeline canvas |
| Description | String (Optional) | Explains what this trigger initiates |

Parameters

| Parameter | Type | Default | Required | Constraints | Description |
| --- | --- | --- | --- | --- | --- |
| Connection ID | string | "" | Yes | -- | RabbitMQ connection profile to use. |
| Function ID | string | "" | Yes | -- | Consume function within the connection. |
| Trigger Mode | select | "always" | No | always / onChange | always: Trigger on every message. onChange: Only trigger when the payload differs. |
| Enabled | boolean | true | No | -- | Enable/disable the trigger. |
| Dedup Max Keys | number | 1000 | If onChange | 1–10,000 | Maximum tracked message hashes for change detection. Uses LRU eviction when the limit is reached. |
| Dedup TTL | select | -- | If onChange | 1h / 6h / 12h / 24h / 72h / 168h | Dedup state TTL. Enterprise only. |

Function Requirement

The selected function must be a RabbitMQ Consume function type. Publish functions cannot be used with RabbitMQ Trigger nodes.
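The onChange mechanism described above (hash each payload, track up to Dedup Max Keys hashes, evict the least-recently-used entry at the limit) can be sketched as follows. This illustrates the technique only; it is not MaestroHub's internal implementation:

```python
# Sketch of onChange deduplication: trigger only when a payload hash is new,
# with LRU eviction once the key limit is reached. Illustrative only.
import hashlib
from collections import OrderedDict

class ChangeDetector:
    def __init__(self, max_keys: int = 1000):
        self.max_keys = max_keys
        self.seen: OrderedDict[str, None] = OrderedDict()

    def should_trigger(self, payload: bytes) -> bool:
        key = hashlib.sha256(payload).hexdigest()
        if key in self.seen:
            self.seen.move_to_end(key)     # refresh LRU position
            return False                   # unchanged payload: skip trigger
        self.seen[key] = None
        if len(self.seen) > self.max_keys:
            self.seen.popitem(last=False)  # evict least-recently-used hash
        return True

detector = ChangeDetector(max_keys=2)
assert detector.should_trigger(b'{"temp": 21}') is True   # first sighting
assert detector.should_trigger(b'{"temp": 21}') is False  # duplicate
assert detector.should_trigger(b'{"temp": 22}') is True   # changed value
```

Note that once a hash is evicted, a repeat of that payload triggers again, which is why sizing Dedup Max Keys to your distinct-message population matters.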


Settings

Description

A free-text area for documenting the node's purpose and behavior. Notes entered here are saved with the pipeline and visible to all team members.

Execution Settings

| Setting | Options | Default | Description |
| --- | --- | --- | --- |
| Timeout (seconds) | number | Pipeline default | Maximum execution time for this node (1–600). Leave empty for the pipeline default. |
| Retry on Timeout | Pipeline Default / Enabled / Disabled | Pipeline Default | Whether to retry the node if it times out. |
| Retry on Fail | Pipeline Default / Enabled / Disabled | Pipeline Default | Whether to retry on failure. When Enabled, shows Advanced Retry Configuration. |
| On Error | Pipeline Default / Stop Pipeline / Continue Execution | Pipeline Default | Behavior when the node fails after all retries. |

Advanced Retry Configuration (visible when Retry on Fail = Enabled)

| Field | Type | Default | Range | Description |
| --- | --- | --- | --- | --- |
| Max Attempts | number | 3 | 1–10 | Maximum retry attempts. |
| Initial Delay (ms) | number | 1000 | 100–30,000 | Wait before the first retry. |
| Max Delay (ms) | number | 120,000 | 1,000–300,000 | Upper bound for the backoff delay. |
| Multiplier | number | 2.0 | 1.0–5.0 | Exponential backoff multiplier. |
| Jitter Factor | number | 0.1 | 0–0.5 | Random jitter (± percentage) applied to each delay. |
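With the defaults above, the delay before attempt n grows as initial delay × multiplier^(n-1), capped at the maximum delay and randomized by the jitter factor. A sketch of that schedule (the exact formula MaestroHub applies is an assumption):

```python
# Sketch of an exponential backoff schedule matching the defaults above
# (initial 1000 ms, multiplier 2.0, max 120,000 ms, jitter 0.1).
# The exact formula MaestroHub uses internally is an assumption.
import random

def backoff_delay_ms(attempt: int, initial: float = 1000,
                     multiplier: float = 2.0, max_delay: float = 120000,
                     jitter: float = 0.1) -> float:
    base = min(initial * multiplier ** (attempt - 1), max_delay)
    # A jitter factor of 0.1 varies the delay by up to ±10%, which
    # spreads out retries from many nodes failing at the same moment.
    return base * (1 + random.uniform(-jitter, jitter))

# Base delays without jitter: 1000, 2000, 4000 ms for attempts 1–3.
```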

Output Data Structure

When a RabbitMQ message triggers pipeline execution, the following data is available to downstream nodes via the $trigger variable.

Output Format

{
  "_metadata": {
    "type": "rabbitmq_trigger",
    "connection_id": "bf29be94-fc0a-4dc4-8e5c-092f1b74eb4b",
    "function_id": "aef374c3-aa2b-454e-aabc-5657faac5950",
    "queue": "task-queue",
    "exchange": "amq.topic",
    "routingKey": "tasks.new",
    "consumerTag": "ctag-maestrohub-001",
    "contentType": "application/json",
    "timestamp": "2026-02-13T10:30:00Z"
  },
  "payload": {
    "taskId": "abc-123",
    "action": "process",
    "priority": 5
  }
}

Accessing Message Data

In downstream nodes, use the $trigger variable to access the trigger output:

| Field | Expression | Description |
| --- | --- | --- |
| Message Payload | $trigger.payload | The parsed RabbitMQ message content (JSON object or raw value) |
| Queue Name | $trigger._metadata.queue | The queue the message was consumed from |
| Exchange | $trigger._metadata.exchange | The exchange the message was published to |
| Routing Key | $trigger._metadata.routingKey | The routing key of the message |
| Content Type | $trigger._metadata.contentType | MIME type of the message payload |
| Consumer Tag | $trigger._metadata.consumerTag | The consumer identifier |
| Connection ID | $trigger._metadata.connection_id | The RabbitMQ connection profile used |
| Function ID | $trigger._metadata.function_id | The RabbitMQ Consume function that received the message |
| Trigger Type | $trigger._metadata.type | Always rabbitmq_trigger |
| Timestamp | $trigger._metadata.timestamp | When the message was received |

Accessing Nested Payload Data

If your RabbitMQ messages contain JSON, access nested fields directly:

  • $trigger.payload.taskId — Access a specific field
  • $trigger.payload — Access the entire message object
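Expressions like $trigger.payload.taskId resolve by walking the trigger output one key at a time. A minimal sketch of that traversal against the example output shown earlier (this illustrates the lookup only, not MaestroHub's expression engine):

```python
# Sketch of dot-path resolution, as used by expressions such as
# $trigger.payload.taskId. Illustrative; not MaestroHub's expression engine.
def resolve(obj, path: str):
    for key in path.split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None  # a missing field resolves to nothing
        obj = obj[key]
    return obj

trigger = {
    "_metadata": {"type": "rabbitmq_trigger", "queue": "task-queue"},
    "payload": {"taskId": "abc-123", "action": "process", "priority": 5},
}
assert resolve(trigger, "payload.taskId") == "abc-123"
assert resolve(trigger, "_metadata.queue") == "task-queue"
assert resolve(trigger, "payload.missing") is None
```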

Validation Rules

The RabbitMQ Trigger Node enforces these validation requirements:

Parameter Validation

Connection ID

  • Must be provided and non-empty
  • Must reference a valid RabbitMQ connection profile
  • Error: "connectionId is required"

Function ID

  • Must be provided and non-empty
  • Must reference a valid RabbitMQ Consume function
  • Function must belong to the specified connection
  • Error: "functionId is required"

Enabled Flag

  • Must be a boolean if provided
  • Error: "enabled must be a boolean"

Settings Validation

On Error

  • Must be one of: stopPipeline, continueExecution, retryNode
  • Error: "onError must be a valid error handling option"

Usage Examples

Task Queue Processing

Scenario: Process incoming work items from a task queue in real-time.

Configuration:

  • Label: Task Queue Processor
  • Connection: Production RabbitMQ Broker
  • Function: Consume from task-queue with Auto Acknowledge disabled
  • Trigger Mode: always
  • Enabled: true

Downstream Processing:

  • Parse JSON payload to extract task details
  • Route to appropriate processing node based on task type
  • Update task status in database
  • Publish result to response queue
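The "route by task type" step above amounts to a dispatch on a field of the parsed payload. A sketch, where the node names are hypothetical placeholders rather than MaestroHub identifiers:

```python
# Hypothetical dispatch for the "route by task type" step above.
# Node names are placeholders, not MaestroHub identifiers.
def route_task(payload: dict) -> str:
    handlers = {
        "process": "processing-node",
        "cancel": "cleanup-node",
        "report": "reporting-node",
    }
    # Unknown actions fall through to a dead-letter path for later analysis.
    return handlers.get(payload.get("action"), "dead-letter-node")

assert route_task({"taskId": "abc-123", "action": "process"}) == "processing-node"
assert route_task({"action": "unknown"}) == "dead-letter-node"
```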

Event-Driven Order Processing

Scenario: Automatically process orders published to a topic exchange.

Configuration:

  • Label: Order Event Handler
  • Connection: Enterprise RabbitMQ
  • Function: Consume from orders.processing queue (bound to orders topic exchange with routing key orders.new.*)
  • Trigger Mode: always
  • Enabled: true

Downstream Processing:

  • Validate order structure
  • Check inventory availability
  • Process payment
  • Send confirmation notification
  • Publish order status update

Sensor Data Deduplication

Scenario: Ingest sensor readings but only trigger processing when values change.

Configuration:

  • Label: Sensor Change Detector
  • Connection: IoT RabbitMQ Broker
  • Function: Consume from sensor-readings queue
  • Trigger Mode: onChange
  • Dedup Max Keys: 5000
  • Enabled: true

Downstream Processing:

  • Extract sensor ID and reading value
  • Store changed readings in time-series database
  • Evaluate alert thresholds
  • Publish alerts for abnormal values

Best Practices

Connection Health Monitoring

  • Configure your RabbitMQ connection with appropriate Heartbeat intervals (default: 60 seconds) for your network
  • Monitor connection status through MaestroHub's connection health dashboard
  • MaestroHub automatically recovers from both channel-level and connection-level failures

Designing for Reliability

| Practice | Rationale |
| --- | --- |
| Disable Auto Acknowledge for critical messages | Ensures messages are re-queued if processing fails |
| Use durable queues and persistent messages | Messages survive broker restarts |
| Set an appropriate Prefetch Count | Controls throughput vs. memory; start with 1 for reliability, increase for throughput |
| Use exclusive consumers sparingly | Prevents competing consumers; only use when single-consumer semantics are required |
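A consumer configured along these lines can be sketched with the pika client. MaestroHub's Consume functions configure this for you; the URL and queue name here are placeholders:

```python
# Sketch of a reliability-oriented consumer using the pika client:
# manual acknowledgments plus a prefetch count of 1. Placeholders throughout.
import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost/"))
channel = connection.channel()
channel.basic_qos(prefetch_count=1)  # at most one unacknowledged message at a time

def process(body: bytes) -> None:
    print("processing", body)        # placeholder for real processing logic

def handle(ch, method, properties, body):
    process(body)
    # Ack only after successful processing; on failure or disconnect the
    # broker re-queues the message instead of losing it.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="task-queue", on_message_callback=handle,
                      auto_ack=False)
channel.start_consuming()
```

With auto_ack=False and prefetch 1, a crashed consumer never strands more than one in-flight message; raising the prefetch trades that guarantee for throughput.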

Error Handling Strategies

For Critical Workflows:

  • Set On Error to stopPipeline
  • Monitor pipeline execution failures
  • Implement alerting for stopped pipelines

For Best-Effort Processing:

  • Set On Error to continueExecution
  • Log failed messages for later analysis
  • Ensure downstream nodes handle partial failures gracefully

Performance Considerations

| Scenario | Recommendation |
| --- | --- |
| High message volume | Increase the prefetch count; use competing consumers with multiple pipeline instances |
| Large payloads | Ensure adequate memory for payload processing; consider chunking large messages |
| Burst traffic | RabbitMQ queues naturally buffer bursts; monitor queue depth |
| Multiple pipelines on the same queue | RabbitMQ distributes messages round-robin across consumers automatically |

Enable vs. Disable

  • Use the trigger's Enabled parameter to temporarily pause message processing without changing pipeline state
  • Disable triggers during maintenance windows to prevent message processing
  • Document the reason for disabled triggers in the node's Description field