Version: 2.0

Redis Trigger Node

Overview

The Redis Trigger Node automatically initiates MaestroHub pipelines when messages arrive on subscribed Redis Pub/Sub channels. Unlike the Redis Command and Publish connector nodes, which operate within an already-running pipeline, the Redis Trigger starts new pipeline executions in response to incoming messages, enabling fully event-driven automation.


Core Functionality

What It Does

Redis Trigger enables real-time, event-driven pipeline execution by:

1. Event-Driven Pipeline Execution: Start pipelines automatically when messages are published to subscribed Redis channels, without manual intervention or polling. Perfect for real-time event processing, inter-service messaging, and live data stream handling.

2. Automatic Subscription Management: Subscriptions are created and cleaned up automatically; no manual subscription management is required. Supports both exact channel names and glob-style pattern matching via PSUBSCRIBE.

3. Message Payload Passthrough: Incoming Redis Pub/Sub message payloads are passed directly to the pipeline, making them available to all downstream nodes via the $node and $trigger variables.


Channel Pattern Matching

When the underlying Subscribe function has Use Patterns enabled, the trigger uses Redis PSUBSCRIBE for glob-style wildcard matching.

| Pattern | Matches | Does Not Match |
|---|---|---|
| `sensor:*` | `sensor:temperature`, `sensor:humidity`, `sensor:data:raw` | `sensors:temperature` |
| `events:*:error` | `events:user:error`, `events:system:error` | `events:error` |
| `alerts:*` | `alerts:critical`, `alerts:warning` | `events:alerts` |

Note that in Redis glob patterns `*` matches any sequence of characters, including `:`, so `sensor:*` also matches multi-segment channels such as `sensor:data:raw`.

Pattern vs Exact Subscriptions
  • Exact channels: Uses Redis SUBSCRIBE command. Best for known, fixed channel names.
  • Pattern channels: Uses Redis PSUBSCRIBE command. Best for dynamic or hierarchical channel structures.

Pattern matching is configured in the Subscribe function, not in the trigger node itself. See Redis Subscribe Function for details.
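As a quick way to check which channels a pattern will catch, Python's `fnmatch` module implements glob semantics close to Redis PSUBSCRIBE patterns; in particular, `*` matches any characters, including `:`. A minimal sketch:

```python
from fnmatch import fnmatchcase

# Redis PSUBSCRIBE uses glob-style patterns; fnmatchcase is a close
# stand-in. There are no path-separator semantics: '*' crosses ':'.
def matches(pattern: str, channel: str) -> bool:
    return fnmatchcase(channel, pattern)

print(matches("sensor:*", "sensor:temperature"))       # True
print(matches("sensor:*", "sensor:data:raw"))          # True -- '*' crosses ':'
print(matches("sensor:*", "sensors:temperature"))      # False
print(matches("events:*:error", "events:user:error"))  # True
print(matches("events:*:error", "events:error"))       # False
```

This is an approximation for exploration only; the authoritative matching is done server-side by Redis.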


Configuration Options

Basic Information

| Field | Type | Description |
|---|---|---|
| Node Label | String (Required) | Display name for the node on the pipeline canvas |
| Description | String (Optional) | Explains what this trigger initiates |

Parameters

| Parameter | Type | Default | Required | Constraints | Description |
|---|---|---|---|---|---|
| Connection ID | string | "" | Yes | -- | Redis connection profile to use. |
| Function ID | string | "" | Yes | -- | Subscribe function within the connection. Only Subscribe functions are listed. |
| Trigger Mode | select | "always" | No | always / onChange | always: trigger on every message. onChange: only trigger when the payload differs from the last received value. |
| Enabled | boolean | true | No | -- | Enable/disable the trigger. When disabled, no channel subscriptions are active. |
| Dedup Max Keys | number | 1000 | If onChange | 1–10,000 | Maximum number of distinct channels tracked for change detection. The least recently used channel is evicted when exceeded. |
| Dedup TTL | select | -- | If onChange | 1h / 6h / 12h / 24h / 72h / 168h | Change-detection state TTL. Enterprise edition only. |
Function Requirement

The selected function must be a Redis Subscribe function type. Command and Publish functions cannot be used with Redis Trigger nodes.
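For intuition, the onChange deduplication described in the Parameters table can be sketched as an LRU-bounded map from channel to last payload. Class and method names here are illustrative, not MaestroHub internals:

```python
from collections import OrderedDict

class ChangeDetector:
    """Triggers only when a channel's payload differs from the last one seen.
    Evicts the least recently used channel once max_keys is exceeded."""

    def __init__(self, max_keys: int = 1000):
        self.max_keys = max_keys
        self.last = OrderedDict()  # channel -> last payload

    def should_trigger(self, channel: str, payload: str) -> bool:
        # A first message on a channel always counts as a change.
        changed = self.last.get(channel) != payload
        self.last[channel] = payload
        self.last.move_to_end(channel)
        if len(self.last) > self.max_keys:
            self.last.popitem(last=False)  # evict least recently used channel
        return changed

cd = ChangeDetector(max_keys=1000)
print(cd.should_trigger("sensor:data", "20.1"))  # True  (first message)
print(cd.should_trigger("sensor:data", "20.1"))  # False (unchanged)
print(cd.should_trigger("sensor:data", "20.5"))  # True  (changed)
```

This also shows why Dedup Max Keys matters: once a channel's state is evicted, its next message triggers again regardless of value.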


Settings

Description

A free-text area for documenting the node's purpose and behavior. Notes entered here are saved with the pipeline and visible to all team members.

Execution Settings

| Setting | Options | Default | Description |
|---|---|---|---|
| Timeout (seconds) | number | Pipeline default | Maximum execution time for this node (1–600). Leave empty for the pipeline default. |
| Retry on Timeout | Pipeline Default / Enabled / Disabled | Pipeline Default | Whether to retry the node if it times out. |
| Retry on Fail | Pipeline Default / Enabled / Disabled | Pipeline Default | Whether to retry on failure. When Enabled, shows Advanced Retry Configuration. |
| On Error | Pipeline Default / Stop Pipeline / Continue Execution | Pipeline Default | Behavior when the node fails after all retries. |

Advanced Retry Configuration (visible when Retry on Fail = Enabled)

| Field | Type | Default | Range | Description |
|---|---|---|---|---|
| Max Attempts | number | 3 | 1–10 | Maximum retry attempts. |
| Initial Delay (ms) | number | 1000 | 100–30,000 | Wait before the first retry. |
| Max Delay (ms) | number | 120000 | 1,000–300,000 | Upper bound for the backoff delay. |
| Multiplier | number | 2.0 | 1.0–5.0 | Exponential backoff multiplier. |
| Jitter Factor | number | 0.1 | 0–0.5 | Random jitter (± percentage of the delay). |
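The retry schedule implied by this table can be sketched as follows; the function name and the 1-based attempt convention are assumptions for illustration:

```python
import random

def retry_delay(attempt: int, initial_ms: float = 1000, multiplier: float = 2.0,
                max_delay_ms: float = 120000, jitter: float = 0.1) -> float:
    """Delay in ms before retry `attempt` (1-based), using the table defaults."""
    # Exponential growth from the initial delay, capped at the max delay.
    base = min(initial_ms * multiplier ** (attempt - 1), max_delay_ms)
    # Jitter spreads retries by up to +/- `jitter` fraction of the base delay.
    return base * (1 + random.uniform(-jitter, jitter))

# With defaults: ~1000 ms, ~2000 ms, ~4000 ms (each +/- 10%), capped at 120 s.
for attempt in (1, 2, 3):
    print(round(retry_delay(attempt)))
```

Jitter prevents many failed nodes from retrying in lockstep against the same Redis instance.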

Validation Rules

The Redis Trigger Node enforces these validation requirements:

Parameter Validation

Connection ID

  • Must be provided and non-empty
  • Must reference a valid Redis connection profile
  • Error: "Redis connection is required"

Function ID

  • Must be provided and non-empty
  • Must reference a valid Redis Subscribe function
  • Function must belong to the specified connection
  • Error: "Subscribe function is required"

Enabled Flag

  • Must be a boolean if provided
  • Error: "Enabled must be a boolean value"
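Taken together, these rules amount to a check like the following sketch; the parameter key names (`connectionId`, `functionId`, `enabled`) are assumed for illustration:

```python
def validate_trigger(params: dict) -> list:
    """Return the documented error messages for an invalid trigger config."""
    errors = []
    if not params.get("connectionId"):
        errors.append("Redis connection is required")
    if not params.get("functionId"):
        errors.append("Subscribe function is required")
    enabled = params.get("enabled")
    if enabled is not None and not isinstance(enabled, bool):
        errors.append("Enabled must be a boolean value")
    return errors

print(validate_trigger({}))
# ['Redis connection is required', 'Subscribe function is required']
print(validate_trigger({"connectionId": "c1", "functionId": "f1", "enabled": True}))
# []
```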

Usage Examples

Real-Time Event Processing

Scenario: Process equipment status events published by other services in real-time.

Configuration:

  • Label: Equipment Status Handler
  • Connection: Production Redis
  • Function: Subscribe to equipment:status:* (pattern mode)
  • Trigger Mode: always
  • Enabled: true

Downstream Processing:

  • Parse JSON payload to extract equipment ID and status
  • Route through condition node based on status type
  • Update MongoDB with latest equipment state
  • Send critical alerts via MS Teams if status is "fault"
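A minimal sketch of the parse-and-route step above, assuming a JSON payload with `equipmentId` and `status` fields (the field names and routes are illustrative):

```python
import json

def handle_status(raw: str) -> str:
    """Parse an equipment status payload and pick a downstream route."""
    event = json.loads(raw)
    equipment_id = event["equipmentId"]
    status = event["status"]
    if status == "fault":
        return f"alert:{equipment_id}"   # e.g. send a critical MS Teams alert
    return f"update:{equipment_id}"      # e.g. update MongoDB equipment state

print(handle_status('{"equipmentId": "pump-7", "status": "fault"}'))
# alert:pump-7
```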

Change-Only Data Capture

Scenario: Trigger data processing only when sensor values actually change, reducing unnecessary pipeline executions.

Configuration:

  • Label: Sensor Change Detector
  • Connection: Edge Redis
  • Function: Subscribe to sensor:data
  • Trigger Mode: onChange
  • Dedup Max Keys: 5000
  • Enabled: true

Downstream Processing:

  • Extract changed sensor reading
  • Compare with threshold values
  • Store only changed values in InfluxDB
  • Publish change event to downstream consumers

Inter-Pipeline Communication

Scenario: Chain multiple pipelines together using Redis Pub/Sub as the communication layer.

Configuration:

  • Label: Pipeline Stage 2 Trigger
  • Connection: Internal Redis
  • Function: Subscribe to pipeline:stage1:results
  • Trigger Mode: always
  • Enabled: true

Downstream Processing:

  • Receive processed results from Stage 1 pipeline
  • Apply additional transformations
  • Write final results to PostgreSQL
  • Publish completion notification to pipeline:stage2:complete
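To make the chaining idea concrete without a live Redis, here is a toy in-memory stand-in for Pub/Sub; the channel names follow the example above, everything else is illustrative:

```python
from collections import defaultdict

# Toy in-memory bus standing in for Redis Pub/Sub.
subscribers = defaultdict(list)

def subscribe(channel, handler):
    subscribers[channel].append(handler)

def publish(channel, message):
    for handler in subscribers[channel]:
        handler(message)

results = []

def stage2(message):
    # "Stage 2" pipeline: apply a transformation, then publish completion.
    results.append(message.upper())
    publish("pipeline:stage2:complete", "done")

subscribe("pipeline:stage1:results", stage2)
subscribe("pipeline:stage2:complete", lambda m: results.append(m))

# Stage 1 finishing is simulated by a publish; in MaestroHub this would be
# a Redis Publish node, and stage2 would be a Redis Trigger node.
publish("pipeline:stage1:results", "stage1 output")
print(results)  # ['STAGE1 OUTPUT', 'done']
```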