Version: 2.0-beta.1

Orchestrate - Pipeline Automation & Workflows

Orchestrate is where data comes to life. Build visual pipelines that automate data collection, transformation, analysis, and action, all without writing a single line of code.

Why Orchestrate?

Manufacturing generates mountains of data, but turning that data into action requires complex logic: filtering, transforming, aggregating, routing, and triggering alerts. Traditional approaches require custom coding and IT resources. Orchestrate changes this by:

  • Empowering domain experts – Build pipelines without coding skills using drag-and-drop design
  • Accelerating development – What took weeks now takes hours with visual workflows
  • Ensuring reliability – Built-in retry logic, error handling, and execution monitoring
  • Enabling collaboration – Visual pipelines are self-documenting and easy to understand
  • Scaling effortlessly – Parallel execution, concurrency control, and enterprise-grade performance

Core Concepts

Pipelines

Pipelines are visual workflows that connect data sources, transformations, and actions into automated processes:

  • Visual designer – Drag-and-drop canvas for building workflows
  • Execution strategies – Level-based parallelization or sequential-branch control
  • Overlap modes – Queue, restart, or skip runs that would overlap (semantics sketched after this list)
  • Real-time monitoring – Live execution tracking with performance metrics
  • Version control – Automatic change tracking with full rollback capability
  • Error handling – Retry policies, fallbacks, and continue-on-error logic
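
What happens when a trigger fires while a run is still in progress depends on the overlap mode. The simulation below is a minimal sketch of those semantics, not Orchestrate's implementation; the pipeline object, timing model, and log messages are all illustrative assumptions.

```javascript
// Self-contained simulation of the three overlap modes. All names and the
// timing model here are illustrative assumptions, not Orchestrate internals.
function makePipeline(mode, durationMs) {
  let running = false;   // is a run in flight?
  let queued = false;    // at most one pending re-run (queue mode)
  let cancelled = false; // restart mode signals the current run to stop

  async function execute() {
    cancelled = false;
    const started = Date.now();
    while (Date.now() - started < durationMs) {
      if (cancelled) { console.log("run cancelled"); return; }
      await new Promise(r => setTimeout(r, 10)); // simulated node work
    }
    console.log("run finished");
  }

  async function runLoop() {
    do {
      queued = false;
      await execute();
    } while (queued);
    running = false;
  }

  return function trigger() {
    if (!running) { running = true; runLoop(); return; }
    if (mode === "skip") return;                                 // ignore this trigger
    if (mode === "queue") queued = true;                         // re-run after current finishes
    if (mode === "restart") { cancelled = true; queued = true; } // stop and start fresh
  };
}

// Second trigger arrives mid-run: "skip" ignores it, "queue" runs twice,
// "restart" cancels the first run and starts over.
const trigger = makePipeline("restart", 100);
trigger();
setTimeout(trigger, 30);
```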

Common pipeline patterns:

  • Collect sensor data → Filter outliers → Aggregate by shift → Write to database
  • Query ERP system → Transform to standard format → Publish to UNS → Send alerts
  • Monitor machine status → Detect anomalies → Trigger maintenance workflow → Log events

Nodes

Nodes are the building blocks of pipelines. Each node performs a specific action and connects to other nodes to create workflows:

Triggers

Define when pipelines execute:

  • Manual – On-demand execution for testing and ad-hoc analysis
  • Scheduler – Cron-based triggers for periodic reports and batch jobs (examples below)
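
Cron expressions conventionally use a five-field minute, hour, day-of-month, month, day-of-week layout. Assuming the Scheduler node follows that convention, a few patterns typical for periodic reports and batch jobs:

```javascript
// Standard five-field cron layout: minute hour day-of-month month day-of-week
const schedules = {
  everyFifteenMinutes: "*/15 * * * *",  // e.g. meter polling
  topOfEveryHour: "0 * * * *",
  shiftChanges: "0 6,14,22 * * *",      // 06:00, 14:00, and 22:00 daily
  nightlyBatch: "30 2 * * *",           // 02:30 every night, e.g. ERP sync
  weekdayMorningReport: "0 7 * * 1-5",  // 07:00 Monday through Friday
};
```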

Connectors

Read and write data from any source:

  • OPC UA, Modbus, Siemens S7 – Industrial protocol nodes
  • MQTT, REST, SQL – Enterprise and IoT integration nodes (MQTT sketched below)
  • UNS Publish/Subscribe – Unified Namespace data exchange
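
A connector node packages the client plumbing you would otherwise write yourself. As a point of reference, the open-source mqtt.js client below does roughly what an MQTT subscribe connector does for you; the broker URL and topic are placeholders:

```javascript
// Roughly what an MQTT subscribe connector does for you, written by hand
// with the open-source mqtt.js client. Broker URL and topic are placeholders.
const mqtt = require("mqtt");

const client = mqtt.connect("mqtt://broker.example.com:1883");

client.on("connect", () => {
  client.subscribe("plant/line1/+/temperature"); // '+' matches one topic level
});

client.on("message", (topic, payload) => {
  const reading = JSON.parse(payload.toString());
  console.log(topic, reading); // hand off to the next node in the pipeline
});
```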

Transforms

Shape and modify your data:

  • Buffer – Handle backpressure and smooth data spikes
  • Set – Update fields, add metadata, enrich data
  • Map & Filter – Select, rename, transform data structures
  • Aggregate – Roll up data across time windows or groups (see the filter-then-aggregate sketch below)
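
Expressed as plain functions, these transforms are ordinary array operations. Here is a sketch of the "filter outliers, then aggregate" pattern from the pipeline examples above; the field names and plausibility bounds are made up for illustration:

```javascript
// Filter implausible readings, then aggregate: the array-function equivalent
// of chaining Filter and Aggregate nodes. Field names and bounds are illustrative.
const readings = [
  { machine: "press-1", temperature: 71.2 },
  { machine: "press-1", temperature: 72.0 },
  { machine: "press-1", temperature: 430.0 }, // sensor glitch
  { machine: "press-2", temperature: 68.5 },
];

// Filter: keep only physically plausible values.
const plausible = readings.filter(r => r.temperature > 0 && r.temperature < 200);

// Aggregate: average temperature per machine.
const byMachine = {};
for (const r of plausible) {
  (byMachine[r.machine] ??= []).push(r.temperature);
}
const averages = Object.fromEntries(
  Object.entries(byMachine).map(([machine, temps]) =>
    [machine, temps.reduce((a, b) => a + b, 0) / temps.length])
);

console.log(averages); // { "press-1": 71.6, "press-2": 68.5 }
```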

Logic

Add intelligence to your workflows:

  • Conditions – Branch pipelines based on business rules
  • JavaScript – Write custom logic for complex transformations (example after this list)
  • Merge – Combine data from multiple sources
  • Switch – Route data based on dynamic conditions
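
Conditions and Switch nodes both reduce to an expression over the incoming message. The function below is a hypothetical JavaScript-node body that routes on message content; the msg fields and route names are illustrative assumptions, not the node's real contract:

```javascript
// Hypothetical JavaScript-node body: route a message by content.
// The msg fields and route names are illustrative assumptions.
function route(msg) {
  if (msg.status === "fault") return "maintenance";                 // machine-fault branch
  if (typeof msg.oee === "number" && msg.oee < 0.6) return "alert"; // low-OEE branch
  return "default";                                                 // everything else
}

console.log(route({ status: "running", oee: 0.45 })); // "alert"
```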

Actions

Execute business processes:

  • Function Instances – Run reusable business logic from Compose
  • Database Write – Insert or update records in SQL databases
  • API Calls – Trigger external systems and webhooks
  • Alerts – Send notifications when thresholds are breached (sketched below)
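
An alert action is essentially a threshold check followed by a notification call. A hedged sketch using a plain webhook POST (the URL and payload shape are placeholders; a real Alert node would also handle delivery and deduplication):

```javascript
// Illustrative threshold alert via a webhook POST; the URL and payload
// shape are placeholders, not a real Alert-node configuration.
async function alertIfBreached(metric, value, limit) {
  if (value <= limit) return; // within bounds, nothing to do
  await fetch("https://hooks.example.com/alerts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ metric, value, limit, at: new Date().toISOString() }),
  });
}

// e.g. alertIfBreached("spindle-temperature", 92.4, 85);
```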

Common Use Cases

Real-Time OEE Calculation

Trigger on PLC data → Calculate availability, performance, quality → Aggregate by shift → Publish to dashboard → Alert on low OEE
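
The arithmetic behind this pipeline is the standard OEE definition: availability (run time over planned time) times performance (actual output over ideal output) times quality (good units over total units). A worked example:

```javascript
// OEE = Availability x Performance x Quality (the standard definition).
function oee({ plannedMinutes, runMinutes, idealRatePerMinute, totalUnits, goodUnits }) {
  const availability = runMinutes / plannedMinutes;
  const performance = totalUnits / (runMinutes * idealRatePerMinute);
  const quality = goodUnits / totalUnits;
  return { availability, performance, quality, oee: availability * performance * quality };
}

// Example shift: 480 min planned, 432 min running, ideal rate 2 units/min,
// 810 units produced, 778 of them good.
console.log(oee({
  plannedMinutes: 480, runMinutes: 432, idealRatePerMinute: 2,
  totalUnits: 810, goodUnits: 778,
}));
// availability 0.90, performance ~0.94, quality ~0.96 -> oee ~0.81
```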

Predictive Maintenance

Scheduled trigger → Query sensor history → Run anomaly detection logic → Compare to baseline → Send alert to maintenance team → Log to database
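
"Compare to baseline" can be as simple as a z-score over recent history. A minimal sketch; the three-sigma cutoff is a common convention, not a product default:

```javascript
// Flag a value whose z-score against recent history exceeds the threshold.
// The three-sigma default is a common convention, not a product setting.
function isAnomalous(history, value, threshold = 3) {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance = history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const stdDev = Math.sqrt(variance);
  if (stdDev === 0) return value !== mean; // flat baseline: any deviation stands out
  return Math.abs(value - mean) / stdDev > threshold;
}

console.log(isAnomalous([4.1, 4.0, 4.2, 4.1, 3.9], 6.8)); // true
```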

Quality Traceability

Scheduled pipeline aligned with vision inspection cycles → Extract defect data → Link to batch instance → Store in quality database → Notify quality manager if threshold exceeded

Energy Monitoring

Cron trigger every 15 minutes → Read meter values → Calculate energy per unit → Compare to targets → Publish to UNS → Flag inefficiencies
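
The calculation itself is plain arithmetic: kWh consumed in the window divided by units produced, compared against a target. A sketch with made-up field names:

```javascript
// Energy per unit over one interval; field names and target are illustrative.
function energyCheck({ kwhStart, kwhEnd, unitsProduced, targetKwhPerUnit }) {
  const kwhPerUnit = (kwhEnd - kwhStart) / unitsProduced;
  return { kwhPerUnit, overTarget: kwhPerUnit > targetKwhPerUnit };
}

console.log(energyCheck({
  kwhStart: 1042.0, kwhEnd: 1054.5, unitsProduced: 50, targetKwhPerUnit: 0.2,
}));
// { kwhPerUnit: 0.25, overTarget: true } -> flag the inefficiency
```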

ERP Synchronization

Scheduled batch job → Query production counts → Transform to ERP format → Call REST API → Handle failures with retry → Log completion status
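
"Handle failures with retry" usually means bounded retries with exponential backoff, retrying transient failures (network errors, 5xx responses) but not client errors (4xx). The pipeline's built-in retry policies cover this without code; a hand-rolled equivalent would look roughly like:

```javascript
// Generic exponential-backoff retry around a REST call. Attempt counts
// and delays are illustrative, not Orchestrate's retry-policy defaults.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function postWithRetry(url, body, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    if (i > 0) await sleep(1000 * 2 ** (i - 1)); // back off: 1s, 2s, ...
    let res;
    try {
      res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
    } catch (err) {
      lastError = err; // network failure: worth retrying
      continue;
    }
    if (res.ok) return res;
    if (res.status < 500) throw new Error(`rejected with ${res.status}`); // 4xx: don't retry
    lastError = new Error(`server error ${res.status}`);                  // 5xx: transient, retry
  }
  throw lastError;
}
```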

Getting Started

Ready to automate your data workflows?

  1. Review Pipeline basics – Understand execution strategies and configuration
  2. Explore Node categories – Discover available building blocks
  3. Build your first workflow – Start with a simple pipeline and expand from there

Each section provides detailed configuration references, expression syntax guides, and real-world examples to help you build production-ready pipelines.


Pro Tip

Start with manual triggers while building and testing your pipeline. Switch to a scheduled trigger once you've validated the workflow.