Version: 2.4.0

From Industrial Protocols to Production Intelligence in 15 Minutes

See what your factory has been hiding from you.

Install MaestroHub, connect to a simulated factory, and build your first data pipeline.

The Journey

8 steps

From a cold install to a live production dashboard.

Install MaestroHub

MaestroHub runs as a single binary or Docker container. No external databases, no message brokers — everything is embedded.

Download from the MaestroHub Portal and run:

  1. Download the ZIP file and extract it

After extraction, you'll see the following structure:

maestrohub-lite/
├── README.txt
├── ThirdPartyNotices.txt
├── starter.bat
├── maestrohub-lite.exe
├── admin-cli.exe
└── config.yaml
  2. Double-click starter.bat (or run maestrohub-lite.exe directly)

The browser opens automatically at http://localhost:6163.

Create Your Admin Account

On first launch, you will see the account creation screen. Enter your name, email, and password.


Create your initial administrator account to get started with MaestroHub.

After creating your account, you are signed in automatically and taken to the Overview page.

Connect Your First Data Source

Navigate to Connect in the sidebar. Click Add Connection and choose a protocol.

Digital Factory Simulator

No real factory? Use our Digital Factory Simulator — a realistic 4-station production line with OPC UA and Modbus data. It comes with ready-to-import dashboards, connections, and pipelines — no manual setup required. Set it up in 3 minutes →

From here on, we assume the Digital Factory Simulator is running.

The connection endpoints, register addresses, topics, and example payloads in the rest of this guide come straight from the Digital Factory Simulator. If you skip it, you can still follow the steps with your own PLC/server — just substitute your host, port, and register map for ours.

What the simulator exposes. 4 stations running real industrial protocols — the same ones a real brake caliper production cell would use:

Protocol   | Endpoint                      | Station / Service
Modbus TCP | localhost:15020               | OP10 — CNC Rough Machining
Modbus TCP | localhost:15021               | OP30 — Parts Washer
OPC UA     | opc.tcp://localhost:14840     | OP20 — CNC Finish Machining
OPC UA     | opc.tcp://localhost:14841     | OP40 — Leak Test
REST API   | http://localhost:18080/api/v1 | MES — Production Data
REST API   | http://localhost:18081/api/v1 | ERP — Work Orders & Inventory
MQTT       | mqtt://localhost:11883        | Event Broker
PostgreSQL | postgres://localhost:15432    | Database
Factory Profiles

Switch between world_class (85%+ OEE), typical (65–75%), and struggling (45–55%) via the DFS dashboard dropdown. Great for testing different scenarios later.

For the rest of this guide we'll use the OP10 CNC Rough Machining station — a Modbus TCP device. Modbus is a deliberately harder example than OPC UA: it has no discovery, so register addresses come from the vendor's documentation. That's exactly the real-world case for most PLCs.

1. Create the connection

In MaestroHub, click Connect in the sidebar, then Add Connection → Modbus TCP. Copy these values (from the Digital Factory Simulator docs):

Field   | Value
Name    | Caliperline-OP10-CNC
Host    | localhost
Port    | 15020
Unit ID | 1
Timeout | 5000 ms

Click Test Connection — green checkmark — then Save.

2. Add two read functions

Back on the connection, open the Functions tab and add two Read Holding Registers functions. MaestroHub's register address field is 0-based (0–65535), so 40001 becomes address 0 and 40010 becomes address 9:

Name         | Register Address | Data Type    | Quantity | Description
State        | 0                | Raw (uint16) | 1        | Station status (0=OFF, 1=IDLE, 2=RUNNING, 3=FAULTED…)
SpindleSpeed | 9                | Raw (uint16) | 1        | Spindle RPM
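The 4xxxx-to-0-based conversion is easy to get wrong; a quick sanity check in plain JavaScript (nothing MaestroHub-specific, just the arithmetic described above):

```javascript
// Convert a vendor-documented 4xxxx holding-register number into the
// 0-based address MaestroHub expects (40001 -> 0, 40010 -> 9).
function holdingRegisterAddress(registerNumber) {
  if (registerNumber < 40001 || registerNumber > 49999) {
    throw new Error("not a 4xxxx holding-register number: " + registerNumber);
  }
  return registerNumber - 40001;
}

console.log(holdingRegisterAddress(40001)); // 0  (the State register above)
console.log(holdingRegisterAddress(40010)); // 9  (the SpindleSpeed register above)
```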

Organize Data in Your Unified Namespace

The Unified Namespace (UNS) organizes all your factory data into a hierarchical topic tree. Instead of point-to-point connections between systems, every data source publishes to the UNS, and every consumer reads from it. It follows the ISA-95 model:

enterprise/
└── site/
    └── area/
        └── line/
            └── station/
                └── metric (optional — see below)

Two common patterns, both valid:

  • One metric per topic — e.g. acme/berlin/machining/line1/op10/spindle_speed. Each metric is its own topic; fine-grained subscriptions, more topics to manage.
  • Station-level payload — e.g. acme/berlin/machining/line1/op10 carrying a JSON object with spindle_speed, state, state_label, etc. Fewer topics, atomic per-cycle snapshots. The pipeline you'll build next uses this pattern.
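The two patterns differ only in whether the metric segment is appended; a small sketch of building ISA-95 topic paths (the segment values are this guide's example hierarchy, not required names):

```javascript
// Build a UNS topic from ISA-95 path segments.
// The metric segment is optional: omit it for the station-level-payload pattern.
function unsTopic({ enterprise, site, area, line, station, metric }) {
  const segments = [enterprise, site, area, line, station];
  if (metric) segments.push(metric);
  return segments.join("/");
}

const base = { enterprise: "acme", site: "berlin", area: "machining", line: "line1", station: "op10" };
console.log(unsTopic({ ...base, metric: "spindle_speed" })); // one metric per topic
console.log(unsTopic(base));                                 // station-level payload
```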

Browse live data. Click UNS → Data Explorer in the sidebar. The topic tree on the left shows every topic that has received data; select a topic to see live values on the right.


UNS Data Explorer — topic tree panel on the left, live data in the right panel.

Define a schema. Schemas define what data a topic expects — data types, units, valid ranges. Click on a topic and select Edit Schema. Define fields like spindle_speed (float, RPM, range 0–15000) or temperature (float, °C, range 0–200). Schemas act as a shared contract for producers and consumers; runtime enforcement is on the roadmap — today they're informational.
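Since runtime enforcement is still on the roadmap, a producer can self-validate against the contract before publishing. A minimal sketch — the schema object shape here is illustrative, not MaestroHub's internal format:

```javascript
// Illustrative schema: field name -> { type, unit, min, max },
// mirroring the example fields defined in the UI above.
const op10Schema = {
  spindle_speed: { type: "number", unit: "RPM", min: 0, max: 15000 },
  temperature:   { type: "number", unit: "°C",  min: 0, max: 200 },
};

// Return a list of violations; an empty list means the payload honors the contract.
function validate(payload, schema) {
  const errors = [];
  for (const [field, rule] of Object.entries(schema)) {
    const value = payload[field];
    if (value === undefined) continue; // fields are optional in this sketch
    if (typeof value !== rule.type) errors.push(`${field}: expected ${rule.type}`);
    else if (value < rule.min || value > rule.max) errors.push(`${field}: out of range`);
  }
  return errors;
}

console.log(validate({ spindle_speed: 8200 }, op10Schema));  // returns []
console.log(validate({ spindle_speed: 99999 }, op10Schema)); // returns ["spindle_speed: out of range"]
```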

MQTT under the hood

The UNS is powered by MQTT. Any MQTT client can publish or subscribe to topics. MaestroHub adds schema validation, history, and governance on top.

Build Your First Pipeline

Pipelines are the heart of MaestroHub — they read data, transform it, and publish the result. The basic pattern:

Trigger → Read Data → Transform → Publish to UNS

Go to Orchestrate → Pipelines → Create Pipeline and name it OP10 Data Collection.

1. Add a Schedule Trigger. Drag a Schedule Trigger node onto the canvas and set it to run every 5 seconds. Your pipeline will collect data 12 times per minute. (Other triggers: MQTT Subscribe, Manual, Webhook, OPC UA Subscribe.)

2. Add a Modbus Read Group. Drag a Modbus Read Group node from the connector palette and connect it to the trigger. Unlike a single Read, this node batches multiple registers into one request — fewer round-trips, tighter timing.

  • Select your Caliperline-OP10-CNC connection
  • Pick both functions: State, SpindleSpeed
  • The node outputs a single JSON object keyed by function name

3. Add a JavaScript transform. Drag a JavaScript node and connect it to the Read Group. This is where raw register values become a clean, publish-ready payload:

// Enrich OP10 readings before publishing
const output = {};

// Pull upstream values from the Modbus Read Group node
const spindleSpeed = $node["Modbus Read Group"]?.results?.SpindleSpeed?.value;
const state = $node["Modbus Read Group"]?.results?.State?.value;

// 1) Pass-through — keep raw values
output.spindle_speed = spindleSpeed;
output.state = state;

// 2) Derived — normalize load, human-readable state label
output.spindle_load_pct = typeof spindleSpeed === "number"
  ? (spindleSpeed / 15000) * 100
  : null;
const stateLabels = {
  0: "OFF", 1: "IDLE", 2: "RUNNING", 3: "FAULTED",
  4: "BLOCKED", 5: "STARVED", 6: "CHANGEOVER", 7: "MAINTENANCE"
};
output.state_label = stateLabels[state] ?? "UNKNOWN";

// 3) Alert — cheap flag downstream can consume
output.alert_fault = state === 3;

// 4) Enrich — station id + ISO timestamp
output.station = "OP10";
output.timestamp = new Date().toISOString();

return output;
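The transform logic can be exercised outside MaestroHub by stubbing the $node lookup; a sketch, where the stubbed values mimic the Read Group output shape described in step 2:

```javascript
// Stub of the upstream Modbus Read Group output (shape as described in step 2).
const $node = {
  "Modbus Read Group": {
    results: {
      SpindleSpeed: { value: 7500 },
      State: { value: 3 },
    },
  },
};

// Same logic as the pipeline's JavaScript node, wrapped in a plain function.
function transform() {
  const spindleSpeed = $node["Modbus Read Group"]?.results?.SpindleSpeed?.value;
  const state = $node["Modbus Read Group"]?.results?.State?.value;
  const stateLabels = {
    0: "OFF", 1: "IDLE", 2: "RUNNING", 3: "FAULTED",
    4: "BLOCKED", 5: "STARVED", 6: "CHANGEOVER", 7: "MAINTENANCE"
  };
  return {
    spindle_speed: spindleSpeed,
    state,
    spindle_load_pct: typeof spindleSpeed === "number" ? (spindleSpeed / 15000) * 100 : null,
    state_label: stateLabels[state] ?? "UNKNOWN",
    alert_fault: state === 3,
    station: "OP10",
    timestamp: new Date().toISOString(),
  };
}

console.log(transform().state_label);      // "FAULTED"
console.log(transform().spindle_load_pct); // 50
```

Changing the stubbed State value lets you confirm each label and the alert flag before the pipeline ever touches a live device.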

4. Add a UNS Publish node. Drag a UNS Publish node and connect it to the JavaScript node.

  • Topic: acme/berlin/machining/line1/op10
  • Value: {{$node["JavaScript"].result}} — references the upstream JavaScript node's output

5. Run it. Click Save, then toggle the pipeline to Enabled. Watch the execution counter tick up. Go to UNS → Data Explorer — live data should appear at acme/berlin/machining/line1/op10.

50+ node types

MaestroHub has nodes for OPC UA, Modbus, MQTT, HTTP, databases, email, Slack, JavaScript, Python, conditional logic, loops, and more. Browse the full catalog at Data Integration → Orchestrate → Nodes.

Build a Live Dashboard

Go to UNS → Dashboards → Create Dashboard and name it Production Overview.

Click Add Panel and wire each one to the fields your pipeline is publishing to acme/berlin/machining/line1/op10:

  • Stat — spindle load percentage (spindle_load_pct, 0–100%)
  • Status Timeline — machine state (state_label: RUNNING / FAULTED / IDLE)
  • Gauge — current spindle speed (spindle_speed, 0–15,000 RPM)
  • Line Chart — spindle speed trend over the last hour (spindle_speed)

Each panel connects to a UNS topic. Select your topic, choose the field, and the panel updates in real time.

Arrange & style. Drag panels to rearrange. Resize them. Change colors and thresholds (green for RUNNING, red for FAULTED). The dashboard auto-saves.


Production Overview dashboard — Spindle Load stat, Machine State timeline, Spindle Speed gauge, and trend line chart, live from OP10.

Monitor & Operate

Keep everything running smoothly. MaestroHub monitors its own health and your pipeline executions.

System health overview. Go to Monitor in the sidebar for a single view of system health: CPU, memory, active connections, running pipelines, and recent errors.


Monitor overview — system health, connection status, and pipeline execution summary at a glance.

Execution history. Every pipeline run is logged with inputs, outputs, and timing. Click on any pipeline to see its execution history — filter by status (success / failed), inspect individual runs, and drill down into node-level output when something looks off.

Try a failure scenario

Change the Digital Factory Simulator to struggling mode. Watch your dashboard react — spindle load drops, the state flips to FAULTED, alert flags fire. Then switch to world_class and see it recover.

What to Explore Next

In ~15 minutes you went from a cold install to a live production dashboard — the same foundation every real MaestroHub deployment is built on. Before jumping into the deep docs, try one more quick win to cement what you just learned.

Top launchpads into the rest of the docs:

Data Integration

Connect more protocols, build complex pipelines with 50+ node types, and model your data.

Learn more →

Unified Namespace

Build your topic tree, define schemas, create dashboards, and enable real-time streaming.

Learn more →

Digital Factory Simulator

Practice with a 4-station production line — OPC UA, Modbus, REST APIs, zero hardware.

Learn more →

Reference

Configuration

Customize MaestroHub by editing config.yaml:

http:
  port: 8080

Or use environment variables with the MAESTROHUB_ prefix:

export MAESTROHUB_HTTP_PORT=8080

Restart the application after changes.

Docker lifecycle commands
# View logs
docker logs -f maestrohub # or: docker compose logs -f

# Stop (data preserved)
docker stop maestrohub # or: docker compose stop

# Start again
docker start maestrohub # or: docker compose start

# Health check
curl http://localhost:8080/health

# Full reset
docker stop maestrohub && docker rm maestrohub
docker volume rm maestrohub-data
Troubleshooting
  • Port in use: Check lsof -i :6163 (binary) or lsof -i :8080 (Docker)
  • macOS security block: System Settings > Privacy & Security > Open Anyway
  • Clean restart: Delete ~/maestrohub/data/ (binary) or remove Docker volume
  • Container not starting: Check docker logs maestrohub
Encryption keys & runtime secrets

MaestroHub uses several runtime secrets — JWT signing keys, the OAuth2 client secret, and encryption keys for the Connectors and UNS databases. They are persisted under a secrets/ subfolder of your MaestroHub data directory (mode 0600):

secrets/
├── auth_jwt_access_secret
├── auth_jwt_refresh_secret
├── auth_jwt_password_reset_secret
├── oauth2_secret
├── connectors_encryption_key
└── uns_encryption_key
Install | Where the secrets/ folder lives
Binary  | ~/maestrohub/data/secrets/ (i.e. inside $HOME/maestrohub/data/, alongside the SQLite database files)
Docker  | inside the volume you mounted at /data — full path /data/data/secrets/ from inside the container

Once a file exists in this directory, MaestroHub uses its contents as-is — keys never silently rotate underneath the data they protect.

Bring your own keys. Two ways, in order of precedence:

  1. Environment variables. Set the value before the first boot (or before you next restart). Useful for Docker / orchestrators.

    Variable                                        | Purpose
    MAESTROHUB_MODULES_AUTH_JWT_ACCESSSECRET        | JWT access-token signing key
    MAESTROHUB_MODULES_AUTH_JWT_REFRESHSECRET       | JWT refresh-token signing key
    MAESTROHUB_MODULES_AUTH_JWT_PASSWORDRESETSECRET | Password-reset token signing key
    MAESTROHUB_MODULES_OAUTH2_SECRET                | OAuth2 client secret
    MAESTROHUB_MODULES_CONNECTORS_ENCRYPTIONKEY     | AES key for connector secrets
    MAESTROHUB_MODULES_UNS_ENCRYPTIONKEY            | AES key for UNS settings secrets
  2. Pre-seed the secrets file. Write your value to data/secrets/<file> (mode 0600) before first boot. The runtime sees the file and uses it as-is.

Connector and UNS encryption keys must be 16, 24, or 32 bytes (AES-128 / 192 / 256). They can be supplied as raw bytes (ASCII) or as a base64-encoded string.

Rotating an encryption key. To rotate the Connectors or UNS encryption key against existing data, use admin-cli reencrypt — it re-wraps every encrypted row under the new key and updates the secrets file atomically.