Version: 2.1

Oracle Database Integration Guide

Connect MaestroHub pipelines to Oracle Database instances for enterprise-grade data ingestion and orchestration. This guide highlights Oracle specifics such as PL/SQL, service names, and authentication modes so you can configure pipelines with confidence.

Overview

The Oracle Database connector unlocks relational workloads with:

  • Bidirectional data flows for reading and writing data with SQL and PL/SQL
  • Stored procedure execution with parameter binding and output handling
  • Secure connectivity with SSL/TLS encryption and granular connection tuning
Supported Versions

The connector is validated with Oracle Database 10g and later. Supported versions: 10g, 11g, 12c, 18c, 19c, and 21c+.

Connection Configuration

Creating an Oracle Database Connection

Open Connections → New Connection → Oracle and provide the following details:

Oracle Database Connection Creation Fields

1. Profile Information
| Field | Default | Description |
| --- | --- | --- |
| Profile Name | - | A descriptive name for this connection profile (required, max 100 characters) |
| Description | - | Optional description for this Oracle connection |
2. Database Configuration
| Field | Default | Description |
| --- | --- | --- |
| Host | localhost | Server hostname or IP address (required) |
| Port | 1521 | Oracle listener port number (1-65535) |
| Service Name | - | Oracle service name (required, e.g., XEPDB1, ORCLPDB1, FREEPDB1). Modern Oracle databases use service names for connection |
| Connect Timeout (sec) | 30 | Connection timeout in seconds (0-600) - required |
Notes
  • Supported Oracle Versions: 10g, 11g, 12c, 18c, 19c, 21c+
  • Service Name: Oracle databases use service names for connections. This is the modern approach replacing SID
  • Pluggable Databases (PDB): For multitenant environments, specify the PDB service name (e.g., ORCLPDB1)
  • Container Databases (CDB): To connect to CDB root, use the CDB service name
3. Basic Authentication
| Field | Default | Description |
| --- | --- | --- |
| Username | - | Oracle database username (required, e.g., SYSTEM, HR, APPUSER) |
| Password | - | Oracle user password (required) |
Oracle User Privileges

Ensure the user has appropriate privileges:

  • SELECT privilege for read operations
  • INSERT, UPDATE, DELETE for write operations
  • EXECUTE for stored procedures
  • CREATE SESSION for establishing connections
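As a sketch, these privileges might be granted as follows. APPUSER and the HR objects are illustrative placeholders, not names from this guide:

```sql
-- Run as a privileged user. APPUSER and the HR objects are placeholders.
GRANT CREATE SESSION TO APPUSER;                            -- establish connections
GRANT SELECT ON HR.PRODUCTION_EVENTS TO APPUSER;            -- read operations
GRANT INSERT, UPDATE, DELETE ON HR.WORK_ORDERS TO APPUSER;  -- write operations
GRANT EXECUTE ON HR.ORDER_PKG TO APPUSER;                   -- stored procedures
```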
4. SSL/TLS Settings
4a. SSL Configuration
| Field | Default | Description |
| --- | --- | --- |
| Enable SSL/TLS | false | Enable encrypted connections to the Oracle server |
| Trust Server Certificate | true | Accept all server certificates (required when SSL is enabled - no wallet support) |
SSL/TLS Configuration

When SSL/TLS is enabled:

  • All server certificates are automatically trusted
  • Oracle Wallet is not supported in the current version
  • Suitable for development and testing environments
  • For production, ensure your Oracle server has valid certificates
5. Connection Pool Settings
| Field | Default | Description |
| --- | --- | --- |
| Max Open Connections | 100 | Maximum number of simultaneous database connections (1-1000). Higher values increase concurrency but add DB load |
| Max Idle Connections | 25 | Idle connections kept ready for reuse to reduce latency (0-1000). Set 0 to close idle connections immediately |
| Connection Max Lifetime (sec) | 900 | Maximum age of a connection before it is recycled (0-86400 sec, 0 = keep indefinitely) |
| Connection Max Idle Time (sec) | 300 | How long an idle connection may remain unused before being closed (0-86400 sec, 0 = disable idle timeout) |

Connection Pool Best Practices

  • Max Open Connections: Balance between concurrency and database load. Default 100 is suitable for most applications
  • Max Idle Connections: Keeping idle connections reduces connection establishment latency
  • Max Lifetime: Prevents connection leaks and helps with database maintenance windows (default 900 sec = 15 minutes)
  • Max Idle Time: Closes unused connections to free up database resources (default 300 sec = 5 minutes)
6. Retry Configuration
| Field | Default | Description |
| --- | --- | --- |
| Retries | 3 | Number of connection retry attempts (0-10). 0 means no retry |
| Retry Delay (ms) | 200 | Initial delay between retry attempts in milliseconds (0-3,600,000 ms = 1 hour max) |
| Retry Backoff Multiplier | 2 | Exponential factor for retry delay growth (1.0-10.0, e.g., 2 means the delay doubles each retry) |

Retry Behavior Example

  • 1st retry: wait 200ms
  • 2nd retry: wait 400ms (200 × 2)
  • 3rd retry: wait 800ms (400 × 2)
7. Connection Labels
| Field | Default | Description |
| --- | --- | --- |
| Labels | - | Key-value pairs to categorize and organize this Oracle connection (max 10 labels) |

Example Labels

  • env: prod - Environment (production, staging, development)
  • team: data - Responsible team
  • region: us-east - Geographical region
  • application: erp - Associated application
Notes
  • Oracle Version Support: This connector supports Oracle Database 10g, 11g, 12c, 18c, 19c, 21c+
  • Service Names: Oracle uses service names for modern connection management.
  • SSL/TLS Recommendations: in development, you may disable SSL or trust all certificates for convenience; in production, enable SSL and use valid server certificates
  • Connection Pooling: The driver uses Go's database/sql connection pool, which efficiently manages connections
  • Retry Strategy: Implements exponential backoff for connection failures to handle transient network issues
  • Fields marked as required must be filled out before creating/saving the connection
  • All timeout values are in seconds unless otherwise specified (Retry Delay is in milliseconds)

Function Builder

Creating Oracle Functions

After the connection is saved, create reusable functions for different operation types:

  1. Navigate to Functions → New Function
  2. Select one of the Oracle function types: Query, Execute, or Write
  3. Select the Oracle connection you configured
  4. Configure the function parameters
Oracle Database Function Creation

Oracle function creation interface showing available function types: Query, Execute, and Write

Query Function

Purpose: Execute SQL queries (SELECT) to read data from Oracle Database. Returns rows as structured data.

Configuration Fields

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| SQL Query | String | Yes | - | SQL SELECT statement to execute. Supports parameterized queries with ((param)) syntax. |
| Timeout (seconds) | Number | No | 0 | Per-execution timeout in seconds (0 = no limit). |

Use Cases:

  • SELECT machine KPIs (OEE, downtime) from telemetry tables
  • SELECT COUNT(*) from orders grouped by status
  • SELECT with JOINs across multiple tables using bind variables (:1, :2)
  • SELECT with WHERE filters, ORDER BY, and ROWNUM
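For instance, a ROWNUM-limited Query might be sketched as follows. The table and column names are illustrative; note that ROWNUM is applied outside the ordered subquery so the limit operates on the sorted rows:

```sql
SELECT *
FROM (
    SELECT order_id, status, created_at
    FROM orders
    WHERE status = ((orderStatus))   -- bound at runtime as an Oracle bind variable
    ORDER BY created_at DESC
)
WHERE ROWNUM <= 100;
```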

Execute Function

Purpose: Execute DML/DDL statements (INSERT, UPDATE, DELETE, CREATE, ALTER, DROP) on Oracle Database. Returns rowsAffected instead of row data. Use this for data modifications and schema changes.

Configuration Fields

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| SQL Statement | String | Yes | - | DML/DDL SQL statement to execute. Supports parameterized queries with ((param)) syntax. |
| Timeout (seconds) | Number | No | 0 | Per-execution timeout in seconds (0 = no limit). |

Use Cases:

  • INSERT production events with SYSDATE timestamps
  • UPDATE work orders, inventory counts, or maintenance schedules
  • DELETE or archive records older than SYSDATE - 30
  • CREATE TABLE or ALTER TABLE for schema changes
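A minimal Execute sketch for the archival case above (the table name and the ((retentionDays)) parameter are illustrative):

```sql
-- Delete records older than the given retention window, in days
DELETE FROM production_events
WHERE event_time < SYSDATE - ((retentionDays));
```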

Write Function

Purpose: Write pipeline data to an Oracle table with automatic schema detection. This is not a SQL editor — it loads structured data (rows/objects) from your pipeline into a target table.

Load data into an Oracle table with two modes: Auto (default) detects the table schema and maps incoming data fields to columns automatically; Manual lets you specify exact columns for precise control. Supports batching for efficient bulk loading.

Configuration Fields

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| Table Name | String | Yes | - | Target table to insert data into. |
| Write Mode | String | No | auto | auto = detect schema and map fields automatically, manual = specify exact columns. |
| Columns | Array | No | - | Column names to insert into (manual mode only). |
| Create Table If Not Exists | Boolean | No | false | Auto-create the table if it does not exist (auto mode only). When enabled, also performs schema evolution — adds missing columns and widens types as needed. |
| Batch Size | Number | No | 100 | Number of rows per INSERT batch (1–10,000). |
| Timeout (seconds) | Number | No | 0 | Per-execution timeout in seconds (0 = no limit). |

Use Cases:

  • Auto-detect table schema and map pipeline data
  • Create tables on-the-fly from pipeline data (NUMBER(19,0), CLOB, etc.)
  • Manual column mapping for precise control
  • Bulk insert production events with auto schema detection

How Write Works Step by Step

1. Batching

Oracle uses INSERT ALL INTO ... SELECT 1 FROM DUAL for multi-row inserts. If your pipeline sends 10 rows and Batch Size is 5, exactly 2 INSERT ALL statements are executed (2 batches of 5 rows). If it sends 13 rows, 3 statements are executed (5 + 5 + 3). Single-row inserts use regular INSERT INTO syntax for efficiency.
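For a batch of three rows, the generated statement might look roughly like this. The table, columns, and values are illustrative, and in practice the connector would bind values rather than inline literals:

```sql
INSERT ALL
  INTO SENSOR_READINGS (SENSOR_ID, TEMPERATURE) VALUES ('s-01', 21.5)
  INTO SENSOR_READINGS (SENSOR_ID, TEMPERATURE) VALUES ('s-02', 22.1)
  INTO SENSOR_READINGS (SENSOR_ID, TEMPERATURE) VALUES ('s-03', 20.9)
SELECT 1 FROM DUAL;
```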

2. Column Matching

Write uses case-insensitive matching between incoming data field names and table column names. For example, a data field named deviceId will match a table column DEVICEID or DeviceId. The original column casing from the database is preserved in the generated INSERT statement.

3. Table Exists — Normal Flow

  1. The function first queries ALL_TABLES to check if the target table exists (scoped to OWNER = USER)
  2. If the table exists, it fetches all column metadata from ALL_TAB_COLUMNS
  3. It matches incoming data fields to table columns (case-insensitive)
  4. Fields in data that don't match any column are silently skipped — they are tracked and returned in the response metadata as skippedFields
  5. Batch INSERT statements are generated and executed using only the matched columns

4. Table Does Not Exist

| Create Table If Not Exists | Behavior |
| --- | --- |
| Disabled (default) | Returns an error immediately: "table 'X' does not exist. Enable 'Create table if not exists' or create it manually". No insert is attempted. |
| Enabled, no schema provided | Types are inferred from the first row of data using the rules below. A CREATE TABLE is executed, then all rows (including the first) are inserted — no data loss. |
| Enabled, with schema | The provided schema is used for CREATE TABLE with full column definitions (types, primary keys, indexes). After creation, all rows are inserted. |

Data Type Inference — When no schema is provided, the Write function infers column types from the first row of data:

| Data Value | Inferred Type | Notes |
| --- | --- | --- |
| true / false | NUMBER(1,0) | JSON boolean values |
| 42, -7, 1000 | NUMBER(19,0) | Whole numbers (no fractional part) |
| 3.14, 0.5 | BINARY_DOUBLE | Numbers with fractional part |
| "2024-01-15T10:30:00Z" | TIMESTAMP | Strings matching ISO 8601 / RFC 3339 and other common timestamp formats are automatically detected |
| "hello", "sensor-01" | CLOB | All other strings |
| null | CLOB | Null values default to text |
Info

Integer vs. decimal distinction is based on whether the number has a fractional part: 55 → NUMBER(19,0), 55.0 → BINARY_DOUBLE. If you need a specific numeric type (e.g., NUMBER(10,0), NUMBER), define it in the schema using the Visual Editor.
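As a sketch, a first row such as {"device_id": "s-01", "temperature": 21.5, "ok": true, "ts": "2024-01-15T10:30:00Z"} would be inferred along these lines. The table and column names are illustrative, and the exact generated DDL may differ:

```sql
CREATE TABLE SENSOR_READINGS (
  DEVICE_ID   CLOB,           -- "s-01": generic string
  TEMPERATURE BINARY_DOUBLE,  -- 21.5: number with a fractional part
  OK          NUMBER(1,0),    -- true: boolean
  TS          TIMESTAMP       -- ISO 8601 string, detected as a timestamp
);
```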

5. Schema Evolution (Table Exists + Create Table If Not Exists Enabled)

When Create Table If Not Exists is enabled and the table already exists, the function performs schema evolution before inserting:

  • New columns: If data contains fields not present in the table, ALTER TABLE ADD is executed for each missing field. If a schema is defined in the Visual Editor, the specified type is used; otherwise, the type is inferred from the first row of data values.
  • Type widening: If an existing column's type is too narrow for the incoming data, it is safely widened using the rules below.
  • After schema evolution completes, column metadata is re-fetched from the database to ensure accuracy, then all rows are inserted with the updated column set.

Schema evolution only widens types — it never narrows or changes types across incompatible categories. The following rules apply:

| Rule | Direction | Examples |
| --- | --- | --- |
| Same category, higher rank | NUMBER(3,0) → NUMBER(5,0) → NUMBER(10,0) → NUMBER(19,0) → NUMBER(20,0) | Integer grows to hold larger values |
| Same category, higher rank | BINARY_FLOAT → BINARY_DOUBLE / NUMBER | Float gains precision |
| Same category, higher rank | VARCHAR2 → CLOB | String grows to hold larger content |
| Cross-category (boolean → integer) | NUMBER(1,0) → NUMBER(10,0) | Boolean promoted to integer |
| Cross-category (integer → float) | NUMBER(19,0) → BINARY_DOUBLE | Integer promoted to float |
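Under these rules, a schema-evolution pass might issue statements like the following before inserting. The table and column names are illustrative:

```sql
ALTER TABLE SENSOR_READINGS ADD (FIRMWARE CLOB);               -- new field found in data
ALTER TABLE SENSOR_READINGS MODIFY (READING_NO NUMBER(19,0));  -- widen from NUMBER(10,0)
```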

The following changes are not supported by schema evolution and require manual ALTER TABLE statements:

  • Narrowing types (e.g., NUMBER(19,0) → NUMBER(5,0), CLOB → VARCHAR2)
  • Incompatible category changes (e.g., CLOB → NUMBER, BINARY_DOUBLE → NUMBER(1,0), TIMESTAMP → NUMBER(10,0))
  • Changing an already-created column's type by updating the Visual Editor schema alone — the schema definition only affects new columns being added
  • Dropping columns — removing a column from the Visual Editor schema does not delete it from the database table. The column remains in the table and receives NULL (or its default value) for new inserts.
Tip

If you need to change an existing column's type or drop a column, use the Execute function to run ALTER TABLE ... MODIFY ... or ALTER TABLE ... DROP COLUMN ... manually.
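For example, such manual statements might look like this (the table and column names are illustrative):

```sql
ALTER TABLE SENSOR_READINGS MODIFY (STATUS VARCHAR2(50));  -- change an existing column's type
ALTER TABLE SENSOR_READINGS DROP COLUMN LEGACY_FLAG;       -- drop a column
```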

If Create Table If Not Exists is disabled and data has extra columns, those fields are silently skipped — no error, no schema change.

6. Partial Failure

Each batch executes independently — there is no transaction wrapping all batches. If batch 3 of 5 fails, batches 1 and 2 have already been committed. The error response still includes rowsInserted showing how many rows succeeded before the failure.

Response Metadata

On success, the response includes:

| Field | Description |
| --- | --- |
| rowsInserted | Total number of rows successfully inserted |
| matchedColumns | Column names that had matching data fields |
| skippedFields | Data fields that didn't match any table column (if any) |
| schemaEvolution | ALTER TABLE actions performed — add or widen (if any) |
| tableCreated | true if the table was created during this call |
| batchSize | Batch size used |
| totalRows | Total number of input rows |

Using Parameters

MaestroHub detects the ((parameterName)) syntax and exposes each parameter for validation and runtime binding for Query and Execute functions.

| Configuration | Description | Example |
| --- | --- | --- |
| Type | Enforce data types for incoming values | string, number, boolean, datetime, array |
| Required | Mark parameters as mandatory or optional | Required / Optional |
| Default Value | Fallback value applied when callers omit a parameter | SYSTIMESTAMP, 0, 'Active' |
| Description | Add guidance for downstream users | "UTC start time for the query window" |
Oracle Database Function Parameters Configuration

Parameter configuration with validation rules and helper text

Oracle Bind Variables

Oracle automatically uses bind variables for parameters, which:

  • Improves query performance through execution plan reuse
  • Prevents SQL injection attacks
  • Reduces parsing overhead in the database

Pipeline Integration

Use the Oracle Database functions you publish here as nodes inside the Pipeline Designer to keep plant-floor activity aligned with enterprise data. Drag the appropriate node type into your flow, bind parameters to upstream outputs or constants, and adjust error handling or retry behavior directly on the canvas.

Each function type maps to a dedicated pipeline node — Query, Execute, and Write — so you can clearly separate read, write, and data-loading operations in your flows.

When orchestrating larger workflows, visit the Connector Nodes page for guidance on where Oracle nodes fit within multi-system automation patterns.

Common Use Cases

Query: Production Events

Scenario: Retrieve production events from MES tables for a specific time range and production line.

```sql
SELECT
  event_id,
  line_id,
  event_type,
  quantity_produced,
  event_time,
  operator_name
FROM ProductionEvents
WHERE event_time >= TO_DATE('((startDate))', 'YYYY-MM-DD')
  AND event_time < TO_DATE('((endDate))', 'YYYY-MM-DD') + 1
  AND line_id = ((lineId))
ORDER BY event_time DESC;
```

Pipeline Integration: Feed real-time production data to dashboards, trigger alerts based on thresholds, or sync with ERP systems.


Execute: Updating Work Order Progress

Scenario: Update work order completion details from barcode scanners or PLC events.

```sql
UPDATE WorkOrders
SET
  completed_quantity = ((completedQuantity)),
  scrap_quantity = ((scrapQuantity)),
  status = ((newStatus)),
  last_updated_at = SYSTIMESTAMP,
  last_updated_by = ((userId))
WHERE work_order_id = ((workOrderId));
```

Pipeline Integration: Trigger on event streams to keep ERP and MES systems synchronized in real time.


Execute: Recording Inventory Transactions

Scenario: Record inventory movements from automation equipment into ERP tables.

```sql
INSERT INTO InventoryTransactions (
  item_number, location_code, quantity_change,
  transaction_type, transaction_time, operator_id
) VALUES (
  ((itemNumber)), ((locationCode)), ((quantityChange)),
  ((transactionType)), SYSTIMESTAMP, ((operatorId))
);
```

Pipeline Integration: Combine with Modbus, OPC UA, or REST nodes to maintain accurate inventory records.


Write: Bulk Loading Sensor Data

Scenario: Load real-time sensor readings from IoT devices into an Oracle table.

Configure a Write function with:

  • Table Name: SENSOR_READINGS
  • Write Mode: auto
  • Create Table If Not Exists: enabled

Connect it after data collection nodes (MQTT, OPC UA, Modbus, etc.) in your pipeline. The Write function automatically maps incoming fields (sensor_id, temperature, pressure, vibration) to table columns and inserts them in batches using INSERT ALL statements. If the table doesn't exist, it is created automatically with inferred types (NUMBER(19,0), BINARY_DOUBLE, CLOB).