Roadmap¶
Current status and future direction for elastic-script, a procedural language for Elasticsearch inspired by Oracle PL/SQL.
✅ Completed Features (v1.0)¶
Core Language¶
- Procedure Creation - CREATE PROCEDURE ... END PROCEDURE
- Variable System - DECLARE, VAR, CONST with type inference
- Control Flow - IF/THEN/ELSEIF/ELSE, FOR loops, WHILE loops
- Data Types - STRING, NUMBER, BOOLEAN, ARRAY, DOCUMENT, DATE
- Functions with Parameters - IN/OUT/INOUT parameter modes
Built-in Functions (118 total)¶
- String Functions (18) - LENGTH, SUBSTR, REPLACE, REGEXP_*, etc.
- Number Functions (11) - ABS, ROUND, SQRT, LOG, etc.
- Array Functions (18) - ARRAY_LENGTH, APPEND, FILTER, MAP, etc.
- Date Functions (8) - CURRENT_DATE, DATE_ADD, EXTRACT_*, etc.
- Document Functions (6) - DOCUMENT_GET, KEYS, VALUES, MERGE, etc.
- Elasticsearch Functions (5) - ESQL_QUERY, ES_GET, ES_INDEX, etc.
- AI/LLM Functions (6) - LLM_COMPLETE, LLM_SUMMARIZE, ES_INFERENCE
- Integration Functions (~30) - Slack, PagerDuty, K8s, AWS, HTTP
Async Execution Model¶
- Pipe-Driven Syntax - procedure() | ON_DONE handler(@result)
- Error Continuations - ON_FAIL, FINALLY handlers
- Parallel Execution - PARALLEL [proc1(), proc2()] | ON_ALL_DONE
- Execution Control - EXECUTION('name') | STATUS/CANCEL/RETRY
- State Persistence - Execution state stored in .escript_executions
First-Class Commands¶
- INDEX Command - INDEX document INTO 'index-name'; (Planned)
- DELETE Command - DELETE FROM 'index-name' WHERE id; (Planned)
- SEARCH Command - SEARCH 'index-name' QUERY {...}; (Planned)
- REFRESH Command - REFRESH 'index-name'; (Planned)
- CREATE INDEX Command - CREATE INDEX 'name' WITH MAPPINGS {...}; (Planned)
Type-Aware ES|QL Binding¶
- ARRAY Binding - DECLARE errors ARRAY FROM FROM logs-* | WHERE level = 'ERROR'; (Planned)
- DOCUMENT Binding - DECLARE user DOCUMENT FROM FROM users | WHERE id = 'john' | LIMIT 1; (Planned)
- NUMBER Binding - DECLARE count NUMBER FROM FROM logs-* | STATS count = COUNT(*); (Planned)
- STRING Binding - DECLARE name STRING FROM FROM config | KEEP value | LIMIT 1; (Planned)
- DATE Binding - DECLARE last_login DATE FROM FROM users | KEEP login_time | LIMIT 1; (Planned)
- BOOLEAN Binding - DECLARE has_errors BOOLEAN FROM FROM logs | STATS has = COUNT(*) > 0; (Planned)
Type-Namespaced Functions¶
- Namespaced Syntax - NAMESPACE.METHOD() for organized function calls (Planned)
- Type Namespaces - ARRAY.MAP(), STRING.UPPER(), DOCUMENT.KEYS(), DATE.ADD() (Planned)
- Extension Namespaces - K8S.GET_PODS(), AWS.S3_GET(), HTTP.GET() (Planned)
- Keyword Support - Type keywords (ARRAY, STRING, etc.) work as namespace identifiers (Planned)
Developer Experience¶
- Quick Start Script - ./scripts/quick-start.sh for one-command setup
- Jupyter Integration - Custom kernel for interactive development
- Sample Notebooks - 7 comprehensive tutorial notebooks + first-class commands demo
- E2E Test Framework - Automated notebook execution with HTML reports
- GitHub Pages Documentation - Full documentation site
📊 Feature Gap Analysis (PL/SQL Comparison)¶
The table below compares elastic-script to Oracle PL/SQL and identifies missing features:
| Category | Feature | PL/SQL | elastic-script | Status |
|---|---|---|---|---|
| Error Handling | TRY/CATCH blocks | ✅ | ✅ | ✅ Done |
| | Named exceptions | ✅ | ✅ | ✅ Done |
| | RAISE/THROW | ✅ | ✅ | ✅ Done |
| Functions | User-defined functions | ✅ | ✅ | ✅ Done |
| | Function overloading | ✅ | ❌ | 🔵 P2 |
| | Recursive functions | ✅ | ✅ | ✅ Done |
| Cursors | Explicit cursors | ✅ | 🟡 | 🟡 Partial |
| | FETCH INTO | ✅ | 🟡 | 🟡 Partial |
| | BULK COLLECT | ✅ | ✅ | ✅ Done |
| Modules | Packages | ✅ | 🟡 | 🟡 Partial |
| | Package state | ✅ | ❌ | 🔴 P1 |
| | Public/Private | ✅ | ❌ | 🔴 P1 |
| Events | Triggers | ✅ | 📋 | 📋 Planned |
| | Scheduled jobs | ✅ | 📋 | 📋 Planned |
| Collections | Associative arrays (MAP) | ✅ | ✅ | ✅ Done |
| | User-defined types | ✅ | ✅ | ✅ Done |
| Dynamic | EXECUTE IMMEDIATE | ✅ | ✅ | ✅ Done |
| | Bind variables | ✅ | ✅ | ✅ Done |
| Bulk Ops | FORALL | ✅ | ✅ | ✅ Done |
| | SAVE EXCEPTIONS | ✅ | ✅ | ✅ Done |
| Security | GRANT/REVOKE | ✅ | ✅ | ✅ Done |
| | AUTHID | ✅ | 📋 | 📋 Planned |
| Debug | Profiler | ✅ | 🟡 | 🟡 Partial |
| | Breakpoints | ✅ | ❌ | 🔵 P2 |

Legend: ✅ Done | 🟡 Partial | 📋 Planned | 🔴 High Priority | 🔵 Lower Priority
🚧 Phase 1: Core Language Completeness (Q1-Q2 2026)¶
1.1 Exception Handling (TRY/CATCH)¶
Status: ✅ Complete | Priority: P0
Full exception handling with named exceptions and propagation.
TRY
SET result = HTTP_GET('https://api.example.com/data')
SET parsed = JSON_PARSE(result)
CATCH http_error
PRINT 'HTTP call failed: ' || error['message']
CALL log_error(error)
CATCH parse_error
PRINT 'JSON parsing failed'
SET parsed = {}
CATCH
-- Catch-all for any other errors
PRINT 'Unexpected error: ' || error['message']
FINALLY
-- Always runs (cleanup)
CALL close_connections()
END TRY
-- THROW/RAISE with error codes
THROW 'Resource not found' WITH CODE 'HTTP_404';
RAISE error_msg WITH CODE error_code; -- Expressions supported
Implemented Features:
- ✅ Named exception types (http_error, timeout_error, division_error, etc.)
- ✅ @error binding as DOCUMENT with message, code, type, stack_trace, cause
- ✅ THROW and RAISE statements (aliases)
- ✅ WITH CODE clause for error codes
- ✅ Expression support in THROW/RAISE (not just string literals)
- ✅ Multiple CATCH blocks with exception type matching
- ✅ Catch-all CATCH block (no exception name)
- ✅ FINALLY block for cleanup (always runs)
- ✅ EScriptException class with type inference from Java exceptions
Exception Types:
| Type | Description |
|---|---|
| error | Generic (catch-all) |
| http_error | HTTP/network errors |
| timeout_error | Timeout errors |
| division_error | Division by zero |
| null_reference_error | Null pointer errors |
| type_error | Type mismatch |
| validation_error | Validation failures |
| not_found_error | Resource not found |
| permission_error | Auth/permission errors |
| esql_error | ES\|QL query errors |
| function_error | Built-in function errors |
1.2 User-Defined Functions (CREATE FUNCTION)¶
Status: ✅ Complete | Priority: P0
Distinguish functions (return values) from procedures (side effects).
-- Define a function that returns a value
CREATE FUNCTION calculate_severity(error_count NUMBER, warn_count NUMBER)
RETURNS STRING AS
BEGIN
DECLARE score NUMBER = error_count * 10 + warn_count
IF score > 100 THEN
RETURN 'critical'
ELSIF score > 50 THEN
RETURN 'high'
ELSIF score > 20 THEN
RETURN 'medium'
ELSE
RETURN 'low'
END IF
END FUNCTION
-- Usage: Functions can be used in expressions
SET severity = calculate_severity(errors, warnings)
SET message = 'Status: ' || calculate_severity(5, 10)
Key Differences from Procedures:
| Aspect | PROCEDURE | FUNCTION |
|---|---|---|
| Returns value | No (OUT params only) | Yes (RETURN statement) |
| Use in expressions | No | Yes |
| Side effects | Expected | Discouraged |
| Call syntax | CALL proc() | func() in expressions |
Implemented Features:
- ✅ CREATE FUNCTION ... RETURNS type AS BEGIN ... END FUNCTION syntax
- ✅ DELETE FUNCTION function_name to remove stored functions
- ✅ Functions stored in .elastic_script_functions index
- ✅ Automatic loading of stored functions on first call
- ✅ StoredFunctionDefinition class for persistent function representation
- ✅ RETURN statement with expression support
- ✅ All parameter modes (IN, OUT, INOUT)
1.3 Dynamic ES|QL (EXECUTE IMMEDIATE)¶
Status: ✅ Complete | Priority: P0
Build and execute queries dynamically at runtime.
-- Build query dynamically based on conditions
DECLARE query STRING = 'FROM logs-*'
IF severity_filter IS NOT NULL THEN
SET query = query || ' | WHERE level = ''' || severity_filter || ''''
END IF
IF service_filter IS NOT NULL THEN
SET query = query || ' | WHERE service = ''' || service_filter || ''''
END IF
SET query = query || ' | LIMIT ' || max_results
-- Execute the dynamic query
EXECUTE IMMEDIATE query INTO results
-- With bind variables (SQL injection safe)
EXECUTE IMMEDIATE
'FROM logs-* | WHERE service = :svc AND level = :lvl | LIMIT :lim'
USING service_name, 'ERROR', 100
INTO results
Safety Features:
- Bind variables prevent injection attacks
- Query validation before execution
- Clear error messages for syntax errors
Implemented Features:
- ✅ EXECUTE IMMEDIATE expression syntax
- ✅ INTO variable clause to capture results
- ✅ INTO var1, var2, var3 for multiple column capture
- ✅ USING expr1, expr2 for bind variables (:1, :2, etc.)
- ✅ String and numeric bind variable substitution
- ✅ Auto-declaration of variables with inferred types
- ✅ Expression evaluation for dynamic query building
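The effect of bind variables can be sketched outside the engine. The Python sketch below splices positional `:1`-style parameters as quoted literals; this is illustrative only (the runtime's actual substitution happens inside the query plan, and these quoting rules are an assumption), but it shows why a bound value cannot terminate the query string the way raw concatenation can.

```python
def bind_esql(template: str, *args) -> str:
    """Substitute positional :1, :2, ... placeholders with quoted values.

    Illustrative sketch only: strings are wrapped in single quotes with
    embedded quotes doubled, so a malicious value stays inside a literal.
    """
    result = template
    for i, value in enumerate(args, start=1):
        if isinstance(value, str):
            rendered = "'" + value.replace("'", "''") + "'"
        else:
            rendered = str(value)
        result = result.replace(f":{i}", rendered)
    return result

query = bind_esql(
    "FROM logs-* | WHERE service = :1 AND level = :2 | LIMIT :3",
    "payment", "ERROR", 100,
)
# A hostile value cannot break out of the quoted literal:
unsafe = bind_esql("WHERE name = :1", "x' | DROP '")
```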
1.4 Associative Arrays (MAP Type)¶
Status: β Complete | Priority: P0
Key-value data structures for counting, grouping, and caching.
-- MAP literal syntax
DECLARE config MAP = MAP { 'host' => 'localhost', 'port' => 9200 };
-- Create empty map
DECLARE counts MAP = MAP {};
-- Add/update entries with MAP_PUT (returns new map)
SET counts = MAP_PUT(counts, 'api-service', 42);
SET counts = MAP_PUT(counts, 'db-service',
MAP_GET_OR_DEFAULT(counts, 'db-service', 0) + 1);
-- Get values
DECLARE val = MAP_GET(config, 'host');
DECLARE port = MAP_GET_OR_DEFAULT(config, 'timeout', 30);
-- Check existence
IF MAP_CONTAINS_KEY(counts, 'api-service') THEN
PRINT 'API has ' || MAP_GET(counts, 'api-service') || ' errors';
END IF
-- Get keys and values as arrays
DECLARE keys ARRAY = MAP_KEYS(counts);
DECLARE values ARRAY = MAP_VALUES(counts);
-- Iterate over keys
DECLARE i NUMBER;
FOR i IN 1..ARRAY_LENGTH(keys) LOOP
DECLARE k STRING = keys[i];
IF MAP_GET(counts, k) > 10 THEN
CALL alert_team(k, MAP_GET(counts, k));
END IF
END LOOP
-- Merge maps
SET config = MAP_MERGE(defaults, overrides);
-- Create from arrays
SET my_map = MAP_FROM_ARRAYS(['a', 'b', 'c'], [1, 2, 3]);
Implemented Features:
- ✅ MAP type declaration
- ✅ MAP literal syntax: MAP { 'key' => value, ... }
- ✅ Bracket access for values: map['key']
- ✅ Nested map support
- ✅ 12 built-in MAP functions:

| Function | Description |
|---|---|
| MAP_GET(map, key) | Get value by key |
| MAP_GET_OR_DEFAULT(map, key, default) | Get with fallback |
| MAP_PUT(map, key, value) | Return new map with key added |
| MAP_REMOVE(map, key) | Return new map without key |
| MAP_KEYS(map) | Get all keys as array |
| MAP_VALUES(map) | Get all values as array |
| MAP_SIZE(map) | Count entries |
| MAP_CONTAINS_KEY(map, key) | Check if key exists |
| MAP_CONTAINS_VALUE(map, value) | Check if value exists |
| MAP_MERGE(map1, map2) | Merge two maps |
| MAP_FROM_ARRAYS(keys, values) | Create from parallel arrays |
| MAP_ENTRIES(map) | Get array of {key, value} docs |
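For readers coming from general-purpose languages, the MAP_PUT / MAP_GET_OR_DEFAULT counting idiom corresponds to copy-on-write dict updates. A Python sketch of the same semantics (the function names mirror the built-ins as an analogy, not the runtime implementation):

```python
def map_put(m: dict, key, value) -> dict:
    """Return a new dict with key set, leaving the original untouched,
    like MAP_PUT returning a new map."""
    out = dict(m)
    out[key] = value
    return out

def map_get_or_default(m: dict, key, default):
    """Like MAP_GET_OR_DEFAULT: fall back when the key is absent."""
    return m.get(key, default)

# The counting pattern from the example above:
counts: dict = {}
for service in ["api-service", "db-service", "api-service"]:
    counts = map_put(counts, service,
                     map_get_or_default(counts, service, 0) + 1)
```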
🚧 Phase 2: Scale & Performance (Q2-Q3 2026)¶
2.1 Cursor Management & Streaming¶
Status: 🟡 Partial | Priority: P0
Handle large result sets without memory exhaustion.
-- Explicit cursor for large datasets
DECLARE CURSOR log_cursor FOR
FROM logs-*
| WHERE @timestamp > NOW() - 1 HOUR
| LIMIT 100000
OPEN log_cursor
-- Process in batches
DECLARE batch ARRAY<DOCUMENT>
DECLARE processed NUMBER = 0
WHILE FETCH log_cursor LIMIT 1000 INTO batch LOOP
-- Process batch
FOR doc IN batch LOOP
CALL process_log(doc)
END LOOP
SET processed = processed + ARRAY_LENGTH(batch)
PRINT 'Processed: ' || processed || ' documents'
-- Optional: yield control for long-running operations
IF processed % 10000 = 0 THEN
COMMIT WORK -- Checkpoint progress
END IF
END LOOP
CLOSE log_cursor
Cursor Features:
| Feature | Description |
|---|---|
| OPEN cursor | Initialize and execute query |
| FETCH cursor INTO var | Get next row |
| FETCH cursor LIMIT n INTO arr | Get next n rows as array |
| CLOSE cursor | Release resources |
| cursor%ROWCOUNT | Number of rows fetched |
| cursor%NOTFOUND | True when no more rows |
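The FETCH ... LIMIT n loop is client-side paging over an open cursor. A hedged Python sketch of the same batching contract, with an in-memory iterable standing in for the server-side cursor (the real engine would page through an open point-in-time search):

```python
from typing import Iterable, Iterator, List

def fetch_batches(rows: Iterable[dict], batch_size: int) -> Iterator[List[dict]]:
    """Yield rows in fixed-size batches, like FETCH cursor LIMIT n INTO arr."""
    batch: List[dict] = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final partial batch; cursor%NOTFOUND would be true after this
        yield batch

# Process 2500 simulated documents in batches of 1000.
processed = 0
for batch in fetch_batches(({"id": i} for i in range(2500)), 1000):
    processed += len(batch)
```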
2.2 Bulk Operations (FORALL)¶
Status: ✅ Complete | Priority: P0
Efficient batch processing with error handling.
-- Bulk collect from query
DECLARE logs ARRAY<DOCUMENT>
BULK COLLECT INTO logs
FROM logs-*
| WHERE level = 'ERROR'
| LIMIT 5000
-- Bulk process with FORALL
FORALL log IN logs
CALL process_and_archive(log)
SAVE EXCEPTIONS -- Continue on individual failures
-- Check for errors
IF @bulk_errors.COUNT > 0 THEN
PRINT @bulk_errors.COUNT || ' documents failed processing'
FOR err IN @bulk_errors LOOP
PRINT 'Index ' || err.index || ': ' || err.message
END LOOP
END IF
-- Bulk index with retry
FORALL doc IN transformed_docs
INDEX_DOCUMENT('output-index', doc)
ON_FAIL RETRY 3 THEN SKIP
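SAVE EXCEPTIONS boils down to collect-failures-and-continue. A Python sketch of that contract; the error-record shape ({index, message}) mirrors the @bulk_errors usage above, though the exact fields are an assumption:

```python
def forall_save_exceptions(items, action):
    """Apply action to every item; record failures instead of aborting.

    Returns (succeeded_count, errors), where each error carries the
    item's index and the exception message, like @bulk_errors entries.
    """
    errors = []
    succeeded = 0
    for index, item in enumerate(items):
        try:
            action(item)
            succeeded += 1
        except Exception as exc:  # continue on individual failures
            errors.append({"index": index, "message": str(exc)})
    return succeeded, errors

def flaky(doc):
    """Stand-in for process_and_archive: fails on every third item."""
    if doc % 3 == 0:
        raise ValueError(f"cannot archive {doc}")

ok, errs = forall_save_exceptions(range(10), flaky)
```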
2.3 Scheduled Jobs (CREATE JOB)¶
Status: 📋 Planned | Priority: P0
Built-in job scheduling with cron syntax.
-- Create a recurring job
CREATE JOB daily_log_cleanup
SCHEDULE '0 2 * * *' -- 2 AM daily (cron syntax)
TIMEZONE 'UTC'
ENABLED true
AS
BEGIN
PRINT 'Starting daily cleanup at ' || CURRENT_TIMESTAMP
-- Archive logs older than 30 days
CALL archive_old_logs(30)
-- Clean up temporary indices
CALL cleanup_temp_indices()
-- Send daily report
CALL generate_and_send_report()
PRINT 'Cleanup completed'
END JOB
-- Job management
ALTER JOB daily_log_cleanup DISABLE
ALTER JOB daily_log_cleanup SCHEDULE '0 3 * * *' -- Change to 3 AM
DROP JOB daily_log_cleanup
-- View job history
SELECT * FROM @job_runs
WHERE job_name = 'daily_log_cleanup'
ORDER BY start_time DESC
LIMIT 10
Schedule Patterns:
| Pattern | Description |
|---|---|
| 0 * * * * | Every hour |
| */15 * * * * | Every 15 minutes |
| 0 2 * * * | Daily at 2 AM |
| 0 0 * * 0 | Weekly on Sunday |
| 0 0 1 * * | Monthly on the 1st |
| @hourly | Alias for every hour |
| @daily | Alias for midnight daily |
2.4 Triggers & Event-Driven Execution¶
Status: 📋 Planned | Priority: P0
React to Elasticsearch events automatically.
-- Trigger on new documents
CREATE TRIGGER on_critical_error
WHEN DOCUMENT INSERTED INTO logs-*
WHERE level = 'ERROR' AND service IN ('payment', 'auth', 'checkout')
BEGIN
-- @document contains the new document
DECLARE doc DOCUMENT = @document
-- Immediate alerting for critical services
CALL SLACK_SEND(
'#critical-alerts',
'🚨 Critical Error in ' || doc.service || ': ' || doc.message
)
-- Check if this is a pattern
DECLARE recent_count NUMBER
SET recent_count = ESQL_QUERY(
'FROM logs-*
| WHERE service = ''' || doc.service || '''
AND level = ''ERROR''
AND @timestamp > NOW() - 5 MINUTES
| STATS count = COUNT(*)'
)[0].count
IF recent_count > 10 THEN
CALL PAGERDUTY_TRIGGER(
'Error storm in ' || doc.service,
'critical',
{'service': doc.service, 'count': recent_count}
)
END IF
END TRIGGER
-- Trigger on alert firing (Elasticsearch Alerting integration)
CREATE TRIGGER on_alert_fire
WHEN ALERT 'high-error-rate' FIRES
BEGIN
-- @alert contains alert context
CALL escalate_to_oncall(@alert)
END TRIGGER
-- Trigger on index lifecycle events
CREATE TRIGGER on_index_rollover
WHEN INDEX ROLLED OVER IN logs-*
BEGIN
-- @old_index, @new_index available
PRINT 'Index rolled over: ' || @old_index || ' -> ' || @new_index
CALL archive_to_s3(@old_index)
END TRIGGER
-- Trigger management
ALTER TRIGGER on_critical_error DISABLE
DROP TRIGGER on_critical_error
SHOW TRIGGERS
Trigger Event Types:
| Event | Description | Variables |
|---|---|---|
| DOCUMENT INSERTED INTO index | New document indexed | @document |
| DOCUMENT UPDATED IN index | Document updated | @document, @old_document |
| DOCUMENT DELETED FROM index | Document deleted | @document_id |
| ALERT name FIRES | Elasticsearch alert fires | @alert |
| INDEX ROLLED OVER IN pattern | ILM rollover | @old_index, @new_index |
| INDEX CREATED pattern | New index created | @index |
| CLUSTER STATUS CHANGED TO status | Cluster health change | @status, @previous_status |
🚧 Phase 3: Enterprise Features (Q3-Q4 2026)¶
3.1 Packages & Modules¶
Status: 🟡 Partial | Priority: P1
Organize related procedures and functions into packages.
-- Package specification (public interface)
CREATE PACKAGE incident_response AS
-- Public procedures
PROCEDURE handle_incident(incident_id STRING)
PROCEDURE escalate(incident_id STRING, level NUMBER)
PROCEDURE resolve(incident_id STRING, resolution STRING)
-- Public functions
FUNCTION get_severity(incident_id STRING) RETURNS STRING
FUNCTION get_oncall() RETURNS STRING
-- Package constants
CONSTANT DEFAULT_TIMEOUT NUMBER = 300
CONSTANT ESCALATION_LEVELS ARRAY = ['low', 'medium', 'high', 'critical']
END PACKAGE
-- Package body (implementation)
CREATE PACKAGE BODY incident_response AS
-- Private state (per-session)
DECLARE active_incidents MAP<STRING, DOCUMENT> = {}
-- Private helper (not visible outside package)
PROCEDURE internal_notify(channel STRING, message STRING) AS
BEGIN
CALL SLACK_SEND(channel, message)
END
-- Public procedure implementation
PROCEDURE handle_incident(incident_id STRING) AS
BEGIN
CALL internal_notify('#incidents', 'Handling: ' || incident_id)
SET active_incidents[incident_id] = {'status': 'in_progress'}
END
-- Public function implementation
FUNCTION get_severity(incident_id STRING) RETURNS STRING AS
BEGIN
DECLARE incident DOCUMENT
SET incident = active_incidents[incident_id]
RETURN incident.severity ?? 'unknown'
END
END PACKAGE BODY
-- Usage
CALL incident_response.handle_incident('INC-001')
SET sev = incident_response.get_severity('INC-001')
PRINT incident_response.DEFAULT_TIMEOUT
3.2 Security & Access Control¶
Status: 📋 Planned | Priority: P1
Fine-grained access control for procedures and packages.
-- Grant execute permission
GRANT EXECUTE ON PROCEDURE analyze_logs TO ROLE 'analyst'
GRANT EXECUTE ON PACKAGE incident_response TO ROLE 'sre'
-- Revoke permission
REVOKE EXECUTE ON PROCEDURE delete_old_data FROM ROLE 'analyst'
-- Invoker vs definer rights
CREATE PROCEDURE admin_cleanup()
AUTHID DEFINER -- Runs with procedure owner's privileges
AS
BEGIN
-- Can perform admin operations even if caller is limited user
CALL delete_old_indices()
CALL vacuum_data()
END
CREATE PROCEDURE user_report()
AUTHID CURRENT_USER -- Runs with caller's privileges (default)
AS
BEGIN
-- Limited to what the calling user can access
CALL generate_report()
END
-- Secure credential reference (no plaintext secrets)
CALL HTTP_POST(
'https://api.pagerduty.com/incidents',
headers = {'Authorization': CREDENTIAL('pagerduty_api_key')},
body = incident_data
)
3.3 Debugging & Profiling¶
Status: 📋 Planned | Priority: P1
Built-in performance analysis and debugging.
-- Enable profiling for session
SET PROFILING ON
-- Run procedure
CALL complex_data_pipeline()
-- View execution profile
SHOW PROFILE
-- Output:
-- +------+-----------------------------------+--------+-------+
-- | Line | Statement                         | Time   | Calls |
-- +------+-----------------------------------+--------+-------+
-- | 10   | SET results = ESQL_QUERY(...)     | 2.345s | 1     |  <-- Bottleneck
-- | 15   | FOR doc IN results LOOP           | 0.523s | 1000  |
-- | 20   | CALL process_document(doc)        | 0.412s | 1000  |
-- | 25   | CALL HTTP_POST(...)               | 1.890s | 1000  |  <-- Bottleneck
-- +------+-----------------------------------+--------+-------+
-- |      | TOTAL                             | 5.170s |       |
-- +------+-----------------------------------+--------+-------+
-- Assertions for testing
ASSERT result > 0, 'Result should be positive'
ASSERT response.status = 200, 'HTTP call should succeed'
ASSERT ARRAY_LENGTH(items) <= 100, 'Too many items returned'
-- Debug logging
SET DEBUG ON
-- Shows variable assignments, function calls, branch decisions
🚧 Phase 4: Elasticsearch-Native Features (2027+)¶
These features leverage Elasticsearch's unique capabilities beyond traditional databases.
4.1 Vector Search & ML Integration¶
-- Semantic search with embeddings
DECLARE similar_docs ARRAY<DOCUMENT>
VECTOR_SEARCH INTO similar_docs
FROM knowledge-base
QUERY_VECTOR LLM_EMBED(user_question)
FIELD 'embedding'
K 10
NUM_CANDIDATES 100
-- RAG (Retrieval Augmented Generation) pattern
DECLARE context STRING = ARRAY_JOIN(
ARRAY_MAP(similar_docs, d => d.content),
'\n---\n'
)
SET answer = LLM_COMPLETE(
'Answer based on context:\n' || context || '\n\nQuestion: ' || user_question
)
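The context-assembly half of the RAG pattern is ordinary string work. A Python sketch with stub documents (embedding and LLM calls omitted; the prompt wording follows the example above):

```python
def build_rag_prompt(docs, question, separator="\n---\n"):
    """Join retrieved passages and append the question, mirroring the
    ARRAY_JOIN/ARRAY_MAP pipeline above. Retrieval and completion
    calls are out of scope for this sketch."""
    context = separator.join(d["content"] for d in docs)
    return f"Answer based on context:\n{context}\n\nQuestion: {question}"

# Stub documents standing in for vector-search hits:
docs = [{"content": "Shards hold index data."},
        {"content": "Replicas add redundancy."}]
prompt = build_rag_prompt(docs, "What do replicas do?")
```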
4.2 Index Lifecycle Automation¶
-- Programmatic ILM
CREATE PROCEDURE smart_retention(pattern STRING, hot_days NUMBER, warm_days NUMBER) AS
BEGIN
FOR idx IN (SHOW INDICES pattern) LOOP
DECLARE age_days NUMBER = DATE_DIFF(NOW(), idx.creation_date, 'days')
IF age_days > hot_days + warm_days THEN
CALL archive_to_s3(idx.name)
CALL delete_index(idx.name)
ELSIF age_days > hot_days THEN
CALL move_to_warm_tier(idx.name)
END IF
END LOOP
END
4.3 Cross-Cluster Operations¶
-- Query remote clusters
DECLARE remote_errors ARRAY<DOCUMENT>
FROM cluster:us-west/logs-* | WHERE level = 'ERROR' INTO remote_errors
-- Aggregate across clusters
DECLARE global_stats DOCUMENT
AGGREGATE INTO global_stats
FROM cluster:*/logs-*
| STATS total = COUNT(*), errors = COUNT(*) WHERE level = 'ERROR'
BY cluster
🤖 Modernization Framework¶
Guiding Principles¶
The procedural style (BEGIN/END, DECLARE, PROCEDURE) is a strength: it is familiar to database developers, SREs, and data engineers. Modernization focuses on capabilities and tooling, not syntax changes.
+-----------------------------------------------------------------------------+
|                          ELASTIC-SCRIPT PRINCIPLES                          |
+-----------------------------------------------------------------------------+
|  +----------+  +----------+  +------------+  +---------------------------+  |
|  |    AI    |  |   EASE   |  |    DATA    |  |       ELASTICSEARCH       |  |
|  |  NATIVE  |  |  OF USE  |  |   DRIVEN   |  |        EVERYTHING         |  |
|  +----------+  +----------+  +------------+  +---------------------------+  |
|                                                                             |
|  +------------+       +---------------+       +-------------+              |
|  |   MODERN   |       |  INTEROPER-   |       |  PLUGGABLE  |              |
|  |    TECH    |       |     ABLE      |       |             |              |
|  +------------+       +---------------+       +-------------+              |
+-----------------------------------------------------------------------------+
Key Design Decisions¶
ES|QL is Untouched
elastic-script augments ES|QL with new commands (like INTO, PROCESS WITH); it does not modify ES|QL itself. ES|QL remains the standard query language.
Leverage Existing Elastic Platform APIs
elastic-script is an orchestration layer that uses existing Elastic APIs:
- Agent Builder - Build and manage AI agents
- One Workflow - Create and execute workflows
- Dashboard-as-Code - Define dashboards programmatically
- Elasticsearch APIs - Full access to all ES functionality
- Kibana APIs - Saved objects, spaces, features
1. AI Native¶
Vision: elastic-script is the language AI agents speak to operate Elasticsearch.
Agent-First Architecture (via Agent Builder API)¶
-- Create agent using Elastic Agent Builder
CREATE AGENT log_analyst
USING AGENT_BUILDER {
MODEL 'azure-openai-gpt4'
CAPABILITIES ['query_logs', 'identify_patterns', 'summarize']
PROMPT "You are an expert at analyzing log data and identifying anomalies"
TOOLS [
PROCEDURE analyze_errors,
PROCEDURE summarize_trends,
FUNCTION ESQL_QUERY
]
}
-- Invoke agent
DECLARE analysis = AGENT log_analyst
TASK "Analyze payment errors in the last hour and identify patterns"
-- Multi-agent orchestration
CREATE WORKFLOW incident_investigation
USING ONE_WORKFLOW AS
BEGIN
SET analysis = AGENT log_analyst TASK "Investigate errors"
IF analysis.severity = 'critical' THEN
AGENT incident_responder TASK "Create P1 incident"
END IF
END WORKFLOW
MCP Server (Model Context Protocol)¶
Enable external AI agents (Claude, GPT, etc.) to operate Elasticsearch:
+------------------------------------------------------------------+
|                        External AI Agents                        |
|                   (Claude, GPT, Custom Agents)                   |
+------------------------------------------------------------------+
|                           MCP Protocol                           |
+------------------------------------------------------------------+
|                     elastic-script MCP Server                    |
|   [execute_code]   [call_procedure]   [discover_procedures]      |
+------------------------------------------------------------------+
|                          Elasticsearch                           |
+------------------------------------------------------------------+
| MCP Tool | Description |
|---|---|
| execute_escript | Run arbitrary elastic-script code |
| call_procedure | Call a stored procedure with arguments |
| discover_procedures | List available procedures with descriptions |
| query_elasticsearch | Execute ES\|QL queries |
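MCP tool invocations travel as JSON-RPC 2.0 messages using the tools/call method. A Python sketch of the request an external agent would send to invoke call_procedure (the procedure name and argument shape here are hypothetical):

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 'tools/call' request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical invocation of a stored procedure through the MCP server:
request = mcp_tool_call(1, "call_procedure",
                        {"name": "analyze_errors", "args": ["payment"]})
wire = json.dumps(request)
```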
Natural Language Interface¶
-- Direct natural language execution
EXECUTE "Find all payment errors in the last hour and alert the team"
-- Generate procedures from description
GENERATE PROCEDURE "Monitor payment service, alert on 5+ errors in 5 minutes"
SAVE AS payment_monitor
-- Semantic procedure discovery
DISCOVER PROCEDURES LIKE "handle incidents"
-- Returns: incident_response, alert_handler, on_call_escalation
-- AI-powered recommendations
RECOMMEND PROCEDURES FOR "I need to set up monitoring for a new microservice"
Agent Memory & Context¶
-- Persistent context across sessions
REMEMBER "The payment service has been unstable since deployment v2.3.1"
-- Recall in procedures
CREATE PROCEDURE smart_alert()
BEGIN
DECLARE context = RECALL "recent incidents for payment service"
IF context.has_ongoing_incident THEN
PRINT "Suppressing alert - ongoing incident exists"
ELSE
CALL create_incident()
END IF
END PROCEDURE
2. Ease of Use¶
Vision: From zero to productive in minutes. Progressive complexity.
Smart Defaults & One-Liners¶
-- Minimal syntax, sensible defaults
CREATE JOB cleanup SCHEDULE '@daily' AS
CALL delete_old_logs()
END JOB
-- Defaults: ENABLED true, TIMEZONE UTC
-- One-liners for common tasks
ALERT ON (FROM logs-* | WHERE level = 'ERROR' | STATS count) > 10
SEND SLACK '#alerts'
-- Quick monitoring
MONITOR 'payment-service' EVERY 5 MINUTES
ALERT IF error_rate > 0.01
Progressive Disclosure¶
-- Level 1: Simple inline
FROM logs-* | WHERE level = 'ERROR' | STATS count
-- Level 2: Named procedure
CREATE PROCEDURE count_errors()
RETURN FROM logs-* | WHERE level = 'ERROR' | STATS count
END PROCEDURE
-- Level 3: Parameterized with defaults
CREATE PROCEDURE count_errors(time_range STRING DEFAULT '24h')
RETURN FROM logs-*
| WHERE level = 'ERROR' AND @timestamp > NOW() - @time_range
| STATS count
END PROCEDURE
-- Level 4: Full-featured with security, observability
@description "Counts errors with alerting"
CREATE PROCEDURE count_errors(time_range STRING, alert_threshold NUMBER)
WITH { TRACING ON, MAX_EXECUTION_TIME = 30 SECONDS }
BEGIN
TRY
DECLARE count = FROM logs-* | WHERE ... | STATS count
IF count > alert_threshold THEN CALL alert_team(count) END IF
RETURN count
CATCH
CALL log_error(@error)
RAISE
END TRY
END PROCEDURE
Contextual Help¶
-- Inline help
HELP SLACK_SEND
-- Shows: signature, parameters, examples
-- Interactive examples
EXAMPLE "send slack notification"
-- Returns runnable example code
-- Smart error messages
> CALL SLCK_SEND('#alerts', 'test')
-- Error: Unknown function 'SLCK_SEND'. Did you mean 'SLACK_SEND'?
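A "Did you mean" hint is a closest-match lookup over the registered function names. Python's standard difflib demonstrates the idea (the candidate list and similarity cutoff below are illustrative, not the engine's):

```python
from difflib import get_close_matches

# Illustrative subset of registered built-in names:
REGISTERED = ["SLACK_SEND", "SLACK_UPLOAD", "HTTP_GET", "PAGERDUTY_TRIGGER"]

def suggest(name, candidates=REGISTERED):
    """Return the closest registered function name, or None if nothing
    is similar enough (cutoff chosen arbitrarily for the sketch)."""
    matches = get_close_matches(name, candidates, n=1, cutoff=0.6)
    return matches[0] if matches else None
```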
3. Data-Driven (Close to ES|QL)¶
Vision: ES|QL is the native query language. elastic-script augments it, never replaces it.
ES|QL Augmentation (Not Modification)¶
-- ES|QL is used as-is
DECLARE errors = FROM logs-* | WHERE level = 'ERROR' | LIMIT 100
-- elastic-script ADDS commands that work with ES|QL results
FROM logs-*
| WHERE level = 'ERROR'
| INTO my_results -- NEW: Store results
FROM logs-*
| PROCESS WITH analyze_error -- NEW: Call procedure per row
-- ES|QL in expressions (ES|QL unchanged, elastic-script wraps)
IF (FROM metrics-* | STATS AVG(cpu)) > 80 THEN
CALL alert_high_cpu()
END IF
Query Composition¶
-- Build queries programmatically (generates valid ES|QL)
CREATE FUNCTION build_log_query(
indices STRING DEFAULT 'logs-*',
level STRING DEFAULT NULL,
service STRING DEFAULT NULL
) RETURNS QUERY AS
BEGIN
DECLARE q = QUERY FROM @indices
IF level IS NOT NULL THEN
SET q = q | WHERE level = @level
END IF
IF service IS NOT NULL THEN
SET q = q | WHERE service = @service
END IF
RETURN q
END FUNCTION
-- Execute composed query
DECLARE results = EXECUTE build_log_query(level := 'ERROR') | LIMIT 100
Schema Awareness¶
-- Introspect index mappings (uses ES _mapping API)
DECLARE schema = SCHEMA FOR 'logs-*'
PRINT schema.fields.level.type -- 'keyword'
-- Validate data against schema
VALIDATE document AGAINST SCHEMA 'logs-*'
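Validating a document against a mapping amounts to walking field definitions and checking types. A Python sketch against a hand-written mapping fragment (the real feature would consume the _mapping API response; the type table here is deliberately partial):

```python
# Simplified mapping, shaped like an ES _mapping response fragment.
SCHEMA = {
    "level": {"type": "keyword"},
    "count": {"type": "long"},
}

# Partial correspondence between ES field types and Python types.
PYTHON_TYPES = {"keyword": str, "text": str, "long": int, "double": float}

def validate(document: dict, schema: dict) -> list:
    """Return a list of field-level problems; an empty list means valid."""
    problems = []
    for field, value in document.items():
        if field not in schema:
            problems.append(f"unknown field: {field}")
            continue
        expected = PYTHON_TYPES[schema[field]["type"]]
        if not isinstance(value, expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems
```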
Streaming & Continuous Queries¶
-- Continuous query (uses ES async search / PIT)
CREATE STREAM error_monitor AS
FROM logs-*
| WHERE level = 'ERROR'
| WINDOW TUMBLING 5 MINUTES
| STATS count BY service
| EMIT TO PROCEDURE handle_high_errors
4. Everything Elasticsearch¶
Vision: Full access to the Elasticsearch ecosystem via existing APIs.
Complete API Coverage¶
-- Uses ES Cluster APIs
CLUSTER HEALTH
CLUSTER SETTINGS SET 'cluster.routing.allocation.enable' = 'all'
-- Uses ES Index APIs
CREATE INDEX 'my-index' WITH MAPPINGS { ... }
REINDEX FROM 'source-*' TO 'dest'
-- Uses ES Alias APIs
CREATE ALIAS 'current' FOR 'logs-2026.01'
SWAP ALIAS 'current' FROM 'logs-2026.01' TO 'logs-2026.02'
ILM Integration¶
-- Uses ES ILM APIs
CREATE ILM POLICY 'logs-policy' AS {
HOT { ROLLOVER MAX_SIZE '50GB' MAX_AGE '1d' }
WARM { MIN_AGE '7d', SHRINK NUMBER_OF_SHARDS 1 }
DELETE { MIN_AGE '90d' }
}
APPLY ILM POLICY 'logs-policy' TO INDEX TEMPLATE 'logs-template'
Alerting Integration¶
-- Uses ES Alerting / Watcher APIs
CREATE ALERT high_error_rate
TRIGGER SCHEDULE EVERY 5 MINUTES
INPUT (FROM logs-* | WHERE level = 'ERROR' | STATS count)
CONDITION result.count > 100
ACTIONS {
SLACK '#alerts' MESSAGE 'High error rate: {{count}}'
}
ML Integration¶
-- Uses ES ML APIs
CREATE ML JOB error_anomaly
ANALYSIS_CONFIG { DETECTORS [{ FUNCTION 'count' }], BUCKET_SPAN '15m' }
DATAFEED (FROM logs-* | WHERE level = 'ERROR')
-- Uses ES Inference APIs
DECLARE sentiment = INFER 'sentiment-model' WITH { text: message }
Ingest Pipeline Integration¶
-- Uses ES Ingest APIs
CREATE INGEST PIPELINE 'enrich-logs' AS {
GROK FIELD 'message' PATTERNS ['%{TIMESTAMP:ts} %{LOGLEVEL:level}']
ENRICH POLICY 'geo-lookup' FIELD 'ip' TARGET 'geo'
}
INDEX document INTO 'logs' PIPELINE 'enrich-logs'
5. Modern Technologies¶
Vision: Built with and for modern infrastructure.
Real-Time Streaming¶
-- Uses ES async capabilities
WEBSOCKET NOTIFY '#channel' ON
FROM logs-* | WHERE level = 'ERROR' | STATS count > 10
-- Event publishing (to Kafka, etc.)
PUBLISH EVENT 'order.created' TO 'events-topic' WITH order_data
Container & Kubernetes Native¶
-- Uses K8s API
DECLARE pods = K8S_GET_PODS(namespace := 'production')
FOR pod IN pods LOOP
IF (FROM logs-* | WHERE k8s.pod.name = pod.name | WHERE level = 'ERROR' | STATS count) > 10 THEN
CALL K8S_RESTART_POD(pod.name)
END IF
END LOOP
Serverless Functions¶
-- Serverless execution model
CREATE FUNCTION process_webhook(event DOCUMENT)
SERVERLESS
TRIGGER HTTP POST '/webhook/github'
AS
BEGIN
CALL handle_github_event(event.body)
END FUNCTION
GitOps & Infrastructure as Code¶
-- Procedures stored as files, deployed via CI/CD
-- File: procedures/incident_response.escript
-- Deploy command
DEPLOY FROM GIT 'main'
PREVIEW DEPLOY 'procedures/*.escript'
6. Interoperable¶
Vision: Works with everything. Standards-based.
OpenTelemetry Native¶
-- Automatic OTEL instrumentation
CREATE PROCEDURE my_operation()
WITH OTEL { SERVICE_NAME 'my-service' }
AS
BEGIN
-- Traces automatically created and sent to APM
END PROCEDURE
-- Custom spans
OTEL_SPAN 'process-batch' BEGIN
FOR item IN batch LOOP
OTEL_METRIC 'items.processed' INCREMENT 1
END LOOP
END OTEL_SPAN
Protocol Support¶
-- HTTP
DECLARE response = HTTP_GET('https://api.example.com/data')
-- gRPC
DECLARE response = GRPC_CALL('orders.OrderService/GetOrder', { id: '123' })
-- GraphQL
DECLARE response = GRAPHQL_QUERY('https://api/graphql',
'query { user(id: "123") { name } }')
-- Message Queues
KAFKA_PRODUCE('topic', message)
RABBITMQ_PUBLISH('exchange', 'routing.key', message)
Data Format Support¶
-- Parquet (for data interchange)
EXPORT (FROM logs-* | LIMIT 10000) TO PARQUET 's3://bucket/logs.parquet'
IMPORT PARQUET 's3://bucket/data.parquet' INTO 'imported-data'
-- CSV, JSON, YAML, XML
DECLARE records = PARSE_CSV(csv_string)
DECLARE config = PARSE_YAML(yaml_string)
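For comparison, header-aware CSV parsing with Python's standard library shows the kind of record list a PARSE_CSV built-in would plausibly return (the return shape is an assumption):

```python
import csv
import io

def parse_csv(text: str):
    """Parse CSV text with a header row into a list of dicts,
    one dict per data row keyed by column name."""
    return list(csv.DictReader(io.StringIO(text)))

records = parse_csv("service,errors\npayment,12\nauth,3\n")
```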
Cloud Provider Integration¶
-- AWS
DECLARE secret = AWS_SECRETS_MANAGER_GET('my-secret')
CALL AWS_LAMBDA_INVOKE('my-function', payload)
-- GCP
CALL GCP_PUBSUB_PUBLISH('projects/.../topics/my-topic', message)
-- Azure
DECLARE secret = AZURE_KEYVAULT_GET('my-secret')
7. Pluggable¶
Vision: Extensible at every layer without forking.
Custom Functions¶
-- Register custom function (Java)
REGISTER FUNCTION my_custom_function
CLASS 'com.mycompany.escript.MyFunction'
JAR 's3://plugins/my-functions.jar'
-- Register from external service
REGISTER FUNCTION external_calc
HTTP POST 'https://calc.service/compute'
Function Registries¶
-- Connect to function registry
CONNECT REGISTRY 'https://registry.company.com/escript-functions'
-- Install functions from registry
INSTALL FUNCTIONS FROM 'company/data-quality' VERSION '^2.0'
Middleware / Interceptors¶
-- Define middleware
CREATE MIDDLEWARE audit_all
BEFORE EXECUTE ANY PROCEDURE
AS
BEGIN
CALL log_execution_start(@procedure, @args, @user)
END MIDDLEWARE
AFTER EXECUTE ANY PROCEDURE
AS
BEGIN
CALL log_execution_end(@procedure, @result, @duration)
END MIDDLEWARE
-- Apply middleware
APPLY MIDDLEWARE audit_all TO ALL PROCEDURES
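The BEFORE/AFTER hooks above could be implemented in the runtime as simple wrappers around procedure invocation. A minimal Python sketch of the idea (names like `audit_log` and `with_middleware` are illustrative, not part of the proposed API):

```python
import time
from functools import wraps

audit_log = []  # stand-in for the audit index

def with_middleware(fn):
    """Wrap a procedure so BEFORE/AFTER hooks fire around every call."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        audit_log.append(("start", fn.__name__, args))        # BEFORE EXECUTE
        result = fn(*args, **kwargs)
        duration = time.monotonic() - start
        audit_log.append(("end", fn.__name__, result, duration))  # AFTER EXECUTE
        return result
    return wrapper

@with_middleware
def analyze_logs(index):
    return f"analyzed {index}"
```

Applying middleware `TO ALL PROCEDURES` would then correspond to wrapping every registered procedure at registration time rather than decorating each one by hand.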
Custom Statements (DSL Extensions)¶
-- Extend grammar with domain-specific syntax
EXTEND GRAMMAR WITH {
'MONITOR' service:STRING 'FOR' duration:DURATION 'ALERT' 'IF' condition:EXPRESSION
=> CREATE TRIGGER ... (generates standard elastic-script)
}
-- Use new syntax
MONITOR 'payment-api' FOR 5 MINUTES ALERT IF error_rate > 0.05
Security & Governance¶
Enterprise-grade security controls.
Sandboxing & Resource Limits¶
CREATE PROCEDURE untrusted_operation()
WITH {
-- Execution limits
MAX_EXECUTION_TIME = 60 SECONDS
MAX_MEMORY = 256MB
MAX_ES_QUERIES = 100
MAX_RESULT_SIZE = 10MB
-- Data access restrictions
ALLOWED_INDICES = ['logs-*', 'metrics-*']
DENIED_INDICES = ['security-*', '.kibana*']
-- Network restrictions
NETWORK = DENY
-- OR
ALLOWED_HOSTS = ['api.slack.com', 'api.pagerduty.com']
-- Function restrictions
DENIED_FUNCTIONS = ['ES_DELETE', 'DROP_INDEX']
}
BEGIN
-- Runs in sandboxed environment
END PROCEDURE
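The `ALLOWED_INDICES`/`DENIED_INDICES` patterns are ordinary glob patterns, with deny rules taking precedence over allow rules. A minimal sketch of that check (Python, illustrative only):

```python
from fnmatch import fnmatch

# Patterns taken from the example sandbox above.
ALLOWED = ['logs-*', 'metrics-*']
DENIED = ['security-*', '.kibana*']

def index_permitted(index: str) -> bool:
    """Deny wins over allow; anything not explicitly allowed is denied."""
    if any(fnmatch(index, p) for p in DENIED):
        return False
    return any(fnmatch(index, p) for p in ALLOWED)
```

With this rule, `logs-2024` is permitted, while `security-audit`, `.kibana_1`, and any index outside the allow list are rejected before the query ever runs.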
Execution Policies¶
CREATE POLICY production_safety AS
BEGIN
REQUIRE MAX_EXECUTION_TIME <= 300 SECONDS
DENY FUNCTION 'ES_DELETE' UNLESS ROLE IN ('admin')
DENY INDEX_PATTERN '.security-*'
REQUIRE APPROVAL FOR FUNCTION 'DROP_INDEX'
END POLICY
APPLY POLICY production_safety TO ROLE 'developer'
Secrets Management¶
-- Reference secrets by name (uses ES keystore or external vault)
CALL HTTP_POST(
'https://api.pagerduty.com/incidents',
headers = {'Authorization': SECRET('pagerduty_key')}
)
Audit Trail & RBAC¶
-- Audit logging
SHOW AUDIT LOG FOR PROCEDURE sensitive_operation
WHERE user = 'john' AND @timestamp > NOW() - 7 DAYS
-- Role-based access
GRANT EXECUTE ON PROCEDURE analyze_logs TO ROLE 'analyst'
REVOKE EXECUTE ON PROCEDURE delete_data FROM ROLE 'developer'
Developer Experience¶
Modern tooling around the procedural language.
Language Server Protocol (LSP)¶
| Feature | Description |
|---|---|
| Autocomplete | Procedures, functions, variables, ES\|QL fields |
| Hover docs | Function signatures, procedure documentation |
| Go to definition | Jump to procedure source |
| Diagnostics | Real-time error detection |
Enables: VS Code extension, Cursor extension, JetBrains plugin.
Rich Notebook Outputs¶
-- Rich output in Jupyter notebooks
DISPLAY TABLE errors WITH { TITLE 'Errors by Service', CHART 'bar' }
DISPLAY CHART { TYPE 'timeseries', DATA query_results }
Procedure Versioning¶
SHOW PROCEDURE HISTORY my_procedure
DIFF PROCEDURE my_procedure VERSION 3 WITH VERSION 5
ROLLBACK PROCEDURE my_procedure TO VERSION 3
Built-in OpenTelemetry¶
CREATE PROCEDURE process_orders()
WITH TRACING ON
BEGIN
-- Traces automatically created and sent to APM
END PROCEDURE
SHOW PROCEDURE METRICS my_procedure
-- Shows: executions, success rate, P50/P99 duration
Syntax Modernization: Commands & Type-Namespaced Functions¶
Status: 🔴 Not Started | Priority: P0
First-Class Commands¶
Core Elasticsearch operations become language keywords (not functions):
-- ✅ Proposed: First-class commands
INDEX document INTO 'my-index';
DELETE FROM 'my-index' WHERE _id = '123';
SEARCH 'my-index' QUERY { "match": { "title": "elastic" } };
REFRESH 'my-index';
CREATE INDEX 'new-index' WITH { "mappings": {...} };
-- ❌ Current: Function calls (will be deprecated)
INDEX_DOCUMENT('my-index', document);
ES_DELETE('my-index', '123');
Type-Namespaced Functions (UPPERCASE)¶
Functions use TYPE.FUNCTION() pattern for clarity and to avoid ambiguity:
-- ✅ Proposed: Type-namespaced functions
DECLARE len = ARRAY.LENGTH(my_array);
DECLARE keys = DOCUMENT.KEYS(my_doc);
DECLARE upper_name = STRING.UPPER(name);
DECLARE tomorrow = DATE.ADD(today, 1, 'DAY');
-- Extensions follow same pattern
DECLARE pods = K8S.GET_PODS('default');
DECLARE result = OPENAI.COMPLETE(prompt);
DECLARE msg = SLACK.POST_MESSAGE(channel, text);
-- ❌ Current: SNAKE_CASE functions (will be deprecated)
ARRAY_LENGTH(my_array);
DOCUMENT_KEYS(my_doc);
STRING_UPPER(name);
Benefits: - No ambiguity with variable names - Self-documenting type expectations - Consistent UPPERCASE style - Extensible namespace pattern
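The `TYPE.FUNCTION()` pattern maps naturally onto a two-level registry: resolve the namespace first, then the function within it. A minimal dispatch sketch (Python; the registry contents are examples, not the real function set):

```python
# Two-level registry: namespace -> function name -> implementation.
REGISTRY = {
    "ARRAY": {"LENGTH": lambda arr: len(arr)},
    "STRING": {"UPPER": lambda s: s.upper()},
}

def call(namespace: str, name: str, *args):
    """Resolve TYPE.FUNCTION(...) against the registry."""
    try:
        fn = REGISTRY[namespace][name]
    except KeyError:
        raise NameError(f"unknown function {namespace}.{name}")
    return fn(*args)
```

Extensions such as `K8S.*` or `SLACK.*` would simply register additional namespaces, which is what makes the pattern extensible without grammar changes.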
Type-Aware ES|QL Binding¶
Status: 🔴 Not Started | Priority: P0
Inline ES|QL with type-specific result binding:
-- ───────────────────────────────────────────────────────────────
-- CURSOR: Multiple rows → Iterate one at a time
-- ───────────────────────────────────────────────────────────────
DECLARE logs CURSOR FOR
FROM logs-* | WHERE level = 'ERROR' | LIMIT 100;
FOR log IN logs LOOP
PRINT log.message;
END LOOP
-- ───────────────────────────────────────────────────────────────
-- ARRAY: Multiple rows → Capture all rows at once
-- ───────────────────────────────────────────────────────────────
DECLARE logs ARRAY FROM logs-* | WHERE level = 'ERROR' | LIMIT 100;
PRINT 'Found ' || ARRAY.LENGTH(logs) || ' errors';
-- ───────────────────────────────────────────────────────────────
-- DOCUMENT: Single row with multiple fields → One document
-- ───────────────────────────────────────────────────────────────
DECLARE stats DOCUMENT FROM logs-*
| STATS count = COUNT(*), errors = COUNT(*) WHERE level = 'ERROR';
PRINT 'Total: ' || stats.count || ', Errors: ' || stats.errors;
-- ───────────────────────────────────────────────────────────────
-- SCALAR: Single row, single column → Direct value
-- ───────────────────────────────────────────────────────────────
DECLARE total_count NUMBER FROM logs-* | STATS c = COUNT(*);
PRINT 'Total logs: ' || total_count; -- Directly a number, not {c: 42}
DECLARE latest DATE FROM logs-* | STATS latest = MAX(@timestamp);
PRINT 'Latest: ' || latest;
DECLARE service STRING FROM services
| WHERE id = 'svc-001' | KEEP name | LIMIT 1;
PRINT 'Service: ' || service;
Type Binding Summary:
| Declaration | ES\|QL Result Expected | Binding |
|---|---|---|
| `DECLARE x CURSOR FOR <esql>` | Multiple rows | Iterate with FOR |
| `DECLARE x ARRAY FROM <esql>` | Multiple rows | All rows as array |
| `DECLARE x DOCUMENT FROM <esql>` | Single row | Row as document |
| `DECLARE x NUMBER FROM <esql>` | 1 row, 1 column | Scalar value |
| `DECLARE x STRING FROM <esql>` | 1 row, 1 column | Scalar value |
| `DECLARE x DATE FROM <esql>` | 1 row, 1 column | Scalar value |
| `DECLARE x BOOLEAN FROM <esql>` | 1 row, 1 column | Scalar value |
Runtime Validation:
-- ERROR: Expected scalar but got multiple rows
DECLARE count NUMBER FROM logs-* | LIMIT 10;
-- ❌ RuntimeError: Query returned 10 rows, expected 1 for NUMBER
-- ERROR: Expected scalar but got multiple columns
DECLARE count NUMBER FROM logs-* | STATS a = COUNT(*), b = SUM(size);
-- ❌ RuntimeError: Query returned 2 columns, expected 1 for NUMBER
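The validation rule is mechanical: a scalar binding accepts exactly one row with exactly one column, and anything else is a runtime error. A sketch of that check (Python, illustrative; the error messages mirror the ones shown above):

```python
def bind_scalar(rows: list, target_type: str):
    """Bind a query result (list of row dicts) to a scalar-typed variable."""
    if len(rows) != 1:
        raise RuntimeError(
            f"Query returned {len(rows)} rows, expected 1 for {target_type}")
    row = rows[0]
    if len(row) != 1:
        raise RuntimeError(
            f"Query returned {len(row)} columns, expected 1 for {target_type}")
    return next(iter(row.values()))  # unwrap {c: 42} -> 42
```

The same shape checks, with the row/column constraints relaxed, would drive the DOCUMENT (one row, any columns) and ARRAY (any rows) bindings.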
Modernization Priority¶
| Strategy | Impact | Effort | Priority |
|---|---|---|---|
| Syntax: First-Class Commands | 🔥🔥🔥 | Medium | ⭐ P0 |
| Syntax: Type-Namespaced Functions | 🔥🔥🔥 | Medium | ⭐ P0 |
| Type-Aware ES\|QL Binding | 🔥🔥🔥 | Medium | ⭐ P0 |
| MCP Server | 🔥🔥🔥 | Medium | ⭐ P0 |
| Agent Builder Integration | 🔥🔥🔥 | Medium | ⭐ P0 |
| Sandboxing & Resource Limits | 🔥🔥🔥 | Medium | ⭐ P0 |
| Natural Language Interface | 🔥🔥🔥 | High | P1 |
| ES\|QL Augmentation (INTO, PROCESS) | 🔥🔥 | Medium | P1 |
| One Workflow Integration | 🔥🔥 | Medium | P1 |
| Dashboard-as-Code Integration | 🔥🔥 | Medium | P1 |
| Secrets Management | 🔥🔥 | Low | P1 |
| LSP Implementation | 🔥🔥 | High | P1 |
| OpenTelemetry Integration | 🔥🔥 | Medium | P1 |
| Custom Function Registry | 🔥🔥 | High | P2 |
| Procedure Versioning | 🔥 | Low | P2 |
🚀 App Deployment Platform (Vercel-Style)¶
Deploy data-driven applications directly from elastic-script. Write code, get a deployed app URL.
Vision¶
┌────────────────────────────────────────────────────────────┐
│                    Developer Experience                    │
│  CREATE APP inventory_manager                              │
│    ROUTE '/inventory'                                      │
│  AS BEGIN                                                  │
│    RENDER EUI.Page { ... }                                 │
│    RENDER DASHBOARD 'inventory-overview'                   │
│  END APP                                                   │
│                          ↓ DEPLOY                          │
├────────────────────────────────────────────────────────────┤
│                    EScript App Runtime                     │
│  • Hosts apps at /apps/{app-name}                          │
│  • Renders EUI components                                  │
│  • Embeds Dashboards-as-Code                               │
│  • Executes elastic-script logic                           │
│  • Connects to Elasticsearch for data                      │
├────────────────────────────────────────────────────────────┤
│                       Elasticsearch                        │
│  .escript_apps    Your Data Indices    .kibana_dashboards  │
└────────────────────────────────────────────────────────────┘
Architecture¶
Standalone Runtime (not a Kibana plugin):
- Serves apps at routes like `https://apps.company.com/inventory`
- Uses Kibana's rendering stack (EUI, Elastic Charts, React)
- Embeds dashboards via dashboards-as-code API (March 2026)
- Executes elastic-script for business logic
- Authenticates via Elasticsearch security
- Apps can work standalone OR be embedded in Kibana later
App Definition Syntax¶
CREATE APP incident_response
ROUTE '/incidents' -- URL path
TITLE 'Incident Response Center' -- Browser title
ICON 'alert' -- EUI icon
AUTH REQUIRED -- Requires login
ROLES ['sre', 'oncall'] -- Who can access
AS
BEGIN
-- App state (reactive)
STATE selected_incident DOCUMENT DEFAULT NULL
STATE severity_filter STRING DEFAULT 'all'
-- UI components
RENDER HEADER 'Incident Response' WITH {
ACTIONS [
BUTTON 'Create' ON_CLICK => MODAL create_modal
]
}
RENDER STATS { ... }
RENDER TABLE { ... }
RENDER CHART { ... }
RENDER FORM { ... }
RENDER DASHBOARD 'service-health' { ... }
END APP
EUI Component Mapping¶
Data Display¶
-- Table (EUI.EuiBasicTable)
RENDER TABLE inventory_table {
DATA (FROM inventory-* | SORT last_updated DESC | LIMIT 100)
COLUMNS [
{ FIELD 'sku' LABEL 'SKU' SORTABLE },
{ FIELD 'name' LABEL 'Product Name' },
{ FIELD 'quantity' LABEL 'Qty' TYPE 'number' }
]
PAGINATION { PAGE_SIZE 25 }
ACTIONS [
{ LABEL 'Edit' ON_CLICK (row) => CALL edit_item(row) }
]
}
-- Stats (EUI.EuiStat)
RENDER STATS {
STAT { TITLE 'Total Items' VALUE (FROM inventory-* | STATS COUNT(*)) }
STAT { TITLE 'Low Stock' VALUE (...) COLOR 'danger' }
}
-- Chart (Elastic Charts)
RENDER CHART inventory_trend {
TYPE 'area'
DATA (FROM inventory-* | STATS sum(quantity) BY @timestamp BUCKET 1d)
X '@timestamp'
Y 'sum_quantity'
}
Forms & Input¶
RENDER FORM add_item_form {
FIELD sku { TYPE 'text' LABEL 'SKU' REQUIRED }
FIELD quantity { TYPE 'number' DEFAULT 0 }
FIELD category {
TYPE 'select'
OPTIONS (FROM categories-* | STATS DISTINCT(name))
}
ON_SUBMIT (values) => BEGIN
CALL add_inventory_item(values)
TOAST 'Item added' TYPE 'success'
REFRESH inventory_table
END
}
RENDER SEARCH {
PLACEHOLDER 'Search...'
FILTERS [
{ FIELD 'category' TYPE 'select' OPTIONS [...] }
]
ON_CHANGE (query) => REFRESH table WITH FILTER query
}
Dashboard Embedding¶
-- Embed existing dashboard
RENDER DASHBOARD 'abc-123-def' {
HEIGHT '600px'
FILTERS { 'service' = @selected_service }
}
-- Or define inline (dashboards-as-code)
RENDER DASHBOARD {
TITLE 'Error Analysis'
TIME_RANGE 'now-24h' TO 'now'
PANEL 'errors_over_time' {
TYPE 'lens'
VISUALIZATION 'line'
DATA (FROM logs-* | WHERE level = 'ERROR' | STATS count BY @timestamp)
}
}
Layout¶
RENDER ROW {
RENDER COLUMN { WIDTH '30%' ... }
RENDER COLUMN { WIDTH '70%' ... }
}
RENDER TABS {
TAB 'Overview' { ... }
TAB 'Details' { ... }
}
MODAL create_modal {
TITLE 'Create New Item'
RENDER FORM { ... }
ON_SUBMIT => BEGIN ... CLOSE MODAL END
}
State & Interactivity¶
CREATE APP interactive_demo ROUTE '/demo' AS
BEGIN
-- Reactive state
STATE selected_service STRING DEFAULT 'all'
STATE date_range STRING DEFAULT 'now-24h'
-- Components react to state changes
RENDER SELECT {
VALUE @selected_service
ON_CHANGE (val) => SET selected_service = val
}
RENDER TABLE {
DATA (FROM logs-* | WHERE service = @selected_service)
REFRESH EVERY 30 SECONDS
}
-- Lifecycle hooks
ON_MOUNT => CALL log_app_access()
ON_UNMOUNT => CALL cleanup()
END APP
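Reactive `STATE` amounts to an observer pattern: `SET` stores the new value and re-renders every component bound to that state. A minimal sketch (Python; `State.subscribe`/`State.set` are illustrative names, not the runtime API):

```python
class State:
    """A reactive value: components subscribe, SET triggers re-render."""
    def __init__(self, value):
        self.value = value
        self._subs = []

    def subscribe(self, render):
        self._subs.append(render)
        render(self.value)            # initial render

    def set(self, value):             # corresponds to: SET name = val
        self.value = value
        for render in self._subs:
            render(value)
```

A `RENDER TABLE` whose `DATA` query references `@selected_service` would subscribe to that state, so changing the select re-runs the query and repaints the table.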
Deployment Management¶
-- Deploy an app
DEPLOY APP incident_response
-- View all apps
SHOW APPS
-- ┌────────────────────┬────────────┬─────────┬─────────┐
-- │ Name               │ Route      │ Status  │ Version │
-- ├────────────────────┼────────────┼─────────┼─────────┤
-- │ incident_response  │ /incidents │ Running │ v3      │
-- │ inventory_manager  │ /inventory │ Running │ v1      │
-- └────────────────────┴────────────┴─────────┴─────────┘
-- Version management
ROLLBACK APP incident_response TO VERSION 2
STOP APP my_app
START APP my_app
DROP APP my_app
Access URLs: each deployed app is served at its route, e.g. `https://apps.company.com/incidents`.
Implementation Phases¶
| Phase | Deliverables | Duration |
|---|---|---|
| 1. Runtime Foundation | App Server, Router, Auth | 2 weeks |
| 2. Core Components | TABLE, STATS, CHART, FORM | 2 weeks |
| 3. Dashboard Integration | Embed dashboards-as-code | 1 week |
| 4. Interactivity | STATE, Events, Modals | 2 weeks |
| 5. Advanced Components | Search, Filters, Tabs | 1 week |
| 6. Deployment CLI | DEPLOY, SHOW, ROLLBACK | 1 week |
🗓️ Release Timeline¶
| Version | Target | Focus |
|---|---|---|
| v1.0 | ✅ Current | Core language, 118 functions, async execution |
| v1.1 | Q1 2026 | Exception handling, user-defined functions |
| v1.2 | Q2 2026 | Dynamic ES\|QL, associative arrays, sandboxing |
| v1.3 | Q3 2026 | Cursors, bulk operations |
| v2.0 | Q4 2026 | Triggers, scheduled jobs |
| v2.1 | Q1 2027 | Packages, security, secrets management |
| v2.5 | Q2 2027 | MCP Server, natural language interface |
| v3.0 | Q3 2027 | App Deployment Platform (Phase 1-3) |
| v3.5 | Q4 2027 | App Platform (Phase 4-6), LSP |
| v4.0 | 2028 | Vector search, cross-cluster, package registry |
🐛 Known Issues¶
| Issue | Status | Workaround |
|---|---|---|
| `.escript_executions` index not auto-created | 🚧 In Progress | Create manually before STATUS calls |
| 118 functions registered per-request | 📋 Planned | Move to startup registration |
| No transaction support | 📋 Backlog | Use compensating actions |
💡 Feature Requests¶
Have an idea for elastic-script? We'd love to hear it!
🤝 Contributing¶
Want to help build these features? Check out the Contributing Guide to get started!
🚧 Current Implementation: Triggers & Scheduling¶
Overview¶
Polling-based architecture for non-invasive event automation:
- Scheduled Jobs: Cron-based recurring execution
- Event Triggers: React to new documents in indices
- No indexing impact: Polling doesn't affect write performance
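One way to picture the polling model: each tick fetches only documents newer than a saved checkpoint, so writes are never intercepted. A sketch (Python, illustrative; `search` stands in for the real ES|QL/search call and the checkpoint would live in a system index):

```python
def poll_once(search, checkpoint):
    """One polling tick: fetch docs newer than the checkpoint, advance it."""
    docs = search(since=checkpoint)   # WHEN condition applied server-side
    new_checkpoint = max(
        (d["@timestamp"] for d in docs), default=checkpoint)
    return docs, new_checkpoint
```

Each matched batch would be handed to the trigger body as `@documents`; because the checkpoint only ever advances, a document is delivered at most once per trigger.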
Syntax Preview¶
-- Scheduled Job
CREATE JOB daily_cleanup
SCHEDULE '0 2 * * *'
AS
BEGIN
CALL archive_old_logs(30)
END JOB
-- Event Trigger
CREATE TRIGGER on_payment_error
ON INDEX 'logs-*'
WHEN level = 'ERROR' AND service = 'payment'
EVERY 5 SECONDS
AS
BEGIN
FOR doc IN @documents LOOP
CALL SLACK_SEND('#alerts', doc.message)
END LOOP
END TRIGGER
Implementation Phases¶
| Phase | Deliverables | Duration |
|---|---|---|
| 1. Grammar & Storage | ANTLR rules, index mappings | 1 week |
| 2. Statement Handlers | CREATE/ALTER/DROP/SHOW handlers | 1 week |
| 3. Execution Services | Scheduler, polling, leader election | 1 week |
| 4. Testing & Docs | Unit tests, E2E notebooks | 1 week |
See Triggers & Scheduling for full documentation.