Orchestrator Online: Our MVP Demo Flows Are Live

Sprint 5 was a big one for us. We didn’t just wire up a few endpoints — we turned on the entire core of the platform and made it demoable end-to-end.

We now have a live MVP experience that runs real signals through the system, with a small UI on top so anyone can see what’s happening without reading a line of code.

Three Flows, One Pipeline

The new Platform Demo Flows page drives everything through the orchestrator and the real services behind it:
1. Transaction Normalization
• You enter a transaction ID and amount.
• The orchestrator calls our schema engine (ASLF) to normalize the event.
• The normalized data is ingested into KGF, then evaluated by RLIE.
2. Access Policy Evaluation
• You provide a user and resource (for example, alice / doc-1).
• ASLF normalizes the access event.
• RLIE evaluates it against current signals and returns a decision-style result.
3. Freeform Text
• You send in arbitrary text (e.g., “hello world”).
• The platform runs it through the same ASLF → KGF → RLIE path, exercising the text side of the pipeline.
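In sketch form, the three flows are just different job payloads going into the same orchestrator. The snippet below is illustrative only: the flow names, step names, and field names are assumptions for the example, not our actual API.

```python
# Hypothetical sketch of the three demo flows as orchestrator job payloads.
# Flow names, step names, and fields are illustrative assumptions,
# not the platform's real API surface.

def build_demo_job(flow: str, **params) -> dict:
    """Build a job payload for one of the three demo flows."""
    required = {
        "transaction": {"txId", "amount"},   # Transaction Normalization
        "access": {"user", "resource"},      # Access Policy Evaluation
        "text": {"text"},                    # Freeform Text
    }
    if flow not in required:
        raise ValueError(f"unknown flow: {flow}")
    missing = required[flow] - params.keys()
    if missing:
        raise ValueError(f"missing fields for {flow}: {sorted(missing)}")
    # Every flow runs the same ASLF -> KGF -> RLIE pipeline.
    return {"flow": flow, "steps": ["aslf", "kgf", "rlie"], "input": params}

job = build_demo_job("access", user="alice", resource="doc-1")
```

Whatever the flow, the payload declares the same three pipeline steps; only the input fields differ.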

All of this is orchestrated via the job worker and queues — the same path we’ll use for real workloads, not a separate “demo only” system.
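Conceptually, each job the worker picks up runs the same three-step chain. The sketch below is a stand-in for that path: the three service functions are placeholders for the real HTTP calls, and their return shapes are invented for the example.

```python
# Minimal sketch of the worker chaining ASLF -> KGF -> RLIE for one job.
# The three service functions are placeholders for real HTTP calls;
# their signatures and return values are assumptions, not real client code.

def aslf_normalize(event: dict) -> dict:
    """Stand-in for the schema engine's normalization call."""
    return {"normalized": True, **event}

def kgf_ingest(normalized: dict) -> str:
    """Stand-in for KGF ingestion; the real service returns real ingest IDs."""
    return "ingest-001"

def rlie_evaluate(ingest_id: str) -> dict:
    """Stand-in for RLIE evaluation of the ingested event."""
    return {"ingestId": ingest_id, "decision": "allow"}

def run_demo_job(event: dict) -> dict:
    """The same path every flow takes, demo or production."""
    normalized = aslf_normalize(event)
    ingest_id = kgf_ingest(normalized)
    return rlie_evaluate(ingest_id)

result = run_demo_job({"user": "alice", "resource": "doc-1"})
```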

Live Platform Stats, Not Just Pretty JSON

On the right side of the demo page there’s a Live Platform Stats panel driven by RLIE’s adaptive learning state.

When you click Refresh stats, the UI calls a small debug endpoint in RLIE via the orchestrator and displays:

• Total RLIE ingests (how many events the model has seen)
• Top source services
• Top task names
• Last updated timestamp
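To make the panel concrete, here is one plausible shape for that debug payload and how the four readouts could be rendered from it. The field names and values are assumptions based on what the panel displays, not RLIE's actual response format.

```python
# Illustrative shape of the RLIE debug-stats payload behind "Refresh stats".
# Field names and values are assumptions, not RLIE's real response format.
stats = {
    "totalIngests": 42,
    "topSources": {"orchestrator": 30, "demo-ui": 12},
    "topTasks": {"transaction-normalization": 25, "access-policy": 17},
    "lastUpdated": "2025-01-01T12:00:00Z",
}

def render_stats(s: dict) -> str:
    """Format the panel's four readouts from the raw payload."""
    by_count = lambda d: sorted(d, key=d.get, reverse=True)
    lines = [
        f"Total RLIE ingests: {s['totalIngests']}",
        "Top sources: " + ", ".join(by_count(s["topSources"])),
        "Top tasks: " + ", ".join(by_count(s["topTasks"])),
        f"Last updated: {s['lastUpdated']}",
    ]
    return "\n".join(lines)
```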

As you run more flows, these numbers change. It’s a simple way to show that the platform isn’t just responding to requests — it’s accumulating experience over time.

Under the Hood: Real Services, Real Signals

A few important details:
• Real ASLF normalization: the orchestrator calls the actual /schemas/normalize endpoint in the schema engine, authenticated with an API key.
• Real KGF ingestion: normalized events are ingested into KGF, which returns real ingest/result IDs.
• Real RLIE evaluation: RLIE consumes those ingests, updates its adaptive stats, and returns genuine evaluation output (not hard-coded responses).

The same flows used in the demo are backed by the same code paths we’ll use for production workloads.

Observability and Load Testing

To make this usable for engineers (and future SREs), we also tightened up observability:
• Structured JSON logs from the orchestrator, worker, and job queue, including component, event name, flowId, jobId, durationMs, backend, mode, and error details.
• A lightweight load-test harness: a small script that fires concurrent demo jobs through the orchestrator. Early runs (20 jobs, concurrency 5) exercised the full stack with good latency and no backlog.
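A structured log line in that shape might look like the sketch below. The helper is an illustration of the field set listed above, not our actual logger.

```python
import json
import sys
import time

# Hypothetical helper emitting the structured JSON log shape described above.
# Field names mirror the list (component, event, flowId, jobId, durationMs,
# backend, mode, error); the helper itself is illustrative, not our logger.

def log_event(component: str, event: str, **fields) -> str:
    """Emit one structured JSON log line to stderr and return it."""
    record = {"ts": time.time(), "component": component, "event": event, **fields}
    line = json.dumps(record)
    print(line, file=sys.stderr)
    return line

line = log_event(
    "worker", "job.completed",
    flowId="flow-1", jobId="job-9",
    durationMs=128, backend="kgf", mode="demo",
)
```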
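The harness itself can be sketched as a bounded-concurrency job firer. Here, submit_demo_job is a stub standing in for the real HTTP submission; only the numbers (20 jobs, concurrency 5) come from the runs described above.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Sketch of the load-test harness: fire N demo jobs with bounded concurrency.
# submit_demo_job is a stub standing in for the real HTTP call to the
# orchestrator; the job count and concurrency match the early runs.

def submit_demo_job(i: int) -> dict:
    start = time.perf_counter()
    # A real harness would POST the job and poll until it completes.
    time.sleep(0.01)  # simulated round trip
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"job": i, "ok": True, "durationMs": elapsed_ms}

def run_load_test(jobs: int = 20, concurrency: int = 5) -> list:
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(submit_demo_job, range(jobs)))

results = run_load_test()
```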

This gives us a quick way to validate changes and spot regressions as we keep evolving the platform.

Why This Matters

With Sprint 5 complete, we now have:
• A real pipeline from ASLF → KGF → RLIE
• A job-based orchestrator coordinating multi-step flows
• A simple demo UI anyone can use to see it in action
• Live stats that show the platform learning over time
• Tests and tooling to prove it all works under load

In other words: the core of the platform is not just “built” — it’s running, observable, and demo-ready.

Next up, we’ll be focusing on packaging, deployment, and integration hardening as we move from MVP prototype toward something investors and partners can poke at directly.
