LLM-generated analytic plans


LLMs generate accurate multi-step analytic plans for each question. 

Analytic plans begin with a question.

Each question establishes the context for a plan: which databases, tables, and columns are relevant, and the steps needed to answer it. Analytic plans are organized sequentially, beginning with a join graph of tables and columns, followed by further steps including statistical measures, logical features, groupings, and aggregations.
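A plan of this shape can be sketched as a small data structure. This is an illustrative sketch only; the class and field names are hypothetical and do not reflect LEDGE's actual plan schema.

```python
from dataclasses import dataclass, field

@dataclass
class JoinEdge:
    left: str   # e.g. "orders.customer_id"
    right: str  # e.g. "customers.id"

@dataclass
class PlanStep:
    kind: str        # e.g. "measure", "feature", "group", "aggregate"
    expression: str  # e.g. "avg(orders.total)"

@dataclass
class AnalyticPlan:
    question: str
    join_graph: list[JoinEdge] = field(default_factory=list)
    steps: list[PlanStep] = field(default_factory=list)

# The plan begins with a join graph, then ordered downstream steps.
plan = AnalyticPlan(
    question="What is the average order total per region?",
    join_graph=[JoinEdge("orders.customer_id", "customers.id")],
    steps=[
        PlanStep("group", "customers.region"),
        PlanStep("aggregate", "avg(orders.total)"),
    ],
)
```

Keeping the plan as plain structured data is what makes it easy to save, review, and modify later.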

LEDGE analytics pipeline

Binding the LLM for reliable plans

The LEDGE server binds the LLM to a step-wise prompt-and-response pattern, focusing first on identifying the data needed, then on the sequential steps: data cleansing and normalization, statistical measures, logical features, grouping, and aggregations. The LLM is given no freedom to deviate from this ordered sequence.
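The binding pattern above can be sketched as a loop over a fixed step sequence, where the orchestrator, not the model, decides what comes next. This is a minimal sketch under stated assumptions: `ask_llm` is a stand-in for a real model call, and the step names mirror the sequence described above but are otherwise illustrative.

```python
# Fixed, ordered step sequence; the model never chooses the next step.
STEP_SEQUENCE = [
    "identify_data",          # databases, tables, columns
    "cleanse_and_normalize",
    "statistical_measures",
    "logical_features",
    "grouping",
    "aggregations",
]

def ask_llm(step: str, context: dict) -> str:
    # Placeholder: a real implementation would call the model with a
    # prompt template scoped to exactly this one step.
    return f"<{step} response>"

def generate_plan(question: str) -> dict:
    context = {"question": question}
    # Each step is prompted exactly once, in order; prior responses
    # accumulate in the context passed to later steps.
    for step in STEP_SEQUENCE:
        context[step] = ask_llm(step, context)
    return context

plan = generate_plan("Average order total per region?")
```

Constraining the model to one step at a time is what makes each response small enough to validate mechanically.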

Human and automated plan validation

As the LLM generates steps in the plan, each step is validated automatically and errors are corrected. Each completed plan is saved for human review, validation, and modification. This also ensures that LLM use is transparent, easily explained, and auditable.
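One form that automated per-step validation can take is checking every column reference in a generated step against a known schema, and routing failures back for correction. This is a hedged sketch; the schema, function names, and correction hook are all hypothetical.

```python
# Hypothetical schema the validator checks against.
SCHEMA = {
    "orders": {"id", "customer_id", "total"},
    "customers": {"id", "region"},
}

def validate_step(expression: str) -> list[str]:
    """Return any table.column references not found in the schema."""
    errors = []
    for token in expression.replace("(", " ").replace(")", " ").split():
        if "." in token:
            table, _, column = token.partition(".")
            if column not in SCHEMA.get(table, set()):
                errors.append(token)
    return errors

def correct(expression: str, errors: list[str]) -> str:
    # Placeholder: a real system would re-prompt the LLM with the
    # validation errors; shown here only to illustrate control flow.
    return expression

step = "avg(orders.total) by customers.region"
errors = validate_step(step)
if errors:
    step = correct(step, errors)
```

Validating against the schema at generation time catches hallucinated tables and columns before a plan ever reaches a human reviewer.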

Explore more capabilities


Orchestration: automated database context

LEDGE automatically delivers complete database context, enabling LLMs to comprehend multiple databases simultaneously at scale. Like a skilled engineer, once an LLM understands the databases it can contribute to solution design.


Orchestration: analytic plan

LEDGE binds LLMs to deliver accurate analytic plans for user queries. Plans are saved, easily validated and modified, and run to deliver analytics data within minutes of the user's query.


Governance

PII safeguards, authorization controls, data residency rules, firewall restrictions, and token-governance policies are built in by design. No sensitive data leaves governed systems.


Plan management

LLM-generated plans are saved, easily reviewed, validated, modified, and executed, making LLM use transparent, explainable, and repeatable.


Database cloning and containers

On-demand database clones in containers give agent developers production database copies (with optional masking) for agentic AI dev/test.


Database subsetting and synthetic data 

Database subsetting with synthetic data provides added context for working with complex multi-database environments.
