
Generative Output

GESA Generates — It Does Not Retrieve

The word "Generative" in GESA is precise. The system does not return cached answers. It produces novel candidate strategies by synthesising three inputs.


The Distinction

| Approach | What It Returns |
| --- | --- |
| Database lookup | Exact match from stored records |
| Case-Based Reasoning | Most similar past case, adapted |
| GESA | Novel hypothesis synthesised from multiple episodes + current context + temperature |

A GESA recommendation may have no exact precedent in the episode store. It combines patterns from multiple episodes, filtered through current conditions, weighted by the annealing temperature. The output is always a new thing — even when it's informed by old things.


The Generate Function

Generate(episodes, context, temperature) → CandidateStrategy[]

Inputs

Episodes — Retrieved by the similarity function. Not just the single most similar episode, but a set of relevant episodes that together inform the generation. Includes both successful and failed episodes.

Context — The current system state: DRIFT score, Fetch score, Chirp/Perch/Wake values, active domain, active dimension.

Temperature — The current annealing temperature. High temperature unlocks bold, novel, untested candidates. Low temperature filters to conservative, validated candidates only.
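Taken together, the three inputs can be sketched as TypeScript types. The field names below follow the prose above (DRIFT, Fetch, Chirp/Perch/Wake) but are illustrative assumptions, not a published GESA schema:

```typescript
// Illustrative shapes for the Generate() inputs. Field names follow the
// surrounding prose but are assumptions, not a published schema.
interface Episode {
  id: string;
  domain: string;
  dimension: string;
  driftScore: number;        // DRIFT score at the time of the episode
  temperature: number;       // annealing temperature when it was recorded
  outcome: "success" | "failure";
  strategy: string;          // what was done
}

interface Context {
  driftScore: number;
  fetchScore: number;
  chirp: number;
  perch: number;
  wake: number;
  activeDomain: string;
  activeDimension: string;
}

// Temperature is a scalar: high unlocks novel candidates,
// low filters to validated ones.
type Temperature = number;
```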

Output

A ranked list of candidate strategies:

```typescript
interface CandidateStrategy {
  strategy:          string   // What to do
  confidence:        number   // 0–100
  episodicSupport:   number   // How many similar episodes back this
  explorationBias:   boolean  // Generated at high temp (novel) or low temp (proven)?
  reasoning:         string   // Human-readable explanation
  episodeRefs:       string[] // IDs of supporting episodes
}
```
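
A minimal sketch of the Generate step under a simple assumed scoring rule: candidates are seeded from strategies seen in successful episodes, confidence grows with episodic support, and the exploration flag records candidates that clear the bar only because the temperature is high. The names and the 0.5 support threshold are illustrative assumptions, not GESA's actual implementation:

```typescript
interface Episode {
  id: string;
  strategy: string;
  outcome: "success" | "failure";
}

interface CandidateStrategy {
  strategy: string;
  confidence: number;        // 0–100
  episodicSupport: number;
  explorationBias: boolean;  // true when included only because temperature is high
  reasoning: string;
  episodeRefs: string[];
}

// Illustrative generator: group successful episodes by strategy, score by
// support ratio, and flag low-support candidates as exploratory. The 0.5
// threshold is an assumption, not a documented GESA constant.
function generate(episodes: Episode[], temperature: number): CandidateStrategy[] {
  const byStrategy = new Map<string, Episode[]>();
  for (const e of episodes) {
    if (e.outcome !== "success") continue; // failures constrain, they don't seed
    const list = byStrategy.get(e.strategy) ?? [];
    list.push(e);
    byStrategy.set(e.strategy, list);
  }
  const total = episodes.length;
  const candidates: CandidateStrategy[] = [];
  for (const [strategy, support] of byStrategy) {
    const supportRatio = support.length / total;
    const proven = supportRatio >= 0.5;
    if (!proven && temperature < 0.5) continue; // low temperature: proven only
    candidates.push({
      strategy,
      confidence: Math.round(supportRatio * 100),
      episodicSupport: support.length,
      explorationBias: !proven,
      reasoning: `${support.length}/${total} similar episodes support this strategy`,
      episodeRefs: support.map((e) => e.id),
    });
  }
  return candidates.sort((a, b) => b.confidence - a.confidence);
}
```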

Why Failures Are Included

GESA retrieves both successful and failed episodes. Failed episodes inform the generator what not to produce — especially at lower temperatures where avoiding known failures is as important as repeating successes.

The OutcomePolarity component of the similarity function weights failed episodes:

```
Similarity(e₁, e₂) =
    w₁ × DomainMatch(e₁, e₂)
  + w₂ × DriftProximity(e₁, e₂)
  + w₃ × DimensionMatch(e₁, e₂)
  + w₄ × TemperatureProximity(e₁, e₂)
  + w₅ × OutcomePolarity(e₁, e₂)
```
Default weights: [0.30, 0.25, 0.20, 0.15, 0.10]

A failed episode with a similar context fingerprint is a constraint on the generator: "don't produce strategies that look like this."
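
The weighted sum above can be sketched directly in TypeScript. The component functions here are simplified stand-ins (exact match for categorical fields, linear falloff for numeric proximity, assumed 0–100 DRIFT and 0–1 temperature ranges); GESA's actual component definitions are not specified in this section:

```typescript
interface EpisodeFingerprint {
  domain: string;
  dimension: string;
  driftScore: number;    // assumed range 0–100
  temperature: number;   // assumed range 0–1
  outcome: "success" | "failure";
}

// Default weights from the text: [w₁..w₅] = [0.30, 0.25, 0.20, 0.15, 0.10].
const WEIGHTS = [0.30, 0.25, 0.20, 0.15, 0.10];

// Simplified component functions, each returning a score in [0, 1].
const domainMatch = (a: EpisodeFingerprint, b: EpisodeFingerprint) =>
  a.domain === b.domain ? 1 : 0;
const driftProximity = (a: EpisodeFingerprint, b: EpisodeFingerprint) =>
  1 - Math.abs(a.driftScore - b.driftScore) / 100;
const dimensionMatch = (a: EpisodeFingerprint, b: EpisodeFingerprint) =>
  a.dimension === b.dimension ? 1 : 0;
const temperatureProximity = (a: EpisodeFingerprint, b: EpisodeFingerprint) =>
  1 - Math.abs(a.temperature - b.temperature);
const outcomePolarity = (a: EpisodeFingerprint, b: EpisodeFingerprint) =>
  a.outcome === b.outcome ? 1 : 0;

function similarity(a: EpisodeFingerprint, b: EpisodeFingerprint): number {
  const components = [
    domainMatch(a, b),
    driftProximity(a, b),
    dimensionMatch(a, b),
    temperatureProximity(a, b),
    outcomePolarity(a, b),
  ];
  return components.reduce((sum, c, i) => sum + WEIGHTS[i] * c, 0);
}
```

With these stand-ins, an identical fingerprint scores 1.0, and flipping only the outcome drops the score by exactly w₅ = 0.10.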


The Generator Architecture

GESA's generator interface is intentionally domain-flexible. Three implementation options:

| Architecture | Strengths | Weaknesses |
| --- | --- | --- |
| Rule-based | Fully observable, deterministic | Narrow; can't handle novel patterns |
| LLM-prompted | Rich generation, handles novelty | Non-deterministic; harder to trace |
| Hybrid | Rule-filtered LLM generation | Combines observability with richness |

The hybrid approach — where rules filter and constrain LLM generation — is likely optimal for most production deployments. The rules encode domain knowledge and episode constraints; the LLM provides the synthesis and novelty.
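
A hybrid pipeline can be sketched as deterministic rules wrapped around an opaque generator. The `llmGenerate` callback below is a placeholder for any LLM-backed candidate producer, and the rule names are illustrative; the point is that rule filters run on the generator's output:

```typescript
// Hybrid generation sketch: deterministic rules constrain an opaque
// generator. `llmGenerate` stands in for any LLM call.
interface Candidate {
  strategy: string;
  episodeRefs: string[];
}

type Rule = (c: Candidate) => boolean;

function hybridGenerate(
  llmGenerate: () => Candidate[],
  domainRules: Rule[],    // domain-knowledge constraints
  failureRules: Rule[],   // "don't look like a failed episode" constraints
): Candidate[] {
  return llmGenerate()
    .filter((c) => domainRules.every((rule) => rule(c)))
    .filter((c) => failureRules.every((rule) => rule(c)));
}
```

Because the rules are plain predicates, a rejected candidate can be logged together with the rule that rejected it, which keeps the pipeline observable even though the generator itself is not.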

In StratIQX, the Synthesis Engine step operates as a production generator: it consolidates outputs from 8 specialised agents, identifies cross-cutting themes, and eliminates redundancy. This is the GENERATE step running at report-level on every execution.


Observable Properties

Consistent with Semantic Intent principles, every GESA output is explainable. No black boxes.

```typescript
interface GESARecommendation {
  strategy:               string    // What to do
  confidence:             number    // 0–100
  episodicSupport:        number    // How many similar episodes back this
  temperature:            number    // Current annealing temperature
  explorationBias:        boolean   // Novel (high temp) or proven (low temp)?
  reasoning:              string    // Human-readable explanation
  episodeRefs:            string[]  // IDs of supporting episodes
  alternativesConsidered: number    // How many candidates were evaluated
  gapVelocity:            number    // Current trajectory of DRIFT
}
```

Every field traces back to a specific episode, a specific measurement, or a specific point in the annealing schedule. The recommendation is auditable from generation back to origin.
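
That auditability can be enforced mechanically. A minimal sketch, assuming an in-memory episode store keyed by ID; the function and check names are illustrative, not GESA's actual audit routine:

```typescript
interface Recommendation {
  episodicSupport: number;
  episodeRefs: string[];
  confidence: number;
}

// Audit sketch: every claimed supporting episode must exist in the store,
// and the support count must match the references.
function auditRecommendation(
  rec: Recommendation,
  episodeStore: Set<string>,
): string[] {
  const problems: string[] = [];
  if (rec.episodeRefs.length !== rec.episodicSupport) {
    problems.push("episodicSupport does not match episodeRefs length");
  }
  for (const ref of rec.episodeRefs) {
    if (!episodeStore.has(ref)) problems.push(`unknown episode: ${ref}`);
  }
  if (rec.confidence < 0 || rec.confidence > 100) {
    problems.push("confidence outside 0–100");
  }
  return problems; // empty array = fully traceable
}
```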


A Worked Example

Domain: Workplace (HEAT integration)
Situation: Team A, 3-day pain streak, 90% cognitive load, 15 active Jira tasks

GESA retrieves 12 episodes with similar context (high cognitive load, similar team size, similar sprint phase).

GESA generates 4 candidate strategies:

```typescript
[
  {
    strategy: "Reduce WIP limits from 8 to 4 for this sprint",
    confidence: 84,
    episodicSupport: 9,
    explorationBias: false,    // Low-temp: proven
    reasoning: "9/12 similar episodes resolved within 4 days with WIP reduction",
    episodeRefs: ["ep_0234", "ep_0189", "ep_0156", ...],
    alternativesConsidered: 4,
    gapVelocity: +3.2          // Gap is widening — act sooner
  },
  {
    strategy: "Cancel all non-critical meetings this week",
    confidence: 71,
    episodicSupport: 6,
    explorationBias: false,
    reasoning: "6/12 similar episodes showed meeting reduction effective",
    ...
  },
  {
    strategy: "Cross-train adjacent team member on critical path",
    confidence: 58,
    episodicSupport: 3,
    explorationBias: true,     // High-temp: novel/less proven
    reasoning: "3 episodes; temperature 0.74 allows inclusion of less-proven candidates",
    ...
  },
  {
    strategy: "Redistribute 3 tasks to Team B temporarily",
    confidence: 44,
    episodicSupport: 2,
    explorationBias: true,
    reasoning: "Only 2 episodes; included due to high current temperature",
    ...
  }
]
```

GESA selects the highest-confidence strategy within temperature constraints. At T = 0.74, all four candidates pass the temperature filter.

At T = 0.20, only the first two would be included — the system has enough history to filter to proven strategies.
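
The temperature filter in this example can be sketched as a simple gate: exploratory candidates pass only while the temperature stays above a cutoff. The 0.5 cutoff is an assumption chosen to reproduce the behaviour described above, not a documented GESA constant:

```typescript
interface Candidate {
  strategy: string;
  confidence: number;
  explorationBias: boolean; // true = novel / less proven
}

// Illustrative temperature gate: at or below the cutoff, only proven
// candidates survive.
function temperatureFilter(
  candidates: Candidate[],
  temperature: number,
  cutoff = 0.5,
): Candidate[] {
  return candidates.filter((c) => !c.explorationBias || temperature > cutoff);
}
```

Applied to the four candidates above, the gate passes all four at T = 0.74 and only the two proven ones at T = 0.20.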


→ Next: The GESA Loop