
    Why “Knowing What Happened” Isn’t Enough — And How AI Can Change That

    • Writer: James 'Jim' Eselgroth
    • Jul 17
    • 4 min read
    Evolve learning loops, not just logs. Image by AI
    “We need to move from Lessons Observed to Lessons Learned.” — Gen. Randy A. George, U.S. Army Chief of Staff, AI+ Expo 2025

    In every corner of the mission space — from warfighting to cyber defense to federal emergency response — after-action reports (AARs) and debriefs are part of the standard playbook. We gather. We reflect. We record what happened. And yet, too often, we fall short of what matters most: applying those lessons to improve how we operate in the future.

    We’ve built institutional processes for observing failures. What we haven’t done is operationalize those observations at scale.

    That’s the real gap General George was calling out at the AI+ Expo: we don’t just need better documentation — we need a smarter, faster, more integrated loop from observation to transformation. And with the emergence of Generative AI and Agentic AI, we now have the tools to build that loop.

    But first, we have to get honest about what’s broken.

    The Problem: Observation Isn’t Learning

    Most AARs and debriefs are manual, linear, and retrospective — focused on what happened, not evaluating what prepared us for the operation or identifying which actions need to be addressed to improve future outcomes. We rarely validate the full spectrum of preparation: how teams were organized, how individuals were trained, how they were equipped, and whether all of it was appropriate to the operational context.

    The result? A recurring pattern of documentation without transformation. Lessons are observed. Sometimes even reported. But too rarely are they internalized, scaled, or institutionalized in a way that changes behavior, processes, or policies.

    We’re stuck in a loop of reflection instead of one of readiness.

    The Blockers: Why Learning Loops Break

    Checklists aren’t enough. Context matters. Image by AI

    It’s not just a lack of effort — it’s a mismatch in capability, ownership, and mindset. Let’s break it down:

    • Manual processes: Most observations must be typed, recorded, or transcribed. Valuable insights are trapped in PDFs, notes, or after-the-fact emails.

    • Fragmented ownership: Who is accountable for turning a lesson into a changed training plan? A modified policy? A new piece of kit? Often, no one.

    • Siloed learning: Lessons learned in one unit, department, or mission rarely make it to the next — let alone across the enterprise.

    • Trust deficit with AI: Many decision-makers still hesitate to act on machine-generated insights. They want to see the full context, the source, and the reasoning. Rightfully so.

    And perhaps most dangerously: we often evaluate preparation as a static checklist — did we conduct training? Did we issue equipment? Did we hold the brief? — instead of asking the deeper question: Was the preparation fit for this mission, this environment, this risk profile?
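The difference between a static checklist and a mission-fit review can be sketched in a few lines of Python. This is a toy illustration, not an implementation: the preparation items and mission requirements are hypothetical, and in practice the "required" set would be derived from the mission's risk profile and threat model rather than hard-coded.

```python
def preparation_gaps(delivered, required):
    """Compare what preparation was actually delivered against what
    this specific mission context required, and surface the gaps."""
    return sorted(required - delivered)

# A static checklist would pass: training done, kit issued, brief held.
delivered = {"basic training", "standard kit", "mission brief"}

# A mission-fit review asks a different question: was it the RIGHT prep?
required = {"basic training", "cold-weather kit", "mission brief",
            "threat-model update"}

print(preparation_gaps(delivered, required))
# ['cold-weather kit', 'threat-model update']
```

The checklist question ("did we do it?") returns no findings here; the mission-fit question exposes two gaps the checklist never saw.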

    The Shift: Agentic AI and the Intelligent Learning Loop

    This is where the next wave of AI offers a game-changing opportunity. GenAI and Agentic AI can fundamentally reshape the feedback loop by capturing richer data, surfacing causal insights, and recommending — even triggering — concrete actions.

    Here’s how:

    • Real-time, multimodal ingestion | Text, voice, video, telemetry, chat logs — GenAI can process and extract meaning from it all.

    • Context-aware summarization | Not just what happened, but why it happened, what contributed to success or failure, and what might have prevented it.

    • Causal mapping and root cause analysis | AI agents can detect patterns across incidents and suggest systemic causes: training gaps, policy mismatches, organizational silos.

    • Agent-triggered workflows | Agentic systems can recommend — or even initiate — next steps: update a SOP, schedule retraining, trigger procurement review, or submit a change ticket in ServiceNow.
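The causal-mapping step above — detecting patterns across incidents and suggesting systemic causes — reduces, at its simplest, to aggregating cause tags across many AARs and flagging the ones that recur. A minimal sketch, assuming the cause tags have already been extracted by an upstream AI summarization step (the incident records and tag names here are hypothetical):

```python
from collections import Counter

# Hypothetical incident records; "cause_tags" would be produced by an
# AI summarization/extraction step earlier in the pipeline.
incidents = [
    {"id": "INC-1", "cause_tags": ["training gap", "tool outage"]},
    {"id": "INC-2", "cause_tags": ["training gap"]},
    {"id": "INC-3", "cause_tags": ["policy mismatch", "training gap"]},
]

def systemic_causes(incidents, threshold=2):
    """Flag causes that recur across incidents as candidate systemic
    issues worth escalating (training gaps, policy mismatches, silos)."""
    counts = Counter(tag for inc in incidents for tag in inc["cause_tags"])
    return [tag for tag, n in counts.items() if n >= threshold]

print(systemic_causes(incidents))  # ['training gap']
```

A single "training gap" in one report is an observation; the same tag across three incidents is a systemic signal an agent can route into a retraining workflow.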

    This goes far beyond dashboards and reports. This is decision support that acts, operating with defined authority levels:

    • Flag-only for human awareness

    • Recommend-and-review for semi-automated change

    • Auto-execute within trusted parameters

    Of course, the key to adoption is trust — which is why agentic systems must be auditable, explainable, and integrated with existing operational tools.
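The three authority levels, plus the audit requirement, can be expressed as a small dispatch policy. This is a hedged sketch with hypothetical names — real agentic systems would persist the audit trail and integrate with operational tooling, not use in-memory callbacks:

```python
from enum import Enum

class Authority(Enum):
    FLAG_ONLY = "flag"              # human awareness only
    RECOMMEND_AND_REVIEW = "review" # semi-automated change
    AUTO_EXECUTE = "execute"        # trusted, pre-approved parameters

audit_log = []  # every agent decision is recorded for auditability

def dispatch(action, authority, execute, notify):
    """Route a recommended action according to its authority tier,
    logging the decision either way so it remains explainable."""
    audit_log.append((action, authority.value))
    if authority is Authority.AUTO_EXECUTE:
        return execute(action)
    if authority is Authority.RECOMMEND_AND_REVIEW:
        return notify(f"Review requested: {action}")
    return notify(f"FYI: {action}")

# Usage: a recommended SOP change goes to a human for review.
result = dispatch("Update SOP-12 rollback section",
                  Authority.RECOMMEND_AND_REVIEW,
                  execute=lambda a: f"executed: {a}",
                  notify=lambda msg: msg)
print(result)  # Review requested: Update SOP-12 rollback section
```

The design point is that the tier is data, not code: expanding an agent's trusted parameters means reclassifying actions, not rewriting the loop.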

    The System: Embedding Lessons at Scale

    Even when AI surfaces useful insight, it’s meaningless unless that insight can travel. What we need isn’t just smart agents — we need a System of Memory.

    That means:

    • Federated knowledge graphs that link lessons to roles, systems, and mission types

    • Semantic tagging that makes insight retrievable across time and team boundaries

    • Integration with planning and execution cycles, not just postmortem documentation
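A System of Memory can be approximated, in miniature, as lesson records carrying semantic tags that make them retrievable across team and time boundaries. The records and tag schema below are illustrative assumptions; a production system would use a federated knowledge graph rather than a flat list:

```python
# Minimal "system of memory": lessons tagged by role and mission type
# so insight captured by one team is retrievable by the next.
lessons = [
    {"text": "Pre-brief omitted comms fallback plan",
     "tags": {"role": "operator", "mission": "cyber-defense"}},
    {"text": "Deployment checklist missing rollback step",
     "tags": {"role": "devsecops", "mission": "software-release"}},
]

def retrieve(lessons, **criteria):
    """Return lessons whose semantic tags match all given criteria,
    e.g. retrieve(lessons, mission="cyber-defense")."""
    return [l for l in lessons
            if all(l["tags"].get(k) == v for k, v in criteria.items())]

for lesson in retrieve(lessons, mission="cyber-defense"):
    print(lesson["text"])
```

The retrieval call is the point of the sketch: a planner preparing the next cyber-defense mission queries by mission type and surfaces a lesson captured by a different team, months earlier.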


    Capture. Recommend. Trigger. Learn. Repeat. Image by AI

    At Red Cedar, we think of this as part of the Intelligent Transformation (ITX) process — specifically Phase 4: Learn & Optimize. In that phase, our OXYGEN framework acts as the engine for not just capturing observations, but linking them to actionable change across the 5Ps: People, Policies, Processes, Partners, and Platforms.

    In practical terms: a failed software deployment shouldn’t just lead to a blameless postmortem — it should lead to updated deployment playbooks, revised readiness checklists, retraining of DevSecOps teams, and continuous monitoring of similar risk flags in future projects.

    The Path Forward: From Reflection to Readiness

    Agentic AI and GenAI give us the tools. But closing the gap between observed and operationalized requires more than technology. It demands new processes, new thresholds for automation, and most importantly, a new mindset.

    Here’s what that looks like:

    • Dynamic preparation reviews: Evaluate readiness not by checklist, but by mission-fit, contextualized by real-time risk and evolving threat models.

    • Embedded agents in the loop: AI should be part of the decision fabric, not a bolt-on analysis tool.

    • Defined rules of engagement for agentic systems: when to flag, when to act, when to escalate.

    • Culture shift from "capture the report" to "improve the system."

    A Final Thought

    The gap Gen. George described — the chasm between “lessons observed” and “lessons learned” — is real, but not insurmountable. It’s time we stopped admiring the problem and started fixing the system.

    Agentic AI offers us the leverage. Intelligent Transformation gives us the map.

    Now it’s on us to build the loop that learns — and acts.

     
