fitme·story
Flagship · v5.1

7 min read

Summary card · 60-second read

First Feature Under the New Architecture — AI Engine Adaptation

Version
v5.1
Date
2026-04-13
Tier
flagship

First feature where 'how we build' and 'what we build' used the same architectural principles. Adapter protocol, validation gate, analytics naming, and goal-aware weights ported PM framework → AI engine. 45% cache hit rate.

Honest disclosures
  • Cache-hit rate is from the L1/L2 lookup ledger introduced in v4.4 — pre-v6.0 instrumentation, denominator counts may understate misses.
  • Pattern transfer cleared research in 15 minutes because every architectural layer had a proven analog in the framework — direct reuse, not novel design.
How to read this case study — T1/T2/T3 · ledger · kill criterion

T1 · Instrumented
Numbers come from a machine-generated ledger or commit. Reproducible. Highest reader trust.
T2 · Declared
Numbers stated by a structured declaration (PRD, plan, frontmatter) but not directly measured.
T3 · Narrative
Estimates and observations from session memory. Useful for context; not citable as evidence.
Ledger
Where to verify the claim — a file path, GitHub issue, or backlog entry. Anything labelled ledger: is the audit trail.
Kill criterion
The pre-registered threshold under which this work would have been killed mid-flight. Not fired = work shipped without hitting the threshold.
Deferred
Items intentionally not closed in this version. Each cites the ledger that tracks remaining work.
Phase timeline: Research 15m · PRD 15m · Tasks 10m · Implementation 35m · Testing 10m · Review+Merge 5m
13 tasks, 17 files, 986 insertions — delivered in 90 minutes end-to-end. Implementation was the longest phase at 39% of total time. · Total: 1h 30m

First Feature Under the New Architecture — AI Engine Adaptation

What happens when the patterns you built for your development process turn out to be the same patterns your product needs?

45%
cache hit rate — first framework-to-product pattern transfer

Context

The AI engine in FitMe uses a three-tier architecture: local inference, cloud API, and on-device foundation models. But the adapter layer feeding data into the engine had grown organically -- no formal protocol, no validation gate, no goal-aware intelligence. This enhancement restructured the AI engine using patterns borrowed directly from the PM framework itself: typed adapters, confidence-gated validation, and a learning feedback loop. It was the first feature where "how we build" and "what we build" used the same architectural principles.


The Pattern Transfer

The PM framework had already solved several problems that the AI engine now needed:

PM Framework Pattern | AI Engine Application | Cache Hit?
Integration adapter protocol | AI input adapter protocol with typed contracts | Yes (L2)
Validation gate (green/orange/red) | Recommendation confidence scoring with thresholds | Yes (L2)
Analytics event naming convention | AI event taxonomy with screen prefixes | Yes (L1)
Goal-aware component weights | Nutrition goal mode with metric prioritization | Yes (L1)
Insight card UI structure | Confidence badge and feedback buttons | Yes (L1)

The research phase completed in 15 minutes -- not because the problem was simple, but because every architectural layer had a proven analog in the framework. The 45% cache hit rate reflects this direct pattern reuse.


Single-Session Execution

Phase | Duration | Notes
Research | 15 min | Expanded thin research into 5-layer architecture proposal with code sketches
PRD | 15 min | 10 requirements, metrics, kill criteria. Goal-aware intelligence added mid-session by user.
Tasks | 10 min | 13 tasks with dependency graph, parallel opportunities identified
Implementation | 35 min | All 13 tasks, 17 files, 986 insertions. 3 build errors caught and fixed.
Testing | 10 min | 197 tests pass, 1 test needed parameter fix
Review + Merge | 5 min | Shipped as PR #79
Total | ~90 min | Research through merge

Mid-session scope expansion: The user proposed goal-aware metric prioritization during the PRD review. This was added as two new requirements and a new data model within the same session -- no rework needed, no phase restart. The framework absorbed scope changes gracefully because the adapter pattern was flexible enough to accommodate new input dimensions.


What Was Built

5 typed input adapters replacing unstructured data passing: Profile, HealthKit, Training, Nutrition, and a snapshot builder that combines them into a validated input package.
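The typed-adapter contract can be sketched as a small Swift protocol plus a snapshot builder. All names here (AIInputAdapter, ProfileAdapter, EngineSnapshot, and their fields) are illustrative assumptions, not the shipped API:

```swift
// Hypothetical sketch of the typed adapter contract described above.
// Type and property names are illustrative, not the shipped API.
protocol AIInputAdapter {
    associatedtype Output
    func makeInput() throws -> Output
}

struct ProfileInput { let age: Int; let weightKg: Double }
struct TrainingInput { let weeklySessions: Int }

struct ProfileAdapter: AIInputAdapter {
    func makeInput() throws -> ProfileInput {
        ProfileInput(age: 34, weightKg: 78.5) // placeholder data
    }
}

struct TrainingAdapter: AIInputAdapter {
    func makeInput() throws -> TrainingInput {
        TrainingInput(weeklySessions: 4) // placeholder data
    }
}

// The snapshot builder combines adapter outputs into one validated package
// that the engine consumes, replacing unstructured data passing.
struct EngineSnapshot {
    let profile: ProfileInput
    let training: TrainingInput
}

func buildSnapshot() throws -> EngineSnapshot {
    EngineSnapshot(profile: try ProfileAdapter().makeInput(),
                   training: try TrainingAdapter().makeInput())
}
```

Because each adapter declares its output type, a malformed input fails at the adapter boundary instead of deep inside the engine.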

Confidence-gated recommendations with three tiers: high confidence (direct application), medium (presented with caveats), low (flagged for user review). The same green/orange/red gate pattern used in the PM framework's validation system.
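The three-tier gate reduces to a single thresholding function. The threshold values below are illustrative assumptions, not the shipped numbers:

```swift
// Sketch of the green/orange/red confidence gate; thresholds are
// illustrative assumptions, not the shipped values.
enum ConfidenceTier { case high, medium, low }

func tier(forConfidence score: Double) -> ConfidenceTier {
    switch score {
    case 0.8...:     return .high   // direct application
    case 0.5..<0.8:  return .medium // presented with caveats
    default:         return .low    // flagged for user review
    }
}
```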

Goal-aware intelligence that shifts metric weights based on the user's current goal. A fat loss goal prioritizes caloric deficit metrics; a muscle gain goal prioritizes training volume and protein intake.
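The weight shift can be sketched as a goal-to-weights lookup. Metric names and weight values are illustrative assumptions:

```swift
// Sketch of goal-aware metric weighting; metric names and the weight
// values are assumptions for illustration, not the shipped model.
enum NutritionGoal { case fatLoss, muscleGain }

func metricWeights(for goal: NutritionGoal) -> [String: Double] {
    switch goal {
    case .fatLoss:
        // Caloric deficit dominates when the goal is fat loss.
        return ["caloricDeficit": 0.6, "trainingVolume": 0.2, "proteinIntake": 0.2]
    case .muscleGain:
        // Training volume and protein intake dominate for muscle gain.
        return ["caloricDeficit": 0.2, "trainingVolume": 0.4, "proteinIntake": 0.4]
    }
}
```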

Recommendation memory with encrypted persistence, enabling the feedback loop to learn from user responses across sessions.
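A minimal sketch of that feedback memory, with the encryption layer omitted (the shipped version persists encrypted; all names here are hypothetical):

```swift
// Sketch of the recommendation feedback memory. The shipped version
// encrypts persisted entries; encryption is omitted here, and all
// type names are hypothetical.
struct RecommendationFeedback: Codable {
    let recommendationID: String
    let thumbsUp: Bool
}

struct RecommendationMemory {
    private(set) var entries: [RecommendationFeedback] = []

    mutating func record(_ feedback: RecommendationFeedback) {
        entries.append(feedback)
    }

    // A simple aggregate the feedback loop could feed back into
    // future confidence scoring.
    func approvalRate() -> Double {
        guard !entries.isEmpty else { return 0 }
        return Double(entries.filter(\.thumbsUp).count) / Double(entries.count)
    }
}
```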

Confidence badge UI showing users how certain the AI is about each recommendation, with feedback buttons for thumbs up/down.


Build Errors and Recovery

Three build errors occurred during implementation:

  1. Type naming mismatch. Used NutritionPlan when the actual type was NutritionGoalPlan. Fixed in under 2 minutes.
  2. Access control. The orchestrator was public but the recommendation type was internal. Common Swift module boundary issue. Fixed in under 2 minutes.
  3. Duplicate analytics parameter. A source parameter already existed. Fixed in under 1 minute.

All three were caught on the first build attempt and fixed immediately. Zero test regressions across 197 existing tests.
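Error 2 is worth a sketch, since it is a recurring Swift module-boundary trap: a public declaration cannot expose an internal type in its signature. The types below are hypothetical stand-ins for the orchestrator and recommendation type:

```swift
// The failing shape (as comments): a public API exposing an internal type.
//
//   internal struct Recommendation { ... }
//   public final class Orchestrator {
//       public func next() -> Recommendation { ... }
//       // error: method cannot be declared public because its result
//       // uses an internal type
//   }
//
// The fix: make the returned type at least as visible as the API using it.
public struct Recommendation {
    public let text: String
    public init(text: String) { self.text = text }
}

public final class Orchestrator {
    public init() {}
    public func next() -> Recommendation {
        Recommendation(text: "increase protein") // placeholder output
    }
}
```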


Normalized Velocity

Metric | Value
Tasks | 13
Work type | Enhancement (0.8)
Complexity factors | UI (+0.3) + New Model (+0.2) + Cross-Feature (+0.2) = +0.7
Complexity Units | 17.7
Wall time | 90 min
min/CU | 5.1
vs Baseline | +66% faster
Cache hit rate | ~45%
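The table's numbers compose as follows; the formula shape (tasks × work-type factor × (1 + complexity bonus)) is inferred from the rows, not quoted from the framework's spec:

```swift
// Reproduces the normalized-velocity arithmetic above. The formula
// shape is inferred from the table rows, not quoted from the framework.
let tasks = 13.0
let workTypeFactor = 0.8              // enhancement
let complexityBonus = 0.3 + 0.2 + 0.2 // UI + new model + cross-feature

let complexityUnits = tasks * workTypeFactor * (1 + complexityBonus)
// 13 × 0.8 × 1.7 = 17.68, reported as 17.7

let minutesPerCU = 90.0 / complexityUnits
// 90 / 17.68 ≈ 5.09, reported as 5.1
```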

What the Cache Provided

Cache Entry | Level | Time Saved (est.)
Adapter protocol pattern | L2 | ~10 min
Validation gate model | L2 | ~8 min
Analytics naming convention | L1 | ~5 min
Goal-aware weights pattern | L1 | ~5 min
Insight card UI structure | L1 | ~3 min
Design system badge pattern | L1 | ~2 min

The Blurring Line Between Process and Product

This is the first case where framework patterns were applied to the product's own AI system, not just to the development process. The adapter protocol that structures how PM skills communicate is the same protocol that now structures how the AI engine receives data. The validation gate that decides whether a PM workflow phase can proceed is the same gate pattern that decides whether an AI recommendation is confident enough to show.

The implication: investing in framework architecture has compounding returns beyond development velocity. The patterns become a reusable vocabulary that accelerates product architecture decisions. A team that has built and refined a validation gate pattern for their development workflow does not need to design a confidence gate for their AI engine from scratch -- they already have one.


What Was Missed

No new unit tests were written. The feature relied on the existing 197-test suite for regression verification. The goal profile, validated recommendation, and recommendation memory types all deserve dedicated tests. This was deferred as technical debt.

Adapters were coded from memory, not from reading existing types. The NutritionPlan naming error came from assuming a type name that did not match the actual codebase. The adapter extraction should have started by reading the existing type definitions.


Key Takeaways

  • PM framework pattern reuse is the dominant accelerator for architectural work. The AI engine was designed in 15 minutes because every layer had a proven analog. The framework is not just a development workflow -- it is a pattern library that compounds across features.
  • 45% cache hit rate on an architectural enhancement shows the cache works beyond screen refactors. Cross-domain pattern reuse (PM framework to AI engine) validates the L2 shared cache design.
  • Scope expansion during PRD review was absorbed without rework. The adapter pattern was flexible enough to accommodate goal-aware intelligence as a mid-session addition. Good architecture absorbs requirement changes.
  • The line between "how we build" and "what we build" blurred. The same architectural principles that make the PM framework self-improving now make the AI engine self-improving. This convergence was not planned -- it emerged from pattern reuse.