Glossary
Plain-language definitions of the hardware-inspired software terminology used throughout the site.
Hardware analogs
- ANE mixed-precision
The Apple Neural Engine (ANE) runs different layers of a model at different precisions — cheap layers at 8-bit, critical ones at 16-bit. The framework adopted the same pattern for its model_tiering: cheap skills and simple tasks run on smaller models, reserving bigger models for work that actually needs them.
- big.LITTLE
ARM's big.LITTLE architecture pairs high-performance "big" cores with energy-efficient "LITTLE" cores. The OS routes tasks to whichever core fits. The framework applies the same idea: a task_complexity_gate routes mechanical work to a cheap "LITTLE" model and complex work to a capable "big" one.
- LoRA hot-swap
LoRA (Low-Rank Adaptation) is a technique for attaching tiny task-specific weight adapters to a large model without retraining the full thing. The framework borrowed this idea for "skill-on-demand" — only the skills relevant to the current phase are loaded into the agent's context, not all eleven at once.
- Mahalanobis distance
Mahalanobis distance measures how far one point is from a cluster of other points, taking into account how correlated the different dimensions are. The framework's V7.0 HADF (Hardware-Aware Dispatch) uses it to match a running environment's performance fingerprint to the closest of the 17 chip profiles in its library.
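A minimal sketch of the distance itself, on a toy two-dimensional "fingerprint" (the dimensions and values here are invented for illustration, not the framework's real profile schema):

```python
import numpy as np

def mahalanobis(x, points):
    """Distance from x to the cluster described by `points`,
    scaled by the cluster's covariance (so correlated dimensions
    don't double-count)."""
    mu = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Toy profile library: two correlated dimensions
# (e.g. memory bandwidth GB/s, core count)
profiles = np.array([[100.0, 8], [120.0, 10], [110.0, 8], [130.0, 12]])
d = mahalanobis(np.array([115.0, 9.0]), profiles)
```

A point at the cluster mean has distance 0; matching then means picking the profile cluster with the smallest distance.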
- Palettization (weight compression)
Palettization quantizes neural network weights down to ~3.7 bits per value (from 32 or 16), dramatically reducing model size with minimal accuracy impact. The framework applied the same compress-then-decompress pattern to its cache entries, reclaiming roughly 24K tokens of context window per session.
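The core move is "many distinct values → small shared palette + per-value index." A simplified sketch (a 16-entry quantile palette, i.e. 4 bits per weight rather than the ~3.7 the real technique achieves; not Apple's actual algorithm):

```python
import numpy as np

def palettize(weights, n_colors=16):
    """Map each weight to the nearest of n_colors palette values.
    Storage drops from 32 bits/weight to log2(n_colors) bits plus the palette."""
    # Palette: evenly spaced quantiles of the weight distribution
    palette = np.quantile(weights, np.linspace(0, 1, n_colors))
    indices = np.abs(weights[:, None] - palette[None, :]).argmin(axis=1)
    return palette, indices.astype(np.uint8)

def depalettize(palette, indices):
    return palette[indices]

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
palette, idx = palettize(w)
w_hat = depalettize(palette, idx)          # lossy reconstruction
mean_err = float(np.abs(w - w_hat).mean())  # small for bell-shaped weights
```

The compress-then-decompress shape is the point of the analogy: store the compact form, reconstruct on access.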
- SoC (System on Chip)
A System on Chip integrates the CPU, GPU, memory controllers, and specialized accelerators onto a single piece of silicon. Modern mobile processors (Apple M-series, Snapdragon, etc.) are SoCs. The fitme-story framework borrowed seven organizing principles from SoC design — among them skill-on-demand loading, cache tiers, systolic execution, and dispatch intelligence — and applied them to a software PM workflow.
- Speculative preload
Modern CPUs predict which branch of an if/else they'll take and pre-fetch the likely instructions before the actual decision lands. The framework does the analog for skills: while Phase 4 runs, it speculatively preloads the probable Phase 5 skills into context, so the transition is zero-wait.
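A toy sketch of the skill analog, with an invented phase-to-skill map (the real loader and skill names are not shown here):

```python
PHASE_ORDER = ["Research", "PRD", "Tasks", "UX", "Implement",
               "Test", "Review", "Merge", "Docs"]
# Hypothetical mapping of phase -> skills it usually needs
PHASE_SKILLS = {"Implement": ["dev"], "Test": ["qa"], "Review": ["dev", "qa"]}

loaded = {}

def load_skill(name):
    # Stand-in for reading a skill file into the agent's context
    return f"<skill:{name}>"

def run_phase(phase):
    # While `phase` runs, speculatively preload what the *next*
    # phase will probably need, so the transition is zero-wait.
    i = PHASE_ORDER.index(phase)
    if i + 1 < len(PHASE_ORDER):
        for skill in PHASE_SKILLS.get(PHASE_ORDER[i + 1], []):
            loaded.setdefault(skill, load_skill(skill))

run_phase("Implement")   # preloads the Test-phase skills
```

Like a branch predictor, a wrong guess only wastes the preload; a right guess removes the stall.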
- Systolic chain
In a TPU (Tensor Processing Unit), a systolic array is a grid of processing cells that pass data directly to their neighbors, avoiding expensive memory round-trips. The framework applies the same pattern inside a single task: Grep → Read → Edit chains forward results without re-reading files, cutting dispatch cycles dramatically.
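A minimal sketch of the chained shape, with toy stand-ins for the three tools (the real tools operate on disk and context, not a dict):

```python
def grep(pattern, files):
    """Stage 1: find (file, line_no) matches; feeds Read directly."""
    return [(name, i)
            for name, text in files.items()
            for i, line in enumerate(text.splitlines())
            if pattern in line]

def read(files, hits):
    # Stage 2: reuse the in-memory text from the previous stage —
    # no second pass over the files.
    return [(name, i, files[name].splitlines()[i]) for name, i in hits]

def edit(files, located, old, new):
    # Stage 3: apply the change where stages 1-2 already pointed.
    for name, i, line in located:
        lines = files[name].splitlines()
        lines[i] = line.replace(old, new)
        files[name] = "\n".join(lines)
    return files

files = {"a.py": "x = legacy_call()\ny = 2"}
files = edit(files, read(files, grep("legacy_call", files)),
             "legacy_call", "new_call")
```

Each stage hands its result straight to the next, which is the systolic property: no round-trip back to "memory" (re-reading files) between steps.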
- TPU (Tensor Processing Unit)
Tensor Processing Units are Google-designed accelerators optimized for the matrix multiplications at the core of neural networks. The framework borrowed two TPU ideas: systolic dataflow (data passes cell-to-cell) and weight-stationary batch dispatch (reuse loaded weights across many inputs).
- Unified Memory Architecture (UMA)
On Apple Silicon, the CPU and GPU share a single pool of unified memory, so there is no expensive copying of data between them. The framework mirrors this with "result forwarding": the output of one skill is handed to the next without a write-then-re-read cycle.
Framework components
- /pm-workflow
The /pm-workflow skill is the orchestrator at the heart of the framework. It enforces a nine-phase lifecycle — Research → PRD → Tasks → UX → Implement → Test → Review → Merge → Docs — with approval gates between each phase. Other skills (dev, qa, design, etc.) are invoked from inside pm-workflow as needed.
- 72h Integrity Cycle
Shipped at framework v7.1 (2026-04-21). Every 72 hours a GitHub Actions workflow runs `scripts/integrity-check.py` against every state.json and every case study, then writes a snapshot. It was the first framework capability whose trigger is elapsed wall-clock time rather than a code event.
- Advisory gate
Some checks are useful but too noisy or judgment-dependent to fail hard. Those run as advisory gates: they emit a warning into the integrity report but do not exit non-zero. In v7.7, TIER_TAG_LIKELY_INCORRECT shipped as permanently advisory because its kill criterion (false-positive rate) fired during baseline.
- Backfill
When a new gate is introduced, pre-existing data may not satisfy it. A backfill is a scripted bulk fix that brings prior documents into compliance. v7.7 backfilled 32 case-study frontmatters and `timing.phases` on three paused features. The framework also supports a backfill exemption tag (`case_study_type: "pre_pm_workflow_backfill"`) for legacy documents that bypass forward-only checks.
- Batch dispatch
Rather than dispatch each task to its own subagent (high orchestration overhead), batch dispatch groups coupled tasks — e.g. four component files that change together — into a single subagent call that commits per task. Modeled on TPU weight-stationary: load the context once, run many small ops on it.
- Cache tiers (L1 / L2 / L3)
Like CPU caches, the framework keeps frequently accessed context close and rarely accessed context further away. L1 is per-skill and fastest to hit. L2 (`_shared/`) holds cross-skill patterns. L3 (`_project/`) holds project-wide lore. Hit rates are instrumented in cache-hits.json.
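The lookup order can be sketched as a simple tiered store (the promotion-to-L1 policy and hit accounting here are illustrative, not the framework's real schema):

```python
from collections import OrderedDict

class TieredCache:
    """Check L1 (per-skill), then L2 (_shared/), then L3 (_project/)."""
    def __init__(self):
        self.tiers = OrderedDict(L1={}, L2={}, L3={})
        self.hits = {"L1": 0, "L2": 0, "L3": 0, "miss": 0}

    def get(self, key):
        for name, store in self.tiers.items():
            if key in store:
                self.hits[name] += 1
                # Promote to L1 so the next lookup is the fastest path
                self.tiers["L1"][key] = store[key]
                return store[key]
        self.hits["miss"] += 1
        return None

cache = TieredCache()
cache.tiers["L2"]["error-pattern"] = "wrap in Result"
cache.get("error-pattern")   # L2 hit, promoted
cache.get("error-pattern")   # now an L1 hit
```

The instrumentation (the `hits` dict) is what a file like cache-hits.json would aggregate over a session.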
- Class A / B / C gates
The framework classifies its 25+ gates by enforcement type. Class A fires automatically (a pre-commit hook or CI check that exits non-zero). Class B is an agent-attention rule documented in CLAUDE.md but not mechanically blocked. Class C requires human judgment (e.g. external audit). v7.6 was the explicit campaign to promote silent Class B gaps to Class A.
- Complexity Units (CU)
Complexity Units are a normalized measurement that lets the framework compare the velocity of features as different as "fix a typo" and "build a hardware-aware dispatch layer." The v2 formula (power-law fit, R²=0.82) accounts for view count, type count, and design iteration scope.
- Control room
The `/control-room/*` routes on fitme-story.vercel.app expose the framework's live state to anyone with the URL: `/control-room/framework` (framework health, all 25 gates, trend charts), the dispatch replay, and per-feature status. Originally an Astro dashboard inside FitTracker2; ported to Next.js as part of the Unified Control Center work.
- Data quality tiers (T1 / T2 / T3)
Introduced 2026-04-21 after the Gemini independent audit. T1 = the number came from real instrumentation; T2 = it is a self-declared estimate; T3 = it is narrative or anecdotal. Quoting a T3 number as if it were T1 is a documented integrity bug. The convention is enforced by the CASE_STUDY_MISSING_TIER_TAGS pre-commit hook plus the TIER_TAG_LIKELY_INCORRECT advisory.
- Dispatch intelligence
Dispatch intelligence is the Floor 5 component that looks at an incoming task and makes three decisions before work starts: how complex is this (complexity_scoring), which model tier runs it (model_routing, LITTLE vs big), and how many tool calls is it allowed (tool_budgets). Together these cut wasted dispatch by roughly 48%.
- DispatchReplay
DispatchReplay reads a JSON trace of an actual run (e.g. Sprint I, the fitme-story site itself) and animates each step — skill load, cache hit, model tier choice, subagent dispatch — at a controllable speed. It is the canonical way to see the framework's behavior without running it.
- Eight cooperating defenses (v7.5)
The v7.5 Data Integrity Framework grouped existing and new checks into eight cooperating defenses across the three timing categories. Together they bracket every quantitative claim from the moment it is written to the moment it is read. v7.6 then promoted four silent Class B gaps in this set to mechanically enforced Class A.
- Eval layer
The eval layer is the framework's formalized test-before-merge step for AI-generated output. A library of golden I/O pairs (known input, expected output) runs before any "AI produced" phase can advance. Coverage below a threshold blocks the phase. Introduced in v4.4.
- Hub-and-spoke topology
The framework is organized as a hub-and-spoke graph: /pm-workflow is the root orchestrator, and domain skills (dev, design, qa, analytics, etc.) are spokes invoked from the hub based on phase and task type. The topology emerged at v4.3 and produced a 6.5x speedup over the previous mesh.
- Integrity check code
Every audit rule has a stable check code. Codes ending in `_LIE` catch state.json contradicting reality (PHASE_LIE, TASK_LIE). Codes naming a missing thing flag absence (NO_CS_LINK, V2_FILE_MISSING, STATE_NO_CASE_STUDY_LINK). Codes with `_NO_LOG` / `_NO_TIMING` / `_EMPTY_POST_V6` flag missing instrumentation. As of v7.7 there are 13 cycle-time codes plus 9 write-time codes.
- Kill criterion
Every PRD and every new gate must declare its kill criterion before it ships — a numeric threshold that, if crossed, triggers a roll-back or a downgrade. v7.7 TIER_TAG_LIKELY_INCORRECT pre-registered "false-positive rate >50%" as its kill criterion; that fired during baseline, so the gate shipped advisory rather than hard.
- Mechanical enforcement (v7.6)
v7.6 (shipped 2026-04-25) closed the remaining Class B → Class A gap from the Gemini audit by promoting four silent checks into pre-commit failures (PHASE_TRANSITION_NO_LOG, PHASE_TRANSITION_NO_TIMING, BROKEN_PR_CITATION, CASE_STUDY_MISSING_TIER_TAGS) and adding two recurring CI defenses (per-PR review bot, weekly framework-status cron).
- Parallel write safety
When multiple subagents work in parallel, they can write to the same files. Parallel write safety prevents corruption: each subagent works on a mirror of the file, snapshots the original, and rolls back if the build fails. Part of framework v5.2.
- Phase timing
Every phase transition is timestamped and recorded in phase-timing.json. Replaces the old "±15-30 min estimates" with real, retrospective data. Introduced in framework v6.0 as part of the measurement overlay.
- Pre-commit hook
Pre-commit hooks live in `.githooks/pre-commit` and are installed via `make install-hooks`. Each hook calls a Python checker (e.g. `scripts/check-state-schema.py`, `scripts/check-cu-v2.py`). A non-zero exit aborts the commit before anything lands in history. This is how the framework enforces its write-time gates.
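The shape of such a checker, sketched with hypothetical required keys (the real scripts and their schemas are not reproduced here):

```python
import json
import sys
import tempfile
from pathlib import Path

REQUIRED = {"phase", "feature", "updated_at"}   # hypothetical required keys

def check_state(path):
    """Return 0 if the state file has every required key, else 1."""
    data = json.loads(Path(path).read_text())
    missing = REQUIRED - data.keys()
    if missing:
        print(f"{path}: missing {sorted(missing)}", file=sys.stderr)
        return 1
    return 0

# In the real hook, a non-zero exit aborts the commit:
#   sys.exit(max(check_state(p) for p in changed_state_files))

# Demo against a throwaway file
tmp = Path(tempfile.gettempdir()) / "state.json"
tmp.write_text(json.dumps({"phase": "Implement"}))
bad = check_state(tmp)    # 1 — feature and updated_at are missing
tmp.write_text(json.dumps({"phase": "Implement", "feature": "x",
                           "updated_at": "2026-04-25"}))
good = check_state(tmp)   # 0
```

Because git aborts the commit on any non-zero hook exit, the bad state.json never enters history.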
- Result forwarding
When Skill A's output is Skill B's input, result forwarding passes it in memory rather than serializing through a file. Modeled on Apple Silicon's unified memory architecture. Eliminates the classic write-then-re-read cycle.
- Skill-on-demand loading
Rather than load all eleven skill files into context at session start (which ate ~48% of the context window), the framework loads only the 1–2 skills relevant to the current phase. Inspired by LoRA hot-swap in neural networks. Reclaims ~30K tokens per session.
- Snapshot / rollback
Before any risky write operation, the framework snapshots the current file state to a side store. If the subsequent build or test fails, rollback auto-restores from the snapshot. Prevents half-applied changes from corrupting main. Part of v5.2's parallel write safety.
- Task complexity gate
At dispatch time, the task complexity gate inspects the task and routes it to either a "LITTLE" (cheap, fast) or "big" (capable, slower) model. Mechanical token migrations go to LITTLE; architectural rewrites go to big. Modeled on ARM's big.LITTLE.
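A toy version of the gate, with an invented scoring heuristic (the framework's real complexity_scoring inputs are not published in this glossary):

```python
def complexity_score(task):
    """Toy heuristic: touched-file count plus a bump for
    architecture-flavored keywords."""
    score = len(task.get("files", []))
    keywords = ("refactor", "architecture", "redesign")
    if any(w in task["description"].lower() for w in keywords):
        score += 5
    return score

def route(task, threshold=4):
    """Route cheap mechanical work to LITTLE, complex work to big."""
    return "big" if complexity_score(task) >= threshold else "LITTLE"

mechanical = {"description": "rename token across 2 files",
              "files": ["a.ts", "b.ts"]}
structural = {"description": "refactor dispatch architecture",
              "files": ["dispatch.ts"]}
```

Here `route(mechanical)` lands on the LITTLE model and `route(structural)` on the big one; the real gate's value is that the decision happens before any tokens are spent.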
- V7.0 HADF (Hardware-Aware Dispatch Framework)
V7.0 HADF extends the dispatch layer with a hardware awareness stage. It detects the chip/runtime fingerprint via OS APIs and Mahalanobis distance matching against 17 chip profiles + 7 cloud signatures, then adjusts model routing and cache strategy for the detected environment. Ships as framework version 7.0.
- Validity closure (v7.7)
v7.7 (shipped 2026-04-28) closed seven remaining gaps from the post-v7.6 inventory: four new write-time pre-commit hooks (CACHE_HITS_EMPTY_POST_V6, CU_V2_INVALID, STATE_NO_CASE_STUDY_LINK, CASE_STUDY_MISSING_FIELDS), one cycle-time check, and the TIER_TAG_LIKELY_INCORRECT advisory. Brings the framework to 25 mechanical gates plus 1 advisory.
- Write-time / cycle-time / readout-time gates
Each integrity gate fires at one of three moments. Write-time: pre-commit hooks that block bad data from entering the repo at all (e.g. SCHEMA_DRIFT). Cycle-time: scheduled audits via GitHub Actions every 72h, scanning the whole repo. Readout-time: dashboards that surface current state on demand (`make documentation-debt`, `make measurement-adoption`).
Methodology
- Audit findings
When the framework audits itself (every few months), it produces findings: typed issues with severity (critical / high / medium / low) and domain. The v7.0 audit produced 185 findings, 12 critical. Every finding is publicly tracked, addressed, and closed out in a sprint.
- Audit tiers (Tier 1.1, 2.1, 3.2 …)
The 2026-04-21 Gemini audit produced nine remediation tiers grouped into three layers: Tier 1.x = measurement adoption, Tier 2.x = runtime + contemporaneous evidence, Tier 3.x = documentation discipline. Each tier has its own dashboard or playbook (e.g. `make measurement-adoption` surfaces Tier 1.1, `make documentation-debt` surfaces Tier 3.2).
- Case study monitoring
The case-study-monitoring.json file records process metrics, quality metrics, success cases, and failure cases for every feature run through the framework. These records become the source material for real case studies — no invented numbers.
- Normalization (R²=0.82)
Not all features are the same size. The framework's normalization model uses a retroactive power-law fit (R²=0.82 across all prior features) to translate raw minutes-to-ship into comparable Complexity-Unit rates, so a ten-minute typo fix and a two-hour new feature sit on the same scale.
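A power-law fit of this kind is usually done as linear regression in log space. A sketch on fabricated data (the numbers below are illustrative; only the R²=0.82 figure comes from the framework):

```python
import numpy as np

# Hypothetical history: complexity units vs minutes-to-ship
cu      = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
minutes = np.array([10.0, 17.0, 33.0, 61.0, 110.0, 205.0])

# Fit minutes = a * cu^b  <=>  log(minutes) = b*log(cu) + log(a)
b, log_a = np.polyfit(np.log(cu), np.log(minutes), 1)
pred = np.exp(log_a) * cu ** b

# Goodness of fit (R^2) in log space
resid = np.log(minutes) - np.log(pred)
r2 = 1 - resid.var() / np.log(minutes).var()
```

Dividing each feature's raw minutes by its predicted minutes-per-CU then puts a typo fix and a multi-hour feature on one comparable rate scale.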
- Stacked PR
Stacked PRs let a long line of work ship in reviewable chunks. Branch B branches from A, branch C from B, and so on; each PR targets its parent, not main. The stacked-pr-misfire case study (#19) documents how the pattern fails when a mid-stack branch is rebased and downstream branches fall out of sync.
Web vitals
- CLS (Cumulative Layout Shift)
CLS tracks how much visible content shifts position unexpectedly during page load (e.g. images loading, fonts swapping). Under 0.1 is "good." Low CLS is important for accessibility — moving targets frustrate users with motor impairments.
- LCP (Largest Contentful Paint)
One of Google's Core Web Vitals. LCP measures when the largest visible element (often the hero image or headline) finishes rendering. Under 2.5 seconds is "good." This site targets under 1 second on desktop.
- Static Site Generation (SSG)
Rather than render pages on each request (server-side rendering) or only on the client (SPA), SSG pre-computes every page's HTML at build time and serves it as a static file. fitme-story uses Next.js App Router's SSG for all 38 routes — 0ms server work at request time.