# Quant Research Platform — Complete Reference

> Consolidated reference covering platform architecture, phased roadmap, access controls, and implementation guidance for institutionalizing the fund's research IP across VN Index and crypto markets.

---

## Table of Contents

1. [Context and Objectives](#1-context-and-objectives)
2. [Architectural Layers](#2-architectural-layers)
3. [Operating Model — How PO Tech and Quant Teams Work Together](#3-operating-model)
4. [Access Tiers](#4-access-tiers)
5. [Priority-Ordered Feature Map](#5-priority-ordered-feature-map)
6. [Phase 1 — Detailed Build Plan (P1, P2, P3)](#6-phase-1--detailed-build-plan)
7. [Phase 2 — Deferred Capabilities (P4, P5, P6)](#7-phase-2--deferred-capabilities)
8. [Model Documentation Backfill (P2.5)](#8-model-documentation-backfill-p25)
9. [Roadmap Summary](#9-roadmap-summary)
10. [Resourcing, Risks, and Governance](#10-resourcing-risks-and-governance)
11. [Strategic Context — Why This Matters Beyond Engineering](#11-strategic-context)

---

## 1. Context and Objectives

### 1.1 What the platform is for

The platform exists to **codify research knowledge into the system, not the person**. Today, the fund's intellectual property — signals, methodology, model assumptions — lives largely in individual researchers' notebooks and heads. This is the single biggest key-person risk and the biggest obstacle to scaling AUM credibly.

### 1.2 What the platform must do

- Centralize every signal, backtest, dataset, and decision with documentation
- Standardize research workflow via templates (hypothesis → data → test → result → decision)
- Provide a feature store and signal library queryable by any authorized researcher
- Own backtesting and validation pipelines at the infrastructure level, not at the individual level
- Enforce peer review before any strategy reaches production

### 1.3 Operating context

- **Markets**: VN Index equities and crypto. Two very different market microstructures must be handled in one platform.
- **Data sources**: multiple external feeds per market. Sources can disagree, drop, or correct historical data.
- **Output channel**: signals push to Telegram for human-in-the-loop execution. Provenance of every signal must be traceable.
- **Team shape**: small quant team + PO tech team. Reducing key-person risk is a primary goal.
- **IP sensitivity**: model logic is the crown jewel. Access must be tiered from day one.

### 1.4 The core tension to design around

Quant researchers want speed, flexibility, and notebook-driven iteration. PO tech wants reliability, version control, and production discipline. Most research platforms fail because one side imposes its worldview on the other — either it becomes a rigid engineering system researchers route around, or a notebook-soup that can't survive a key person leaving.

**The platform succeeds when it makes the right path the easy path.** Researchers should get productivity gains from using it, not feel taxed by it.

---

## 2. Architectural Layers

Five layers, each with clear ownership.

### Layer 1 — Data Foundation (PO tech owned)

Single source of truth for all market data, fundamentals, corporate actions, and alternative data. VN30 constituent history with point-in-time accuracy is critical — survivorship bias and look-ahead bias kill backtests silently. Needs immutable storage (Parquet on object storage, partitioned by date), a data catalog, and quality monitoring. No researcher loads a CSV from their desktop again.

### Layer 2 — Feature Store and Signal Library (joint ownership, quant-led design)

Where alpha lives. Every signal registered with metadata: definition, formula, dependencies, decay characteristics, correlation to existing signals. PO tech builds the registration infrastructure; quants define schemas and review entries.

**Critical property**: signals must be computable both historically (for backtesting) and in real-time (for production) from the **same code path**. This eliminates the most common production bug class in quant systems.
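
To make this concrete, a minimal sketch: the signal is written as a pure function of a price window, so the backtester rolls the identical callable over history while the live scheduler calls it on the latest window. The names and the toy momentum formula are illustrative, not a committed design.

```python
import pandas as pd

def momentum_signal(closes: pd.Series, lookback: int = 20) -> float:
    """Trailing return over the last `lookback` closes. A pure function
    of its input window, so backtest and live call identical code."""
    window = closes.iloc[-lookback:]
    return float(window.iloc[-1] / window.iloc[0] - 1.0)

def backtest_signal(closes: pd.Series, lookback: int = 20) -> pd.Series:
    # Backtest mode: roll the very same function across history.
    return closes.rolling(lookback).apply(
        lambda w: momentum_signal(w, lookback), raw=False)

prices = pd.Series([100, 101, 103, 102, 105, 107, 106, 108])
hist = backtest_signal(prices, lookback=4)   # historical series
live = momentum_signal(prices, lookback=4)   # latest value, same code path
assert live == hist.iloc[-1]
```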

### Layer 3 — Research Workbench (PO tech owned, quant UX-designed)

Hosted JupyterHub or equivalent, with the feature store and data layer pre-wired. Every notebook auto-commits to a researcher's branch. Templates (hypothesis → data → test → result → decision) are scaffolded as notebook templates that researchers fill in. The friction to start a new research project should be near zero; the friction to skip documentation should be high.

### Layer 4 — Backtesting and Validation Engine (PO tech owned)

The single most important piece for institutional credibility. **One canonical backtester.** Researchers cannot write their own. They submit a strategy specification (entry rules, exit rules, sizing, universe, costs); the engine runs it with realistic transaction costs, market-appropriate liquidity constraints, and proper point-in-time data.

Produces standardized tear sheets: Sharpe, max drawdown, turnover, capacity estimates, regime decomposition, alpha decay analysis, and sensitivity to parameter changes.

Validation pipelines run automatically: out-of-sample tests, walk-forward analysis, multiple-testing corrections, and crucially for VN30, capacity stress tests at target AUM levels. Capacity matters enormously for fundraising credibility.

### Layer 5 — Decision and Deployment Workflow (joint)

Peer review encoded as a pull request workflow. A strategy proposed for production requires: completed research template, passing validation pipeline, reviewer sign-off from at least one other quant and one risk reviewer, and explicit decision documentation (approve / reject / iterate, with reasoning).

Approved strategies deploy through CI/CD to paper trading first, then live with size limits that scale based on out-of-sample performance.

---

## 3. Operating Model

### 3.1 Principles

- **Embed, don't liaise.** Assign one or two engineers from PO tech to sit with the quant team full-time as platform engineers — they build the platform *for* quants, *with* quants, daily. Their KPI is researcher productivity, not infrastructure uptime in isolation.
- **Quants own interfaces; engineers own implementations.** Quants define what a "signal" looks like, what fields a tear sheet must contain, what the research template captures. PO tech builds systems that enforce those contracts. This avoids engineers guessing at quant needs and quants writing brittle infrastructure.
- **Versioned everything, with sane defaults.** Every backtest result reproducible from a git SHA plus a data snapshot ID. Every signal versioned. Every production strategy pinned to specific dependency versions. Heavy-sounding but it's the only way to answer "what changed?" when a strategy degrades — and it will degrade.
- **Don't boil the ocean.** Build Layer 1 and a minimal Layer 4 first (canonical backtester with one accepted strategy format). Migrate one existing strategy onto it end-to-end before building anything else. Resist the temptation to design the perfect platform; ship a thin slice, then widen.

### 3.2 Roles

- **Platform Product Owner (quant side)**: senior researcher who defines interfaces, accepts deliverables, prioritizes tradeoffs. Without this role filled by someone senior with real authority, the platform will drift toward what is easy to build rather than what is needed.
- **Tech Lead (PO tech side)**: owns architecture, technology choices, delivery. Reports progress weekly.
- **Embedded engineers**: 2 PO tech engineers full-time with the quant team. One owns data and infrastructure; one owns research workbench and backtester.
- **Security/access engineer**: part-time, owns identity, permissions, audit logging.

### 3.3 Cadence

- Weekly demo to the quant team (researchers must see usable progress, not just architecture)
- Biweekly retrospective
- Monthly milestone review with principals

---

## 4. Access Tiers

Access tiering must be designed in from day one. Backfilling permissions onto an open system is painful, and an open system usually leaks before the backfill is finished.

### 4.1 The four tiers

| Tier | Who | Can See | Cannot See |
|------|-----|---------|------------|
| **T0** | Investors, auditors, external reviewers | Aggregate performance, risk metrics, process integrity reports | Signal definitions, code, specific positions |
| **T1** | Junior researchers, interns | Data layer, run assigned backtests, author research in sandboxed branches | Full signal library source, production strategy code, other researchers' in-flight work |
| **T2** | Senior researchers | Full signal library, propose strategies for production | Unilateral deployment, infrastructure modification, deal-level financials |
| **T3** | Principals (Eric, C, Head of Research) | Everything, including production deployment approval and access grants | Nothing withheld (all T3 actions are audit-logged) |

### 4.2 The boundary that matters most

**T1 → T2.** That single boundary protects against most of the IP downside. Strong controls there; lighter controls elsewhere.

### 4.3 Enforcement points

- Single sign-on (SSO) via Okta, Auth0, or self-hosted Keycloak. Every action carries a user identity.
- All data and signal queries route through a platform API. No direct database credentials issued to users.
- Tear sheets watermarked with viewer identity. PDF exports logged.
- Signal library source restricted to T2+. T1 sees only signal names and high-level descriptions for assigned projects and cannot enumerate the full library.
- Immutable audit log on all access and changes.
- Unusual patterns (bulk downloads, off-hours access) trigger alerts.
- Copy/paste from notebooks disabled for T1 where feasible.

None of this prevents a determined bad actor; it raises the cost meaningfully and creates evidence trails.
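
As a flavor of the enforcement points above, a toy audit-log decorator. The real stack would resolve identity from the SSO token rather than the OS user and write to an append-only store; all names here are illustrative.

```python
import functools
import getpass
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("platform.audit")
logging.basicConfig(level=logging.INFO)

def audited(action: str):
    """Stamp every platform call with caller identity and parameters.
    Toy version: getpass.getuser() stands in for the SSO identity."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            audit_log.info(json.dumps({
                "ts": datetime.now(timezone.utc).isoformat(),
                "user": getpass.getuser(),
                "action": action,
                "call": f"{fn.__name__}{args}{kwargs}",
            }))
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@audited("signal.read")
def read_signal(name: str) -> str:
    return f"<definition of {name}>"

read_signal("momentum_20d")  # emits one audit record, then executes
```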

---

## 5. Priority-Ordered Feature Map

### 5.1 The full priority list

| Priority | Capability | Timeline | Why It Ranks Here |
|----------|------------|----------|-------------------|
| **P1** | Unified Data Layer | Months 1–2 | Foundation. Every downstream artifact is only as good as this. |
| **P2** | Canonical Backtester | Months 2–4 | One trusted engine replaces every researcher's bespoke backtester. |
| **P2.5** | Model Inventory & Documentation Backfill | Months 2–5 (parallel) | One-time effort; can start before P3 infrastructure exists. Reduces key-person risk immediately. |
| **P3** | Signal Library with Tiered Access | Months 4–6 | IP vault. Versioned signals, same code path for backtest and live. |
| **P4** | Telegram Signal Provenance | Months 7–8 | Closes the live feedback loop; required for monitoring. |
| **P5** | Peer Review & Validation Gates | Months 7–8 | What separates a fund from a few smart individuals. |
| **P6** | Live-vs-Backtest Monitoring | Months 9–10 | Early-warning system tied to NAV backstop / Safety Reserve. |

### 5.2 Irreducible core vs nice-to-have

- **Must-have for institutional credibility**: P1, P2, P2.5, P3
- **Operational must-have given Telegram flow**: P4
- **Institutional polish and risk control**: P5, P6

If budget or time forces cuts, P5 and P6 can slip to Months 9–12 without breaking the fund. P1–P4 cannot.

### 5.3 Phase 1 selection

Phase 1 covers **P1, P2, P3** (with P2.5 running in parallel). Approximately 6 months. End state: any researcher can pull point-in-time data, run a reproducible backtest, and reference signals from a versioned library — with access controlled by role.

---

## 6. Phase 1 — Detailed Build Plan

### 6.1 P1 — Unified Data Layer (Months 1–2)

**Owner**: PO tech (data engineer lead) with quant input on schema.

#### Why it comes first

Every signal, backtest, and decision downstream is only as good as this layer. Bad data produces convincing but wrong backtests — worse than no backtest at all. VN equity has its own quirks (foreign room limits, ceiling/floor prices, ATC/ATO sessions, corporate actions). Crypto data is messy — exchange outages, bad ticks, different symbol conventions across venues, 24/7 with no clean session boundaries. A unified layer that normalizes both is the foundation.

#### Required features

**Storage and ingestion**
- Immutable data lake on object storage (S3-compatible). Parquet format, partitioned by date and asset class.
- Ingestion pipelines per source, scheduled and monitored. Each produces a dataset manifest with source, ingestion timestamp, schema version, row counts, checksum.
- Point-in-time integrity: when data is corrected after the fact, the original version is preserved and the correction is recorded as a new version with the correction timestamp. Backtests can replay history as it was known at any past date (see the query sketch after this list).
- Corporate actions as first-class data: splits, dividends, mergers, index reconstitutions, ticker changes, foreign room adjustments.
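
To pin down the point-in-time behavior, a sketch assuming corrections land as new rows carrying a `knowledge_ts` column (when the value became known); a backtest as-of date keeps, per key, the latest version known at that time. The path and column names are assumptions, not a committed layout.

```python
import duckdb

AS_OF = "2023-06-30"  # replay history as it was known on this date

df = duckdb.sql(f"""
    SELECT symbol, trade_date, close
    FROM read_parquet('lake/vn_equity/daily/*.parquet')
    WHERE knowledge_ts <= TIMESTAMP '{AS_OF}'
    QUALIFY row_number() OVER (
        PARTITION BY symbol, trade_date
        ORDER BY knowledge_ts DESC
    ) = 1
""").df()
```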

**Schema coverage**
- *VN equities*: daily and intraday bars, fundamentals (as-reported with reporting timestamps), corporate actions, foreign ownership room, index membership history, ceiling/floor prices per day.
- *Crypto*: spot and perpetual futures bars, funding rates, exchange-level liquidity, symbol mapping across venues.
- *Cross-asset*: reference data (calendar, holidays, sessions), FX rates, macro indicators if used.

**Source reconciliation**
- When two sources disagree, both values stored. A configurable rule picks the canonical value (e.g., "prefer Source A unless missing"). Reconciliation decisions logged (a minimal sketch follows this list).
- Daily reconciliation report: divergence between sources flagged for review.
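
The "prefer Source A unless missing" rule is essentially a coalesce. A minimal pandas sketch, assuming both sources share an index:

```python
import pandas as pd

def reconcile(src_a: pd.DataFrame, src_b: pd.DataFrame):
    """Prefer Source A; fall back to Source B where A is missing.
    Both inputs are preserved; only the canonical view is derived."""
    canonical = src_a.combine_first(src_b)   # coalesce rule
    divergence = (src_a - src_b).abs()       # feeds the daily report
    return canonical, divergence

a = pd.DataFrame({"close": [100.0, None, 102.0]})
b = pd.DataFrame({"close": [100.0, 101.5, 102.2]})
canonical, divergence = reconcile(a, b)
```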

**Quality monitoring**
- Nightly checks: missing data, outliers (price moves beyond plausible thresholds), schema drift, corporate action consistency (a minimal check is sketched after this list).
- Failed checks alert before the next research day starts.
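
A minimal version of the missing-data and outlier checks; the 30% move threshold is illustrative, and VN ceiling/floor rules imply a much tighter per-exchange bound in practice.

```python
import pandas as pd

def nightly_price_checks(bars: pd.DataFrame,
                         max_move: float = 0.30) -> pd.DataFrame:
    """Flag missing closes and implausible one-day moves for review.
    Expects columns: symbol, close (one row per symbol per day)."""
    out = bars.copy()
    out["missing_close"] = out["close"].isna()
    out["ret"] = out.groupby("symbol")["close"].pct_change()
    out["outlier_move"] = out["ret"].abs() > max_move
    return out[out["missing_close"] | out["outlier_move"]]
```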

**Access**
- Single Python client library. Researchers query by symbol, date range, field — never by file path (illustrated after this list).
- Client carries user identity. Queries logged with user, dataset, time range.
- Read-only by default for all tiers. Ingestion writes only through pipeline service accounts.
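
What the researcher-facing call could look like. The client below is a hypothetical stand-in, shown only to pin down the contract: query by symbol, date range, and field, with identity and a point-in-time date carried on every call.

```python
from dataclasses import dataclass
import pandas as pd

@dataclass
class PlatformClient:
    """Toy stand-in for the researcher-facing client. The real client
    resolves identity from the SSO session and routes every query
    through the platform API; no file paths, no raw credentials."""
    user: str = "researcher@fund"

    def query(self, dataset: str, symbols: list, fields: list,
              start: str, end: str, as_of: str | None = None) -> pd.DataFrame:
        print(f"[audit] {self.user} read {dataset} {symbols} "
              f"{start}..{end} as_of={as_of}")       # query logging
        return pd.DataFrame(columns=["symbol", "date", *fields])  # stub

bars = PlatformClient().query(
    dataset="vn_equity.daily", symbols=["VNM", "HPG"],
    fields=["close", "volume"], start="2020-01-01", end="2023-12-31",
    as_of="2024-01-15",  # point-in-time: data as known on this date
)
```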

#### Acceptance criteria

- A researcher can query the full VN30 universe as of any past date with no survivorship bias and no look-ahead bias.
- A researcher can query crypto OHLCV across at least 2 exchanges, with funding rates available.
- Two researchers running the same query at the same time get byte-identical results.
- Data quality dashboard shows ingestion status and failed checks for the last 7 days.
- Onboarding a new data source requires a documented pipeline; no ad-hoc CSV loads anywhere.

#### Suggested tech stack (not mandatory)

- **Storage**: S3-compatible object store (MinIO self-hosted, or AWS S3). Parquet files.
- **Query layer**: DuckDB for interactive queries; Trino/Presto if scale demands.
- **Orchestration**: Airflow or Prefect for ingestion pipelines.
- **Catalog**: lightweight metadata service (Postgres-backed) for dataset manifests and lineage.
- **Identity**: Okta, Auth0, or self-hosted Keycloak.

The tech lead chooses based on team familiarity and operational cost.

---

### 6.2 P2 — Canonical Backtester (Months 2–4)

**Owner**: PO tech (backend engineer lead) with quant defining strategy specification and tear sheet contract.

#### Why it comes second

Today, every researcher has a slightly different backtester. Results aren't comparable, assumptions are buried in code, and migrating to production is manual re-implementation that introduces bugs. One canonical backtester owned by infrastructure fixes this and forces explicit documentation of every assumption (cost model, fill assumptions, sizing) currently implicit.

#### Required features

**Strategy specification format**
- Researchers submit a strategy spec — not arbitrary code — describing: universe filter, entry rules, exit rules, position sizing, rebalance frequency, cost model, capital constraints.
- Spec format is structured (YAML or JSON-like) with a defined schema. Schema is versioned; changes reviewed by quant leadership (an illustrative spec follows this list).
- Signal references in the spec point to entries in the signal library (P3). Until P3 exists, signals are defined inline with the understanding that they will migrate.
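
An illustrative spec under these rules, parsed from YAML in Python. Field names and values are assumptions, not the final schema:

```python
import yaml  # PyYAML

spec = yaml.safe_load("""
schema_version: 1
name: vn30_momentum_v1
universe: {market: vn_equity, filter: "index_member:VN30"}
signals:
  - ref: momentum_20d@2.1.0        # pinned signal-library version
entry: "signal > 0.05"
exit: "signal < 0.0"
sizing: {method: equal_weight, max_position_pct: 5}
rebalance: weekly
costs: {model: vn_default}         # brokerage + tax + slippage + limits
capital: {max_gross_exposure: 1.0}
""")
assert spec["schema_version"] == 1   # schema is versioned and reviewed
```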

**Execution engine**
- Vectorized for speed where possible; event-driven where path-dependence is required.
- Same code path for VN equity and crypto. Market-specific behavior (sessions, costs, constraints) configured through the strategy spec.
- Realistic cost model per market:
  - *VN*: brokerage fees, tax, slippage, ceiling/floor constraints, foreign room
  - *Crypto*: exchange fees, funding rate impact, slippage by venue
- Capacity-aware: estimates achievable size based on average daily volume / order book depth. Results include capacity warnings (a simplified formula is sketched below).
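
A deliberately simplified capacity formula, to show the shape of the check; the participation-rate and turnover conventions are assumptions, and a real model would use order book depth where available.

```python
def capacity_estimate(adv_usd: float, participation: float = 0.05,
                      daily_turnover: float = 0.10) -> float:
    """Rough strategy capacity: tradable notional per day divided by
    the fraction of the book the strategy turns over daily.

    capacity ~= (participation * ADV) / daily_turnover
    """
    return participation * adv_usd / daily_turnover

# e.g. $2M ADV, 5% participation, 10% daily turnover -> ~$1M capacity
print(f"${capacity_estimate(2_000_000):,.0f}")
```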

**Reproducibility**
- Every run produces a unique run ID.
- Run ID resolves to: code SHA, strategy spec, data snapshot ID, configuration, runtime environment (see the manifest sketch after this list).
- Any past run can be re-executed and must produce byte-identical results.
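
One way to make the run ID deterministic is to hash the manifest itself, so identical inputs always yield the same ID. A sketch, with field contents illustrative:

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class RunManifest:
    """Everything a run ID must resolve to (per the list above)."""
    code_sha: str
    spec_hash: str
    data_snapshot_id: str
    config_hash: str
    runtime_env: str  # e.g. a pinned container image digest

    @property
    def run_id(self) -> str:
        # Deterministic: the same inputs always hash to the same run ID,
        # which is what makes byte-identical re-execution checkable.
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:16]

m = RunManifest("a1b2c3d", "9f8e7d6", "snap-2024-01-15", "cfg01", "img-v1")
print(m.run_id)
```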

**Standardized tear sheet**
- *Performance*: Sharpe, Sortino, CAGR, max drawdown, drawdown duration, hit rate, profit factor (Sharpe and max drawdown are sketched after this list)
- *Risk*: volatility, downside deviation, tail metrics, factor exposures
- *Operational*: turnover, average holding period, capacity estimate, cost drag breakdown
- *Robustness*: regime decomposition (bull/bear/sideways), parameter sensitivity, walk-forward results
- Watermarked with viewer identity; exports logged.
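
For reference, minimal implementations of two of these metrics from daily returns. Annualization with 252 periods suits VN equity; crypto's 24/7 calendar would use 365.

```python
import numpy as np
import pandas as pd

def sharpe(daily_returns: pd.Series, rf_daily: float = 0.0,
           periods: int = 252) -> float:
    """Annualized Sharpe ratio from daily returns."""
    excess = daily_returns - rf_daily
    return float(excess.mean() / excess.std(ddof=1) * np.sqrt(periods))

def max_drawdown(daily_returns: pd.Series) -> float:
    """Worst peak-to-trough decline of the cumulative equity curve."""
    equity = (1 + daily_returns).cumprod()
    return float((equity / equity.cummax() - 1).min())

rets = pd.Series(np.random.default_rng(0).normal(0.0005, 0.01, 1000))
print(sharpe(rets), max_drawdown(rets))
```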

#### Acceptance criteria

- One currently-live strategy migrated onto the new backtester and produces results within an agreed tolerance of the original (e.g., Sharpe within 0.1, CAGR within 0.5%).
- Same strategy spec run twice produces identical tear sheets.
- A strategy spec written in November can be re-run in March and produces the same result it produced in November (point-in-time integrity verified end-to-end).
- Tear sheets pass quant team review for completeness and clarity.
- New researcher can run their first backtest within one day of onboarding.

#### Migration approach

Pick one well-understood live strategy. Migrate it end-to-end. Compare results carefully — every discrepancy surfaces an assumption that was previously implicit. Document each one. This migration is the real test of the backtester and the data layer beneath it.

---

### 6.3 P3 — Signal Library with Tiered Access (Months 4–6)

**Owner**: Joint — quant team defines schema and reviews entries; PO tech builds infrastructure.

#### Why this is the IP vault

This is where the fund's alpha actually lives. The signal library is what makes knowledge institutional rather than personal. It is also the highest-sensitivity component — access tiering matters most here.

#### Required features

**Signal registration**
- Every signal stored with: name, formula or pseudocode, author, creation date, status (development/staging/production/retired), version, dependencies (data fields, other signals), decay characteristics, correlation to existing signals, review history.
- Signals versioned. Modifying creates a new version; old versions remain queryable. Production strategies pin to a specific version.
- **Same code path for backtest and live computation.** The signal that fires in Telegram is provably the same one that backtested well. This is the single most important architectural property of the library (a registry sketch follows this list).
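
A sketch of how registration plus version pinning could be encoded so backtest and live resolve the same callable; the registry API is illustrative, not a committed design.

```python
from typing import Callable

import pandas as pd

_REGISTRY: dict = {}

def signal(name: str, version: str):
    """Register a function as the single implementation of a signal
    version -- the callable both backtester and live scheduler use."""
    def decorator(fn: Callable) -> Callable:
        _REGISTRY[(name, version)] = fn
        return fn
    return decorator

def get_signal(name: str, version: str) -> Callable:
    # Production strategies pin exact versions; old versions stay queryable.
    return _REGISTRY[(name, version)]

@signal("momentum_20d", "2.1.0")
def momentum_20d(closes: pd.Series) -> float:
    return float(closes.iloc[-1] / closes.iloc[-20] - 1.0)

fn = get_signal("momentum_20d", "2.1.0")
```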

**Discovery**
- Searchable catalog: by market, signal family, author, status.
- Correlation and overlap dashboard helps researchers avoid reinventing existing signals. Critically: only shows signals the viewer has access to.
- Lineage view: which strategies use this signal, in production or in research.

**Access controls**
- T1: sees signal names and high-level descriptions only for signals assigned to their projects. Cannot enumerate the full library.
- T2: full library access including source. T2 status granted explicitly by T3.
- T3: controls T2 access, can revoke immediately.
- All signal reads logged. Bulk downloads, unusual patterns, and off-hours access trigger alerts.
- Watermarking on any export.

#### Acceptance criteria

- All currently-live signals registered with complete metadata.
- All paused and recently-retired models documented in the wiki (via P2.5).
- A new strategy proposed for production references only signals in the library — no inline signal definitions accepted.
- T1 user cannot enumerate the full signal library; T2 user can; access attempts outside tier are logged and blocked.
- Backtest run referencing a signal version produces results identical to live computation using the same version.

---

## 7. Phase 2 — Deferred Capabilities

Out of scope for Phase 1; targeted for Months 7–12.

### 7.1 P4 — Telegram Signal Provenance

Every Telegram alert tagged with strategy ID, signal version, data snapshot, timestamp. Dashboard showing live signal history alongside backtested expectations. Closes the live feedback loop and turns operational data into research input.
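
A sketch of the provenance envelope each alert could carry, appended below the human-readable message. Field names are assumptions derived from the tags listed above; the Telegram bot integration itself is unchanged.

```python
import json
from datetime import datetime, timezone

def provenance_tag(strategy_id: str, signal_name: str,
                   signal_version: str, snapshot_id: str) -> str:
    """Compact footer for each Telegram alert so any message traces
    back to exact strategy, signal version, and data snapshot."""
    tag = {
        "strategy": strategy_id,
        "signal": f"{signal_name}@{signal_version}",
        "snapshot": snapshot_id,
        "ts": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }
    return json.dumps(tag, separators=(",", ":"))

print(provenance_tag("vn30_momentum_v1", "momentum_20d",
                     "2.1.0", "snap-2024-01-15"))
```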

### 7.2 P5 — Peer Review and Validation Gates

No strategy reaches production without passing automated validation (out-of-sample, walk-forward, multiple-testing correction, capacity stress) and peer sign-off via PR workflow. A strategy failing any validation gate cannot proceed without explicit override and documentation. Two reviewer sign-offs required (one quant, one risk).

### 7.3 P6 — Live-vs-Backtest Monitoring

Daily comparison of realized signal performance vs backtested expectations, with alerts when divergence exceeds thresholds. Ties directly to NAV backstop and Safety Reserve early-warning. Detect degradation before it eats reserves.
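
A minimal sketch of the divergence test, treating daily returns as i.i.d. with backtested mean and volatility; a production monitor would handle autocorrelation, regime shifts, and multiple strategies. All numbers below are illustrative.

```python
import numpy as np
import pandas as pd

def divergence_zscore(live_returns: pd.Series,
                      expected_mean: float, expected_std: float) -> float:
    """How many sigmas the realized mean daily return sits from the
    backtested expectation over this window (simplified i.i.d. model)."""
    se = expected_std / np.sqrt(len(live_returns))
    return float((live_returns.mean() - expected_mean) / se)

window = pd.Series([-0.010, -0.008, -0.012, -0.009, -0.011] * 4)
z = divergence_zscore(window, expected_mean=0.0005, expected_std=0.01)
if abs(z) > 3:
    print(f"ALERT: live-vs-backtest divergence {z:.1f} sigma")
```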

### 7.4 Other deferrals

- Automated execution (remains human-in-the-loop via Telegram)
- LP-facing reporting layer beyond aggregate metrics

---

## 8. Model Documentation Backfill (P2.5)

A one-time workstream addressing immediate key-person risk. Runs in parallel from Month 2.

### 8.1 Why start early

Two reasons:

1. The documentation effort surfaces problems you want to know about *before* building the library — which models are still working, which were abandoned and why, which have undocumented dependencies on a specific researcher's setup, which can't be reproduced. Discover this now, while you still have the people who built them.
2. Even a simple structured wiki holding model documentation is *immediately* better than current state. It doesn't need to wait for the full signal library.

### 8.2 Documentation template

For every model — live, paused, or retired:

- Name, status (live/paused/retired), market (VN/crypto/both), author, date created
- Hypothesis and economic rationale (why should this work?)
- Signal definition with exact formula or pseudocode
- Data dependencies (which feeds, which fields, which lookback)
- Universe and filters
- Entry/exit/sizing rules
- Historical performance summary
- Known failure modes and regime sensitivities
- If retired: why, and what was learned
- If live: where it runs, which Telegram channel, who monitors

### 8.3 Tiered from day one

Even in a wiki, set up tiering immediately. T1 sees model names and high-level descriptions only for assigned models. T2+ sees full documentation. Don't let "we'll add access controls later" happen.

### 8.4 Interview, not self-write

Run as structured interview, not "please write up your models." Sit a senior person (you, C, or a designated documenter) with each researcher for 60–90 minutes per model, using the template as a script. Researchers skip fields when writing alone; they can't skip them in a conversation. Catches implicit assumptions researchers don't realize they're making.

### 8.5 Sequencing

- **Month 2**: stand up the wiki with templates and tier-based access. Start with **live** models — highest risk if knowledge lost. Target one model documented per week minimum.
- **Months 3–4**: document paused and recently retired models. The latter is harder (memory has faded) but most valuable for lessons learned.
- **Months 5–6**: as P3 signal library comes online, migrate wiki documentation into the structured library. Wiki was scaffolding; signal library is permanent home.
- **Ongoing**: new models cannot go live without complete documentation. This becomes the cultural shift that locks institutionalization in place.

---

## 9. Roadmap Summary

### 9.1 Phase 1 month-by-month

| Month | Workstream | Key Deliverables | Milestone Gate |
|-------|------------|------------------|----------------|
| **1** | P1: Data Layer | Identity/SSO live. Data lake stood up. First ingestion pipelines for VN equity and one crypto exchange. | Researcher can query VN30 daily bars with identity logged. |
| **2** | P1 finish + P2 start + P2.5 start | Point-in-time queries working. Corporate actions ingested. Backtester spec format defined. Documentation wiki stood up; interviews begin. | Data layer acceptance criteria met. First model documented end-to-end. |
| **3** | P2: Backtester + P2.5 continues | Backtester v1 running. Cost models for both markets. Tear sheet format finalized. Live models documentation 50%+ complete. | First migrated strategy producing tear sheets. |
| **4** | P2 finish + P3 start + P2.5 continues | Backtester acceptance criteria met. Signal library schema designed. All live models documented. | Migrated strategy results match originals within tolerance. |
| **5** | P3: Signal Library + P2.5 finish | Signal library infrastructure live. Tiered access enforced. Paused/retired models documented. | First signals registered; T1/T2 access boundary verified. |
| **6** | P3 finish + Phase 1 review | All live signals registered. Wiki content migrated into library. Phase 1 retrospective. Phase 2 planning. | Phase 1 acceptance criteria all met. Ready to begin P4–P6. |

### 9.2 Full 12-month view

| Months | Phase | Focus |
|--------|-------|-------|
| 1–6 | Phase 1 | P1, P2, P2.5, P3 — foundation, backtester, IP vault |
| 7–8 | Phase 2 start | P4 (Telegram provenance), P5 (peer review) |
| 9–10 | Phase 2 continue | P6 (live-vs-backtest monitoring), LP reporting layer |
| 11–12 | Hardening | Disaster recovery, runbooks, cross-training, audit polish |

---

## 10. Resourcing, Risks, and Governance

### 10.1 Indicative Phase 1 team

- Tech Lead: 100%
- Data engineer (P1 lead): 100% Months 1–3, 50% thereafter
- Backend engineer (P2 lead): 100% Months 2–6
- Security/access engineer: 50% throughout
- Frontend engineer (tear sheets, dashboards, library UI): 50% from Month 3
- Quant Platform PO: 50% throughout
- Quant team (interviews, schema reviews, acceptance testing): ~1 day per researcher per week

Starting estimates only. Tech lead refines after architecture choices.

### 10.2 Top risks and mitigations

| Risk | Mitigation |
|------|------------|
| Scope creep on data layer | Define schema coverage upfront; resist additions until P1 acceptance criteria met. |
| Backtester migration reveals deep discrepancies with current results | Budget time for assumption reconciliation; treat discrepancies as discoveries, not failures. |
| Quant team does not engage with documentation interviews | Principals (Eric, C) personally schedule and attend the first round to set precedent. |
| Access tiering treated as optional in early build | Identity and access are part of P1 acceptance criteria, not a later add-on. |
| PO tech team builds in isolation | Embedded model with weekly demos. If researchers are not using new tooling by Month 3, escalate. |
| Platform Product Owner role on quant side left vague | Name the person before kickoff. Must have real authority over the spec. |

### 10.3 Governance

- Monthly milestone review with principals
- Quarterly platform review: are researchers actually using the system? Where is friction?
- Annual external review (once Phase 2 complete): structural integrity check for institutional credibility

---

## 11. Strategic Context

Two threads worth keeping in view as the platform is built.

### 11.1 Platform as fundraising asset

With the SPV structure and Q's capital coming in, the platform is also a **due diligence asset**. Institutional LPs increasingly ask about research infrastructure. Being able to show a documented, peer-reviewed, version-controlled research process materially strengthens the fundraising story. Build with that audience partly in mind — the T0 reporting view is not an afterthought.

### 11.2 Platform as risk control

The NAV backstop (up to 3% NAV per year on negative returns) and the Safety Reserve mechanics (funded at 20% of performance fees until reaching 6% of deployed capital) both assume strategy degradation can be detected early. The validation pipeline (P5) and live-vs-backtest monitoring (P6) are directly tied to downside risk in the deal economics — alerts when realized performance diverges from backtested expectations by more than X sigma protect the reserves before they're consumed.

These aren't standard features in most research platforms. They are essential here.

### 11.3 The deeper point

Codifying research into the platform is the highest-leverage move available for scaling AUM credibly. It reduces key-person risk, enables institutional due diligence, creates the monitoring backbone for the deal's risk-sharing mechanics, and locks in the cultural shift from individual practice to institutional process. Phase 1 is the foundation that makes the rest possible.

---

*End of QuantPlatform.md*
