nuMetrix Audit Readiness Assessment
Where we stand today, what's missing,
and how to get to 100% auditability.
February 2026
Can an external auditor trace any finding
back to its source data, understand why it was flagged,
and verify the rules haven't changed since?
Strong analytical foundations.
Weak operational audit trail.
7 of 12 capabilities scored 4 or 5 — the data pipeline is solid.
5 capabilities scored 1 or 2 — the operational layer needs work.
| Capability | Score | Evidence |
|---|---|---|
| Source lineage | 5 | source_file + row_number on all 9 Bronze models |
| Validation trail | 5 | is_valid + invalid_reason per Silver row; 34 rules documented |
| Validation rule registry | 5 | lineage_validation_rules + lineage_validation_counts queryable |
| Finding determinism | 5 | MD5(probe_id \| tenant_id \| entity_id \| time_bucket) |
| Evidence chain | 4 | Probes → Hypotheses → Diagnoses fully linked |
| Pipeline metrics | 4 | Row counts per layer; no timestamps |
| Audit reports (PDF) | 4 | 4 report types × 3 languages, stored in DuckDB |
| Finding lifecycle | 2 | Rebuilt each run — no created_at or state tracking |
| Rule version history | 2 | Version string exists; no changelog |
| Column-level lineage | 2 | Implicit in macros; not queryable |
| Execution metadata | 1 | No run_id, no executed_at on findings |
| Exception management | 1 | No false-positive marking, no review trail |
Every layer is traceable to the one below it. The chain is complete — but only as a snapshot.
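The deterministic finding ID scheme scored above can be sketched as follows. The four fields and the `|` delimiter come from the table; the exact concatenation order used by the project is an assumption:

```python
import hashlib

def finding_id(probe_id: str, tenant_id: str, entity_id: str, time_bucket: str) -> str:
    """Deterministic finding ID: identical inputs always yield the same ID,
    so re-running the pipeline never duplicates a finding."""
    key = "|".join([probe_id, tenant_id, entity_id, time_bucket])
    return hashlib.md5(key.encode("utf-8")).hexdigest()

# The same inputs reproduce the same ID across runs.
a = finding_id("probe_revenue_leakage", "t-001", "inv-42", "2026-02")
b = finding_id("probe_revenue_leakage", "t-001", "inv-42", "2026-02")
```

Because the ID carries no run-specific state, an auditor can recompute it from source data alone and confirm the finding is the same one reported earlier.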
Tier 1 Must-have for audit
Tier 2 Should-have for external audit
Tier 3 Nice-to-have
Key insight: The analytical layer (what happened, why) is strong. The operational layer (when, by whom, status changes) is almost absent.
Modify the probe compiler (probecompile.py) to inject two columns into every generated SQL model:
invocation_id (UUID per run)
executed_at (current_timestamp at build time)
Same change to hypothesiscompile.py and diagnosiscompile.py.
Files: scripts/probecompile.py, scripts/hypothesiscompile.py, scripts/diagnosiscompile.py, contracts/findings_contract.v1.json
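One way the compiler change could look: wrap each generated model's SELECT and append the two execution-metadata columns. A minimal sketch under assumptions — the real probecompile.py internals and the helper name `inject_execution_metadata` are illustrative, not the project's actual code:

```python
import uuid

def inject_execution_metadata(model_sql: str, invocation_id: str) -> str:
    """Wrap a generated model so every row carries the run's
    invocation_id and a build-time executed_at timestamp."""
    return (
        "select src.*,\n"
        f"       '{invocation_id}' as invocation_id,\n"
        "       current_timestamp as executed_at\n"
        f"from (\n{model_sql}\n) as src"
    )

run_id = str(uuid.uuid4())  # one UUID per compiler invocation
sql = inject_execution_metadata("select * from silver_billing", run_id)
```

Wrapping the original SELECT rather than editing it keeps the compiler change purely additive: the generated analytics SQL stays byte-identical inside the subquery.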
Add an evidence_breakdown JSON column to hypothesis_verdicts:
[
{"probe_id": "probe_revenue_leakage", "role": "primary", "weight": 3, "findings": 42, "signal": 1.0, "contribution": 0.43},
{"probe_id": "probe_orphan_billing", "role": "supporting", "weight": 2, "findings": 18, "signal": 0.89, "contribution": 0.25},
{"probe_id": "probe_duplicate_billing", "role": "context", "weight": 1, "findings": 0, "signal": 0.0, "contribution": 0.0}
]
Same for diagnosis_verdicts: add confidence_breakdown showing base + each conditional boost.
Explorer hypothesis detail page renders the breakdown as a table.
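A sketch of how that table rendering could work: parse the evidence_breakdown JSON column and emit a fixed-width text table. The columns mirror the example payload above; the rendering function itself is illustrative, not Explorer's actual code:

```python
import json

def render_breakdown(breakdown_json: str) -> str:
    """Turn an evidence_breakdown JSON array into a fixed-width text table."""
    rows = json.loads(breakdown_json)
    header = (f"{'probe_id':<26} {'role':<10} {'weight':>6} "
              f"{'findings':>8} {'signal':>6} {'contribution':>12}")
    lines = [header]
    for r in rows:
        lines.append(
            f"{r['probe_id']:<26} {r['role']:<10} {r['weight']:>6} "
            f"{r['findings']:>8} {r['signal']:>6} {r['contribution']:>12}"
        )
    return "\n".join(lines)

payload = json.dumps([
    {"probe_id": "probe_orphan_billing", "role": "supporting",
     "weight": 2, "findings": 18, "signal": 0.89, "contribution": 0.25},
])
table = render_breakdown(payload)
```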
New dbt incremental model: finding_snapshots
Requires: dbt incremental materialization (merge strategy on finding_id). New pattern for the project.
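A sketch of what that model's config could look like, using dbt's incremental materialization with a merge strategy keyed on finding_id as described above. Everything besides the config block — the upstream ref name and the snapshot columns — is an assumption:

```sql
-- models/finding_snapshots.sql (sketch; column and ref names are assumptions)
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='finding_id'
) }}

select
    finding_id,
    current_timestamp as last_seen_at   -- refreshed on every merge
from {{ ref('findings') }}              -- assumed upstream findings model
```

With merge on finding_id, re-runs update existing rows instead of rebuilding the table, which is what makes created_at-style lifecycle tracking possible.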
New table: finding_exceptions
POST /api/findings/{finding_id}/exception
This is the bridge from "analytics tool" to "audit workflow."
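A minimal sketch of the record behind that endpoint: false-positive marking with an append-only review trail. All specifics — the finding_exceptions columns, the status values, the placeholder IDs — are assumptions; sqlite3 is used here only to keep the sketch self-contained, while the project itself stores data in DuckDB:

```python
import sqlite3
from datetime import datetime, timezone

# In-memory stand-in for the project's DuckDB store.
con = sqlite3.connect(":memory:")
con.execute("""
    create table finding_exceptions (   -- column names are assumptions
        finding_id   text not null,
        status       text not null,     -- e.g. 'false_positive', 'accepted_risk'
        reason       text not null,
        reviewed_by  text not null,
        reviewed_at  text not null
    )
""")

def record_exception(finding_id: str, status: str, reason: str, reviewer: str) -> None:
    """Append-only review trail: every decision is a new row, never an update,
    so the full history of who marked what (and why) survives for auditors."""
    con.execute(
        "insert into finding_exceptions values (?, ?, ?, ?, ?)",
        (finding_id, status, reason, reviewer,
         datetime.now(timezone.utc).isoformat()),
    )

record_exception("finding-0001", "false_positive", "test tenant data", "reviewer-1")
n = con.execute("select count(*) from finding_exceptions").fetchone()[0]
```

The append-only design is the point: an auditor needs the trail of decisions, not just the latest status.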
Phases 1+2
Compiler-only changes
No new dbt models. No Explorer changes.
Phases 3+4
New dbt models + Explorer write
Incremental materialization. API endpoints.
The data tells the truth.
Now we make the trail visible.