Three tables. Three answers. Which is correct?

"What was our rooms revenue in July?"

- P&L (Finance): $12.8M. Authoritative total; includes all room revenue.
- Room Segments (Ops): $11.9M. Breakdown by segment; missing segments outside the ops domain.
- Reservations (ORS): $12.1M. Transaction-level detail; missing adjustments outside the reservations domain.

That is $0.9M of drift. All three pass every data quality check. An AI picking the wrong one doesn't know the difference.

BI reporting stays in its lane. AI has to cross lanes, and the numbers don't add up.

Traditional BI

- P&L: room rev + other rev + adjustments + tax; monthly, all line items. SUM → Finance Dashboard: $12.8M. Finance sees $12.8M.
- Segments: Corporate, Leisure, Group, OTA; monthly, ops categories. SUM → Ops Dashboard: $11.9M. Ops sees $11.9M.
- ORS: daily transactional booking records; daily, pre-adjustment. SUM → Revenue Dashboard: $12.1M. Revenue sees $12.1M.

Each dashboard stays in its lane. Each total is correct for its own domain.

AI Analyst

"Break down total room revenue by segment"

The AI assembles across sources:
- Total (P&L): $12.8M
- Segments: $4.2M + $3.5M + $2.3M + $1.9M = $11.9M

The segments sum to $11.9M, but the P&L says $12.8M. The AI has no way to reconcile them. Which number should it present to leadership?
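The dead end above fits in a few lines. This is a minimal sketch using the figures from the example, not product code; without any metadata about which source wins or what the gap represents, the mismatch is simply unresolvable:

```python
# Minimal sketch of the AI analyst's problem: two internally valid
# sources that disagree. Figures are the ones from the example.
pl_total = 12.8  # P&L room revenue, $M (authoritative total)
segments = {"Corporate": 4.2, "Leisure": 3.5, "Group": 2.3, "OTA": 1.9}

segment_total = round(sum(segments.values()), 1)  # sums to 11.9
drift = round(pl_total - segment_total, 1)        # 0.9, unexplained

# No rule says which number wins or what the 0.9 represents,
# so the AI can only surface the contradiction, not resolve it.
print(f"Segments sum to ${segment_total}M, P&L says ${pl_total}M "
      f"(drift ${drift}M, unexplained)")
```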

A semantic layer turns disconnected tables into a navigable map.

Without

Lakehouse: ERP, ORS, and PMS sources feed the P&L, Segments, ORS Bookings, and Room Types tables. The checks cover ETL lineage, schema validation, freshness, and row counts. The relationships are purely technical; there are no business relationships between tables, so the AI guesses how to link them.

With semantic layer

- P&L Room Revenue: $12.8M. Authoritative. Segment sum = P&L minus staff bookings; sum validated ✓ (∆ = staff, $0.9M).
- Room Segments: Corp $4.2M + Leisure $3.5M + Group $2.3M + OTA $1.9M = $11.9M. Maps to the P&L via adjustments; variance checked (<5%) and stored; recalculated on each data ingestion.
- ORS Daily Bookings: $12.1M. Pre-adjustment, pre-tax.
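One way to picture the map above is as declarative metadata each table carries. This is a hypothetical sketch: the field names (`authority`, `reconciles_to`, `known_adjustment`, `variance_tolerance`) are illustrative, not the real schema; the figures are from the example:

```python
# Hypothetical semantic-map entries for the three sources above.
# Field names are invented for illustration; figures from the example.
SEMANTIC_MAP = {
    "pl_room_revenue": {
        "authority": "authoritative",   # wins when sources disagree
        "grain": "monthly",
        "total_usd_m": 12.8,
    },
    "room_segments": {
        "authority": "breakdown",
        "reconciles_to": "pl_room_revenue",
        "known_adjustment": {"staff_bookings_usd_m": 0.9},
        "variance_tolerance": 0.05,     # flag if unexplained gap > 5%
        "total_usd_m": 11.9,
    },
    "ors_bookings": {
        "authority": "transactional",
        "caveats": ["pre-adjustment", "pre-tax"],
        "total_usd_m": 12.1,
    },
}

# The breakdown plus its known adjustment explains the authoritative total.
seg = SEMANTIC_MAP["room_segments"]
explained = seg["total_usd_m"] + seg["known_adjustment"]["staff_bookings_usd_m"]
assert round(explained, 1) == SEMANTIC_MAP["pl_room_revenue"]["total_usd_m"]
```

With this in place, the $0.9M gap stops being drift and becomes a stored, named adjustment the AI can cite.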

What the Semantic Integrity Layer does

1. Cross-validate. Total (P&L): $12.8M. Segments: $4.2M + $3.5M + $2.3M + $1.9M, ∑ = $11.9M ✓, with the ∆ of $0.9M accounted for. Runs for every property, every month. Numbers add up; variances are explained.
2. Maintain continuously. Apply corrections (scale, adjust, reconcile). Store variances (track drift over time). Flag issues (stale data, anomalies, breaks). Runs on every data ingestion: automated, no manual work, always current.
3. Guide the AI. Data map: authority, hierarchy, known variances. Business glossary: RevPAR = Room Rev / Available Rooms; ADR, Occ%, TRevPAR, GOPPAR. Semantic understanding: fiscal periods, seasonality, currency. The AI navigates the map, not the raw tables.
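The cross-validation step reduces to one question: does the breakdown, plus its known adjustments, land within tolerance of the authoritative total? A minimal sketch with the example's figures (the function and its signature are illustrative):

```python
def cross_validate(authoritative: float, parts: list[float],
                   known_adjustments: float = 0.0,
                   tolerance: float = 0.05) -> dict:
    """Compare an authoritative total against a breakdown, after applying
    known adjustments, and report whether the gap is within tolerance."""
    breakdown = sum(parts)
    unexplained = authoritative - breakdown - known_adjustments
    return {
        "breakdown": round(breakdown, 2),
        "unexplained": round(unexplained, 2),
        "ok": abs(unexplained) <= tolerance * authoritative,
    }

# July figures from the example: $0.9M is the known staff-bookings adjustment.
result = cross_validate(12.8, [4.2, 3.5, 2.3, 1.9], known_adjustments=0.9)
# With the adjustment applied, nothing is left unexplained.
```

Running a check like this on every ingestion, for every property and month, is what turns "the numbers disagree" into "the variance is known and stored".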

Architecture — fully isolated per client

Client Environment
- Your Data Warehouse: the layer reads raw data and writes back a clean model.
- Dedicated Environment, fully isolated per client: Data Engineering continuously maintains the semantic layer; the AI Analyst Stack holds the Semantic Integrity Layer (hierarchy + mappings + validation, source authority rules), the Query Engine, and Guardrails; the Application Layer sits on top.
- LLM Provider: model-agnostic, zero data retention, API calls only, no customer data stored.

Layers between AI and data

"What was our revenue last month?"

- Guardrails: is the question allowed? Does the user have access? Is the scope within bounds?
- Semantic Layer: "Revenue" → room_revenue_net. Resolves ambiguity, enforces metric definitions, maps to the authoritative source.
- Query Engine: generates validated SQL against only permitted tables and columns; parameterized, injection-proof.
- AI-Optimized Data Model.
- Grounding & Provenance Tracking: every answer is traced to a real query result; citations link to source data; a full audit trail runs from answer back to raw data.
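The layers above can be caricatured as a tiny pipeline. Everything here is a simplified sketch: the glossary, table allow-list, and SQL template are invented for illustration, not the real system:

```python
# Sketch of guardrails -> semantic resolution -> parameterized SQL.
# Glossary, allow-list, and SQL template are illustrative only.
GLOSSARY = {"revenue": ("room_revenue_net", "pl_monthly")}  # term -> (column, table)
ALLOWED_TABLES = {"pl_monthly"}

def answer(question_term: str, user_has_access: bool) -> tuple[str, list]:
    # Guardrails: access check before anything touches the data.
    if not user_has_access:
        raise PermissionError("user lacks access to this scope")
    # Semantic layer: resolve the business term to its authoritative column.
    if question_term not in GLOSSARY:
        raise ValueError(f"unknown metric: {question_term}")
    column, table = GLOSSARY[question_term]
    # Query engine: only permitted tables, parameterized filters
    # (values travel separately from the SQL text, so no injection).
    if table not in ALLOWED_TABLES:
        raise PermissionError(f"table not permitted: {table}")
    sql = f"SELECT SUM({column}) FROM {table} WHERE month = %s"
    return sql, ["last_month"]

sql, params = answer("revenue", user_has_access=True)
```

The point of the layering: by the time SQL exists, the term is already resolved and the scope already authorized, so the model never free-associates over raw tables.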

Generic copilot vs our AI

"How did our RevPAR compare across regions last quarter, and which properties are underperforming vs budget?"

Generic copilot

- Scans all available tables.
- Query Engine: ad-hoc pattern matching and keyword similarity.
- Provenance Tracking: generic ("Based on..." / "I found...") with no guaranteed link to an authoritative source.
- Result:
  • "RevPAR" → averages property RevPARs (wrong)
  • "region" → picks whichever table has a region field
  • "last quarter" → calendar quarter, not fiscal quarter
  • "budget" → may pull forecast instead of budget

Closest match ≠ correct match.

Our AI

- Starts from the semantic map.
- Semantic Resolution: "RevPAR" → room_revenue_net / available_rooms; "region" → property.region via the portfolio hierarchy; "last quarter" → fiscal Q (Oct-Dec); "budget" → approved_budget table.
- Query Engine: validated SQL from known tables, known joins, known granularity.
- Provenance Tracking: source is the authoritative P&L → table.column → row-level citation.
- Result: grounded and auditable. Correct metric, correct source, correct granularity.

Defaults to the map, not the guess.
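The first failure mode above (averaging property-level RevPARs) is worth seeing in numbers. A small sketch with made-up figures for two properties of different sizes:

```python
# Two hypothetical properties; figures are invented for illustration.
# available_rooms = available room-nights for the period.
properties = [
    {"name": "A", "room_revenue_net": 100_000, "available_rooms": 1_000},
    {"name": "B", "room_revenue_net": 300_000, "available_rooms": 500},
]

# Wrong: average each property's RevPAR (what keyword matching tends to do).
# This weights a 500-room property the same as a 1,000-room property.
naive = sum(p["room_revenue_net"] / p["available_rooms"]
            for p in properties) / len(properties)

# Right: apply the glossary definition at portfolio level,
# RevPAR = room_revenue_net / available_rooms over the whole set.
correct = (sum(p["room_revenue_net"] for p in properties)
           / sum(p["available_rooms"] for p in properties))

# naive = (100 + 600) / 2 = 350; correct = 400_000 / 1_500 ≈ 266.67
```

Same tables, same question, a roughly 30% difference in the reported number, which is exactly why the metric definition has to live in the map rather than in the model's guess.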

How it fits — nothing changes

Your existing warehouse (Fabric / Power BI): the Lakehouse holds P&L, Segments, and ORS, and your dashboards read from it as before. The Semantic Integrity Layer (reconcile, auto-fix, quality flags, authority rules) passes verified data + context to the AI Engine (query engine, guardrails, semantic resolution), which returns answers + citations to Copilot / Power BI. Consistent numbers across every view and answer.

Additive, not a replacement.

Your warehouse stays as-is.

Your Power BI dashboards stay as-is.

The layer reads, validates, and serves.