
From 500 Dashboards to Self-Service: AI-Driven BI Migration and Autonomous Analytics

Project timeline

Discovery Phase 1

Metadata Inventory and Translation Layer

The engagement began with a comprehensive extraction of the entire Looker instance — every dashboard, Look, board, usage statistic, scheduled export, calculated field, filter, and configuration setting. In parallel, the team mapped which Snowflake data models each asset queried.

This produced a complete metadata replica of the BI platform: the foundation for automated migration, verification, and the downstream semantic layer.
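
As a concrete illustration, a bulk extraction along these lines can be scripted with the official looker_sdk Python client; the field selections and the output path below are assumptions for the sketch, not the team's actual tooling.

```python
# Sketch: bulk-extract Looker assets into a single metadata inventory file.
# Authentication comes from looker.ini or environment variables; the selected
# fields and the output path are illustrative.
import json

import looker_sdk

sdk = looker_sdk.init40()

inventory = {
    "dashboards": [
        {"id": d.id, "title": d.title}
        for d in sdk.all_dashboards(fields="id,title")
    ],
    "looks": [
        {"id": lk.id, "title": lk.title, "query_id": lk.query_id}
        for lk in sdk.all_looks(fields="id,title,query_id")
    ],
    "boards": [
        {"id": b.id, "title": b.title}
        for b in sdk.all_boards(fields="id,title")
    ],
    "scheduled_plans": [
        {"id": s.id, "name": s.name}
        for s in sdk.all_scheduled_plans()
    ],
}

with open("looker_inventory.json", "w") as f:
    json.dump(inventory, f, indent=2, default=str)
```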

With the inventory complete, the team designed a translation layer mapping every Looker component to an equivalent custom Streamlit component — defining what the AI pipeline would target.
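
A minimal sketch of what such a translation layer might look like as a declarative registry, keying Looker visualization types to Streamlit renderers; the renderer implementations and type coverage here are illustrative assumptions, not the team's actual mapping.

```python
# Sketch: declarative Looker -> Streamlit translation registry.
# The type strings are common Looker visualization identifiers; the renderers
# are simplified stand-ins for the custom Streamlit components.
from dataclasses import dataclass
from typing import Callable

import pandas as pd
import streamlit as st


@dataclass(frozen=True)
class Translation:
    looker_type: str                        # type from the metadata inventory
    render: Callable[[pd.DataFrame], None]  # Streamlit equivalent


def render_single_value(df: pd.DataFrame) -> None:
    st.metric(label=df.columns[0], value=df.iloc[0, 0])


def render_grid(df: pd.DataFrame) -> None:
    st.dataframe(df, use_container_width=True)


def render_line(df: pd.DataFrame) -> None:
    st.line_chart(df.set_index(df.columns[0]))


TRANSLATION_LAYER: dict[str, Translation] = {
    t.looker_type: t
    for t in (
        Translation("single_value", render_single_value),
        Translation("looker_grid", render_grid),
        Translation("looker_line", render_line),
    )
}
```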

Key Inputs

  • 500+ Looker dashboards with full metadata extraction
  • Snowflake data model usage mapped to each BI asset
  • Component-level translation layer from Looker to Streamlit
  • HIPAA/HITECH and SOC 2 compliance requirements scoped

Discovery Phase 2

Translation Layer

Weeks 2-3

Component-by-component mapping from Looker to custom Streamlit equivalents on Snowflake

Build Phase 1

AI-Driven Migration Pipeline

The core intervention was an LLM-driven pipeline using Claude Code to automatically translate Looker dashboards into Streamlit applications. Starting from a 10% zero-shot success rate on small batches, the team iterated over several weeks — restructuring session architecture, developing compact context encodings informed by Anthropic's research on long-running LLM applications, and increasing pipeline modularity from 3 to 12 parallel steps.
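
A highly simplified sketch of the pipeline shape this describes: modular steps that each receive a compact JSON context rather than a full conversation transcript, with Claude Code invoked headlessly per step. The step names, the context encoding, and the prompt wording are assumptions for illustration.

```python
# Sketch: modular migration steps, each driven by a headless Claude Code call
# with a compact JSON context. Step names and context format are illustrative;
# the production pipeline grew to 12 modular steps.
import asyncio
import json

STEPS = [
    "parse_lookml",
    "map_components",
    "generate_streamlit",
    "wire_filters",
    "self_check",
]


async def run_step(step: str, context: dict) -> dict:
    # Compact encoding: minified JSON keeps per-step context small, one of the
    # levers behind the improved zero-shot success rate.
    prompt = f"Run migration step '{step}' on: {json.dumps(context, separators=(',', ':'))}"
    proc = await asyncio.create_subprocess_exec(
        "claude", "-p", prompt,
        stdout=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()
    context[step] = out.decode().strip()
    return context


async def migrate_dashboard(dashboard_id: str) -> dict:
    context: dict = {"dashboard_id": dashboard_id}
    for step in STEPS:
        context = await run_step(step, context)
    return context
```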

Throughput management proved critical: migrating 100 dashboards concurrently degraded quality versus batches of 50, so the team introduced rate limiting to keep migrations autonomous while preserving accuracy. The final zero-shot success rate reached 65%.
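
Holding concurrency at the quality-preserving batch size is a one-semaphore change, assuming the migrate_dashboard coroutine sketched above.

```python
# Sketch: rate-limit autonomous migrations to the batch size (50) that held
# quality; 100 concurrent migrations measurably degraded output.
import asyncio

MAX_CONCURRENT = 50


async def migrate_all(dashboard_ids: list[str]) -> list[dict]:
    gate = asyncio.Semaphore(MAX_CONCURRENT)

    async def gated(dashboard_id: str) -> dict:
        async with gate:
            return await migrate_dashboard(dashboard_id)

    return await asyncio.gather(*(gated(d) for d in dashboard_ids))
```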

Verification

Two layers protected migration quality:

  • SQL verification via an MCP server on Snowflake, enabling query validation through Claude Code and Cursor
  • Visual verification via headless browser automation, comparing Looker filter interactions against Streamlit equivalents (see the sketch after this list)
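
For the visual layer, a minimal version of the screenshot comparison could look like the following, here using Playwright and Pillow; the URLs, the shared viewport, and the pixel-difference threshold are all assumptions.

```python
# Sketch: compare a Looker dashboard and its Streamlit port under the same
# filter state via headless screenshots (URLs and threshold are illustrative).
from PIL import Image, ImageChops
from playwright.sync_api import sync_playwright


def screenshot(url: str, path: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page(viewport={"width": 1600, "height": 1000})
        page.goto(url, wait_until="networkidle")
        page.screenshot(path=path)  # fixed viewport keeps image sizes equal
        browser.close()


def visually_equivalent(looker_url: str, streamlit_url: str) -> bool:
    screenshot(looker_url, "looker.png")
    screenshot(streamlit_url, "streamlit.png")
    diff = ImageChops.difference(
        Image.open("looker.png").convert("RGB"),
        Image.open("streamlit.png").convert("RGB"),
    )
    # Tolerate small anti-aliasing differences per color band.
    return all(band_max < 16 for _, band_max in diff.getextrema())
```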

All 500+ assets were migrated within two to three weeks of active migration work.

Build Phase 2

AI Migration Pipeline

Weeks 5-10

LLM-driven migration using Claude Code — iterated from a 10% to a 65% zero-shot success rate with a modular 12-step parallel architecture

Build Phase 3

Multi-Modal Verification

Weeks 6-10

Headless browser automation comparing Looker filter interactions against Streamlit equivalents for visual verification

Semantic Layer & Cortex Agents

Weeks 11-16

1,000–2,000 verified queries transformed into a semantic layer powering 12 domain-driven self-service Cortex agents

Enable

Phased User Migration

Weeks 10-13

Stepwise training and Looker deactivation across business teams to prevent a thundering-herd cutover

Evaluation & Auto-Evolution

Weeks 14-18

Tracking system with automated daily PRs extending the semantic layer based on usage gaps

Deliver

Self-Service Analytics and Platform Handoff

With migration complete, Narona Data transformed the Looker metadata into a semantic layer for 12 domain-driven Cortex agents on the Snowflake Intelligence platform. Between 1,000 and 2,000 verified queries — each representing a real business question answered by a validated analyst query — trained these agents.
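
The mechanics of folding a validated analyst query into the semantic layer reduce to roughly the following; the YAML field names track Snowflake's published Cortex Analyst semantic-model spec, but the input record layout is an assumption.

```python
# Sketch: append validated analyst queries to a Cortex Analyst semantic model
# as verified_queries entries (input record layout is an assumption).
import time

import yaml


def add_verified_queries(model_path: str, validated: list[dict]) -> None:
    with open(model_path) as f:
        model = yaml.safe_load(f)

    model.setdefault("verified_queries", []).extend(
        {
            "name": rec["name"],
            "question": rec["business_question"],  # the question users asked
            "sql": rec["sql"],                     # the analyst-validated query
            "verified_by": rec["analyst"],
            "verified_at": int(time.time()),
        }
        for rec in validated
    )

    with open(model_path, "w") as f:
        yaml.safe_dump(model, f, sort_keys=False)
```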

An evaluation system tracked inquiry volume, session frequency, and duration. An automated daily PR process extended the semantic layer based on gaps identified from experienced user sessions, ensuring the platform evolved with use.
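
The daily PR mechanism amounts to a scheduled job along these lines; gap detection is elided here, and the branch naming and GitHub CLI usage are assumptions for the sketch.

```python
# Sketch: nightly job that turns detected usage gaps into a semantic-layer PR.
# Gap detection is elided; branch naming and `gh` usage are assumptions.
import datetime
import subprocess


def open_daily_pr(gap_summary: str) -> None:
    today = datetime.date.today()
    branch = f"semantic-layer/auto-{today:%Y-%m-%d}"
    subprocess.run(["git", "checkout", "-b", branch], check=True)

    # ... edit the semantic model YAML here based on the detected gaps ...

    subprocess.run(
        ["git", "commit", "-am", "Extend semantic layer from usage gaps"],
        check=True,
    )
    subprocess.run(["git", "push", "-u", "origin", branch], check=True)
    subprocess.run(
        ["gh", "pr", "create",
         "--title", f"Semantic layer auto-extension {today}",
         "--body", gap_summary],
        check=True,
    )
```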

Measured Outcomes

  • 30% technology cost reduction
  • 50% headcount cost reduction
  • 5x increase in data asks handled
  • Dashboard turnaround from weeks to minutes
  • 12 self-service analytics agents live across business domains
  • Derived data products including improved PHI catalogs
