A recurring question we get is how D3 avoids the "hallucinations" or inconsistencies that plague other AI tools. We see the headlines and hear the stories: an AI confidently presents a number that is completely wrong, or two different users get conflicting answers to the same question. This creates an "AI trust gap," and it's the single biggest blocker to enterprise adoption. The answer, as we covered in the webinar, is that D3 is built on the foundation of Cube Cloud’s universal semantic layer.

LLMs Don’t Understand Structured Data

Let's be direct: Large Language Models are phenomenal at processing text, but they do not inherently understand the complex relationships and business logic within a structured database. You can’t just embed a schema and hope for the best. When a generalist AI tool is connected directly to a data warehouse, it is forced to infer business logic from table and column names, a guessing game that yields confident but inconsistent answers. This is why we say, "Your AI isn’t hallucinating—it’s ungrounded."
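
To make that concrete, here is a hedged sketch of the failure mode. The schema and both queries are invented for illustration. Asked for "revenue last quarter," an ungrounded text-to-SQL model can only guess which column means revenue and which rows count:

```sql
-- Hypothetical warehouse schema; the model sees only names like these:
--   orders(id, amount, created_at, status)
--   order_items(order_id, line_total, quantity)

-- Guess A: "amount" is revenue and every order counts.
SELECT SUM(amount)
FROM orders
WHERE created_at >= '2025-04-01'
  AND created_at <  '2025-07-01';

-- Guess B: revenue lives in line items and refunded orders are excluded.
SELECT SUM(oi.line_total)
FROM order_items AS oi
JOIN orders AS o ON o.id = oi.order_id
WHERE o.created_at >= '2025-04-01'
  AND o.created_at <  '2025-07-01'
  AND o.status <> 'refunded';

-- Both are plausible readings of the schema, and each returns a different,
-- equally confident number. That is what "ungrounded" looks like.
```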

Cube D3 Doesn't Do Text-to-SQL. It Does Text-to-Semantic SQL.

This is the most critical architectural decision we made. Instead of letting an LLM generate raw SQL that runs directly against your warehouse, D3 uses a two-step process that provides essential guardrails.

  1. The Agent Generates Semantic SQL: When a user asks a question, the D3 agent generates Semantic SQL. This is a high-level query against the governed, business-friendly objects in your semantic layer. The agent is required to use your defined measures, dimensions, and views. It cannot invent joins or query raw tables.
  2. The Semantic Layer Compiles Warehouse SQL: The Semantic SQL is then sent to the Cube Cloud runtime engine, which deterministically translates the request into the correct, optimized, and secure SQL for your specific data warehouse. (Both steps are sketched below.)
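
Here is a minimal sketch of step 1, assuming a semantic layer that exposes an orders view with a total_revenue measure and an order_month dimension. Those object names are invented for illustration; Cube’s SQL API addresses measures through the MEASURE() function:

```sql
-- Step 1 (sketch): the agent can only reference governed semantic objects.
SELECT
  order_month,             -- a dimension defined in your model
  MEASURE(total_revenue)   -- a measure defined in your model; the agent does not invent its formula
FROM orders                -- a governed view, not a raw warehouse table
WHERE order_month >= '2025-04-01'
GROUP BY 1;

-- By contrast, a query like the one below simply fails:
-- raw warehouse tables are not exposed through the semantic layer.
-- SELECT SUM(amount) FROM raw_prod.orders_2025;
```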

This architecture means the SQL you see the agent produce is not the final SQL that runs. The final query is enriched with all the necessary logic: correct joins, performance optimizations, and, most importantly, security policies like row-level security and column masking. This two-step process is the key to trusted autonomy. The AI agent is never allowed to "improvise" a query to your database. It can only operate within the consistent, secure, and explainable definitions you’ve established.
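
As a hedged illustration of step 2, here is one shape the compiled warehouse SQL for the query above could take. The exact output depends on your warehouse and your model; every identifier, the join, and the EMEA filter here are assumptions made up for the sketch:

```sql
-- Step 2 (sketch): deterministic compilation into warehouse SQL.
SELECT
  DATE_TRUNC('month', o.created_at)  AS order_month,
  SUM(li.quantity * li.unit_price)   AS total_revenue  -- measure formula pulled from the model
FROM prod.orders AS o
JOIN prod.line_items AS li
  ON li.order_id = o.id              -- join defined once in the model, never improvised
WHERE o.created_at >= '2025-04-01'
  AND o.region = 'EMEA'              -- row-level security injected for this user at compile time
GROUP BY 1;
```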

Guardrails Don’t Slow You Down. They Make Trust Scalable.

This semantic layer acts as a set of guardrails. It ensures that no matter what the user asks, the AI's actions are constrained by your business rules. This is also why we don't need to fine-tune our own LLMs. Fine-tuning is slow, expensive, and locks you into a specific model. Instead, we leverage the best frontier models from providers like OpenAI and Anthropic and allow you to bring your own model (BYOM). The governance and trust are not handled by the LLM; they are handled at the semantic layer. This gives you the flexibility to use the latest AI innovations without ever compromising on data security or consistency.
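
As one last hedged sketch of what “constrained by your business rules” means in practice, consider how the same Semantic SQL could compile differently for two users. All identifiers are invented, the revenue formula is simplified relative to the earlier sketch, and the exact masking strategy is an assumption:

```sql
-- The agent's Semantic SQL, identical for both users (illustrative):
-- SELECT customer_email, MEASURE(total_revenue) FROM orders GROUP BY 1;

-- Compiled for a user with full access:
SELECT c.email AS customer_email,
       SUM(o.amount) AS total_revenue
FROM prod.orders AS o
JOIN prod.customers AS c ON c.id = o.customer_id
GROUP BY c.email;

-- Compiled for a restricted user: the email is masked in the output and a
-- row-level filter is injected. The LLM never decides any of this; the
-- semantic layer applies it at compile time, whichever model you bring.
SELECT '***' AS customer_email,
       SUM(o.amount) AS total_revenue
FROM prod.orders AS o
JOIN prod.customers AS c ON c.id = o.customer_id
WHERE o.region = 'EMEA'
GROUP BY c.email;
```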

Next up: How do you handle analysis that goes beyond your pre-defined model? We'll look at D3's ability to create ad-hoc metrics.


Interested in learning more about D3? Join the waitlist or watch the webinar on-demand.