Building Trustworthy AI Agents: How Domo Helps Customers Reduce Hallucinations

Lee James

Director, Partnerships and Customer Adoption

2 min read
Monday, August 18, 2025

AI agents fabricated information in nearly a quarter of their responses, according to Shopify's 2024 reporting. If you're among the many calling AI agents the future of work, you might want to take note.

We believe AI agents will change how work gets done. But just as strongly, we believe hallucinations still need to be taken seriously. They are a real, practical concern for any business deploying AI at scale.  

Here’s why: One major retailer introduced an inventory planning agent that generated confident sales forecasts based on data that did not exist. The result was costly over-ordering and operational disruption. This wasn't a failure of integration or configuration. The problem was outputs that looked reliable but weren’t grounded in real data.

Why do AI agents hallucinate? 

  1. Few LLMs are trained on the structured business data that could strengthen their accuracy.  

    This risk starts with how most large language models (LLMs) are trained. They rely on open web content such as forums, documentation, and articles; very few are trained on structured business data. That gap makes LLMs skilled at generating language but inconsistent when navigating dashboards, databases, or metrics.
  2. Depending on the task, models vary in accuracy.

    Model version selection also directly impacts accuracy and, as a result, hallucinations. For example, Claude 3.5 Sonnet outperformed Claude 3.7 in reasoning, coding, and executing real-world tasks, according to Datacamp’s benchmark testing. Claude 3.5 produced more accurate results when handling text-to-SQL queries and summarizing structured data.

    Claude 3.7 was tuned for longer code and logic generation, often generating verbose responses that lacked alignment with business queries. Benchmarks such as Spider confirm that structured data queries require deliberate model alignment and validation. 

6 tools Domo uses to reduce AI agent hallucination 

Our goal is to help you build trustworthy agents. We do so by providing a suite of tools, including Agent Catalyst, AI Readiness, and AI Agent Task.

Agent Catalyst and the AI Service Layer  

Our Agent Catalyst platform addresses hallucinations by bringing structure, flexibility, and oversight into the design of agents. At its core is the AI Service Layer. This layer translates business questions into executable queries and runs them directly against governed data sets. Answers are returned based on real-time values, not model memory, which helps ensure accuracy and accountability.  
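As a rough illustration of that idea (an assumed pattern, not Domo's actual AI Service Layer implementation), the key move is that answers come from executing a vetted query against live data, and anything outside the governed mapping is refused rather than guessed:

```python
import sqlite3

# Hypothetical question-to-query mapping; in a real system this layer would
# translate natural language into governed queries, but the principle is the
# same: answers are read from live rows, never from model memory.
QUERY_TEMPLATES = {
    "total q2 revenue": "SELECT SUM(revenue) FROM sales WHERE quarter = 'Q2'",
}

def grounded_answer(conn, question):
    sql = QUERY_TEMPLATES.get(question.lower().strip())
    if sql is None:
        # Refuse instead of letting a model improvise a number.
        return "I can't answer that from governed data."
    (value,) = conn.execute(sql).fetchone()
    return f"{question}: {value}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (quarter TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("Q1", 100.0), ("Q2", 150.0), ("Q2", 50.0)])

print(grounded_answer(conn, "Total Q2 revenue"))   # Total Q2 revenue: 200.0
print(grounded_answer(conn, "Next year's sales"))  # refusal, not a guess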

Model Flexibility 

As a customer, you have full control over model selection and deployment. The platform supports both proprietary and third-party models, and you can test and evaluate each against the specific task. Models can be selected at the prompt level, integrated with existing infrastructure, or managed within Domo’s environment. This flexibility allows businesses to align their models with their data and their compliance standards. 

AI Readiness  

Once data is prepared, Domo’s AI Readiness comes into play. This feature allows teams to annotate data fields, define synonyms, and configure access controls. Natural language inputs such as “monthly profit” are mapped to fields such as “NetMargin_Pct”, avoiding misinterpretation. Defining how data should be understood before an agent is deployed removes significant ambiguity when asking the LLM to analyze data.
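A minimal sketch of that synonym-mapping idea (illustrative field names in the spirit of the “monthly profit” example above, not Domo's schema or API): business phrases are resolved to governed column names before any query is built, and unknown phrases fail loudly instead of being guessed at.

```python
# Hypothetical field glossary: each business phrase resolves to exactly one
# governed column name, so the agent never has to guess which field is meant.
FIELD_SYNONYMS = {
    "monthly profit": "NetMargin_Pct",
    "net margin": "NetMargin_Pct",
    "sales": "GrossRevenue_USD",
}

def resolve_field(phrase):
    """Map a business phrase to its governed column, or fail explicitly."""
    key = phrase.lower().strip()
    if key not in FIELD_SYNONYMS:
        raise KeyError(f"'{phrase}' is not defined in the glossary")
    return FIELD_SYNONYMS[key]

print(resolve_field("Monthly Profit"))  # NetMargin_Pct
```

The design choice worth copying is the explicit `KeyError`: an undefined phrase should surface as a gap in the glossary to be fixed, not as an opportunity for the model to improvise.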

AI Agent Task 

Within Domo Workflows, the AI Agent Task brings these tools together. Users can configure inputs, select models, define instructions, and preview outputs before agents are put into use. Common tools such as AI summarization, AI forecasting, and data classification are integrated into this environment. This task allows you to test, refine, and validate agent behavior. 
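The preview-before-deploy step can be sketched as a simple dry-run harness (an assumed pattern, not the AI Agent Task API): run the configured task over sample inputs and collect every output and failure for human review before anything goes live.

```python
# Hypothetical dry-run wrapper: exercise a task on sample inputs and gather
# outputs (and errors) for review instead of deploying it untested.
def preview_task(task_fn, sample_inputs):
    results = []
    for inp in sample_inputs:
        try:
            results.append(("ok", task_fn(inp)))
        except Exception as exc:  # surface failures rather than shipping them
            results.append(("error", repr(exc)))
    return results

# Stand-in for a summarization task (illustrative, not a Domo tool).
summarize = lambda text: text[:40] + ("..." if len(text) > 40 else "")

for status, out in preview_task(summarize, ["Q2 revenue grew 12% across EMEA."]):
    print(status, out)
```

Catching exceptions and recording them as results, rather than letting the preview crash, means reviewers see exactly how the task behaves on awkward inputs.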

AI agent onboarding 

To help customers deploy agents, we’ve developed an AI agent onboarding methodology that helps teams move from proof-of-concept to production with clarity and governance. It starts with defining use cases and assessing your readiness. From there, we test prompts, review outcomes, and apply governance mechanisms, including thresholds for review or fallback. This structured approach helps you move both quickly and confidently. 
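The review and fallback thresholds mentioned above can be sketched as a simple routing gate (an assumed pattern with illustrative threshold values; Domo's actual governance mechanisms may differ): high-confidence outputs proceed automatically, mid-confidence outputs go to a human, and everything else falls back.

```python
from dataclasses import dataclass

@dataclass
class AgentOutput:
    answer: str
    confidence: float  # 0.0-1.0, from whatever evaluator you attach

def route(output, auto_threshold=0.9, review_threshold=0.5):
    """Decide what happens to an agent output based on confidence."""
    if output.confidence >= auto_threshold:
        return "auto-approve"
    if output.confidence >= review_threshold:
        return "human-review"
    return "fallback"  # e.g. a canned response or escalation path

print(route(AgentOutput("Q2 revenue was $4.2M", 0.95)))   # auto-approve
print(route(AgentOutput("Forecast: +300% growth", 0.40))) # fallback
```

Tuning the two thresholds is where governance lives: a lower `auto_threshold` trades human effort for speed, and the gap between the thresholds defines how much lands in the review queue.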

Trust must be embedded to reduce AI hallucinations 

Trust is not an accessory, as I argue in my book, In the Loop: The Human Touch in AI: Trust, Oversight and Leadership. We have to embed it from the start. Domo enables this by supporting approval workflows, output reviews, and escalation paths. These safeguards allow human oversight to be present at every stage, without compromising performance. 

Building trustworthy agents isn’t about chasing the perfect model. It is about having the right foundation. With Domo, you can apply consistent structure to model selection, data preparation, prompt design, and oversight. The result is not only fast responses but dependable and trustworthy action. 

If you’d like to dig deeper into AI, check out Domo’s limited series, The Future of AI. In particular, this episode speaks to building confidence in AI. If you’ve read this far, I think you’ll appreciate it. 

Author

Lee James
Director, Partnerships and Customer Adoption

Lee James is a senior partner leader with over 20 years of experience building high-impact ecosystems across enterprise SaaS, cloud, and AI. He specializes in unlocking revenue growth through strategic alliances, joint solutions, and partner-led delivery models, with deep expertise in Snowflake, Databricks, AWS, and specialist AI providers. Currently leading strategic partnerships and AI adoption for EMEA at Domo, he has built a multi-agent AI portfolio deployed with partners while delivering net new Snowflake and Databricks consumption. Previously, he held senior roles at AWS working in Strategic Accounts.

Tags
AI
Data Governance