AI Models: Types, Examples, and Everything You Need to Know

AI models sit behind spam filters, chatbots, fraud detection systems, and a lot of the "smart" features showing up in business tools right now. If you understand what a model is (and what it isn't), you'll make clearer calls when evaluating AI for your organization.
This article covers the basics without hand-waving: what AI models actually are, how training creates the weights that drive predictions, the practical differences between supervised, unsupervised, and reinforcement learning, and what it takes to move a model from prototype to production.
Key takeaways
A few things are worth keeping close:
- An AI model is a program trained on data to recognize patterns and make decisions or predictions without human intervention
- The main types include supervised learning, unsupervised learning, reinforcement learning, and foundation models like large language models
- Training an AI model involves data preparation, model selection, training, testing, and deployment
- Organizations use AI models to automate tasks, identify trends, make predictions, and drive business value
AI is changing the landscape for every industry. The organizations that learn how to use it well will pull away from the ones that treat it like a side project.
You don't have to be a massive consulting firm to put AI models to work. Teams of all sizes use them to spot trends in the data, make predictions and draw conclusions, find and correct inefficiencies, build workflows, and handle other tasks that meaningfully improve day-to-day operations.
One quick heads-up before we get into it: in real organizations, the hard part usually isn't the AI model itself. Getting the model into production with the right data access, security, monitoring, and auditability is where things get complicated. And honestly, that's the part most guides skip over entirely.
So let's get specific about what an AI model is, how it works, and what it looks like once it leaves a demo notebook and has to behave in the real world.
What is an AI model?
Start with the output. You give a model an input, and it returns a result: classification, prediction, recommendation, generated text, you name it.
Under the hood, a model is a program trained on data to recognize patterns and make decisions or predictions on its own. More precisely, training produces a set of numerical values called weights or parameters that the model uses to generate outputs when it receives new input.
Here's a simple example. A spam classifier takes an email as input and returns "spam" or "not spam" as output. During training, the model learned which patterns (certain words, sender characteristics, formatting) correlate with spam. Those learned patterns are stored as weights. When a new email arrives, the model applies those fixed weights to decide whether it's spam.
AI models are usually trained for a specific job. Some common tasks include compiling marketing campaign reports, generating computer code, recognizing letters and numbers in text, and entering data. More training data generally improves accuracy, but it can also bake in the quirks of that data. "More" is not a substitute for "right."
Models are loosely inspired by how humans think, and sometimes the results feel uncannily human. Chatbots can hold convincing conversations. At the same time, models can beat human capacity in brute-force ways: processing huge volumes of data quickly and detecting patterns that a person would never see.
This is a shift from traditional descriptive analytics. Dashboards and static reports tell you what happened. Models can estimate what happens next and suggest what to do about it.
Algorithms vs. AI models
People use "algorithm" and "model" interchangeably in casual conversation. In practice, they're different parts of the same workflow.
An algorithm is a set of rules or instructions that tells a computer how to solve a problem. In the context of AI, algorithms are the training methods that teach a model to recognize patterns. Think of an algorithm as a recipe.
An AI model is what you get after running that algorithm on data. It's the learned artifact containing the weights and parameters that enable predictions. The finished dish, not the recipe.
A pipeline or system is the full production environment: data inputs, prompts, retrieval mechanisms, tools, safety guardrails, user interface, and monitoring. In enterprise settings, this system layer is also where credential handling, access permissions, and auditing typically live. Think of this as the complete restaurant operation.
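To make the recipe/dish split concrete, here's a minimal sketch in Python. The function names and the toy threshold "algorithm" are invented for illustration; the point is only that the algorithm is a procedure, while the model is the artifact that procedure produces.

```python
# Toy illustration of the algorithm/model split (hypothetical names).
# The "algorithm" is the training procedure; the "model" is the learned artifact.

def train_threshold_classifier(examples):
    """Training algorithm: find the score threshold that best separates labels.

    examples: list of (score, label) pairs, where label is 0 or 1.
    Returns a model -- here just a dict holding one learned parameter.
    """
    best_threshold, best_correct = 0.0, -1
    for t in sorted(score for score, _ in examples):
        correct = sum((score >= t) == bool(label) for score, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return {"threshold": best_threshold}  # the trained model

def predict(model, score):
    """Inference: apply the learned parameter to new input."""
    return 1 if score >= model["threshold"] else 0

data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
model = train_threshold_classifier(data)  # run the algorithm -> get a model
print(predict(model, 0.7))                # -> 1
```

The pipeline/system layer would wrap this in everything else: where `data` comes from, who's allowed to call `predict`, and how its outputs get logged.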
AI model vs. AI application (what ChatGPT actually is)
Is ChatGPT an AI model? Not exactly.
ChatGPT is an AI application built on top of one or more large language models. The distinction matters because what you interact with includes more than the language model itself. Here's the breakdown:
- The underlying language model (GPT-4 or similar) that generates text based on learned patterns
- Additional systems layered on top, including web browsing, code execution, file retrieval, and content safety filters
- The user interface that lets you type questions and receive responses
The model is the engine. ChatGPT is the complete vehicle with steering, brakes, dashboard, and navigation system. The same applies to other AI applications: Google's Gemini app, Anthropic's Claude interface, and Microsoft's Copilot are all applications built on underlying models. The packaging of model access, governance, and tooling varies, which is why teams often choose Domo when they need tighter control across the full system.
You'll see the same pattern inside analytics tools, too. Domo BI includes AI Chat and natural language query experiences that sit on top of underlying AI models, with a governed semantic layer and certified metrics helping keep answers grounded in consistent business logic.
AI vs. machine learning vs. deep learning
These terms get tossed around like they're synonyms. They're not, and the distinction matters more than people realize.
Machine learning and deep learning are subsets of artificial intelligence. People often over-attribute "AI" outcomes to the wrong piece of the stack, which creates real confusion when something goes wrong and teams are trying to figure out what to fix.
Think of these concepts like a castle. Artificial intelligence is the foundation. On that foundation, towers and spires like machine learning and deep learning are built to serve specific purposes, but they are all part of the same structure.
Artificial intelligence
Artificial intelligence covers systems that perform tasks we associate with human intelligence: pattern recognition, planning, decision support.
AI is heavily pattern-based and logical. It's not great at human judgment calls or values-driven nuance, but it shines at predictable and repetitive tasks like organizing data, running chatbots, and identifying trends. A common trap: expecting AI to fix messy definitions. If your organization cannot agree on what "active customer" means, the model will not magically resolve that for you.
Machine learning
Machine learning is where the system learns patterns from examples instead of being explicitly programmed with rules.
Sometimes it uses historical trends to make itself more accurate. Other times it looks for relationships and makes predictions about what will happen next. That's how you get things like credit fraud detection, breach detection based on past attacks, device diagnostics, and predictions about equipment failure based on usage patterns.
Train on yesterday's world and deploy into today's, and your "accurate" model quietly becomes a liability. The data generating process changes, the model doesn't, and nobody notices until something breaks.
Deep learning
Deep learning is a subset of machine learning built around neural networks with many layers. That depth helps models learn higher-level representations, useful when the raw input is complex, like images, audio, or natural language.
Because the model can represent ambiguity rather than only binary choices, deep learning supports things like art generation, sentiment analysis across large volumes of text, speech recognition, and natural-sounding translation. "Deep" does not automatically mean "better" for every business problem. For many tabular prediction tasks, classic models can be easier to debug and just as effective.
How do AI models work?
You've seen the magic trick: type a question into ChatGPT, get an answer back.
Between the input (your question) and the output (the response), the model runs a lot of math, fast. Although each AI model is different depending on the job, the overall flow is fairly consistent.
Teams start with a dataset and a goal: what the model should produce, and what "good" looks like. Here's something people learn the hard way: the goal needs to match the decision you actually plan to make. Otherwise you end up optimizing a metric that doesn't change outcomes.
Next comes training. Data moves into the model, the model makes guesses, and the training algorithm adjusts internal values to reduce errors. In neural network models, people often call the internal units "nodes," and the web of connections between them is what we refer to as a neural network.
Put differently: an algorithm defines how learning happens; training runs that algorithm on your data; the trained result is the model. Most teams spend more time looping on data quality and problem framing than on the first model run. That's normal.
Once trained, the model takes new input and produces an output. If outputs aren't accurate enough, teams typically revisit the data, features, objective function, or model choice, not just "add more data" and hope.
Weights, parameters, and inference explained
Weights are where the learning ends up. During training, the model adjusts millions (or billions) of numerical values called weights or parameters, and those values shape how strongly different patterns influence the output.
Think of it like tuning a mixing board in a recording studio. During rehearsal (training), the sound engineer adjusts sliders and knobs until the mix sounds right. Once the settings are dialed in, they're locked for the live performance. The model's weights work the same way: training adjusts them, and once training is complete, those weights are fixed.
Inference is what happens after training: the model uses its fixed weights to generate an output from a new input. When you ask ChatGPT a question, the model isn't learning anything new in that moment. It's applying its pre-trained weights to your prompt and computing a response.
A tiny example: a spam classifier might learn during training that the word "free" adds +2.3 to the spam score while "meeting" adds -1.1. At inference time, the model sums these weights for a new email and checks whether the total crosses a threshold. In production, you usually tune that threshold to balance false positives and false negatives for your specific workflow. Treating it as universal truth is a mistake teams make more often than they should.
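The spam-score idea above can be sketched in a few lines of Python. The weights and threshold below are the illustrative numbers from the text, not values from a real spam filter; a production classifier would use far more features and a tuned threshold.

```python
# Sketch of inference with fixed weights, using the toy numbers from the text.
# Weights and threshold here are illustrative, not from a real spam filter.

WEIGHTS = {"free": 2.3, "meeting": -1.1}  # learned during training, now fixed
THRESHOLD = 1.0                           # tuned per workflow, not universal

def spam_score(email_text):
    words = email_text.lower().split()
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def is_spam(email_text):
    return spam_score(email_text) > THRESHOLD

print(is_spam("claim your free prize"))    # "free" adds 2.3 -> True
print(is_spam("meeting agenda for today")) # "meeting" adds -1.1 -> False
```

Notice that nothing here learns at inference time: the model just applies fixed weights to new input, exactly as described above.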
Types of AI models
Not all models learn the same way. The "right" type depends on whether you have labels, what kind of output you need, and how the model will be used.
There are many different types of AI models. The three main types are supervised learning, unsupervised learning, and reinforcement learning. Each is useful for a different purpose.
Supervised learning
If you have examples with the "right answer," supervised learning is usually where you start.
Supervised machine learning is when a model learns from labeled datasets to make predictions or decisions. Teams label examples and define features and target variables so the model learns the mapping from inputs to outputs. Structure first, then refinement.
Companies use supervised learning for tasks like speech and text recognition, regression analysis, spam filters, fraud detection, k-nearest neighbors (KNN) algorithms, and random forest algorithms. The labeling step is where things go quietly wrong: if labels are based on what's easiest to collect rather than what you truly care about, the model will faithfully learn that mess.
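Since KNN came up, here's a minimal supervised learner: a 1-nearest-neighbor classifier in pure Python. The points and labels are made up; real teams would use a library like scikit-learn, but the core idea fits in one function.

```python
# A minimal supervised learner: 1-nearest-neighbor on labeled 2-D points.
# Data and labels are synthetic, for illustration only.

def nearest_neighbor_predict(labeled_points, query):
    """labeled_points: list of ((x, y), label); query: (x, y).

    Returns the label of the closest training example.
    """
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(labeled_points, key=lambda item: dist2(item[0], query))
    return label

training_data = [((1, 1), "low risk"), ((1, 2), "low risk"),
                 ((8, 8), "high risk"), ((9, 7), "high risk")]
print(nearest_neighbor_predict(training_data, (2, 2)))  # -> "low risk"
```

The labeled pairs are the "right answers" the section describes: the model never sees a rule for risk, only examples of it.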
Unsupervised learning
No labels? That's not a dead end.
In unsupervised learning, the AI model has its "training wheels" taken off. There is no human guidance, no labeled datasets, no predefined answers. The model looks for structure on its own, which is why unsupervised approaches often show up in clustering, segmentation, and anomaly detection.
Many use cases involve trend analysis, grouping sentiments of social media posts, identifying traffic patterns, and discovering inefficiencies in manufacturing processes. Just don't confuse clusters with causes. Unsupervised learning can tell you what groups exist, but it won't tell you why they exist without follow-up analysis.
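A tiny k-means sketch shows what "structure on its own" looks like: given only a list of order values and no labels, the algorithm finds two natural groups. The customer-order framing and all the numbers are invented for illustration; real segmentation would use many features and a proper library.

```python
# A tiny 1-D k-means sketch: find two clusters with no labels.
# Values are synthetic; real segmentation uses more features and a library.

def kmeans_1d(values, iterations=10):
    c1, c2 = min(values), max(values)  # crude initialization
    for _ in range(iterations):
        # assign each value to its nearest centroid, then recompute centroids
        group1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        group2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1 = sum(group1) / len(group1)
        c2 = sum(group2) / len(group2)
    return sorted([c1, c2])

# e.g. order values per customer: two segments emerge on their own
order_values = [12, 15, 14, 210, 195, 205]
print(kmeans_1d(order_values))  # roughly [13.7, 203.3]
```

Note the caution from the text applies directly: the code tells you two groups exist, but nothing about why, and it will happily find two clusters even in data that has three, or none.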
Reinforcement learning
Reinforcement learning is about learning by doing, and paying attention to feedback.
An AI model is given a goal and put in a situation where it must try to reach it. For example, a model can be programmed to pick stocks with the goal of maximizing returns. The model learns through trial and error. When it succeeds, there's a reward. When it fails, there's a penalty, which reinforces the behaviors that worked.
As the model builds up experience, it can more accurately predict which actions are likely to lead to better outcomes. In real business settings, the misuse to watch for is defining rewards that accidentally incentivize the wrong behavior. Models optimize exactly what you measure, not what you meant.
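The reward-and-penalty loop can be sketched with a classic toy: an epsilon-greedy bandit choosing between two actions. The reward probabilities are made up, and the agent never sees them; it only learns from the rewards it collects.

```python
# Epsilon-greedy bandit: learn which action pays off by trial and feedback.
# Action names and reward probabilities are invented for illustration.

import random

random.seed(42)
true_reward_prob = {"action_a": 0.2, "action_b": 0.8}  # hidden from the agent
counts = {a: 0 for a in true_reward_prob}
values = {a: 0.0 for a in true_reward_prob}  # running estimate of reward

def choose(epsilon=0.1):
    if random.random() < epsilon:                  # explore occasionally
        return random.choice(list(true_reward_prob))
    return max(values, key=values.get)             # otherwise exploit

for _ in range(1000):
    action = choose()
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))  # the agent converges on the better action
```

Swap the reward function for a badly chosen business metric and the same loop will just as faithfully converge on the wrong behavior, which is the misuse the paragraph above warns about.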
Generative vs. discriminative models
Here's another useful split: do you need the model to choose a label, or create something new?
Discriminative models learn the boundary between categories. They answer questions like "Is this email spam or not?" or "What object is in this image?" These models are strong for classification and prediction tasks where you need a clear decision.
Generative models learn the underlying patterns of the data well enough to create new examples. They can write text, generate images, compose music, or produce code. When you ask ChatGPT to write an email or DALL-E to create an image, you're using generative models.
Modern generative AI includes several model families worth knowing:
- Large language models (LLMs) like GPT-4, Claude, and Gemini generate and understand text
- Diffusion models like Stable Diffusion and DALL-E 3 create images by gradually refining noise into coherent pictures
- Multimodal models like GPT-4o and Gemini can process and generate multiple types of content (text, images, audio) in a single interaction
Foundation models and large language models
Foundation models changed the default playbook. Before them, you trained a model from scratch for each task. Now you don't have to.
Foundation models are trained once on massive datasets and then adapted for many uses. GPT-4, Claude, Gemini, and Llama are all foundation models.
Large language models (LLMs) are a type of foundation model trained on text. They predict the next token in a sequence, which sounds simple, but it enables capabilities like question answering, summarization, translation, and code generation.
Several concepts help explain how LLMs work:
- Tokens are the units of text an LLM processes, typically words or parts of words. "ChatGPT is amazing" might be a few tokens. LLMs are often priced and limited by token count, which is why prompt length and retrieved context have real cost and performance implications.
- Context window is the maximum number of tokens an LLM can process in one interaction (input plus output combined). A larger context window matters when you need the model to reason over long documents or keep a conversation consistent without losing earlier details.
- Parameters are the learned weights in the model. More parameters can mean more nuance, but they also raise computational cost and can make the model harder to run within strict latency or budget constraints.
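Because tokens drive cost, teams often do back-of-the-envelope estimates before wiring an LLM into a workflow. The sketch below uses the common rough heuristic of about four characters per token and a placeholder price; real tokenizers vary by model, so treat every number here as an assumption, not provider pricing.

```python
# Back-of-the-envelope token and cost estimate. All numbers are heuristics:
# ~4 characters per token is a rule of thumb, and the price is a placeholder.

def estimate_tokens(text):
    return max(1, len(text) // 4)

def estimate_cost(prompt, expected_output_tokens, price_per_1k_tokens=0.01):
    # price_per_1k_tokens is NOT a real rate; check your provider's pricing
    total = estimate_tokens(prompt) + expected_output_tokens
    return total * price_per_1k_tokens / 1000

prompt = "Summarize the quarterly report in three bullet points. " * 50
print(estimate_tokens(prompt))            # rough prompt size in tokens
print(round(estimate_cost(prompt, 300), 4))
```

Even this crude estimate makes the tradeoff in the bullets above visible: stuffing a large context window with retrieved documents multiplies per-request cost, so prompt length is a budget decision, not just a quality one.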
What are pre-trained AI models?
Most teams don't train a modern model from scratch. They start with one that already exists. That shift alone has changed how organizations think about AI adoption.
Pre-trained models have already learned from large datasets before you use them. Instead of training from zero (which requires serious data, compute, and expertise), you can start with a pre-trained model and adapt it to your needs.
This changes where teams spend their time: less on raw training runs, more on data readiness, evaluation, and governance. There are three main ways to adapt a pre-trained model:
- Prompting guides the model with instructions and examples without changing its weights. This is the cheapest and most flexible approach, but small wording changes can swing outputs more than people expect.
- Fine-tuning retrains the model on your specific data, permanently adjusting its weights. This works well for domain-specific language (legal, medical, technical) but requires more resources and careful guardrails to avoid teaching the model sensitive or low-quality patterns.
- RAG (retrieval-augmented generation) connects the model to external data sources at runtime. The model retrieves relevant information before generating a response, keeping outputs current without retraining. Teams often stumble by retrieving "close enough" content. The quality of retrieval (and your document chunking and metadata) can matter as much as the model itself.
In enterprise setups, RAG also tends to be the moment where governance becomes real: which governed datasets and documents can the model retrieve from, and who approved that access?
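A stripped-down RAG sketch makes the flow concrete: retrieve the most relevant document, then assemble it into the prompt. Real systems use embeddings and a vector store rather than keyword overlap, and the documents and function names here are invented for illustration.

```python
# Minimal RAG sketch: keyword-overlap retrieval plus prompt assembly.
# Real systems use embeddings and a vector store; this shows only the shape.

DOCUMENTS = [
    "Refund policy: customers may request a refund within 30 days of purchase.",
    "Shipping policy: orders ship within 2 business days.",
    "Security policy: all data is encrypted at rest and in transit.",
]

def retrieve(question, docs, top_k=1):
    """Score each document by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question, docs):
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How long do I have to request a refund?", DOCUMENTS)
# In production, a governance check belongs in retrieve(): is this user
# authorized to see each document before it enters the prompt?
```

The retrieval step is exactly where "close enough" content sneaks in: if the scoring function ranks the wrong document highest, the model will ground its answer in the wrong source while sounding perfectly confident.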
How to train an AI model
Training isn't a single step. It's a loop.
Whatever kind of task you want your AI model to do, there's a general workflow to follow. The quality of your training data directly determines the reliability of your model's outputs. Here are the main steps teams take to train AI models.
- Gather data. The more data you have, the more accurate your model can become and the more complex decisions it can handle, assuming the data reflects the real conditions you'll face after deployment.
- Clean the data. This involves removing inaccurate entries and, for supervised learning, annotating and labeling the datasets. Cleaning also includes removing "noise," which isn't always wrong, but can still push the model toward the wrong conclusion. Clean using statistics computed on the full dataset before splitting, though, and you risk leaking information from your test set into training, which produces results that look great and then collapse in production.
- Choose a model. Start from the output you need and the constraints you have. When you choose a model, you'll take into account the learning type (supervised, unsupervised, or reinforcement), as well as resources like processing power, time, and how many tasks the model must support.
- Train your model. Run training on your training data, and use a validation set to tune settings and compare alternatives. Be strict about what's in validation versus training. Peek too often and you can accidentally optimize for the validation set instead of the real world.
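The train-then-validate loop above can be shown in miniature: fit a one-weight linear model with gradient descent, then measure error on held-out points. The data is synthetic and the learning rate is an arbitrary tuning choice, but the loop structure is the real thing.

```python
# The train-and-validate loop in miniature: fit y ~ w * x by gradient descent
# on synthetic data, then check error on held-out validation points.

train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (x, y) pairs, y ~ 2x
validation = [(5, 10.1), (6, 11.8)]               # never used for fitting

w = 0.0  # the model's single weight
for epoch in range(200):
    # gradient of mean squared error for predictions y_hat = w * x
    grad = sum(2 * (w * x - y) * x for x, y in train) / len(train)
    w -= 0.01 * grad  # 0.01 is a learning rate, a tuning choice

val_error = sum(abs(w * x - y) for x, y in validation) / len(validation)
print(round(w, 2))          # learned weight, close to 2.0
print(round(val_error, 2))  # small validation error -> the fit generalizes
```

The discipline the bullet describes lives in one detail: `validation` never appears inside the training loop. Peek at it during training and its error stops being an honest estimate.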
In practice, teams often do this work in notebook environments so they can iterate quickly. For example, Magic Transform includes integrated Jupyter Workspaces where data scientists can build, train, and validate models using Python or R without constantly exporting data to a separate tool.
Testing and evaluating AI models
A model that fits its training data perfectly can still fail in production. Testing is where you find that out.
The standard approach splits your data into three sets:
- Training data teaches the model (typically 70 to 80 percent of your data)
- Validation data tunes the model during development (typically 10 to 15 percent)
- Test data measures final performance on completely unseen examples (typically 10 to 15 percent)
Keeping that final test set truly unseen is the whole point. Those numbers only matter if they reflect how the model behaves on data it hasn't memorized.
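A simple split function makes the 70/15/15 scheme concrete. This sketch shuffles with a fixed seed so the split is reproducible; for time-series or grouped data you'd split differently, so treat this as the baseline case only.

```python
# A sketch of the 70/15/15 split, shuffled so record ordering doesn't leak.

import random

def split_data(rows, train_frac=0.70, val_frac=0.15, seed=0):
    rows = rows[:]                     # don't mutate the caller's list
    random.Random(seed).shuffle(rows)  # seeded for reproducibility
    n_train = int(len(rows) * train_frac)
    n_val = int(len(rows) * val_frac)
    return (rows[:n_train],                  # training set
            rows[n_train:n_train + n_val],   # validation set
            rows[n_train + n_val:])          # test set: touch only at the end

train, val, test = split_data(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

The three slices are disjoint by construction, which is the whole point: no record can teach the model and then also grade it.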
The metrics you use depend on what your model does. Here are the most common ones.
For classification models (spam/not spam, fraud/legitimate, churn/retain):
- Accuracy measures overall correctness: what percentage of predictions were right?
- Precision measures how many predicted positives were actually positive
- Recall measures how many actual positives the model caught
- AUC (area under the curve) measures how well the model distinguishes between classes
For regression models (sales forecasting, demand prediction, pricing):
- MAE (mean absolute error) measures average prediction error in the same units as your target
- RMSE (root mean squared error) penalizes large errors more heavily
- R² measures how much variance in the outcome your model explains
For generative models (text, images, code):
- Human evaluation remains the gold standard for quality assessment
- Task success rate measures whether generated outputs actually work (does the code run? does the summary capture key points?)
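The classification and regression metrics above are simple enough to compute from first principles. The tiny label vectors below are made up; the formulas are the standard ones.

```python
# Precision, recall, and MAE from first principles, on tiny synthetic data.

def precision_recall(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flagged, how many real?
    recall = tp / (tp + fn) if tp + fn else 0.0     # of real, how many caught?
    return precision, recall

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# classification: 3 true positives exist, the model flags 3, catches 2
p, r = precision_recall([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
print(p, r)                       # 2/3 and 2/3

# regression: predictions off by 1 and 2 units -> average error 1.5
print(mae([10, 12], [11, 14]))    # 1.5
```

Which metric matters depends on the cost of each mistake: a fraud team losing money to missed fraud cares about recall, while a team drowning in false alarms cares about precision.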
A few failure modes show up repeatedly in production. Data leakage happens when information from your test set accidentally influences training, making your model look better than it actually is. Overfitting occurs when your model memorizes training data instead of learning generalizable patterns. Distribution shift means your test data differs from real-world data in ways that hurt performance.
If you're deploying AI agents or LLM workflows, evaluation often becomes ongoing rather than one-and-done. Human-in-the-loop review steps and built-in prompt-response evaluations can help you check output quality before an agent takes action.
Deploying AI models in production
Deployment is where the theory meets the messiness of real systems.
Deploying a model means integrating it into your systems so it can make predictions on real data in real time. You'll connect it to live inputs, wire it into workflows, and make sure you have enough compute and the right frameworks for the job. Teams that measure only model accuracy and ignore end-to-end latency, retries, and failure modes learn this lesson the hard way. Those are what people actually feel.
If your AI model's predictions are imprecise or inaccurate, you will need to make changes. Bias often surfaces here too, usually because the training data was incomplete or skewed. Continuous learning and optimization are part of the machine learning process, so you keep refining the model over time.
In enterprise environments, deployment also means you need a control plane around the model. That usually includes things like:
- Credential handling so model connections (and data connections) don't end up in random scripts
- Access permissions so models only touch authorized, governed data
- Auditing so you can answer "which model ran, when, by whom, and on what data?"
- Performance monitoring so you can see latency, failure rates, and output quality trends
This is where orchestration layers matter. For example, Agent Catalyst includes an AI Service Layer Abstraction that helps teams swap or upgrade models (OpenAI, Google, Anthropic, or custom models) without rebuilding the agent or app logic around them.
Model drift, monitoring, and retraining
A model that works at launch won't stay perfect forever. The world moves.
Data drift occurs when the input data your model receives starts looking different from the data it was trained on. Maybe customer behavior shifts, new product categories emerge, or seasonal patterns change. The model's weights were tuned for the old patterns, so performance degrades.
Concept drift is subtler: the relationship between inputs and outputs changes. Fraud patterns evolve as criminals adapt. Customer preferences shift. What used to predict churn no longer does.
Monitoring signals that indicate drift include:
- Accuracy or other metrics declining over time
- Prediction distributions shifting (suddenly predicting more positives or negatives than usual)
- Increased error rates or complaints from people using the system
- Input feature distributions changing significantly
When drift is detected, retraining becomes necessary. Some organizations retrain on a schedule (monthly, quarterly). Others retrain when ongoing model monitoring reveals performance drops below a threshold. The right approach depends on how quickly your domain changes and how costly prediction errors are.
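A crude version of the "input distributions changing" signal can be checked in a few lines: compare a live feature's mean to the training distribution. The age values are invented, and real monitoring uses proper tests like the population stability index (PSI) or Kolmogorov-Smirnov; this sketch only flags large mean shifts.

```python
# A crude drift check: flag when a live feature's mean drifts far from
# the training distribution. Values are synthetic; real monitoring would
# use tests like PSI or Kolmogorov-Smirnov, not just a mean comparison.

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def drifted(train_values, live_values, threshold=2.0):
    """Flag drift when the live mean is > threshold training stds away."""
    shift = abs(mean(live_values) - mean(train_values))
    return shift > threshold * std(train_values)

train_ages = [25, 30, 35, 28, 32, 27, 31, 29]
live_ages = [48, 52, 50, 55, 47, 51]   # the population has changed
print(drifted(train_ages, live_ages))  # True -> investigate, maybe retrain
```

A check like this catches data drift (the inputs changed), but not concept drift: inputs can look identical while the relationship to the outcome quietly shifts, which is why outcome metrics need monitoring too.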
Before deploying a retrained model, many teams run it in shadow mode (making predictions without acting on them) or use A/B testing to compare the new model against the current one. Shadow mode, in particular, catches "it works in staging" surprises without putting real operations at risk. Having a versioned testing environment for agent and model changes makes it much easier to validate updates safely before promoting them to production.
AI model examples and business use cases
If the term "AI model" still feels abstract, examples make it real.
Here are common categories you'll see in the wild.
Classic machine learning models:
- XGBoost and Random Forest for classification and regression tasks like churn prediction, credit scoring, and demand forecasting
- K-means and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) for customer segmentation and anomaly detection
- Collaborative filtering models for recommendation engines (Netflix, Spotify, Amazon)
Computer vision models:
- ResNet and EfficientNet for image classification (medical imaging, quality control, content moderation)
- You Only Look Once (YOLO) for real-time object detection (autonomous vehicles, security systems, inventory tracking)
Natural language processing and LLMs:
- GPT-4, Claude, and Gemini for text generation, summarization, and question answering
- BERT for text classification, sentiment analysis, and search relevance
Generative image models:
- DALL-E 3 and Midjourney for image generation from text descriptions
- Stable Diffusion for customizable image generation and editing
Audio models:
- Whisper for speech-to-text transcription
- WaveNet for text-to-speech synthesis
Everyday examples you've probably used:
- Netflix's recommendation model suggests shows based on your viewing history
- Google Translate uses neural machine translation to convert text between languages
- Your email's spam filter uses classification models to keep unwanted messages out of your inbox
- Your bank's fraud detection system flags suspicious transactions in real time
Business applications across industries
The same model types repeat across industries. The differences come from the data, constraints, and what "success" means in each context.
In finance, classification models detect fraudulent transactions by identifying patterns that differ from normal customer behavior. Regression models forecast revenue, predict loan defaults, and optimize pricing. Before AI, fraud detection relied on rigid rules that criminals could easily circumvent. Now, models adapt as new patterns emerge, assuming you retrain and monitor them as the world changes.
In sales and marketing, propensity models predict which leads are most likely to convert, helping teams prioritize outreach. Churn prediction models identify at-risk customers before they leave, triggering retention campaigns. Sentiment analysis models monitor brand perception across social media and reviews. Treating these scores as guarantees rather than prioritization signals is where teams get into trouble.
In operations, predictive maintenance models forecast when equipment will fail based on sensor data, reducing unplanned downtime. Quality control models detect defects in manufacturing using computer vision. Supply chain models optimize inventory levels and logistics routing.
In healthcare, image classification models assist radiologists in detecting tumors and other abnormalities. Natural language processing extracts insights from clinical notes. Predictive models identify patients at risk for readmission or adverse events. Accuracy alone doesn't help if the output arrives too late or in a format clinicians can't act on. That's a deployment problem, not a modeling problem, and the two get confused constantly.
Ethical considerations for AI models
AI models are powerful tools, and they come with real limitations and risks.
Hallucinations occur when generative models produce false or nonsensical outputs that sound plausible. An LLM might invent a legal case citation that doesn't exist or confidently state incorrect facts. This is particularly dangerous in high-stakes domains like healthcare, law, and finance. Teams reduce harm by tying answers to governed sources (for example, with RAG) and requiring citations or verification steps when decisions have consequences.
Bias emerges when models trained on biased data reproduce those biases in their predictions. A hiring model trained on historical data might favor certain demographics because past hiring decisions were biased. A lending model might unfairly deny credit to certain groups. Addressing bias requires careful attention to training data, evaluation across subgroups, and ongoing monitoring. Checking only overall accuracy is not enough. You can still have deeply unfair outcomes even when the top-line metric looks fine.
Privacy risks arise when models trained on personal data inadvertently memorize and reveal sensitive information. A model trained on medical records might generate text that includes patient details. Organizations must consider what data goes into training and implement appropriate safeguards. Even when you're not training, sending sensitive fields to an external model API can create exposure if you haven't set clear policies and redaction controls.
Security vulnerabilities include adversarial attacks where carefully crafted inputs trick models into wrong predictions. Adding imperceptible noise to an image can cause a vision model to misclassify a stop sign. Prompt injection attacks can manipulate LLMs into ignoring their instructions. Teams often miss this early because everything works with "friendly" prompts. Until someone tries the unfriendly ones.
Intellectual property questions surround models trained on copyrighted content. When an image generator creates art in a specific artist's style, or a code generator produces snippets similar to copyrighted code, ownership becomes murky.
Explainability challenges make it difficult to understand why deep learning models make specific decisions. When a loan application is denied, regulations may require an explanation. Black-box models can't always provide one, which is why model choice and documentation matter as much as raw performance.
For organizations deploying AI, governance matters. This includes audit trails for model decisions, access controls for sensitive data, documentation of training data and model behavior, and processes for addressing errors and bias. It also includes making sure sensitive fields are handled correctly before they ever reach a model. Automated personally identifiable information (PII) monitoring in a data integration layer can flag risk early, and compliance standards like Service Organization Control 2 (SOC 2), the Health Insurance Portability and Accountability Act (HIPAA), and the General Data Protection Regulation (GDPR) shape what "responsible" looks like in regulated environments.
Future trends in AI modeling
The models are changing, but so are the assumptions teams build around them.
Smaller, more efficient models are becoming viable alternatives to massive foundation models. Techniques like distillation, quantization, and pruning create models that run faster and cheaper while maintaining much of the capability. This matters for organizations that need on-device deployment, lower cloud costs, or tighter latency requirements.
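Quantization, one of the compression techniques mentioned above, can be shown with a toy sketch: store weights as small integers plus a single scale factor. Real frameworks do this per layer with calibration; this only illustrates the core idea.

```python
# Toy symmetric 8-bit quantization: each float weight becomes a signed
# integer in [-127, 127] plus one shared scale factor.

def quantize(weights, bits=8):
    """Map float weights to signed integers with a shared scale."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer form."""
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.05, -1.27]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# The restored weights are close to the originals, but each value now
# fits in one byte instead of four or eight.
```

The trade the sketch makes visible: a small, usually tolerable loss of precision in exchange for a large cut in memory and compute.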
Multimodal models that process text, images, audio, and video together are becoming standard. Instead of separate models for each data type, a single model can understand a document with text and images, transcribe and summarize a video, or generate content across formats.
AI agents that can take actions, not just generate text, represent the next frontier. These systems can browse the web, execute code, interact with APIs, and complete multi-step tasks. The shift from "AI that answers questions" to "AI that does work" changes the risk profile too. You're no longer only evaluating output quality. You're evaluating actions, and that's a different problem entirely.
Retrieval-augmented generation (RAG) is becoming the default architecture for enterprise AI. Instead of fine-tuning models on proprietary data (expensive, slow, privacy concerns), organizations connect models to their data sources at runtime. The model retrieves relevant information before generating a response, keeping outputs current and grounded.
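The RAG pattern fits in a short sketch: retrieve the most relevant document, then ground the prompt in it. Production systems use vector embeddings and a vector database; plain word overlap stands in for similarity here, and the documents are made up for illustration.

```python
# Minimal RAG retrieval step: pick the best-matching document and
# prepend it as context before the model ever sees the question.

DOCUMENTS = [
    "Q3 revenue grew 12 percent, driven by the enterprise segment.",
    "The support team resolved 94 percent of tickets within 24 hours.",
    "New hires complete security training during their first week.",
]

def overlap_score(query: str, doc: str) -> int:
    """Count shared words (a crude stand-in for embedding similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_grounded_prompt(query: str) -> str:
    """Retrieve the best-matching document and build a grounded prompt."""
    best_doc = max(DOCUMENTS, key=lambda d: overlap_score(query, d))
    return (f"Context: {best_doc}\n\n"
            f"Answer using only the context above.\nQuestion: {query}")

prompt = build_grounded_prompt("How much did Q3 revenue grow")
print(prompt)
```

Because the context is fetched at runtime, updating the answer means updating the documents, not retraining or fine-tuning the model.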
For enterprise AI adoption, these trends mean faster time-to-value, lower barriers to entry, and new governance challenges, especially when agents can trigger downstream workflows.
How Domo helps you build and deploy AI models
If you're trying to stay competitive, AI can help when it's governed, connected to the right data, and actually deployable.
Domo's suite of AI tools and services supports secure and flexible AI, along with AI-powered chat and guidance inside your flow of work.
For technical teams building and managing AI models, Domo provides:
- Model management, deployment, and optimization for Domo-hosted models
- Integration of external models with existing data environments through Domo Integration
- Tools to create and fine-tune models with Magic Transform for data preparation
- Pre-built Universal Models for forecasting, anomaly detection, sentiment analysis, and PII detection, eliminating the need for development or training
If you're dealing with a mix of models across vendors, Domo can also help you keep things governable and swappable. Agent Catalyst includes an AI Service Layer Abstraction that sits between AI models and the agents and apps that use them, so teams can test options from providers like OpenAI, Google, and Anthropic, or bring their own model (BYOM) through connectors to services like Hugging Face.
For business teams looking to get value from AI without deep technical expertise, Domo offers:
- AI-guided experiences that surface insights without requiring you to build queries
- Chat-style data exploration through Domo BI that lets anyone ask questions in natural language
- Agent Catalyst for building AI agents that can take action on your data, not just analyze it
If you're on the "where do we even start?" part of the journey, Domo also offers AgentGuide and Executive Transformation Workshops to help teams prioritize the right use cases, plus expert agent templates that give you a practical starting point instead of a blank page.
Across both audiences, Domo provides a secure and transparent environment that promotes responsible AI practices through built-in usage analytics and governance. Trusted conversational AI helps people ask questions, uncover insights, and take action.
A big part of that trust is context. Domo's semantic layer and certified metrics help keep AI model outputs tied to governed business definitions, so people aren't debating which version of "revenue" an answer used.
To learn more about how your organization can speed up productivity and increase efficiency with Domo's AI tools, start a free trial or schedule a demo with us today.
Frequently asked questions
What is meant by an AI model?
An AI model is a program trained on data to recognize patterns and make predictions or decisions without human intervention. During training, the model learns numerical values (called weights or parameters) that capture patterns in the data. When the model receives new input, it applies those learned weights to generate an output, whether that's a classification, prediction, recommendation, or generated content.
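The phrase "applies those learned weights" can be made concrete with a toy spam scorer: a weighted sum of input features plus a bias. The weights and features here are made up for illustration; in a real model they come out of training, not hand-tuning.

```python
# A linear model's forward pass in miniature: multiply each feature by
# its learned weight, add a bias, and threshold the result.

WEIGHTS = {"contains_link": 1.5, "all_caps_subject": 2.0, "known_sender": -3.0}
BIAS = -1.0

def spam_score(features: dict) -> float:
    """Apply the weights to the input feature values."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())

def is_spam(features: dict) -> bool:
    return spam_score(features) > 0

email = {"contains_link": 1, "all_caps_subject": 1, "known_sender": 0}
print(is_spam(email))
```

Deep learning models follow the same recipe at vastly larger scale: billions of weights instead of three, stacked in layers, but still numbers applied to inputs.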
Is ChatGPT an AI model?
ChatGPT is an AI application, not a model itself. It's built on top of large language models (like GPT-4) but includes additional systems: web browsing, code execution, file retrieval, safety filters, and the chat interface you interact with. The model is the engine; ChatGPT is the complete product.
What is an example of an AI model?
Examples span many categories. Classic machine learning models include XGBoost and Random Forest, both widely used for classification and regression. Computer vision models include ResNet for image classification and You Only Look Once (YOLO) for object detection. Large language models include GPT-4, Claude, and Gemini. Generative image models include DALL-E and Stable Diffusion. Everyday examples include Netflix's recommendation model, Google Translate, and your email's spam filter.
What is the difference between an AI model and an algorithm?
An algorithm is a set of rules or instructions for solving a problem, like a recipe. An AI model is the result of running that algorithm on data, like the finished dish. The algorithm tells the computer how to learn; the model is what gets learned. A third layer, the pipeline or system, includes everything around the model: data inputs, tools, safety filters, and user interface.
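The algorithm-versus-model distinction shows up cleanly in code. In this sketch, the function is the algorithm (ordinary least squares for a one-variable line), and the numbers it returns are the model: same algorithm, different data, different model.

```python
def fit_line(xs, ys):
    """The algorithm: a recipe that works on any (x, y) data."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept          # the model: just learned numbers

# Running the algorithm on data produces the model (here, y = 2x + 1)...
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])

# ...and using the model means applying those numbers to new input.
prediction = slope * 10 + intercept
```

The "recipe and finished dish" analogy maps directly: `fit_line` is the recipe; the `(slope, intercept)` pair is the dish.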
Are AI models always accurate?
No. AI models can make mistakes, and their accuracy depends on the quality of training data, evaluation, and ongoing monitoring.