Building Agentic AI in a Governed Data Environment

Video transcript

Welcome to this conversation between Chris and me. My name's Cody Irwin. I'm the AI adoption director at Domo. Chris, wanna do a quick intro?

Chris Willis: Yeah. I'm Chris Willis. I'm the Chief Design Officer at Domo.

Cody Irwin: Chris, wanna kick us off? We were debating some recent posts.

Chris Willis: So, this post, I think you can see it, it's been making the rounds. It's been showing up in my feeds, and people have been sending this to me. I think this post is indicative of something you're seeing a lot that's adding to a lot of FOMO. I'll just give you the quick take. I'm sure you've seen lots of posts like this. It's a post that suggests that with AI, this time it's different. I know historically we've heard this before, right? You know, whenever someone says, "this time it's different," I think those are the set of words that make me suspect if anything's really gonna change. But I thought it'd be good to sort of react to some of the things in this post. Have you seen this post at all, Cody?

Cody Irwin: I have, I read through it. What was kind of interesting about the post is it does resonate. Like reading through it, a lot of us have been thinking through what's happening with AI. We're seeing tons of news about AGI is almost here, jobs are over. And I read it and found some resonance points.

Chris Willis: I'd like to dive into a few specifics there, because I think this is indicative of a lot of this hype that you're being exposed to. And I think we need to inoculate ourselves from some of it. I'll just call out a few examples here. So, yes, I saw this and I've had the same kinds of apprehensions where I'm both wildly excited about the potential, and then I'm terrified. I kind of just vacillate between those two. And it's the extremes. It's extreme and it's really tiring me out. But this kind of stuff really doesn't help either, because if you look at these kinds of articles, there's a lot of hype that's built in here in bold. "I'm no longer needed for the actual technical work of my job." Or you look here, "then on this day, essentially the singularity happened." Like, did the singularity happen on a Tuesday? I don't think so. And I know why, because my life hasn't changed in any appreciable way.

What you tend to find in a lot of articles like this, and a lot of these are articles that are not just stumbling into your feed. These are ones that are being sent out by like, CEOs, right? TikTok videos and tweets saying like, you know, get excited 'cause your job's going away. Like, you know, people talking, Elon Musk talking about, well, he's not saying universal basic income, but he is saying, we'll just send you a check 'cause you're not really needed here anymore. So enjoy your free time.

But what I would say is, if you're looking at these kinds of posts, first, ask yourself, is there any real information in here that you should be concerned about? Because when I go through here, there's no specific information that suggests anything has changed. I feel like a lot of these posts, you know, like right here, "I tried AI, but it wasn't that good." A lot of these posts are essentially emotional manipulation. So you have to be really careful of that because that manipulation can cause you a lot of anxiety. It can also cause you to make really bad decisions.

He talks here that these models have gotten so much better in just a year that, you know, in the course of just the next year, I won't be needed anymore. That's not the case. I don't know if any of you remember out there, but it was just in the fall of this year when ChatGPT was coming out with their new model, GPT-5. And it was not what people expected. They had to take a different approach. There are a lot of techniques that were used in AI model development over the last year or two through pre-training, and they started to hit that limit, that asymptote. The models weren't really getting any better and were kinda stuck there. Cody, you and I were talking a little bit earlier that the CEO of Anthropic had essentially a similar sort of take, which was, by the end of 2025 or 2026, you won't need software engineers anymore. I think that's a bit overblown.

So I would just say, you know, what are some of the tools people could use to keep themselves somewhat sane in this? We've talked a little bit about first principles. I mean, here's the thing that's terrifying. 85 million views on this post.

Cody Irwin: Yeah. And I think part of what's happening, it's kinda social media and how it operates, like extreme gets attention. Extreme thoughts, extreme talk tracks get attention. Sensible things typically don't. I mean, we're giving this attention. I'm torn already.

It's like, it really is like one of those things where I saw this and I read it. I read the whole thing. I read it on Twitter and it wasn't like a classic tweet. It's long. I think what's enticing about extreme kinda rhetoric when it comes to things like AI is there is some truth to it. There absolutely is. And that's why it kind of resonates with us. And it feels somewhat alarmist to a degree.

Chris Willis: Well, let's take it to the opposite view. Just another day or two ago, not getting 85 million views, was a post like this that you've seen, right? This sort of, "okay, is any of this hype real?" And, you know, you see on Reddit, a data engineer who's being told like, "yeah, you should use AI more and you're gonna be replaced." And he's like, "I'm not really seeing any of that."

One comment here really jumped out at me. It's down here a little bit, but it's someone asking, what's the missing piece? Or why are certain enterprises going to just either be replaced or leapfrog everybody else? And I thought this was an interesting discussion right here. This commenter says, "I think the effectiveness of all of this depends on how well integrated your AI tools are with your internal systems, and whether your data sets are well documented. And most organizations don't have an environment like that." And then this commenter responds, "None of them do." That really resonated with me. What's your take on that?

Cody Irwin: It's kinda proving like, "Hey, you should have invested in a good data foundation years ago, but you didn't." And now AI's really hard because of that.

Chris Willis: Yeah. I think that's something that's near and dear to our heart. I'd love you to maybe dive into that a little bit more, which is, a lot of this is kind of first principles. It's basics. A lot of companies, I'm sure you've seen this. They have people who have been using some of these models and then they're like, "okay, we're gonna create this moonshot type project. We're gonna do all this magical stuff." And it's like, one, the models don't really allow you to do that. And two, just doing the basic things for many organizations has been a struggle. Just getting systems integrated, having data observability, data health, getting your data AI ready. What does that even mean? And a lot of 'em want to kind of skip the steps. 'cause I think that hype is driving a lot of that, but there are tremendous amounts of efficiencies and opportunities to be gained by starting simple on low risk type things. I think there's a lot of organizational and cultural changes that are up for grabs right now. But maybe you could tell me a little bit about, you know, maybe some of the first principles that you've been seeing applied to the people that you work with and the organizations you work with.

Cody Irwin: We started kinda walking this path with a number of organizations, and I've seen recent research that backs it up. Like, OpenAI came out with a report on change: how do you change a company using AI? It starts with marginal gains. The best place to start is on incremental improvement. The book Atomic Habits by James Clear has a story about the British cycling team, and I love talking about this with companies because it illustrates, in a different context, what's happening in the market.

So the British cycling team in the early two thousands was not very good. They weren't the best team out there. And they got a new coach. The new coach came in and said, let's focus on things that we can control every day. So let's get a better pillow to sleep on. Let's change our diet a little bit. Let's wear a better speed suit. Let's change these small things. And they found that the compounding effect of marginal gains drove significant improvement, to the point that they became one of the most dominant teams in the sport.

Chris Willis: Yeah, I remember that. I remember that story. The thing that was really fascinating about it was the British cycling team didn't win because they created, like, a new kind of bike or some new kind of scientific breakthrough. What they did was they made sure the athletes were sleeping and eating right. And I think there was one thing that was interesting: they painted their van white so that they could see if there was dirt anywhere that might be getting into the gears of the bikes they were repairing. I mean, it was little things. And I think, to your point, they all found that the power of marginal gains actually has a mathematical foundation, right? It's like, you could make a 1% improvement over here and a 1% improvement over there, but if these improvements are related, you don't just get a 2% improvement. You get maybe a 5% improvement over time.

That compounding aspect is really important. And I've noticed the companies that seem to be pulling ahead and are seeing value from a lot of these AI type projects, I would suggest that it's not strictly a technical advantage that they're using. It's an organizational change. It's leading through change. It's changing the culture.

I was on a panel recently for AI leaders, and I had this sort of epiphany as they were going around the table asking people, "what's your plan at your company?" Almost everyone had the same plan, which was, "we're just gonna give everybody a bunch of AI tools and then they're gonna start innovating." It's just an AI paint job. And I thought, okay, just taking a step back from this, you know, did you hire innovators at the start? Were Dave in HR and Mary in accounting hired because Mary really thinks outside the box? From my experience with accounting firms, those people don't last long. You don't want really creative thinkers in accounting; that gets you into trouble.

But I think, you know, this was somewhat, I called it "the year of magical thinking." You know, like, "we'll just apply a technology and it'll solve all the problems." Of course, that's somewhat delusional. That never really happens. But what I am seeing is companies that are allowing their culture to shift to become a learning organization, which is maybe something they're not used to. Usually they're used to staying in their lanes and everybody kind of is aware of what's happening in the business. And then using the tools to sort of improve or better understand or report, that's fine. Now you're being asked to do something a little bit different.

And what I've seen is that the companies doing well are using AI not to create these sort of very disruptive moonshots. And by the way, really big innovation, as exciting as it is, and all of the mythology around innovation, is a very risky strategy. Big innovation is time consuming, it's resource intensive, and its outcomes are not guaranteed. I think, you know, when you look at a lot of executives and a lot of these sort of mandates that are being thrown out there, that's not a push by management for innovation. It's just impatience. It's like, "we have to do something, let's do something."

Cody Irwin: It feels almost like, in some cases, it almost seems like desperation. It's fear, 'cause they read these articles about hype, and they're like, "what's happening? Like, who are we now?" And one of the biggest factors that tends to drive what you're talking about, actual change, is the people, it is the processes, but it's also back to first principles. It's knowing why the organization exists and the problems it's trying to solve. We've almost lived in a market for a while now where we've just accepted inefficiency. We've been like, "hey, we're all kind of mediocre, so, you know, it's easy for us not to spend time setting goals; we're just gonna operate." Having that in place matters. I've actually found that the companies that are working well are the ones that know why they exist, and they adapt around the intended outcome and realize AI will kinda flex into that. It'll help them.

Chris Willis: I'd love to dive a little deeper on that one, 'cause you have some great ideas there. So one is, when you're going through, in some cases, sort of this existential moment, where a lot of companies are like, "what do we stand for?" I think that's actually a great conversation and is much needed. Because, to your point, for many organizations, if you think of the target you're after, the outer ring would be the "what" of the business. Like, "what do we do?" Everyone pretty much gets that. We usually focus on the second ring, which is the "how," right? It's the day-to-day operations, it's the reporting, it's the alerting, it's the R&D, it's all of these process-type things. That's usually where most of the thinking stops, right? It's like you check in, you check out, and you did your job. We are solidly in the "how" ring, but the target, the bullseye where I think the value is, is the "why." So why do we exist as a company at all? And, you know, a lot of companies, still focused on the "how," organize their hierarchies around that, right? These companies are functional hierarchies, and everyone stays in their lane. These are "stay in your lane" organizations.

But what happens when a technology comes along that sort of doesn't care about your lanes? It says, "oh, you're in marketing, but you could actually write an app. You're now a coder. Or you're a developer and now you're writing a business plan. Or, you know, you're in accounting and you're a designer." Now it's a bit unsettling, right? How do you think about navigating that space?

Cody Irwin: Yeah. This really is an age of empowerment for organizations and for individuals. Those that are creative are flexing into new things. I see it across companies. And to be fair, Domo, as a company, historically, one of our goals was to really unlock data for the business. Those are simple words, but that's a pretty big change. The companies, again, that are getting this figured out and advancing with AI are those that know what the problem is and are willing to adapt.

And when it comes to that change in organizational structure and perspective, companies have to be willing to enable their people, to give them some freedom, to pull away some of that scaffolding to a degree and let people thrive. You know, one of my coworkers here, he's in marketing. He's young, he's hungry, and he's doing all kinds of crazy cool things across the company because he's been given a little bit of space. They said, "hey, go create, go do cool things." Companies need to give people that little bit of space to do some cool things.

One thing I wanna call out, too, that I've seen work well in terms of how you prioritize and think about the space, is leaning back into McKinsey's Three Horizons framework. It's like, "hey, if you want to innovate, take 70% of your budget," and that could be people, process, tech, all of that, and apply it towards marginal gains. That's where you're gonna get the most. That's a known commodity. Take 20% and apply it towards what's next, like what's kinda an obvious next step, which every company has. Some of those, they're like, "we've always wanted to do this thing. It's always been a little bit too hard." AI could make it easier. Focus on some of those. And then reserve 10% for innovation. Companies should be thinking about what's happening, because again, this space is moving quickly, and it is really interesting.

I think that's part of the perspective that needs to be there: it's not just marginal gains, not just org improvement. Find your craziest thinkers and give them that 10% budget and be like, "go out and do crazy things." Just try crazy things; that should happen. But for most of the people, it is kind of educating them to walk this path.

Chris Willis: Yeah. Another way I would frame that, you know, McKinsey uses the horizons, I think that's a good betting strategy. Making the right kinds of bets, you don't have to bet everything on one thing. That would be a bad idea in many cases. But yeah, I think also, you know, staying open to being surprised and being open to changing your mind. I think it was Oscar Wilde who had that famous quote, something like, "when I learn new information, I change my mind. What do you do?" But, you know, part of innovation, or thinking innovatively, is not just creating things. It's being curious, and it's about collecting a lot of dots so you can start connecting dots that others haven't seen. I think, you know, one of the things that might have caught us off guard in particular was this moment recently.

Um, you know, something like OpenClaw came outta nowhere and was really interesting, you know, as a prototype of an idea. At the same time, it began to undo 40 years of IT security in many enterprises. So it had that as an unintended consequence. But I think, you know, there's something here, which is, I think people are hungry for AI that does something, not just staring into this sort of, you know, black box. What were some of your takeaways, as you think about innovation and you start to see these kinds of things? Because there will be more of these sorts of events, I'm sure.

Cody Irwin: Yeah. Let's double click into that, 'cause it really is interesting, what OpenClaw kind of communicated to me. And to be fair, I bought a Mac Mini. I'll raise my hand. I've got one on my desk at home. My 14-year-old is setting it up with me. Um, he wants to do it on his own. I'm like, no, time out. Like, you know, he's got some problems.

Um, but it kind of showed a moment of desperation in the market, I think, in the way people reacted to what it is. And again, I think we've all kinda learned in life that extreme is interesting. We want some of that. There's something sensible in the middle that is sometimes hard to suss out and find. There was a lot of information out there about OpenClaw almost overnight. Posts were showing up everywhere. People were talking, again, throwing out numbers, like, "I'm saving this much money or doing this thing." They kind of went to the extremes. And there was something desperation-oriented, I think, in our psyche, where people kind of gave away the farm to a degree. They went to the extreme there, like you said, 40 years of security down the drain.

Um, but I think the bigger story behind that is why. Like, why were they willing to do that? A lot of the people doing this are smart technical people. Um, I listened to the All-In podcast a few weeks ago, where two of the hosts debated this. One's like, "I've got three or four of these running in my startup. We've named it Ultron," and the other guy's like, "I'm not allowed to touch anything." So two very smart entrepreneurs are arguing about this thing. But there's something there. There's some desperation moment that's triggering us to kinda give some stuff away and take things to an extreme.

Chris Willis: So yeah, a few things there. I gotta double click on this a little bit. Uh, I think you're right. I think there's maybe a shift, in that there is this desperation for some kind of access to a new kind of power, or a new kind of feature and functionality set, for which you're willing to give away potentially some security, maybe a lot of security.

You know, there was Meta's chief of AI safety. She tweeted out that she had OpenClaw going and it started deleting all her emails. And then she tried to stop it from her phone, had to run to her house. And then she said she was disassembling her Mac Mini as if she was defusing a bomb. That's a different kind of world you're living in, when you're just like, "okay, what are you willing to give up for this sort of mythical new future?"

Um, there's definitely something there, but I want to dive a little deeper, 'cause you sort of said this: you mentioned the middle. So I think, you know, on one side, there's ginormous enterprise software that many organizations have been building over the course of decades. They run big operations, big systems. They have to be that way. You have 25,000, 50,000, a hundred thousand people in your business. They have to be using the same data off the same database. They're going to have to use the same little buttons and features, right, to get work done. They can't all be, you know, dispersed and all doing their own thing. That would be chaos. That wouldn't work.

On the other hand, I think there's a long tail. I'll just call it the messy middle, where there are things that people have done historically with, like, Excel spreadsheets. And, you know, there was always that joke: "well, anything you see on an Excel spreadsheet could be another SaaS app, or a V-SaaS app, like a vertical SaaS kind of thing." I think that's where there's a lot of opportunity for some of this technology to be really beneficial. I think the OpenClaw example fits that, right? No one's saying, "oh yeah, we just used OpenClaw and it just replicated, you know, SAP." No, that's not happening.

Um, but there is a big messy middle of problems that, right now, we've always been using things like Excel to solve, right? We've created tiny little apps, and maybe those apps are only used for a few days, and that's it. Or maybe the app is so specific to the things you have to do that no one else would ever put the time in to code it up. Well, the unit of work and intelligence in coding has changed dramatically. It's basically free. However, you know, I'm still not at the point where I've been able to figure out how to use these tools like in that Twitter post, where I could just say, "create me the next Uber," then walk away for four hours, and it comes back fully working. I haven't had anything close to that kind of experience.

Cody Irwin: I've found the same thing. I use Claude Code semi-often. I love it. I was always a hack engineer, I was never great at that side of it, and it has given me a slight superpower: I can go much faster. But it requires iterations, it requires guidance. I don't just put a prompt in, walk away, and immediately it's awesome. It requires me actually working with it. And I've found that in a lot of ways, using these systems makes me a little more efficient. They really are still kind of in the assistant phase, in a lot of ways, in how I'm using 'em.

Chris Willis: So I would say another takeaway, you know, I think you're right. And I think, for everyone out there, the way you vaccinate yourself against it is focusing, again, on first principles. Um, things like, "okay, if this is kind of like a little superpower, well then what does that mean in terms of how you should start engaging with it and what it could mean for your career?" And I think what you're starting to see is that, used right, these tools really elevate human judgment, skill, and taste or creativity. Um, if you don't have any good ideas, you're not gonna get anything out of these tools, right? But I think there's a huge class of people who have been, you know, like yourself, sort of right on the edge of that technical frontier. And it's like, "if I was just a little bit better, like if I didn't have kids who need to go to school and eat, I could really focus on learning how to set up a React project, you know?" But instead I can't. Well, that's changed, right?

Cody Irwin: Yeah. The game's changed there, for sure. And back to first principles, one thing I've kinda wondered about, looking at the market, and I'd love your thoughts on this too: it is a people, process, and technology thing. There are companies that already had some mature processes. I'm amazed at how often I end up talking to companies about the process side of AI. 'Cause the reality is, if you're gonna apply an agent to a problem, it's gonna step into a process of some kind that you have in the company.

Chris Willis: Well, I think I was talking to one of our colleagues here, and he was talking about this sort of invisible box problem that companies have, which is they're not exactly sure where those boundaries are for the kinds of problems that should be solved.

Cody Irwin: Yeah. And I've come across what I feel like is a good framework for this, which is, you know, you obviously wanna start with low-risk kinds of problems to be solved. You wanna start with problems that are well understood, that you have data for, right? Those are the basic sort of principles. But I think it also helps to put things on a scale. We call it the verifier scale: on one axis you have difficulty of verification of whatever this problem is, and on the other axis, ease of generation by the models. So I know I'm kind of miming graphics here, but bear with me. There's a line, and if you cut that in half, there are certain problems... Like, let's say you ask ChatGPT for a diet that's going to make you live to 120. Will it give you an answer? Absolutely. It will be more than delighted to give you an answer. Is that answer right? Can you validate it? That's very, very difficult. That's way up there in terms of difficulty. Because, yes, if it says, "take up smoking and eat more bacon," then maybe you'd be like, "yeah, I don't think that's gonna be the one that's gonna work." You could be wrong. But, you know, in order to create the right amount of tests, you'd have to have many, many lifetimes, right? You'd never be able to do it. Now take something simple, like, "okay, write me a haiku." There's very low risk in the haiku, and it's kind of easy to validate: yeah, I like it or I don't. Now if you move over to generation, that's where you get into the vibe coding era, where vibe coding and a lot of these models are getting very good at generating code. Um, but they're generating everything from scratch, and so there's a lot of opportunity for things to not go quite right. They could open up security issues, right? They could be writing API keys into your code. Um, they may not solve the problem.

There are ways to take a problem that is maybe outside of that boundary and move it into an easier boundary. And that would be through using certified tools and certified data and components and abstractions and all these other kinds of things. So I think that kind of framework is useful because there's a whole universe of problems out there waiting to be solved. Not all of them are ideal for this moment.

Cody Irwin: And to that point, what I'm finding a lot, talking to companies, is that when it comes to even knowing what they do, they don't always know. The processes aren't there. It's tribal knowledge locked in someone's head.

Chris Willis: I think having processes defined is like a first-principles concept, yeah. Like, what do we do at the company, and how? It's even a "how" band, and a lot of "how" is tribal right now. Where, you know, Susan in accounting just knows how to do it. Has it ever been documented?

Well, I think you're opening up, and maybe this is more for our next conversation, but I think what you're opening up is the realization that, for ages, we've focused on building the data stack, right? We've spent trillions on databases and cloud data warehouses and ETL tools and connectors. And now we're realizing that there's something missing, something that now, all of a sudden, is becoming more essential. When you do have tools like this, the context, or the intent stack, is missing.

Cody Irwin: A hundred percent. And I think it fits into the tech side of that people, process, tech pyramid: we have to have things to back it up. And what's kind of funny, even there, is that we have spent trillions on data platforms. Um, you know, we're both part of a company that works with data platforms, and I'm amazed at how many companies still don't have those. It continues to surprise me. I think we assume everyone has kinda come across the bridge and has a good data platform. AI's demanding it. And again, I've seen a number of reports that call out that one of the biggest gaps is that data foundation. Some companies actually have some rigor in other areas, and they come and they're like, "well, how do we actually work with the data?" And to be completely fair, you even see this in the foundation model companies: when it comes to my knowledge or my data, they don't have a great answer yet for how to tap into that. The data foundations can be quite hard. So it's definitely kinda, I think, a crawl, walk, run when it comes to data. Companies have to get that foundation figured out. They need to have a solid place that they can rely on, that is governed. It doesn't necessarily have to be a single logical place, like "this is the warehouse." I think fabrics are gonna become more and more interesting. We need a rich layer for how we get data. But to your point, there's this emerging space that I think is actually what makes AI work, 'cause just having data doesn't actually solve the problem. I think some of us have been kind of misled. You can't do anything without it, but it's not the whole solution.

Chris Willis: Right. There's still some ingredient missing, because, like, what happens if you just drag a file and drop it into one of these engines? It will do some cool things, but it makes massive assumptions. There's this icky gray area between the general knowledge and your needs.

Cody Irwin: Yep, for sure. That's what these tools don't quite know what to do with, and so they guess. To your point, back to the example around "help me live to 120," it's gonna make some massive assumptions. It doesn't know me, it doesn't know who I am, my proclivities, my diet. I can obviously feed data in there, but it's never gonna be fully complete. Which I think pushes towards what you talked about, Chris, this context and intent need that's emerging. Tell me more about that. What have you been hearing and reading when it comes to context and intent, and what could that be?

Chris Willis: Well, I've definitely been hearing a lot more, and this has been super recent, about, you know, context graphs and context engines and things along those lines, which obviously I think make a lot of sense now. I mean, those kinds of terms, knowledge graphs, context graphs, have been around for a long time, but I think they were more like spinach technology. It was like, "oh, this is a good thing to have. It'll make you stronger," but it wasn't really obvious why you needed it. Now, I think, to your point, these new technologies are forcing a reexamination of this kind of information. Because data, you know, that's your raw material; to really turn it into a strategic asset, you need to keep adding layers to it. You need to understand things like semantics and ontologies, and then how all these things relate. And then, to your point, there are other aspects of the organization that aren't written down anywhere. And the models do not know your business, right? So a lot of companies, I think, are struggling because they're investing a lot in AI tools, but it's really just an expensive technology that makes risky guesses, 'cause it doesn't know their business, it doesn't know their values, it doesn't know exceptions and goals, and it doesn't have access to OKRs. And I think what's happening is people are just assuming, "well, it did really well helping me with my kids' homework. Why can't it help me run my business better?" That's just not an extrapolation I think is a good one to make at this point. I think there's gonna be, hopefully, a reconsideration of a lot of the kinds of companies that are gonna build the infrastructure that does help unlock a lot of that capability. Right now, I think there's still too much hype to stare through.

Cody Irwin: Well, and I think it's kinda back to that hype, and where we started this conversation. I think that is what people are hoping for. You know, I've got a son that plays lacrosse and loves pink, and I wrote a poem for him about lacrosse using ChatGPT. And it was fantastic. It was amazing. It was magic. I probably could have written it myself, but it would've taken me a long time to do. I think people have made that logical leap where they're like, "I did this. I saw some magic. I believe." They're tapping their heels together, saying they believe, and they're hoping that it can apply to their business. That seems to be what's causing that desperation in the market.

Chris Willis: And that may actually be the topic, you know, obviously we're coming close to our time for what we wanna talk about today, that we should address as we're going forward. Like, how do we move from point A to point B? 'Cause it will happen, I think, in time. Over time we'll see efficiencies unlock in the market and new opportunities.

Cody Irwin: My personal hope is that a lot of the menial work I'm required to do in my job, I can ship off to agents. I would love that.

Chris Willis: Yeah. That would be incredible. I do too. I think if nothing else, if we can just automate away a lot of the menial work so you can focus on what makes work fun, that would be fine with me. That would be great. And that feels like the next evolution of the market. That feels like what's gonna happen next. I don't think it's gonna be, "okay, now we're all suddenly retired and we're gonna go live on universal basic income."

Cody Irwin: And living in data centers and space together. I don't think that's the logical next step. Yeah. The logical next step I think is this where we can be better humans, hopefully. Like we can be better at creating and we can focus on things that bring us purpose and meaning in what we do.

Chris Willis: I love that we started this off in a pretty dark place, but I like where we ended up. So this is good. I feel more hopeful and it's a great therapy session for me.

Cody Irwin: Yeah. I almost feel like we need to title this "The AI Counseling Corner" or something like that. Because I think, yeah, we're seeing the extremes of rhetoric, and it causes panic. It causes consternation. I had a drive with a brother-in-law of mine about a month ago. We had like a two-and-a-half-hour drive, and knowing I'm involved in AI, he's like, "tell me what's happening. What should I be concerned about?" And it turned into a deep philosophical discussion, which tells me that all of us are kinda just wondering, "what's happening? Where are things going?" I think we have a hopeful future, and I think there is a path that's gonna emerge that's a very sensible path, one that drives towards value, drives towards meaning. It's not, "everyone go buy a Mac Mini and install OpenClaw today and let it run your life and hopefully not ruin it." That's not step two.

Chris Willis: We missed a few steps there. The foundations matter.

Cody Irwin: I think this is a moment for us as organizations, as human beings to really index on first principles like we talked about. Like I think it really is a moment for us to be better, to be better at how we think, how we operate, how we interface, the tech we use. This is a massive disruption moment that should drive us to be better. Hopefully that comes from it.

Chris Willis: That's a beautiful sentiment and we're right at time. So Cody, I can't thank you enough. I love chatting with you about this.

Cody Irwin: Love it, Chris. I was beyond excited for the conversation today. I can't wait for the next one.

Chris Willis: Awesome. Thanks everyone.

Cody Irwin: Thanks everybody listening. Thanks.

Speakers

Cody Irwin
AI Adoption Director, Domo

Cody Irwin is the AI Adoption Director at Domo, where he partners with organizations to accelerate AI-driven transformation and deliver measurable business impact. He brings a unique blend of technical expertise, product leadership, and business strategy from roles at Google, Domo, GUIDEcx, PwC, and Backcountry.com. Throughout his career, Cody has helped companies modernize by applying data and insights to core business processes. Today, he leverages that experience to help leaders confidently embrace generative and agentic AI, unlocking new efficiencies, growth opportunities, and competitive advantage.

Chris Willis
Chief Design Officer, Domo

As Domo's chief design officer and futurist, Chris's hyper-focus on combining data, technology, and emerging trends in innovative ways helps make Domo an indispensable platform for its customers. He has nearly three decades of design leadership experience in web, mobile, and data visualization. And as one of Domo's earliest employees, he's involved in every aspect, from initial design and strategy to execution, of building and developing solutions that solve even the most complex problems faced by customers.

Prior to Domo, Chris co-founded HOUR Detroit magazine and Footnote.com (now Fold3.com), which was acquired by Ancestry.com for $27 million. Before moving into technology, he was an award-winning illustrator, journalist and author with multiple published works to his name.

Feeling the pressure of AI hype? You're not alone. While many companies look for a quick "AI paint job," Chris Willis and Cody Irwin share a more grounded perspective.

Join them as they reveal why a strong, governed data foundation, a cultural shift within your organization, and getting back to "first principles" are what truly unlock AI's potential. They will discuss how to move past risky guesses by using AI as a powerful assistant to elevate human judgment in everyday business challenges, what they call the "messy middle," and why a solid "context stack" is now more critical than ever.
