Headline Sponsor
Fulfil
The ERP built for ecommerce

Fulfil is the only ERP designed for DTC & wholesale brands. Go live in 8–12 weeks with fixed-price implementation. Built for Shopify, Amazon, and 400+ 3PLs — not retrofitted from legacy software.

Get a Demo
AI Operators·Podcast·55m·May 7, 2026

The AI Lesson From a 150-Year-Old Manufacturer

“We’re past the adoption phase. We’re in the pit of disillusionment.” How does a 150-year-old, $2B vacuum company adopt AI faster than its startup competitors? Matt Kruer (CIO, BISSELL) joins Craig Foldes (Founder, ChatWalrus) and Ben Flohr (Co-Founder, Scale) for a breakdown of the biggest week in AI infrastructure: $700B in combined AI CapEx from Alphabet, Microsoft, Meta, and Amazon, plus one AI agent that wiped an entire production database in 9 seconds. Then Matt unpacks the “consumer insights brain” his team is building to unify 150 years of siloed data into one AI-legible source of truth, debates the real cost of enterprise AI licensing, and makes the case for routing to open-source models when frontier pricing gets out of hand.

Made Possible by:
Richpanel https://9ops.co/richpanel
AfterSell https://9ops.co/4i3bb5
Operators Newsletter https://9operators.com/

Chapters
00:02:55 Agent nukes database
00:07:56 60 Salesforce MCP tools
00:15:00 213 ads scraped in 1hr
00:20:03 Stripe's $2T bet
00:24:14 $700B in AI CapEx
00:30:06 Meet Matt Kruer
00:33:03 0% to 85% adoption
00:34:31 The pit of disillusionment
00:37:30 Building the consumer brain
00:44:10 "Will AI take my job?"
00:47:53 AI ads fooled everyone
00:52:25 Base vs. reasoning model

Transcript

Craig Foldes
00:06

Welcome to episode 7 of the AI Operators. I am so stoked to be here with you guys. We are here because of the teams at AfterSell and Richpanel. Ben, it has been a crazy week. How you living, man?

Ben Flohr
00:24

Doing great, man. I'm working on Craig MCP. I want to connect to you through an MCP so I can work with my favorite agent rather than calling you on the phone.

Craig Foldes
00:34

I appreciate that. I love our talks, Ben. I've got a question for you though, as we seek to drive credibility with the audience, and I'm being real about this, has the way that you have leaned into AI and been so exposed to it, real talk, does it make you more or less confident that we are living in a simulation?

Ben Flohr
00:53

Oh man, how much time we got to talk about that one? That's a heavy topic. But yeah, I'm a, I'm a big fan of the simulation theory. Yeah.

Craig Foldes
01:02

TikTok works, man. It has been all that I am seeing on my feed, and I'm like, oh man, maybe this is all a hologram. So, all right. We have got a lot to cover, and no simulation involved. We are going to talk about the Magnificent Seven earnings reports and what they mean for the AI labs that will win going forward. We are going deep on MCPs and OPPs, yeah, you know me. We are going to talk about Meta's new MCP, Salesforce's new MCP, and Stripe's new MCP. And most importantly, we are joined by a pioneer and a leader in the AI-for-enterprise space. Matt, who is the Chief Investment Officer at BISSELL, will walk us through the ways that 150-year-old legacy CPG company is transforming to drive AI adoption across the organization. And then Ben will define what a reasoning model is. It is a banger of a show, and we can't wait to get going.

Sponsored
01:55

[Sponsor Content] Some brands are cutting their support tickets by almost 50% with AI. I've actually been recommending Richpanel to my community for the last 6 months, so I'm really happy they're the headline sponsor for AI Operators. What they've built is pretty different from most AI support tools. Most tools are basically just AI agents: you train them once and hope they behave. Richpanel built what they call a Boss AI. You just tell the Boss AI your values, principles, and desired outcomes, and it handles the rest. The Boss AI reviews thousands of conversations every week, scores them, and continuously improves them to get your desired outcomes. Brands using Richpanel have cut ticket volume by 45%, Jones Road actually cut their support team size in half, and both were live in under 2 weeks. Richpanel even guarantees a 30% ticket reduction in 60 days or your money back. If you want to see what that could look like for your support team, go to richpanel.com/demo.

Craig Foldes
02:55

The prime example of agentic AI going wrong, and what every operator needs to do about it this week. Okay. So on April 25th, Jared Crane, the founder of the car rental software company PocketOS, posted a thread on X about a Cursor agent that wiped out his entire database; nearly 7 million views on this tweet alone in just a few days. Railway CEO Jake Cooper said that what happened was their agent "vibe deleted" PocketOS's database, which is apparently now a category that we're going to have to talk about. Just kind of like how I vibe forgot my wife's birthday present last week. Ellen, I am, once again, sorry.

Ben Flohr
03:33

All right.

Craig Foldes
03:33

So here's the story, right? PocketOS runs car rental businesses across the US. This was not a side project. It is in production, and their service stayed down for just about 30 hours. A few things went wrong. First, the agent, which was running on Claude Opus 4.6, a frontier lab's flagship model, hit a credential mismatch in staging. So instead of the agent stopping and asking, it decided, like any good employee would, to fix the problem itself. Okay, cool. Second, it scanned the database, found an API token scoped for a totally unrelated task, and used that to issue a single delete command on a Railway volume. Third, Railway stores volume-level backups inside the same volume. So when the volume went, the backups went with it. And just like that, in 9 seconds, the entire database was gone. So Ben, more and more folks are building agents on their own. What the heck happened here? And what guardrails do operators need to put in place?

Ben Flohr
04:34

Look, man, this story went viral mostly because of the headline. If you only read the headline, it's really easy to get scared, right? Like, AI can decide to wipe your production database. The reality is it's really not that simple to let AI do that. What happened to PocketOS was a couple of reckless practices that compounded together. It's not Claude or Cursor's fault, in my opinion. It's two really huge failures that should have never existed in the first place. The first one: the agent had access to a token with permissions that allowed it to delete the database. It was meant for a completely unrelated task, but with blanket permissions it just sat in a file accessible to the agent. That's the first rookie move. The second one: Railway, the cloud infrastructure provider that PocketOS was running on, kept their backups inside the same volume as the live data. So when one got wiped, both were gone, and that's also a big no-no. So, you know, it's like giving a freelancer the password to your Shopify admin, your Klaviyo, your ad account, full access, and your only backup of your customer list lives inside Shopify itself. One bad afternoon, one mistake from that freelancer, and it takes the whole thing down, and you have no backups. So this wasn't rational, responsible operating by these guys, in my opinion. The moral for operators is pretty simple. Let a professional developer set up any AI workflow that goes near a live or production database or codebase. Everyone else, marketing admin, ad account insights, creative, that's where your team should be deploying agents. So it's not so much about an AI that is so powerful and defiant it can destroy your business despite your best efforts. It's really about being a responsible operator.
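Ben's two failure points, a blanket-permission token and no check on destructive commands, can be made concrete. Below is a minimal sketch of the kind of policy layer a developer might put between an agent and a database; the function names and the blocklist are illustrative assumptions, not PocketOS's or Railway's actual setup, and the real first line of defense is a scoped, read-only database role, with a wrapper like this as a second fence.

```python
import re

# Statements an agent-held credential should never be able to issue.
# In production this policy belongs in the database role itself (a
# read-only user with no DELETE/DROP grants); this wrapper is a backup.
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER)\b", re.IGNORECASE)

def run_agent_query(sql, execute):
    """Run an agent-issued SQL statement, refusing destructive commands."""
    if DESTRUCTIVE.match(sql):
        # Stop and escalate instead of letting the agent "fix it itself".
        raise PermissionError("blocked destructive statement: " + sql.split()[0])
    return execute(sql)

# Stand-in executor so the sketch runs without a real database.
def fake_execute(sql):
    return "ran: " + sql

print(run_agent_query("SELECT * FROM rentals", fake_execute))
try:
    run_agent_query("DELETE FROM rentals", fake_execute)
except PermissionError as err:
    print("refused:", err)
```

The point is not the regex; it is that the refusal path exists at all, so a credential mismatch turns into an escalation instead of a nine-second wipe.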

Craig Foldes
06:32

I think that there is so much here, but the one thing that operators, particularly startup operators, need to take from this story is structure. There is a balance here. You wanna move quickly, you wanna empower your teams, but at the same time, you've got non-technical people, myself included, who are building really advanced stuff that opens you up to some security risks or some problems here. And so as you start to dive deeper on these agents and more advanced workflows through connectors, I cannot recommend this highly enough. Ben is right. Don't let a non-technical person set up advanced API integrations or access to live databases or codebases, because ultimately you're going to end up in some trouble. I will say this story in particular has a happy ending: the team was able to restore this. They worked with the CEO of Railway, they got it back, everybody's happy. And Jared was able to satisfy a lot of his dissatisfied customers over those 30 hours of outage by backfilling from Stripe payments history and supporting them there, like any great company would. Ben, do you have any closing thoughts on this before we move on to the next story?

Ben Flohr
07:36

No, I think this is, you know, a symptom of vibe coding. This is going to happen, right, when you vibe code in a live production database or codebase. So make sure you don't do that without having an AI engineer set some guardrails. That's all.

Craig Foldes
07:56

Vibe coding, vibe deleting, and vibe birthday-gift forgetting. On to the next. Salesforce has gone headless, and the rest of the industry will follow. We are about to dive deep on 3 stories tied to MCP. Salesforce has declared that the API is the new UI. So what does that mean? They had their developer conference just a couple of weeks ago, on April 15th, and Salesforce announced more than 60 new MCP tools, which we learned about last week, and 30 preconfigured AI skills for agents, all in a single morning. Marc Benioff opened the keynote by saying "our API is the UI." For a company that built their business on logging into a browser, this is a full-on reversal. And by the way, Salesforce did $40 billion in revenue last year. They are the dominant CRM in the world. So when they make this change, that's a pretty big shift. So they did 3 things. The first is the MCP layer: 60 new tools, which means that every single Salesforce object is now a function that any agent can call on by name. The second is the skills layer: there are now 30 preconfigured workflows, so the agent doesn't have to reinvent the wheel. It just picks a skill and runs. And the third is the tools: Claude, Claude Code, Cursor, Codex, Windsurf, every single major coding agent and harness on the market can now operate Salesforce directly. No clicking through tabs, no logging in, no human in the loop. The agent just runs the system. It's kind of like we talked about with Shopify 2 weeks ago and with Klaviyo last week: this is the new normal, and here is what it looks like for operators, and it's kind of crazy. So Jason Lemkin, who many of us know runs SaaStr, posted in response to Salesforce's announcements that he has been using the tool this way for 6 months. He built an agent called 10K that runs SaaStr's entire go-to-market on top of Salesforce.
So every morning, 10K pushes a standup into Slack outlining its pipeline movement, what deals it closed, and what specific tasks it is assigning to each human on the team. So the agent that Jason has built is telling the humans what to do, and not the other way around. And it's working, right? Lemkin's argument is that this structure actually makes Salesforce even more important, because it's the brain operating the entire system. So Ben, what is going on, and what is the move for folks who are on HubSpot or Salesforce operating these brands? What does this all mean?

Ben Flohr
10:31

Look, this is really the playbook for SaaS today. If they want to survive, they have to make sure they have easy connectors into people's favorite agents, right? We've seen this before. We talked about it last week: Shopify's AI Toolkit, which allows you to use Claude or ChatGPT inside Shopify. There's Klaviyo with custom skills and Composer. There's the Meta API that we're going to cover later on today. So they open their entire backend to AI agents through MCPs and APIs. What changed for operators is just like Benioff said: the dashboard is not the product anymore. It's really about the API surface or the MCP, right? So the non-obvious move for operators now, when you're assessing a new SaaS, is two questions that you have not asked before and need to ask now. Number one, can my agent do everything your UI does? And number two, what is the rate limit cost at agent speed? Meaning they are going to limit how much data, how many questions, how many tokens your agent can use digging through the MCP or the API inside the SaaS, and you need to understand what those limits are. Most operators have never asked either question in a sales demo.
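The flip side of Ben's second question is the cost control Craig raises next: at agent speed, an uncapped connector can burn through metered calls fast. A minimal sketch of a per-day budget an operator might wrap around an agent's MCP or API calls; the cap and per-call price here are illustrative assumptions, not any vendor's published limits or pricing.

```python
from dataclasses import dataclass

@dataclass
class CallBudget:
    """Cap how many metered calls an agent may make per day."""
    max_calls: int
    cost_per_call: float  # assumed metered price, in dollars
    calls_made: int = 0

    def spend(self):
        # Refuse the call once the daily budget is exhausted, so the
        # agent escalates to a human instead of silently running up a bill.
        if self.calls_made >= self.max_calls:
            raise RuntimeError("agent hit its daily call budget")
        self.calls_made += 1

budget = CallBudget(max_calls=1000, cost_per_call=0.002)
for _ in range(10):
    budget.spend()
print("spent so far: $%.2f" % (budget.calls_made * budget.cost_per_call))
```

Even a crude counter like this turns "what is the rate limit cost at agent speed?" from a sales-demo question into a number you watch every day.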

Craig Foldes
12:01

Yeah, I think, kind of building on this, a couple of things. Maybe we should hope that companies like Salesforce will impose rate limits, because the opposite is also true: if it's unlimited, then maybe you'll eat through and spend a ton of money on token consumption or usage that isn't necessarily as productive. I know that's a bet that Microsoft is making and announced on their earnings call yesterday. So there's certainly a balance there. But I want to close with one thought based on my time at Crocs. Ultimately, you know, different vendors like Salesforce would spin up an AI solution, and we would say, great, we're doing AI because we're using this Salesforce tool. But the reality is, what you are seeing change is everything we've been talking about. If Claude is the sun in the solar system, Salesforce is one of many planets that orbits around it. And what happens is, if you are just using Salesforce's tool, you only have that context within the CRM. The beauty of what these companies are enabling through MCP is that they're bringing it into Claude, where you no longer have to leverage just Salesforce's context. You have the context of other marketing campaigns, of other strategic work. So you're orchestrating and conducting amongst a bunch of different knowledge sets. You're not just isolated in what Salesforce or HubSpot or Klaviyo has. Okay. I promised you 3 stories on MCPs. The second is Meta. The third is Stripe. So let's dive into Meta opening their ad platform to your AI agent. All right, here's the deal. On April 29th, Meta's CFO, Susan Li, used their Q1 earnings call to announce that Ads AI Connectors is now live and in open beta. So you can now point Claude, ChatGPT, or Claude Code straight at your Meta ad account and have it run your campaigns for you. Li framed the bigger shift directly.
She said, look, we are entering a world where employees are managing agents to execute tasks and build products. That is where we're headed. Meta did close to $60 billion in revenue last quarter, which is crazy. Up 33% year over year, their fastest growth rate since 2021. They are crushing it because of AI. So what is going on? There are 3 pieces here. The first is the architecture. The MCP server is in open beta, which means that every Meta ad object, campaigns, ad sets, creatives, audience targeting, just like in Salesforce before it, is now a function that any agent can call on by name. The second is the tools: Claude, ChatGPT, Claude Code, every major coding harness on the market can just operate your ad accounts directly. And the third, which is most exciting, is the unlock. You don't need developers or API integrations or agencies sitting in between you and your media. You just log in, point your brilliant agent at it, and the campaigns run, bidding, creative, audience targeting, all end to end. So Ben, you run a multi-billion dollar brand, you spend a lot on ads. What is happening here, and what does it unlock for teams like Scale?

Ben Flohr
15:00

Yeah, so another day, another MCP connector story, right? The headline here is you need to go beyond just asking Claude to pull a report. That's table stakes. The real unlock here is the creative workflows and automation that you can create with this MCP. Meta's algorithm rewards creative volume and diversity, right? The more variants you feed it, the better Andromeda performs for you. The hard part has always been the bridge between what's actually working in your account and what the next 20 ads should look like. That bridge used to require either a marketer who pulls that data and those insights manually, or a developer who automates it with a Python script to pull your top creative, feed the data into Claude to create a brief, and then pipe that brief into Midjourney or Runway. This isn't new; plenty of operators have been doing this loop for months now. The Mercy Larry team, for example, scraped 213 competitor ads through Claude Code back in March in about an hour, with a full breakdown of hooks and creative angles, ready to drop into a brief. Alex Nyman's been publishing a DTC-focused playbook on this exact pattern for a while now. So the MCP just makes the whole thing easier and more accessible, without a developer in the middle. Now your agent, via this MCP, pulls what's working, drafts a brief grounded in real live data, sends prompts to whatever gen tool you're using, and drops the new variants back into your account for approval. This is meaningful for an e-com brand spending $2 million a year on Meta, or even much more. Ad prices on Meta have been up 12% year over year. Creative volume and diversity is the only real hedge. So if your agent can produce 30 informed variants a week instead of your team manually shipping 6, the math compounds quickly after a while. You can pay an agency to do that, or you can just connect some agents to the new MCP and you're like 80% of the way there.
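The loop Ben describes, pull top performers, turn their hooks into a brief, queue new variants, can be sketched end to end. The ad records and helper names below are illustrative stand-ins, not Meta's actual MCP schema; in a real setup the `ads` list would come from the connector and the brief would go to whatever generative tool you use.

```python
def top_creatives(ads, n=3):
    """Rank ads by return on ad spend (ROAS) and keep the top n."""
    return sorted(ads, key=lambda a: a["roas"], reverse=True)[:n]

def draft_brief(winners, variants=20):
    """Turn winning hooks into a brief a generative tool could take."""
    hooks = ", ".join(a["hook"] for a in winners)
    return "Produce %d variants riffing on these proven hooks: %s" % (variants, hooks)

# Sample performance data standing in for a real MCP pull.
ads = [
    {"hook": "before/after demo", "roas": 3.1},
    {"hook": "founder story", "roas": 1.4},
    {"hook": "UGC unboxing", "roas": 2.6},
]
print(draft_brief(top_creatives(ads, n=2)))
# Produce 20 variants riffing on these proven hooks: before/after demo, UGC unboxing
```

The value of the MCP is that the first step, the data pull, no longer needs a developer; the ranking and briefing logic stays this simple.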

Craig Foldes
17:22

I just want to echo what you've said: the first and last things you said, I couldn't agree with more. This is not just, oh cool, now I'm going to use Claude to query my ad performance. That's not what this is. This is very different. This is leveraging and harnessing superintelligence to do everything you just outlined: new creative strategy, new hooks to give direction to your team to move faster, based upon what the agent sees is working and the opportunities it recognizes based on your brand, based on performance, based on your competitors, et cetera. And that is going to require a new workflow, a new way of working, a new way of operating and executing as a team based upon the direction the agent is giving you. The companies that I know have been the most successful in leveraging this stuff are the ones that are breaking down old ways of working and traditional structures to support this new kind of tool. So not a new way to query your data; quite the opposite, a brand-new way to get to speed-to-insight and net-new creative, net more efficient ads as a result. All right. Our third and final MCP/OPP, yeah, you know me, story. We covered Salesforce's conference; now let's talk quickly about Stripe. They held their annual Sessions conference in San Francisco last week, which, by the way, we have got to get these guys out to Denver. Come on, Mayor Johnston. Let's make it happen. They announced 288 new products and features in a single morning. Patrick Collison opened the keynote by saying AI is the biggest platform shift for the economy since the internet, and in the not-too-distant future, agents will account for most transactions online. Keep in mind that Stripe processed nearly $2 trillion in payments last year. So when he's saying that, that's a pretty big deal. So what did they announce?
The first is Link, their consumer wallet, which now has 250 million users globally and which I used yesterday. It's now open to AI agents: the agent gets a one-time-use card for every single transaction and never sees your real payment info. The second is that, through a new Google partnership, brands like Fanatics and JD Sports can now sell directly inside Gemini and Google's AI Mode via Stripe. And the third is that Stripe Radar blocked nearly 3.3 million risky AI purchases last month alone. So one in every six AI transactions is now a bad actor. That's actually a really big story. Okay. And every other payments giant, the card networks, Microsoft, Meta, PayPal, they all shipped the exact same architecture last year. So Stripe isn't necessarily a leading indicator here. So Ben, what is happening here? What's going on? Walk us through it.

Ben Flohr
20:03

Yeah, look, this is the shift toward agentic commerce, toward the front page of the internet, the entry point for traffic, shifting toward AI, right? For the past 20 years, every funnel was set up so the seller controlled the place where the decision got made. The buyer came to your store; you ran the page, the recommendations, the upsell. That deal is breaking. The decision now starts wherever the buyer already is, which is inside ChatGPT or Gemini these days. And by the time anyone gets to your store, the choice is mostly already made. That's what Stripe's 288 announcements are for. They're basically trying to build the rails, the infrastructure, for that world. 250 million consumer wallets is just the beginning. They're making a big bet that the volume of transactions is going that way, and I agree with them.

Craig Foldes
21:02

I want to close. You know, Ben and I obviously live in this space, and there are a couple of authors that we really admire. One for me personally is my friend Nate. He runs Nate's Substack and is just a brilliant thinker in this space, and he had a wonderful take on this that I would recommend everybody go and read. Ultimately, it's sort of what Ben said: the transaction layer might be leaving the store. Buyers used to come to your site, see a photo, read copy, decide if it was worth the cost of admission to buy the product. But that choreography kind of breaks entirely if it's an agent that is doing the work. So what then is the role of brand? It doesn't totally disappear. According to Nate, it migrates: it goes from the homepage into your buyer's memory. But the challenge here is that agents don't really feel brand loyalty. The agent carries the context of what the person it is working on behalf of likes. So, hey, Craig likes this coffee roaster. He avoids that airline, Frontier (Frontier the airline; never, ever fly on it). He trusts this shoe brand's return policy, you know? So ultimately, the memory really only matters insofar as the agent is able to act on it. Brand puts you in the consideration set, but callability is what the agent is interested in when it transacts. A lot of brands are nailing the first, brand and storytelling, but they don't yet have the second in place. And I am so excited to see what comes over the next year or two in this space. All right. That is all you need to know about MCP across Stripe, across Salesforce, across Meta, and the ways that these tools are just becoming the new operating system for work. All right. On to the next.
All right, Ben, we've got hyperscalers, we've got frontier labs, and I want you to walk us through all of it. So let me set the stage. On April 29th, 4 of the Magnificent 7 reported Q1 earnings on the same day. Alphabet, Microsoft, Meta, and Amazon: combined revenue last quarter, just the last 90 days, $430 billion, up double digits across every single name, but combined 2026 CapEx guidance, $700 billion. So AI is driving the gains, and AI investment is also driving the spend. We know that some of these companies are significantly compute constrained. And ultimately, the strategies that these companies are taking are not really converging. Microsoft and Amazon are buying lab proximity. They're working closely with the teams at OpenAI and Anthropic, pouring billions of dollars into the labs they think will win. Google, on the other hand, is going a different way. They are building it all in-house: models, TPUs, hardware, the full stack. Meta is throwing CapEx at infrastructure and acquiring labs like Manus. And then there is a fifth name that I know you'll tell us about that's not on any of these calls because they're not public yet: xAI. They are sitting on more compute than they have users for, and they just potentially acquired Cursor in a deal. So Ben, I'm Jesse, you're Walter White, go cook, brother. What is going on here, and what is going to happen?

Ben Flohr
24:15

Look, again, I don't have a crystal ball, but if I were a betting man, I would say that it's just not looking great for OpenAI and Anthropic right now. They are compute constrained, like you said. They have obligations to their investors. OpenAI missed their revenue targets. OpenAI is hurrying to get to an IPO so they can get some money from retail and pay for all their data center obligations and their compute. And, you know, I wonder what's going to happen in the future here, because looking at Google, they may not have the best models at the moment. However, they have the best engineers, they have the infrastructure, they have the compute, they have the hardware. They're not dependent on NVIDIA like the other companies; they have their own TPUs. And they have the balance sheet; they don't need to go and raise money like Anthropic and OpenAI. So there could be a situation in which, in the future, one of these companies, OpenAI or Anthropic, gets into financial trouble and Google can pick them up for pennies on the dollar. That could be a valid scenario. And then xAI has the opposite problem of Anthropic and OpenAI: they have too much compute and not enough users. Elon is building the biggest cluster of H100s or B200s ever, and they don't have enough users. So there could be some sort of collaboration or acquisition or merger there, probably with Anthropic. I don't think Elon and Sam like each other very much these days. So, you know, that's what it's looking like from my seat today, but it's going to be very interesting to see where this goes.

Craig Foldes
26:14

What's a B200 or an H100? And then I've got one follow-up question for operators.

Ben Flohr
26:20

The H100 is the Hopper, the previous generation of NVIDIA GPUs, and the B200s are the Blackwells. I have a Blackwell here, actually, but it's not the same industrial-grade one; I have the home version. Blackwell is the current generation, and then they're releasing Vera Rubin, which is the next generation of NVIDIA GPUs.

Craig Foldes
26:46

So it's compute. This, from a corporate strategy standpoint, is so interesting. But Ben, last week we talked about operators making a bet on one and then 3 months later falling behind, right?

Matt Kruer
26:57

So.

Craig Foldes
26:57

Is this a situation in which, you know, you're making a bet now and that's who you're with, or how should an operator kind of like ride this wave? I've got a thought, but I'll pass it to you first.

Ben Flohr
27:08

Yeah, I think what you said last week is valid. There's no bad choice; just get started. If you work with one of the frontier models today, you're going to do just fine. Whether it's Claude, ChatGPT, Gemini, even Grok, today it's good enough to do 90% of the work that we give to AI. I think it's really just about how you roll out AI in your organization and get people used to working with it. That's the key for me here. It's not so much which tool you use. If you want to nerd out about which tool is good for what, we can; I don't know if we have enough time for that today. But I don't think the juice is worth the squeeze to shift from one company to another just to get an incremental increase in performance every month or so when one of the companies releases a better model.

Craig Foldes
28:10

I agree, and I'll point back to something that Matt at Pela said on our last episode, which is that this is a bit different. Technology is composable now, right? So if you're picking one, it's not like you're investing in them for the 3-year horizon. What you're doing is learning to use and interact with Everyday AI, with superintelligence. And guess what? If you decide as a company that you're going to switch from Claude to ChatGPT or to Gemini over time, that's as simple as doing 2 things. It's prompting the Everyday AI tool to share its memory with you so that you can upload it into the next model you go toward. And it's setting up a couple of new connectors so that the context is there. This is not like I'm committing to ChatGPT in blood. This is, hey, my company is learning to use this tool as their teammate, and then we'll ride the wave of that one. And when it comes time to change, maybe a year or two from now, we will.

Sponsored

[Sponsor Content] Most brands have deployed AI across acquisition and creative, but the confirmation page, the highest-intent moment in all of commerce, remains a dead end. Rokt AfterSell is the design system for the full post-purchase flow, unifying cart, checkout, post-purchase, and confirmation into one intelligent system. Rokt Thanks brings real-time AI decisioning to the confirmation page, powered by Rokt Brain, which analyzes 1.95 trillion data points across 7.5 billion transactions annually to determine the next best action. The result: instead of generic offers, customers see timely, relevant partner offers from 500+ brands like Hulu, HelloFresh, and Venmo, generating pure profit per order at scale with zero ad spend and zero customer data shared. Brands like Sephora, Nordstrom, True Classic, and Jones Road have generated over $1.5 billion in pure profit this way. Activate Rokt Thanks at aftersell.com/operators and unlock AfterSell's full optimization suite, including cart, checkout, and post-purchase, all completely free.

Craig Foldes

I am so incredibly excited to be joined by my friend Matt, who is also kind of a pioneer in the space as it relates to AI adoption at enterprise brands. So Matt, why don't we first start: tell us who you are and where you work.

Matt Kruer
30:20

Yeah, thanks, Craig. Happy to be joining you guys today. So, Matt Kruer, I'm the Chief Investment Officer at BISSELL. BISSELL is about a $2 billion global residential cleaning products company. We focus mostly on floor care products, and in my role as Chief Investment Officer, I oversee several aspects of the business. I'm responsible for the finance department; I was the CFO for about 4 years prior to becoming the CIO. I oversee IT, and then I also oversee some of the investments and acquisitions that we've made over the years. So as you think about the Chief Investment Officer role, I'm overseeing both our internal investments, in terms of how we allocate our capital and our resources across our different businesses, and also helping identify external investment and acquisition opportunities.

Craig Foldes
31:11

So when you hear CFO and IT at a legacy enterprise brand, what you often hear as it relates to AI is no, right? Like, yeah, we're not going to do this, and a bunch of reasons to say no to change and to pursuing things. I want to ask you a two-part question. You've clearly taken a different approach. So the first question is going to be tied to broader AI strategy and how you're thinking about it at BISSELL, specifically in your role, and then I'll ask a follow-on tied to use cases. But how are you as a CIO starting to think about this internally?

Matt Kruer
31:43

Yeah. So what we've said frequently is we need to think about AI, and more broadly just technology in general, in service of our strategy. We have an internal strategy that we're trying to execute on to win in our market, and we have certain capabilities that we need to build: a lot of the same capabilities that your listeners would have, right? In terms of being able to create quality content, and to create a certain volume of content at a certain velocity. And if tech is an enabler of that capability, then we should be leveraging it and investing in it. And so that's how I look at all technology, not just AI: does it serve the purpose of our strategy and build the capability that we need to win?

Craig Foldes
32:31

I'm so lucky we spoke on the Operators Podcast together a couple months ago, and I stole a framework from you off that. You talked about the imagination gap, and people not knowing what's possible until they see somebody else in their role. So I've run with that. And now what I'm going to take from you is AI in service of your strategy. So simple, but so obvious. So Matt, what are ways that across BISSELL you've leaned into AI in service of your strategy? What are some use cases you've deployed?

Matt Kruer
33:04

Yeah. So early on we focused a lot just on awareness and adoption. We were very aggressive in giving employees access to the tools they needed to experiment, whether it was Copilot, ChatGPT, or Claude, and then we also brought in different point-solution offerings that are functionally or domain specific to help improve specific workflows. And it was really just a lot of experimentation to get people comfortable with, like you said, understanding the art of the possible, giving some very early and basic examples of what the tools can do. And I can say that's worked pretty incredibly well. We have pretty wide adoption of the different LLMs and different tools, and we can go into how our use of tools has changed since even you and I last talked in January, which it has, very dramatically. And so I think we're past that adoption phase of the curve, and we're actually in this bit of the pit of disillusionment. And that's where I'm spending a lot of my time: trying to figure out how we get past that into the next phase of real scale and enlightenment as it relates to these tools.

Craig Foldes
34:26

Go on, tell us more about the pit of disillusionment. I'm, I'm curious.

Ben Flohr
34:29

Keep talking.

Matt Kruer
34:31

Yeah, well, I think a lot of what we're seeing within the org is a lot of what I'll call personal productivity hacks or solutions. People are building one-off skills that are interesting, or using AI to do one specific task once, but they're not building it into repeatable, enterprise-scalable workflows that can be shared across their function. And that's something I'm incredibly focused on solving: how do we get to that phase of enterprise-level workflows that use these tools? And that's really hard. I think what we've realized is that context and integration are by far the most important things right now, more so than the models or which models you're using. It's helping create that context for the associates so that they can get the best possible outputs. And in order to get that context, you generally have to have integrations with your systems of record or your different data sources. What we want to do is figure out a way to solve that for the organization. The model I aspire to is Ramp's: what they released a few weeks ago, they talked about Glass, their internal product, which I think solves a lot of this. The other piece of what I'll call the disillusionment trough right now is, we got everyone on board, we got everyone using the tools, and that pushed us into enterprise plans with, frankly, Claude and Anthropic in particular. And frankly, the cost of that plan has gotten very expensive. I know that's the big talk right now across Twitter and other places, how that model has changed. And so now I have to start thinking about how I scale this usage in a cost-effective manner, which goes back to the not-fun part of being a CFO and having that background.
And so it's forcing me to start thinking about things like, should we be using open-source models? I haven't really spent a lot of time in that part of this. I've been pretty much a diehard Claude user, so I'm starting to investigate open source and more efficient ways to use the models, and exploring those things so that we can continue to push adoption here, but do it in a way that is affordable from a P&L perspective.

Craig Foldes
37:02

Your experience is not unique, right? And I think it's so cool for listeners to know: okay, you're 6 months ahead, you're 9 months ahead here, and kudos to you; here's what I'm going to be up against and how I should think about it. I think your point about context and integrations, and standardization, is so spot on. Are you willing to maybe peel back the curtain a little bit and tell us about one big project that you're pursuing with that initiative? Is there anything you're specifically working on in that world where you see real opportunity?

Ben Flohr
37:30

Yeah.

Matt Kruer
37:31

You know, we don't have the engineering horsepower that Ramp has, so I don't know that we can build that single data mesh or knowledge base across the entire org. So we decided to chunk it and do it in the areas that are most strategically important for us. One of the areas where we're starting is what we're calling the CX brain: the consumer experience, or consumer insights, brain. And the way to think about this is, we have a vast amount of data that we've accumulated over the years. We're 150 years old. We've launched quite a lot of products. We've done quite a lot of consumer research. So what we want to do is take all those siloed databases and aggregate them into one single source of truth. We're building this brain that essentially has all of the market research we've done over the years. It's aggregating all of our social sentiment and social listening, all of our product reviews. It's taking internal data like our customer service conversations and records. It's taking the internal warranty and repair work we're seeing around our products, and putting it all in one spot, where it's legible to the LLMs so that different functions around the organization can use it. And there are a lot of ways you could imagine using this data. We've got 3 main functions that use it. You've got our consumer insights team, and they mine it for insights around our products and our product categories, needs that aren't being met, and help surface those insights to our teams and help with product ideation. Our product marketing teams are also monitoring it and using it to do things like inform product briefs,
or to work on how they're going to position the product in the market based off of what consumers are asking for, or how they value different claims or product features. And then lastly, our product development team and our engineers are using it to inform our physical product roadmap and what features they either work on to deliver to the product marketing team, or how they improve upon existing products in the market. And so it's very valuable to us because it gets everyone talking from the same set of data. But it also significantly improves the visibility of the data across these functions, because in the past, sharing this information was all dependent on the right person in the consumer insights team connecting with the right person in the product marketing team and making sure they shared those insights. And then the product marketing team had to make sure they passed that to product development. And that was just a very inconsistent process. So it takes a lot of the volatility and variability in those processes out for us, and ensures that we're delivering at a much higher hit rate from, essentially, a product development perspective. And that can compound significantly from a revenue perspective.
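To make the "one LLM-legible source of truth" idea concrete, here is a minimal sketch of the adapter pattern Matt is describing: every siloed system gets a small normalizer into one shared schema. All the field names, source systems, and sample records below are hypothetical illustrations, not BISSELL's actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BrainRecord:
    source: str        # e.g. "reviews", "warranty", "social", "research"
    product: str
    text: str
    recorded: date
    tags: list[str] = field(default_factory=list)

def normalize_review(raw: dict) -> BrainRecord:
    # One small adapter per siloed system maps its fields into the shared schema.
    return BrainRecord(
        source="reviews",
        product=raw["sku"],
        text=raw["body"],
        recorded=date.fromisoformat(raw["date"]),
        tags=["rating:%d" % raw["stars"]],
    )

def normalize_warranty(raw: dict) -> BrainRecord:
    return BrainRecord(
        source="warranty",
        product=raw["model"],
        text=raw["failure_description"],
        recorded=date.fromisoformat(raw["opened"]),
        tags=["claim"],
    )

# Once everything shares one schema, any team (or an LLM with retrieval over
# this store) can slice by product across every former silo.
records = [
    normalize_review({"sku": "B-2100", "body": "Hose clogs on pet hair",
                      "date": "2026-01-10", "stars": 2}),
    normalize_warranty({"model": "B-2100", "failure_description": "Hose blockage",
                        "opened": "2026-02-02"}),
]
b2100 = [r for r in records if r.product == "B-2100"]
print(len(b2100), sorted({r.source for r in b2100}))  # 2 ['reviews', 'warranty']
```

The point of the pattern is that insight sharing stops depending on the right two people talking: a review complaint and a warranty claim about the same product land in the same queryable pool.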

Craig Foldes
40:39

I spent many years on product teams and know how valuable exactly what you're talking about can be. I've got one last question before I pass it off to Ben, which is: what was the structure of the team that built that? Was that a 2-week sprint where you had a mini pod? Like, how do you go about creating something that comprehensive and that cross-functional at such a large organization?

Matt Kruer
40:59

Yeah, we've stood up what we call a digital and AI enablement team. It's a cross-functional team composed of data engineers, data scientists, and software engineers. We hired what we're calling our first full-stack AI developer, who's leading a lot of this work. That team then goes and works cross-functionally with the product marketing team, the consumer insights team, and the product development team to build the requirements for the product and then deliver it to them. So that's how we've attacked our most strategically important AI projects: this digital enablement team focuses on those top 2 or 3 projects that are strategically important across the entire enterprise, to make sure we can drive those from a top-down perspective and make sure they're resourced and achieved, while also driving a lot of bottoms-up use of AI in terms of just helping people play with the tools and getting them the education and upskilling needed to use it in their day-to-day workflows.

Ben Flohr
42:12

Matt, that's super interesting. I imagine a lot of companies your guys' size are dealing with very similar problems that they're trying to solve with AI. I mean, getting all the context in one place, merging databases for some sort of searchable AI for different teams. Seems easy enough, but it's actually quite a big problem, especially for companies that have been around for a long time. I'm assuming you guys have old PDFs or scans of documents that you need to tokenize, and that's quite the challenge: building all these OCR pipelines and creating a RAG for everyone to be able to search. I'm also very interested in learning more about how you're looking at open-source models. You know, we talked about how enterprise pricing, specifically for APIs for the frontier models, is getting out of hand, and they're raising prices on the deepest pockets, which would be your guys' tiers. So I think a smart use of lighter models that are cheaper, as well as open-source models, is very smart. And I think the trick here is learning how to route in a smart way, right? Which problem goes where. We're actually going to talk about a small version of that in AI 101: when to use a base model versus when to use a thinking model, et cetera. So these are really interesting and hard problems for you guys to solve, and I'm happy you're on the show giving us a taste of that. So Matt, I'm really curious about the feedback from the team. You guys have over 1,000 employees, and you're rolling out AI at different levels of the organization. What was the feedback you got from the team, from lower-ranking to mid-management to executives?
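The routing idea Ben raises can be sketched in a few lines: send work to a reasoning tier only where being wrong is expensive, to a cheap open-source tier for bulk jobs, and to a base model otherwise. The tier names, prices, task labels, and thresholds below are made-up illustrations, not any provider's real pricing.

```python
# Hypothetical $/million-input-token prices; real prices vary and change often.
PRICE_PER_M_INPUT = {"base": 5.00, "reasoning": 30.00, "open_source": 0.50}

# Task types where a wrong answer costs real money (illustrative labels).
HIGH_STAKES = {"pricing", "financial_model", "production_code", "agent_action"}

def route(task_type: str, prompt_tokens: int) -> str:
    """Pick a model tier for one request."""
    if task_type in HIGH_STAKES:
        return "reasoning"          # pay for thinking where errors are costly
    if prompt_tokens > 50_000:
        return "open_source"        # long bulk jobs go to the cheapest tier
    return "base"                   # everything routine: fast and cheap

print(route("ticket_tagging", 800))        # base
print(route("pricing", 1_200))             # reasoning
print(route("review_summaries", 120_000))  # open_source
```

Real routers layer on more signals (confidence scores, retries, per-team budgets), but even a static table like this captures most of the cost savings Ben is pointing at.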

Matt Kruer
44:10

Yeah. As you can imagine, with a company of our size, the feedback is very varied. In general, I've been pleasantly surprised at the willingness to experiment and adapt. I think most people understand that AI is going to be a critical piece of how work gets done in the future, and so they're willing to embrace it and want to learn about it. There are some people that are, understandably, concerned: how will this impact my job? Am I automating my own position? I think the truth is, what you're seeing and hearing a lot is that we have way more work that needs to be done than we've ever been able to do. Being in charge of IT, I can tell you that our backlog of project requests and initiatives is never-ending. We can't ever finish it. And so this is very much becoming an enabler for us to do more than we've ever done, more than it is a cost-saving exercise. I haven't really cut anything from a cost-saving perspective because of AI. For sure, we've been able to replace a few external vendors with it, but nothing internally. So it's doing more with the same amount of resources, which is what I think you're seeing from most companies, or they're excited to bring on more people that can help, because they know they can get so much more value from those resources. I think the other thing we're seeing is that it's fun to watch where the power users emerge from within the organization and the departments. They emerge from unexpected places you wouldn't have predicted, and they're building tools you hadn't even thought of. As we think about how to train the employees and how to diffuse AI across the organization, we're also realizing that those people are going to be key enablers of it.
And so we're actually creating a program where we're picking the power users, or just really curious, continuous-improvement people, out of their functions. We're putting them in a special AI training program with that AI enablement team I mentioned earlier, and we're going to work with them to make them even stronger power users so that we can send them back into their functions to help automate different workflows, look at the day-to-day work with their team in marketing or in finance, and help them solve that. Because we don't think it's realistic to expect every user to become a power user. They're not going to do that. You just need 1 or 2 people within the function that are that user, who can show everybody else and help them. And we think that's the fastest way to get to scaled adoption within the organization.

Ben Flohr
47:05

I think that's very smart. I'm super aligned with that. Some people lean into AI in different ways, so you want to identify the people that can drive the change and the adoption and create the more complicated tools, and then show it to the others and have everyone be excited around adoption. Very cool. So I want to shift gears here and talk about a specific workflow that either you personally use or that is critical for BISSELL at the moment, that you guys couldn't go without. Then walk us through what it does, how you discovered it, and what the workflow looked like before you applied AI to it.

Craig Foldes
47:48

Yeah.

Matt Kruer
47:50

You know, I think the most exciting thing I've seen recently is, for one of our new products, we recently reviewed the campaign content. The marketing team presented it to the executives and walked us through all the different launch content, and at the end revealed that 100% of that content had been AI-created, and we had no clue or no sense that that was the case. The workflow they built to create the content for this campaign was all done within a tool called Pencil, which I think is becoming pretty widely adopted for content creation and for a lot of marketing teams' workflows. I know Unilever is talking about using it, and some other brands. And so if we can get to that level of content creation with AI and automation, that just unlocks a whole new level of speed and cost-effectiveness for our marketing campaigns, which is super exciting. I also think it's non-negotiable in terms of us figuring out how to do that, because we know our competitors are using those tools and are going to use them to drive their ads on Meta, drive their ads on Amazon, and we have to be able to keep up. And when you think about BISSELL, we're a floor care company competing against different floor care brands. But we're also competing against just anyone that's trying to get eyeballs and customer acquisition. So when you see the rebels, as Sean referred to them on Craig's and my podcast, when you see the D2C brands like Ridge and Jones Road Beauty and HexClad just ripping out beautiful, high-quality ads, they're competing for the same eyeballs as us. And so we have to be up there from a capability perspective, because we won't be able to acquire those customers if we're not doing it at that level.
So I would say that's one of the most critical workflows, tools, and solutions we're solving for right now: content automation.

Ben Flohr
50:07

Yeah, I think that's a theme we're seeing across a lot of direct-to-consumer brands right now, so that's great to hear you guys are on top of it. My last question would be around what's the next AI project on your desk, something you're actively working on or about to greenlight and excited to talk about?

Matt Kruer
50:29

Yeah, there's a lot of interesting tools being shown to me right now. When I think about what it's going to take to make AI successful within our org, I mentioned it's data, it's integrations. I think everyone understands that the third leg of the stool is actually having your workflows documented so that you can go through and automate them and inject AI into those workflows. And that's really hard to do, especially in larger, complex organizations. A lot of people are talking about how employees are going to have to evolve to be able to architect and document their own workflows and become like business analysts in order to make the AI successful. That seems like it's going to be a really tough putt, expecting every employee to be able to create these workflows and document them. So I have seen a couple of new tools, and one in particular that is proposing to essentially watch our employees on their computers, document those workflows for them automatically, and then help them redesign them from an AI-forward lens or an automation-forward lens. If we can unlock that, or if someone can unlock that for us, I think that's going to greatly accelerate our ability to adopt these technologies. And so that's a pilot we're looking to get started within the next couple weeks. That's really exciting to me.

Craig Foldes
51:56

Dude, this was a masterclass. Thank you. If people want to hear from you, learn from you, how can they find you?

Matt Kruer
52:02

I'm on Twitter. I'm not as active as I probably should be, but that's probably the best way, or LinkedIn, if you ever want to reach me. I love connecting with people that are as passionate about this as I am and are always trying to learn. So feel free to reach out.

Craig Foldes
52:20

Thanks so much, Matt. Go Blue. Thank you both.

Matt Kruer
52:22

Thanks for having me. Go Blue.

Craig Foldes
52:25

AI 101, and some terms you should know. I'm actually quite interested in this one, Ben. Sometimes the tools will get back to you quickly, and sometimes they enter thinking mode, where the responses take longer; that's when they go into reasoning. What is a reasoning model? Walk us through it.

Ben Flohr
52:41

Yeah, so a base model just answers, right? It reads your prompt, it predicts the response. A reasoning model writes itself a private scratchpad first: it works through the problem step by step, it catches its own mistakes, and then it hands you the cleaned-up answer. The scratchpad is called chain of thought. You don't actually see it, but you pay for every token of it, which is why the cost gap between reasoning and base models is real. So GPT-5.5 Standard is $5 per million input tokens. GPT-5.5 Pro, which is the reasoning version, is $30 per million input tokens and $180 per million output tokens, which is where the reasoning tokens are counted, in those output tokens. So it's 6 times the price on input, and one thinking call can burn 20,000 tokens just on the scratchpad before it even answers anything back to you on the output side. You're paying for compute, and the model is running for 30 or 60 seconds instead of 2 seconds. So think about it like this. A base model is the analyst who's been on your team for 2 years: you ask a question, they answer fast, and they're right most of the time. The reasoning model is the consultant you bring in for a board presentation: they think about it for an hour before they say anything, they cost 6 times more, and they're right more often on the things that actually matter. So the call is simple for operators. Routine work, product descriptions, ticket tagging, summarizing a meeting, drafting an internal email: use a base model. Don't pay for thinking you don't need. Anything where being wrong costs you money, like pricing decisions, financial modeling, code that touches production, or an agent acting on your behalf: turn the thinking on. The retry on a bad answer costs more than the upgrade. Does that make sense?
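Plugging in the prices Ben quotes, a quick back-of-envelope shows how the hidden scratchpad dominates the cost of a reasoning call. The prompt and scratchpad sizes below are just the illustrative numbers from the conversation, not measurements of any real model.

```python
IN_REASON = 30.00    # $/M input tokens, reasoning tier (quoted above)
OUT_REASON = 180.00  # $/M output tokens; hidden chain-of-thought bills here too
IN_BASE = 5.00       # $/M input tokens, base tier (quoted above)

def reasoning_call_cost(input_toks: int, visible_out: int, scratchpad: int) -> float:
    """Dollar cost of one reasoning call; scratchpad tokens bill as output."""
    return (input_toks * IN_REASON + (visible_out + scratchpad) * OUT_REASON) / 1_000_000

# A 2,000-token prompt, a 500-token visible answer, a 20,000-token scratchpad:
cost = reasoning_call_cost(2_000, 500, 20_000)
print(round(cost, 2))                       # 3.75 -> $0.06 input, $3.69 output
print(round(2_000 * IN_BASE / 1_000_000, 4))  # 0.01 -> same prompt on the base tier
```

So the "6 times the price on input" framing actually understates it: at these numbers, the full reasoning call costs hundreds of times more than the base-tier prompt, almost entirely because of the scratchpad billed as output.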

Craig Foldes
54:48

It makes sense. I will keep my personal thoughts on consultants to myself. But let's say this: when I use thinking models and do more advanced stuff, oftentimes the work is valuable. Sometimes when I work with consultants, a little less so.

Ben Flohr
55:00

Okay.

Craig Foldes
55:03

And that is it for episode 7 of the AI Operators. If you enjoy the show, please like, follow, subscribe, and share. It helps us reach more and more operators. Thank you to Finn and to Mahin and to Matt, and of course to my beautiful partner and friend Ben. We will see you next week.

Ben Flohr
55:23

Bye everyone. Bye, operators.

Matt Kruer
55:27

Operators.