• Episode 003 • January 22, 2026 • 24:47

  • Jay Nathan, Jeff Breunsbach

  • Listen on: Spotify | Apple Podcasts | YouTube

About This Episode

Jay and Jeff dive into why context is the critical unlock for AI agents—both personal and enterprise. They unpack Aaron Levie's thesis that enterprise software won't be replaced by AI, but will become more valuable as the guardrails for non-deterministic agents.

The conversation explores how agents could drive more activity through your CRM and CSP than humans ever did, the tension between deterministic workflows and probabilistic AI, and how personalization (not just segmentation) becomes possible when agents have the right context.

Plus: ChatGPT ads are coming, Google's real-time search advantage, and a rapid-fire list of AI resources worth following.

Key Takeaways

  • Context is the unlock. The more an agent knows—about your customer, the health of the account, the commercial relationship, your company's preferences—the better decisions it can make. Fragmented context means fragmented results.

  • Enterprise software becomes more valuable, not less. Aaron Levie's thesis: AI agents need deterministic systems (CRMs, CSPs, ERPs) as guardrails. Expect more data flowing through these systems than ever before.

  • Agents solve the data entry problem. Leaders have always asked: do I want my team in front of customers or updating CRM fields? Agents can do the backend work—notes, deal stages, forecasting—so humans can focus on relationships.

  • Deterministic + probabilistic must coexist. Some things (CSAT surveys after every case) need to happen the same way every time. Others (personalized onboarding plans) benefit from agent intelligence. The art is knowing where each fits.

  • Personalization beats segmentation. Instead of forcing customers into Tier 1/2/3 buckets based on revenue, agents can pull context from sales calls, market data, and team size to tailor the experience to each account.

  • AIO is coming. Just like SEO, there will be AI Optimization—how do you get your product recommended when someone asks Cursor or Claude Code to solve a problem? Dev tools and APIs need to think about this now.

Chapters

00:00 – Intro: Jay's weekend using Claude for branding
02:56 – Why context is the key problem for AI agents
04:10 – Aaron Levie's thesis: enterprise software gets more valuable
07:13 – Agents will drive more activity through enterprise systems
09:21 – Deterministic workflows vs. autonomous agents
13:35 – Personalization vs. segmentation with AI
17:38 – AIO: How do AI coding tools discover your product?
19:14 – ChatGPT ads and the monetization of AI
20:38 – Google's real-time search advantage
23:00 – Resource recommendations

Mentioned in This Episode

  • Aaron Levie – CEO/Founder of Box, article on enterprise software and AI

  • Ethan Mollick – Professor at Wharton, author of Co-Intelligence

  • ProfG Markets podcast – Scott Galloway and Ed Elson

  • Practical AI podcast – Chris Benson (Lockheed Martin) and Daniel Whitenack

  • Marketing Against the Grain – HubSpot marketing podcast (pivoted to AI)

  • Hacker News – news.ycombinator.com

  • Planhat – CSP platform

  • Alison Brotman / UKG – deploying Sierra AI for customer experience

About Your Hosts

Jay Nathan – CEO of Balboa Solutions and co-founder of ChiefCustomerOfficer.io

Jeff Breunsbach – Head of Customer Success at Junction and co-founder of ChiefCustomerOfficer.io

Subscribe

The Chief Customer Officer is all about keeping the customer at the center—strategies, tactics, and real ideas in the era of AI that leaders can execute. From practitioners actually doing it.

Transcript

Jeff Breunsbach (00:01.88)
All right, welcome back to another episode of Chief Customer Officer. This is actually our second take. Having some computer difficulties, but we're back. Jay, I think one thing that we should bring over from the earlier recording was, you know, what you were messing around with this weekend. I kind of like just hearing some of the, I don't know, the trials and tribulations of how you're using this in the day to day, but it sounded like.

At the time, you were trying to figure out some branding for your company and trying to use either ChatGPT or Claude or somebody to help you. What were you doing around your branding?

Jay Nathan (00:39.922)
Yeah, yeah, exactly. So I was telling Jeff, we're traveling this weekend to a kid's sporting event. And so I didn't have any projects I was working on per se this weekend, but I was sort of talking to Claude and ChatGPT all weekend long about some branding decisions that we're trying to make between Balboa, our services company, and Green Shoot, the private equity firm that is

an owner of Balboa and several other companies. So, but yeah, we were talking about context and how, you know, it's super helpful to be able to articulate a lot of thoughts that you have throughout the day to an assistant or an agent like Claude or OpenAI. Eventually we'll all have an agent, right? That represents us in some way, shape or form, maybe multiple agents. And

the thing that makes those agents powerful is if they have context about your preferences, your business, your personal life, all the things that make you, you, right? And so what we were talking about before the break is, if that context is fragmented, it's harder, right, for these agents to sort of help you make sense of the world. And so anyway, yes, I was spending time

you know, just throughout the weekend as we were going through our activities, when I had an idea, I would sort of throw it into Claude and let it sort of think about that and process it. And, you know, I came out of it with some pretty good thinking in terms of where we're headed from a brand perspective. But yeah, and that's completely different than the enterprise application of

agents, right? And AI. And so to be real careful, like just because we're using Claude and ChatGPT to be our thought partners doesn't mean we're able to implement that in the same way in the business. But the same problem exists: context. So, Jeff, you're going to talk about Aaron Levie here in just a minute, but he had a really interesting article about context. I think we might've even talked about it last week, but context is the problem for these agents, right? In our enterprise world.

Jeff Breunsbach (02:29.091)
Yeah.

Jay Nathan (02:56.542)
The more they know about a customer and about a situation, the better they're going to be able to help us solve whatever problem it is, or whatever process we're trying to automate with them. But they've got to have the context. If it's a renewal, they have to understand the health. They have to understand the product utilization. They have to understand the commercial relationship that we already have with that customer. They need to understand what our preferences are as a company relative to our commercial relationship with that customer.

So context is sort of emerging as this big problem to be solved. And it's really no different than it ever was, right? Getting all the data in the right places, or accessible to the right places, to be able to help make decisions. So, you know, that was long-winded and winding, but.

Jeff Breunsbach (03:41.177)
No, I mean, yeah, I think, you know, part of today's episode, what we wanted to do is just bring some articles and some things from people that we trust. Like, who are we reading when it comes to AI? Where are we trying to find stuff? And so you mentioned Aaron Levie. He's the CEO and founder of Box, right? I sometimes get confused by Box and Dropbox. So he's the CEO and founder of Box and somebody that I've

Jay Nathan (04:01.214)
Box. Yep.

Yeah.

Jeff Breunsbach (04:10.731)
Even over the years before AI and stuff, I think he's a pretty thoughtful writer and somebody that I've turned to. When he writes something, I usually try to read it.

He's got a lot of great LinkedIn posts about leading teams and businesses and some of the evolution that's happened in software. And so this one he wrote, I think, in the last day or so. It's an article on X. I think X going out and saying there's a million-dollar prize for articles was an interesting way to just grease the wheels. So if you go to X now, you notice there's articles galore,

Jay Nathan (04:40.372)
I didn't see that.

Jeff Breunsbach (04:47.342)
they're trying to revive articles. And so they put out a prize of a million dollars for the most-seen article in the next two weeks. And so you have a large number of people who are, you know, kind of prominent thought leaders or spokespeople that are now taking to X to write an article and see what they can do. I thought it was great. I mean, to be honest, right, it's a great incentive, yeah.

Jay Nathan (05:12.468)
What is a million dollars to X, right? I mean, that's brilliant. Wow. Cool.

Jeff Breunsbach (05:16.256)
So anyway, Aaron wrote this, and I think the large premise, we'll try and do this here pretty quick, but the large premise that people largely get scared of is: AI is coming, it's going to come for jobs, enterprise software is going to go by the wayside, and SaaS is not going to exist anymore. You kind of have this doom-and-gloom approach. And I think what was interesting about his article was essentially just trying to put a little bit more

pragmatism into the thought, which is: enterprise software exists for a reason. You think about ERP solutions, CRMs. You're talking about systems and processes that cannot fail. If they fail, then there is a...

There's large financial implications. There's market risks. There's things that would kind of decimate parts of industries or parts of the markets. And so I think his original point is, like, enterprise software is going to exist because of that, right? The reliability, the repeatability, the ability to audit these types of systems and tools, that is going to need to exist as we go forward. And so really, I think what he was trying to point out was this, to your point as well,

which is, like, these large enterprise software systems provide guardrails for kind of non-deterministic software. So these agents that you want to create, they need to be able to get the context. They need to be able to

essentially store this information in a system and a tool that is deterministic. And so, you know, I think his whole thesis was essentially that actually more activity might start flowing through these enterprise systems, because you have agents that are working, they're drafting, they're researching, they're doing analysis, they're doing customer interactions, and all of that is going to feed back into these enterprise systems. And so you're actually going to have more kind of activity flow through them. And I think, in essence, you're actually going to have more value

Jeff Breunsbach (07:13.84)
flow through these tools. And the reason I think that is, a large question I think a lot of leaders ask, sales leaders or customer success leaders, is: do I want my team spending time in front of customers, or do I want my team spending time in front of a CRM system updating certain fields? And I think everyone always says, well, I want my team spending time in front of customers. I think that is the better use of time and energy and dollars that we can spend. And well, that means, you know, up until now largely...

we'll call it back end system work. We need fields updated. I need notes in there. I need your deals in the right opportunity stage.

Jay Nathan (07:50.538)
and your status reports, your TPS reports.

Jeff Breunsbach (07:52.309)
Yep, sales reports, forecasting calls, like all of these things have kind of been like, yes, you can make them work, but it still is dependent on your team getting in there and doing the work. So now, to Aaron's point, I think you're actually starting to see how agents can live within these tools and systems, because they can get the context from the tools and systems. They can go out and do the things that they need to and then bring that back. And I think it almost makes the enterprise system more valuable to you as an entire company. Right? Now, as a CEO, you're not wondering, well, why aren't the notes updated?

Or as the sales leader, wondering why Jeff is forecasting this deal to be closed, or forecasting that this deal is a 90% probability but this other one's 70%, and it's really his judgment versus a system, versus us having a codified system to do that.

Anyway, I thought it was an interesting kind of dichotomy, because I think everyone is starting to assume, oh my gosh, Claude Cowork is out, Claude Code is out, there's all these things that are going to come and basically eat up my job, or the enterprise company that I work for. And I think really it's more about how these things complement each other. And

do you see an explosion in enterprise software, because we actually didn't know the true demand that was there, because we've been constrained by our capacity to go fill out these tools and systems? They can only work with us as much as we update the fields and the information. Anyway, I thought it was an interesting article.

Jay Nathan (09:21.428)
There's only so much time in the day to go maintain your software, but agents give you sort of these autonomous workflows, give you the ability to go collect data that was never worth it to collect before, right? It just took too long and you would never go do that. But I think the promise of agents in general is autonomy, right? When Salesforce implemented, and again, this is secondhand information,

Jeff Breunsbach (09:26.306)
Yeah.

Jeff Breunsbach (09:36.535)
Yeah.

Jay Nathan (09:49.323)
But when Salesforce implemented their Agentforce platform to handle their support requests, they ostensibly reduced their support headcount by 4,000 people because they quote-unquote automated it with AI. But then they found out, and they've sort of backpedaled since then. We talked about this last week, so I won't go too deep into it. But they found out that these, what you're calling deterministic workflows, are actually really important to the support process.

They had to figure out how to do things the same way every single time for every single case that came in. Like, every single time we have a case, we want to send out a CSAT questionnaire afterward. It's a very arcane example, but there are certain things you want to happen every time without fail. That's what we mean by deterministic. Agents, by their nature, are probabilistic, which means they choose the best thing to do with the information that they have

at their fingertips. Again, it comes back to context. But it's also just the nature of these things that, like humans, they're not going to do things the same way every single time necessarily. You would hope that they learn and get better at this kind of stuff. But yeah, so I think it's an interesting point. There will be more data flowing through these systems, maybe, than there ever has been. More detailed data.

The question that I'm trying to figure out is, what is the balance between building something deterministic and using agents? Because we've always had the ability to build deterministic workflows. That's not a new thing. It's really not. It's been around for decades. So where do the two interact and commingle, and where do agents give you the edge

that you need of autonomous decision making where, back to your point, there are guardrails around the decisions that they're making.
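The deterministic/probabilistic split discussed here can be sketched in a few lines. All names below (`close_case`, `send_csat_survey`, `choose_next_action`, the context fields) are hypothetical, not from any real CRM or CSP API: the deterministic step fires identically on every case, while the agent-style step weighs whatever context it happens to have.

```python
def send_csat_survey(email: str) -> str:
    """Stand-in for a real email/survey integration."""
    return f"CSAT survey queued for {email}"

def close_case(case: dict) -> dict:
    """Deterministic workflow step: runs the same way for every case,
    without fail - the CSAT survey Jay mentions always goes out."""
    case["status"] = "closed"
    send_csat_survey(case["customer_email"])
    return case

def choose_next_action(context: dict) -> str:
    """Probabilistic-style step: an agent weighs the context it has and
    picks what looks like the best action; richer context, better choice."""
    if context.get("health") == "at-risk" and context.get("renewal_days", 999) < 90:
        return "schedule executive business review"
    if context.get("utilization", 1.0) < 0.3:
        return "propose enablement session"
    return "send routine check-in"
```

The point of the sketch: the first function is the guardrail an enterprise system enforces; the second is where agent judgment (and therefore context) matters.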

Jeff Breunsbach (11:48.835)
Yeah.

I think one thing, and I don't know if this is the right way to think about it, but I'm kind of going through this thought exercise right now as we're starting to build. You know, we're using Planhat as our CSP. And so I'm starting to get into some of the workflows and decisions that I want to happen, right? Like, if these things happen, you know, how does the workflow work? And I'm starting to see, I guess, the intersection of those workflows with these AI agents, right? Like, do I have a deterministic workflow that has agents, or steps where I need the AI

and agents to work in those, right? So for example, if I have a workflow for onboarding, for instance, I want, you know, the AI agent to go determine

for me if the customer is in a certain segment or is a certain style of business, and then I want it to basically bring that information back. You know, we might call that segmentation right now, and we have a very rigid segmentation. Okay, like, you know, I want tier ones to get this, tier twos to get that, tier threes to get this. And I actually wonder if I can use an agent to almost not have to squish all of my customers into tier one, tier two, tier three, but actually have the agent go determine: is this a big or small customer? What type of

actual healthcare, like, where do they fit in this healthcare spectrum? And so maybe I have more options, and then I want it to basically bring that information back for me to build a better onboarding plan for that customer. Because I actually don't think that tier ones, tier twos, tier threes all operate the same, right? They could have different small little changes or intricacies that I want to go determine. So I don't know, that's one example that's coming to my mind. Like you said, can I have workflows that I want, you know, to happen for things kind

Jay Nathan (13:22.196)
Mm-hmm.

Jeff Breunsbach (13:35.049)
of happen in an order of operations, but can I introduce AI and agents within those workflows to make the system overall better and make the experience for the customer overall better? Because I, you know, again,

You know, I don't want all my tier one customers to get the same exact thing just because we've determined that, like, you're a tier one customer and I think you deserve an enterprise approach, right? Like, I want us to be able to essentially configure the experience for what we think is the best thing based on all the information that we know about them. And so that would include sales handoff sheets and research that it's doing itself in the market on the company. That would be like looking at, you know, the market overall that they're in. Like, I don't know, I think there's some things like that that

might come into play. So that's like one example I just thought up right now.

Jay Nathan (14:21.61)
Well, you're talking about personalization, right? I mean, segmentation can be deterministic. That's actually pretty easy, because you sort of do want to put everybody in a big box. The question is, do you have that data, and how do you make that decision on which segment they fall into? We've always chosen very simple criteria for segmentation, I feel like, in the SaaS industry. This customer pays more than X, so they fall into tier one. They pay less than X, they fall into tier two.

Jeff Breunsbach (14:24.023)
Yeah.

Jeff Breunsbach (14:44.376)
Yeah.

Jay Nathan (14:51.51)
And we know, like we've said for years, that that's not the best way to segment customers, but it is a simple way to segment customers. And the data is very easily obtainable, and it's easy to make that decision. Whereas it's not easy to say, okay, well, this is a tier one customer, but they only use us in this one division and they exist in this market, and so because of that, we need to do things a little bit differently.

Jeff Breunsbach (15:17.464)
Yeah.

Jay Nathan (15:21.32)
That tailoring does not happen today by default. So I think what you're talking about is personalization using AI.
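The contrast between revenue-threshold segmentation and context-driven personalization can be sketched roughly like this. The thresholds, field names, and plan contents are all made up for illustration; in practice an agent would populate the account context from sales-call notes, market research, and so on.

```python
def segment_by_revenue(arr: float) -> str:
    """The simple, deterministic bucketing: pays more than X -> tier one.
    Easy data, easy decision, but everyone in a big box."""
    if arr >= 100_000:
        return "tier-1"
    if arr >= 25_000:
        return "tier-2"
    return "tier-3"

def personalize_onboarding(account: dict) -> dict:
    """Tailor the plan from whatever context an agent could pull,
    instead of handing every tier-one customer the same playbook."""
    plan = {"steps": ["kickoff call"], "duration_weeks": 4}
    if account.get("team_size", 0) < 5:
        # Small team: the 27-step, six-month plan would scare them off.
        plan["steps"].append("slimmed-down setup guide")
        plan["duration_weeks"] = 2
    else:
        plan["steps"] += ["admin training", "stakeholder mapping"]
    if "can't afford a big rollout" in account.get("sales_call_notes", ""):
        plan["duration_weeks"] = min(plan["duration_weeks"], 2)
    return plan
```

Same account data, two very different outputs: one label versus a plan shaped by the account's actual situation.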

Jeff Breunsbach (15:24.632)
Yeah.

Jeff Breunsbach (15:28.652)
Yeah, and using the agents to help us determine what to personalize and where. Or, I guess, bringing information back to help us personalize the experience. Like you said, I think a lot of...

pieces right now are presumptive, right? Like we essentially, like you said, they pay us a lot, we put them in this bucket. And I just wonder, you know, one aspect, too, right? Like, we record all of our sales calls. Can it go listen to all the sales calls? Can it help us bring back information that maybe the salesperson missed, or that we should consider as a part of our segmentation strategy? Maybe in sales calls they mentioned, you know what, we don't have a big team, I can't really afford to have a big plan about, you know, how I'm going to onboard and

Jay Nathan (15:45.001)
Yeah.

Jay Nathan (15:59.164)
yeah.

Jeff Breunsbach (16:12.976)
So if we were to walk in with a plan that's, here's six months of onboarding and here's 27 steps and I need you to meet these 10 people, they're gonna look at that and go, oh my gosh, right? But if we actually listened to these calls, brought the information out and said, oh, they can't handle that, let's go, what's the slimmed-down version, right? How does that look? How do we actually put that in front of them? It's more palatable, straightforward, whatever. But I don't know, that's a crude example.

Jay Nathan (16:32.456)
My question for you is how would you actually implement the personalization? Would you do it with a person or are you using agents to drive some of the engagement with the customer? And maybe it's too early for you to answer that question, but.

Jeff Breunsbach (16:44.538)
Um, yeah, I mean, I still think, at least for us as a business, um, I'm trying to think of the right phrasing. Like, we're kind of an API-first developer tool. And so there's not like a platform I need them to log into where I can then kind of create, you know, a workflow and steps and tooltips and kind of a more

programmatic onboarding. Like, we do have to have a couple calls to help essentially configure what some of those things look like right now on the back end. And so I still think there's probably humans involved. But I do think, to your point, there's probably a moment where, you know, at the end of the day, do we actually create an internal tool or system that can help configure these things without the human involved, and then we actually just create the whole kind of flow-through, right, using AI and from the population

Jay Nathan (17:38.507)
Here's an interesting question for you. I'm an advisor for another company that is somewhat of a developer tool as well, where developers would be the primary user. They implement the tool and then you don't really have to touch it anymore. So one of the things we started talking about this week with that company is, how do the Cursors, the Replits, the Base44s, the Claude Codes,

Jeff Breunsbach (17:55.405)
Yep.

Jay Nathan (18:08.084)
how do they know about your product? And when somebody says, I need to do X, Y, and Z, how do they go choose your product off the list of all the other open source products and tools out there and implement that in the code automatically, so that you get that sort of tailwind? It's almost like search, right? It's like SEO, but for agents. And people have been talking about this for a long time, but I think it's more interesting when you think about dev tools

Jeff Breunsbach (18:25.39)
It's interesting. Like search. Yeah.

Jay Nathan (18:36.871)
and APIs and how they automatically get integrated. When somebody says they need a capability, they know about your product and they go implement that capability with your product behind the scenes in a very simple way.

Jeff Breunsbach (18:46.766)
Yeah. I actually just saw on LinkedIn, there's a guy who is leading a company now that is all about how they can essentially, I'm trying to think of the phrase he used, but it's almost like AIO. Like SEO, you know, what's the AI optimization for getting you into search results and everything. And, you know, interestingly enough too, right?

Jay Nathan (19:02.654)
Mm-hmm.

Yeah.

Jeff Breunsbach (19:14.924)
ChatGPT is now going to be releasing ads in their free version. And so how does that actually work and happen? Right now, it's undefined how those things are actually going to flow in, but it's an interesting component. You probably saw this coming, that they need another way to monetize, and here's another great way to drive flow through the system.

Jay Nathan (19:18.27)
Yeah.

Jay Nathan (19:34.026)
my god. Yeah.

Jay Nathan (19:40.415)
The most amazing business on the planet, ever to date, is Google AdWords, right? It just gushes cash. And that's why Google is what Google is. If you think for a minute that these companies are gonna spend billions and billions and billions on CapEx to build data centers and process your crazy requests on their LLMs and not monetize it in some way, you are crazy. And if you're not paying for it,

What do we say, right? If you're not paying for the product, you are the product. And so, yeah, absolutely. There's no other way to fund it. I don't think there's enough subscription fees in the world, from consumers at least, to justify the CapEx spend on these platforms. So yeah, I noticed that ad stuff too this week. Every company will go that way.

Jeff Breunsbach (20:11.948)
You are the product.

Jeff Breunsbach (20:17.88)
Yeah.

Jay Nathan (20:38.974)
And by the way, shopping on AI is going to be a thing. I guess my question about it, and this is maybe where I think Google still has a really strong edge, because they've got real-time search. And all these LLMs, just so you know, go ask Claude, go ask OpenAI, go ask Grok: how up to date is your data? Like, when was the last time the model was trained?

Jeff Breunsbach (20:49.976)
Yeah.

Jeff Breunsbach (21:06.562)
Yeah.

Jay Nathan (21:07.226)
It's further back than you might think. So the data in these LLMs is outdated by, last I checked, about a year, because these training runs are huge and they take lots of power, lots of time, and you don't just update the model every day. That's not the way it works.

Jeff Breunsbach (21:27.554)
Like Claude, I just did it for Claude. Claude is January 2025. So it's a year. Yeah.

Jay Nathan (21:31.594)
Great, same with OpenAI. Although 5.2 just came out, it may be more recent.

Jeff Breunsbach (21:40.015)
I mean, I'll tell you, I still go to Google, and when you search stuff now it just has a little AI summary at the top. I use that more often than I want to admit. It's super easy, right? Because, I mean, in some cases I might not be looking for a link, but at the same time, like...

You know, I use Claude and ChatGPT so many times a day, so it is part of my natural workflow. But if I'm going to look up something really quickly and I want to make sure that I can get an answer and also have a link to something, it's much easier for me to go do that on Google right now, because they've got the AI summary and then they have the links below. And it's like, okay, cool, you can answer my question, and I can also dig into it if I want.

Jay Nathan (22:15.892)
Google is the ultimate RAG machine, right? Because it goes and grabs all the results that it can find, which it's indexed up to the minute, pretty much, and then it summarizes that with an LLM, with Gemini. I'm way oversimplifying it. I'm sure it's more complex than that. But I mean, do OpenAI and Claude have that same capability and advantage? I don't think so.

Jeff Breunsbach (22:45.07)
Nah.

Jay Nathan (22:45.608)
I've not heard anybody talk about that. So I'm getting into territory that I'm not qualified to talk about, but it is an interesting difference from my perspective.
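The pattern Jay is gesturing at, retrieve fresh results first, then summarize only those, is the basic retrieval-augmented generation (RAG) loop. Here's a toy sketch, with a naive keyword index standing in for Google's up-to-the-minute index and plain string concatenation standing in for the Gemini summarization call; the function names are illustrative, not any real API.

```python
def retrieve(query: str, index: dict, k: int = 2) -> list:
    """Score each indexed document by how many query words it contains,
    and return the top-k matches - a crude stand-in for real search."""
    scored = [(sum(w in doc.lower() for w in query.lower().split()), doc)
              for doc in index.values()]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def answer(query: str, index: dict) -> str:
    """The RAG loop: retrieve fresh documents, then summarize only those,
    so the answer isn't limited by the model's training cutoff."""
    docs = retrieve(query, index)
    if not docs:
        return "No fresh results found."
    # A real system would prompt an LLM with the retrieved docs here.
    return "Summary of: " + " | ".join(docs)
```

The design point: the model's frozen training data stops mattering for freshness, because the retrieval step supplies today's facts at query time.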

Jeff Breunsbach (22:53.696)
Yeah, yeah it is. Alright, I know I've gotta go in like three minutes, I don't know if we can

Jay Nathan (23:00.006)
Oh no. Okay, cool. Well, I think what we wanted to do is share just a few resources that are interesting for us. And actually, digging into this, I've been trying to find new podcasts, new newsletters, new content sources that are concentrated enough for me to get some real value out of them. So there's a handful that I have already been sort of tuning into, which I'll just sort of list off, and maybe we can talk

more about these later, Jeff. So number one, this is more of a how-are-the-markets-thinking-about-AI one, but ProfG Markets is one of my favorite podcasts. So Ed Elson, Scott Galloway, that's the original ProfG, Scott Galloway. Very good podcast, definitely a stronger angle on the financial

Jeff Breunsbach (23:47.384)
Yeah.

Jay Nathan (23:55.229)
side of these things, but very interesting nonetheless. It does go broader than just AI, but there's a lot of good, interesting AI content in there. Hacker News. Again, we talked about Hacker News last week and an idea, we need to actually launch that. Jeff, maybe we do that in the next couple of weeks here. But Hacker News is, you know, the Y Combinator news site where basically people can submit links, upvote them, comment on them, and you get some really interesting stuff there. So I always just sort of scan that every day for AI-related articles.

Ethan Mollick is a professor and co-director of the Generative AI Lab at the Wharton School. And he's the author of a book called Co-Intelligence. I wanted to talk a little bit about this. Maybe we'll do this next week. I haven't actually read the book, but I went and got a summary of it. Some of the concepts in there are really good. By the way, I think most books should be blog posts and most blog posts should be tweets. So I actually have the blog post version of the book, which is good, but I'm sure I'm missing out on something there. So I don't want to undersell it.

There's a new podcast that I started listening to called Practical AI with Chris Benson and Daniel Whitenack. You ever listen to that one? These guys are, like, one of them is a Lockheed Martin engineer and he runs their AI research lab. And then Daniel Whitenack is interesting. He has a company called Prediction Guard. It's like,

Jeff Breunsbach (25:02.54)
No.

Jeff Breunsbach (25:07.238)
cool.

Jay Nathan (25:20.382)
they allow you to deploy AI systems within universities and research organizations that are sort of safe and protected and, you know, utilizing a lot of open source technology. So pretty cool. And then one that I think you and I both enjoy is Marketing Against the Grain. That is a HubSpot marketing podcast. However, about two years ago they pivoted to AI, and everything they talk about there is AI. So they do a good job of keeping up on the latest and greatest releases of what's going on.

Jeff Breunsbach (25:36.205)
Yep.

Jay Nathan (25:50.494)
More like the tools that a marketer would use and how they would use them. There's a whole parallel between customer success, customer experience, and marketing that I think we underplay. So that's been a cool source. Maybe we can dive into some of that at a later point. But anyway, those are some of the things. I'll put one more plug in here. We're having a round table discussion with Alison Brotman from a

company called UKG, a big software company, a top-50 software company globally, an HCM platform. They are implementing an agentic AI platform for their customer experience, so their support organization, and they have gone through some really interesting learnings. She's gonna share all of that. So that's this Thursday at noon. I actually don't have the link, but, well, by the time this comes out,

you're pretty much going to miss it anyway. We'll put the link in the show notes if you can show up at noon. It will not be recorded, so if you don't come, you won't get to participate in this. But anyway, cool opportunity to learn from a big company who's deploying agents right now. So, all right, lightning round. That was it. So, all right, cool.

Jeff Breunsbach (26:57.153)
Awesome.

Perfect. Some good stuff there at the end. All right, we'll see you all next week. We'll get this out.

Jay Nathan (27:05.099)
All right, see ya.
