The Human Side of AI Transformation

Written by Cheryl Brown | 7 May, 2026

AI is changing how work gets done, but the most important question may be what it allows people to do better. In this episode of Cut to Context, Q2 CTO Adam Blue talks with Chief People Officer Kim Rutledge about the role AI plays in the workday, including employee uncertainty, the value of judgment, and how automation can create more space for meaningful work.

Related Links

AI for Everyone, Q2

"Aladdin Sane" by David Bowie

Transcript

Adam Blue

Hey, everyone. Welcome to Cut to Context. Joining me today is Kim Rutledge, chief people officer here at Q2, and we're going to explore a number of topics around AI transformation and the role employees will play in that in our business. Super excited to have her here today. She's been at Q2 for quite a while, I think about 10 years, if I remember correctly. And let's just go ahead and get right into it.

So we're seeing a lot of discussion now and a lot of writing about what AI transformation really means for an organization. And there's a lot of talk about institutional AI versus individual AI and how these transformations are likely to be organizational as much as they are individual. Where do you come down on that, Kim? What do you think about that?

Kim Rutledge

It's interesting. I think that initially, people were either excited about or anti-AI, and leaned in whichever direction they fell for a while. And now I believe that we're beginning to see some of the outcomes that could be driven institutionally by those benefits. My opinion is that whether it's institutional or individual becomes a less relevant question than the outcomes that we're driving through AI. And where I really believe we need to start is by stepping back and saying, "What are those outcomes that we get to drive?"

I think it's an exciting opportunity for us to rethink whether we're achieving the outcomes we've desired up to now, and to really focus on whether those outcomes are about customer focus, speed, quality, innovation, or experience. I think this gives us an opportunity to really step back, look at this differently, and say, "What are the expected outcomes?" instead of, "How do I make this process faster?" Really focus instead on that end result and allow ourselves to rethink it.

So my leaning is that there will be a huge institutional impact on how things get done, but I also believe it's an exciting opportunity for the individuals who are in the process of defining and deciding how this gets done. Does that make sense, Adam?

Adam Blue

Yeah. No, I think it does. I think it does. I think maybe we could frame it a little bit as thinking about not just doing the work we already do differently, but what is the work that we want to do?

Kim Rutledge

Yes.

Adam Blue

And how do we want to do that work? And I have a great many tasks each day in my life that I perform because they're required and not because I enjoy them. And so if you've ever seen me unload a dishwasher, this would make perfect sense to you. Like this … I'm very focused on being finished with unloading the dishwasher in a way that often makes the task take much longer, because I'm dropping something and then I've got to put it back in the dishwasher because it touched the floor and now it's dirty. And I think there's some feeling right now that the rate and pace might be something that is driving us more than our intended outcome or where we want to land. And that feels like it's a little dangerous.

And along those lines, one of the things we frequently forget about as leaders is that employees are separate individual human beings with their own wants and needs and privileges and rights. And so what do you think we're underestimating collectively about what it's going to be like to be an employee and what the impact for them is?

Kim Rutledge

I want to divide this into two different areas. One is where we are today, and the second is where I believe we can go. So where we are today, I think employees may underestimate how much everyone is experiencing fear, uncertainty, and doubt at all levels of the organization, because I think that we're lucky enough to work at a company where we can admit we don't know what it looks like either. We've got a lot of theses that we're testing and looking for opportunities, but no one really knows what this looks like 12, 18, 24, or 36 months out, much less five or 10 years from now.

I think we all have some fear, uncertainty, and doubt. And our employees probably underestimate how much we have, and our managers probably underestimate. And I mean managers at all levels, we probably underestimate how much we need to support our employees through this as we are helping them start to transition the way that they work and start to transform the way that they think about work.

Now, the exciting part of that impact, though, is I believe this results in permissions to be more human than we might have felt previously. I believe that we are going to end up in a place where some of the bureaucracy and rigidity that is required to get, in our case, 2,500 people operating in consistent manners that deliver consistently expected outcomes ... I believe some of that rigidity gets put into the tools, and the humans get to show up as more human.

And I think it's an exciting time because I think we're also giving people permission to reimagine and reinvigorate themselves on whatever task they will be doing. Now, it's a journey. We've got to get those dishwasher items off of us first, right? That's going to take a bit to get that design and implementation done. But I think ultimately, the humans get to be more human than we may be able to feel on a day-to-day grind basis. I don't know how human you feel unloading the dishwasher, but probably about as human as I feel when I'm ironing clothes.

Adam Blue

Yeah. That's another one. We need robots sooner rather than later, for sure. So I like this optimistic view that the onrushing technology of AI creates more space and room for us to be human in our job roles. And so how do you and your team, specifically, because they're at the ground level working with people every day, how are they thinking about how to characterize automation and improvements in the technology that engender joy and not alienation for the employee? And to be fair, that's a tall order. Joy at work is a great aspiration. I'm not sure everybody finds it every day. But what rubric would you guys apply to think about that, like automation that's essentially helpful and valuable versus just really soul-crushing?

Kim Rutledge

I think it's generous to say that we've defined that, but I would say that our starting point is in a couple of directions. One is this premise that good automation removes repetitive work and creates space for judgment. It doesn't just make you faster; it accentuates those opportunities for you to leverage your discernment, your taste, your contextual knowledge to bring value. And I do believe that the vast majority of people at work want to believe they've done something meaningful during the day.

And I think on the days when it's frustrating, frequently it's frustrating because you didn't get to feel that way, because it felt like you were pushing your head, battling against a brick wall all day long and didn't get where you wanted to get, didn't get done what you wanted to get done, for whatever those hurdles are that were in front of you, whether they were internally imposed or externally imposed, or technologically imposed, or whatever those hurdles were. I think most people just want to feel like they're adding value and want to believe that they're actually doing something that means something to somebody, whoever that receiver is, whether it's one of our customers or an internal customer.

So I think starting with that mindset, that good automation allows people to do more of that, to add greater value and do battle with brick walls less frequently, and that it removes those repetitive tasks, is a good place to start. Now, the second piece of that is that my team is very much focused right now on discovery. What do our teams need? What can we do to help them move from "here's how AI works" to enablement through the institution, back to your other question? And how do we help our teams get there?

And so part of us looking at it optimistically, part of it is, what are the tools that we need? We have some tools; we don't have enough tools. We don't know which tools are going to be the right tools a year from now. But we've got theses that our different teams are testing. And so how does my team enable people to be ready, and how do we help them lean in? Because we have a very strong belief that becoming great at some portion of this helps keep our people relevant. And most people don't get into HR unless they want to help people. And so most people who are in HR are there for a variety of different reasons, but they love helping people within a business context. And so helping our people develop their own skills to be relevant, and then helping the organization realize the value of that, those things go hand-in-hand for those of us on our team.

Adam Blue

Yeah. I love your use of the word relevance here, because when I get a thing that somebody needs help with, I don't get excited about logging into the second Okta instance and authenticating and then doing a face ID on my phone to unlock the Okta. I recognize that these are valuable and important things, but that's not the part of the job that's interesting. What's interesting is that last 10% of it where you have that moment where you say, "Oh, I understand what's happened." And I can credibly explain to the customer or an employee or a partner or whatever why this is happening, what we can do about it. And then I can check off that box. And then you get that little dopamine hit of like, "I did a thing that mattered to someone." And so as we work through that, let's lean into that idea of relevance. So which skills are going to be the most relevant? And we're all making it up.

Kim Rutledge

We're all making it up.

Adam Blue

So feel free to be very wrong, but at least be convicted. Which skills are the most relevant for people to continue to develop and continue to invest in? And how does that map into organizational readiness in an industry where a big part of what we do is we're very trustable, we're very consistent, we're very repeatable, we're very right all the time, and we have this AI technology that doesn't always have those characteristics. And so how does that skill evolve, I think, for the employee at Q2?

Kim Rutledge

I genuinely think this is one of the most exciting parts about AI. I read a couple of articles recently ... And who knows? This appeals to me, so I'll admit my bias upfront. But there certainly is some thought out there about liberal arts educations becoming more relevant in this kind of environment, because of course …

Adam Blue

That would be amazing.

Kim Rutledge

Wouldn't that be amazing? Because what's wonderful about a liberal arts education is that it teaches you how to think. It doesn't teach you what to think. It teaches you how to think, how to approach a problem, how to investigate different views of a problem, how to form your own hypothesis about potential answers to a problem. And that's exciting to me, because I find people who think like that more interesting than people who don't. And so just from a personal bias perspective, I love being around people who challenge me to look at a problem from a different angle and to not accept today's current solution as the solution. I frequently look at anything we're doing, and it's like, "Well, that's 'a' way. I don't know if it's 'the' way." There are very few 'the' ways that I accept.

So when we convert any of that, though, into skills ... And this is what I am convicted around, and I may very well be wrong, but I suspect there's some portion of this that's going to be correct. I think judgment, problem framing ... I already said discernment and taste, but that's something AI doesn't have. That's well documented. AI, at least today, isn't able to provide taste. It can react to or reflect what fits into your taste, but you have to describe it. You have to say, "This looks good, this looks bad." And orchestration, being able to think about how the different pieces fit together, and what different pieces of knowledge, which may exist in a variety of locations, are needed in order to come up with the right solution.

I think adaptability is going to be crucial. You're going to have to be adaptable in very ambiguous environments. And candidly, I'm not sure all humans are particularly comfortable in ambiguity, but I do think that that is going to become a strength for those people who are comfortable with it and are comfortable operating in ambiguity, but then finding clarity. I think that people who are good at that and people who can develop the ability to live within that, even if they don't love it, are going to be better suited to this.

And I think that rewarding outcomes and not just process adherence is really exciting, because that leads me to this trust piece that you asked about. Our customers critically rely on us to be trustworthy. And our failures there can have long-ranging, wide-ranging impacts for them. And so I believe that trust, risk, and innovation are all interconnected, and that we can operate extraordinarily effectively by building governance into the tools early and using guardrails and transparency. And I want to emphasize transparency because I think that's really critical, because we all know AI is not always correct, it's not always going to give you the right recommendation, and it's not always going to provide the right output.

And so I also think that, interestingly, someone's ability to test the outcome and judge whether the desired outcome really was the produced outcome is going to become a much more highly valued skill. Not that it shouldn't have always been valued at a really high level, but I'm not sure that it always was. I think there was a lot of production that was really valued, and now being able to really test that outcome is going to be critical.

And so I think experimenting with accountability can be one of the things that is interesting for our employees. But by ensuring that we have principles around design, so that we innovate with the guardrails already built in and then test, maniacally maybe, that's how we pull in the accountability and the trust that our customers require. I think those skill sets go hand-in-hand.

Adam Blue

Yeah. This notion of being able to measure what happened as the primary characteristic that creates impact and value over time, we're starting to see that in engineering. With engineers writing less code directly, it's even more crucial that they be able to understand code that was written by an algorithm, code that was written by a generative process, as opposed to code that they hand-authored. And then the focus shifts: it's less about what's in the box and more, did I get what I expected to get out of the box?

Kim Rutledge

And that goes back …

Adam Blue

That transition is big.

Kim Rutledge

Sorry. I think that ties back to my point around understanding context and having that judgment, discernment, taste piece, right? It's not just A plus B equals C. It's does this produce an outcome? Even if some part of that was unexpected, is the outcome more valuable or less valuable because of what was unexpected? And being able to leverage your judgment in those ways is going to be a very different type of approach than might have occurred in places where, perhaps for years, we at least felt like we had less choice about being rather pedantic about exactly how we achieved certain outcomes.

Adam Blue

Yeah. Yeah. I think there's a whole reconsideration maybe that becomes available to us. I think about things that we could try to do now that would have been unthinkably labor-intensive before generative AI, and now it feels like, "Well, why not just try that? Why not just go and reread all the master data services agreements to figure out whether or not we really did or didn't commit to a particular thing in a particular way, and can we build a product like this?" Before, that would have been, I don't know, 1,000 person-hours, and you'd say, "Well, we just can't do that." And so you'd do some other less effective thing instead, and now those kinds of things are on the table. So yeah, I think that's an insightful approach.

So there's a lot of complexity around AI and there's a lot of complexity around the existing business, and the collision of those sometimes can feel moderately insane. So how does your team think about bringing people along without oversimplifying the conversation and stripping it of all the nuance? Because we are in a period of really high ambiguity, and human beings generally are not fans. And so how do we keep the nuance in the conversation, but not dumb it down in such a way that it just becomes kind of rote, in a sense?

Kim Rutledge

Yeah. I think there is a tendency, and certainly there's a tendency in the media, to lean into hype. And it's either magic or a menace, depending upon which articles you're reading and at what point you're reading them, because it's almost as if the media hype shifts week to week. It's going to save us all, it's going to kill us all. And it's a story of extremes. And my view on how we assist our employees is that we need to be clear about what's changing. And I am a huge fan of transparency generally. People who know me, and certainly, Adam, you know this about me, know that I speak directly, I speak plainly.

Adam Blue

Can confirm.

Kim Rutledge

Yes. I very much appreciate transparency and I very much appreciate when we can be absolutely as transparent as possible with our employees, because I think people will lean in towards achieving what the company's trying to achieve when they know those things. And so the more transparent we can be about, "Here's what we're trying to achieve, here's what your role is within this, and here are the ways that we expect you to show up," the better for everybody. The better for us, the better for our employees, the better for the humans who are these employees. To your point earlier, they're each individuals, and they get to say, "These are the things I agree with, I don't agree with." We are all operating with free will here. And so those are lovely things, in my opinion.

I think that for us, it's about showing up with clarity, and that clarity includes where we have a lack of knowledge or visibility, because you're right, it's certainly very ambiguous today. We're going to have a long journey. And not everybody's going to move at the same pace. There will be some parts of the organization that move faster, there will be some people within those parts who move faster, there will be some teams that move faster. And I think that we need to understand that and recognize it without fearing it.

Now, there may be a place where you say, "We should move faster. This team, I, this part of the organization has an opportunity to move faster than we're moving today, and here's what we need." And I hope that our employees will be excited to raise their hands and say, "I've got an idea. I think we could do this differently." But I think that they also should understand that we won't have all the tools for all the teams doing all the things all at one time. It's not a flip the switch type approach.

And so I think that our managers need to become good at translating the changes. So not just being clear about what's changing, but translating those changes into, "And here's what that means for us, for our team, for your role within our team," and creating urgency without creating chaos. And I think the more that we as a company can focus on enablement and create space for engagement with tools, engagement with us, engagement with questions that they may have about where we're going, what we're doing, who's going where, when they're moving, the better it is for all of our employees. I remain incredibly convicted: this is going to be better for the humans. This is going to allow the humans to be more human.

Adam Blue

Yeah, I like to think …

Kim Rutledge

We're not perfectly great at operating like that either, but sometimes it's easier just to silo ourselves off and say, "No, I'm going to behave like a machine for a minute."

Adam Blue

It can be very rewarding to do that. Something you just said there that I thought was really fantastic was urgency without chaos. And I think your earlier points around transparency really underline that. I think transparency is the antidote to chaos in some sense.

Kim Rutledge

I do too.

Adam Blue

So I'm going to challenge you here a little bit. In one sentence, and it can have many dependent clauses if required, in one sentence, what does thoughtful success look like a year from now for us?

Kim Rutledge

I think a year from now, we have less repetitive work, better decision making, more ownership and agency. Work feels more meaningful. We have more personalized experiences for employees and potentially for our customers, and less rigidity around our processes. That's a lot in a year, Adam. That might be 18, 24 months, but I believe we can get there.

Adam Blue

All right. So many days, so many days between now and a year from now. We'll figure it out. Great. Yeah, for sure.

Great. All right. Well, thanks for being on today, Kim. That was really fantastic. One of the things we like to do here on Cut to Context is to offer up a favored piece of art or media or something from the past. And you've got a great one today, so I'll let you go ahead and drop that in for everybody before we wrap.

Kim Rutledge

For sure. David Bowie's "Aladdin Sane" album is fantastic. I recommend everyone go listen to it. I will also say, for those of you who've been around for a while, once you look at the cover of this album, you'll probably see that you already knew I'm a fan, because I've actually done this as a Halloween costume once and used it as a profile photo for a while. But go take a listen to it. David Bowie did a lot of work in the '70s where he challenged all of us to think differently about music, about what made sense, and about what was authentic or inauthentic. And I think that's very fitting in this context. So I encourage you to go listen to just a really badass album.

Adam Blue

Amazing. All right. Well, thanks very much, Kim. Thanks for being on today. Thanks to everybody watching for joining us for Cut to Context. You can find us wherever excellent podcasts are sold.