Alexa Carlin (00:37):
Hello and welcome to Accelerating Your AI Journey. I'm your host, Alexa Carlin. Today we'll be taking a look at responsible AI, what defines it and how do you ensure it when deploying and accelerating AI in the global enterprise? Joining me today is Rick Kreuser, AI Center of Excellence tech lead at Lenovo. Welcome, Rick.
Rick Kreuser (01:02):
Hey, thank you for having me, Alexa. It's wonderful to be here. I'm looking forward to sharing a little bit about AI and responsible AI with your listeners.
Alexa Carlin (01:11):
Yeah, we're definitely excited to learn from you. So just tell us a little bit about yourself and what inspires you about AI responsibility.
Rick Kreuser (01:21):
So again, Rick Kreuser, been with Lenovo a little over a year. I joined to do some R&D work for the services division, but then moved over to AI to help found the Center of Expertise for AI. And since then we've grown to 30, 40-odd people that really help bring the best of Lenovo to our clients across all of our different groups, whether it be the Services group, whether it be the Devices group, whether it be the Cloud groups or Motorola even. And what's exciting about that is we get to put all the parts of Lenovo together in a way to drive a client outcome, to give a client what they're looking for in terms of a working AI solution that fits with them. And the responsible part is actually a very important part because the solution that you have has to fit with your people, has to fit with your culture, has to fit with your customers and the values that they have. So the exciting part is crafting something, and it really is a craft at this point, that meets all those needs.
Alexa Carlin (02:29):
Yeah, so you kind of just look at the big picture and you help clients with that?
Rick Kreuser (02:33):
We do, and we always start with outcomes because a lot of AI is from the technology up, which is it's a solution looking for a use. We start the other way. We start with what outcome are we trying to drive, and then we work backwards to figure out what security, people, process, technology, responsibility you need in your environment for that particular outcome.
Alexa Carlin (02:57):
So you kind of reverse engineer it?
Rick Kreuser (02:58):
We do.
Alexa Carlin (02:59):
Awesome.
Rick Kreuser (03:00):
Or maybe it's the front engineering.
Alexa Carlin (03:02):
Right, right. Yeah, you could look at that way. So just tell us, what is responsible AI?
Rick Kreuser (03:08):
Let me answer this in two ways. Lenovo has a very detailed definition, which I'll cover at a high level in a second. But in essence, AI needs to be a reflection of your values because what's valuable for you as a company might be different than what's valuable for somebody else as a company. But it reflects first of all your corporate values. So this is what we think in terms of diversity, this is what we think in terms of security, this is what we think in terms of whatever. All companies are slightly different in that regard. But it also reflects your geo values because if you're doing responsible AI in Europe, there are different privacy laws, for instance, than there are in South America or the United States or others. So it's a combination. It's an AI solution that reflects the values of your people, your geography, and your corporate values.
Alexa Carlin (03:58):
So it's all three of those things together.
Rick Kreuser (04:00):
All three of those things. So if you're doing a certain type of solution in Europe, for instance, it will have different requirements, regulatory requirements even than a solution in the United States to be compliant with the way the European Union wants their people treated and the privacy laws that are on the books, etc.
Alexa Carlin (04:21):
Wow. So there's so much to think about, especially when you're a global business.
Rick Kreuser (04:24):
There is.
Alexa Carlin (04:26):
So what are some of the kind of challenges that organizations face when they even just start to think about AI, let alone also implement AI?
Rick Kreuser (04:40):
I think there's two challenges when you start to think about AI, and these happen in this order, which is a lot of clients we speak with now just don't know where to start because AI is overwhelming.
Alexa Carlin (04:40):
It can be very overwhelming.
Rick Kreuser (04:52):
It is. And there's hundreds or thousands of companies, some of them startups that are not very well established. There's many of the traditional companies that are trying to play in there. You can start with use cases. You can start with a partnership you already have. You can start with what your vice president just got off a plane, read an article and says, "Use this supplier. I'm interested in whatever they're doing." It is truly overwhelming and it takes an expert in the field to really dial in, "Hey, here's the right place to start."
(05:23):
And then very quickly after you start, the next challenge you'll get to is skills. I think if you talk to McKinsey, they say there's probably a 500,000 person shortage across the AI skill set. Everybody's looking for AI skill set, whether it's partnerships, whether it's market, whether it's technical skills and data scientists and things like that, there's a shortage of skills. So even if you know where to start, it's hard to get the mass momentum in your own company because it's hard to hire those people and they're scarce in the marketplace.
Alexa Carlin (05:58):
A lot of people are talking about how they're afraid that AI is going to take their job, but you're just saying that there's so many new jobs that are being created.
Rick Kreuser (06:06):
There are. And I do have a strong opinion on that. I don't think AI is taking people's jobs. I think AI is helping people compete in their jobs. So I think when you look at it, people that don't use AI in their jobs could be replaced by people that use AI in their jobs. So it's not that we're getting rid of a job, it's that we're upskilling it to be more productive with AI, responsible AI, of course.
Alexa Carlin (06:33):
Right. That makes a lot of sense. So it's about making sure that you are being educated around how you can implement AI in your job, let alone your organization.
Rick Kreuser (06:42):
Yeah, and that's part of what Lenovo does is we help people figure out that journey because you can do individual productivity, work productivity, corporate productivity, or you can reinvent how you compete. You can address it on many different levels. So again, we go back to what outcome are we looking for, and then we can make the journey get you to that mark in a way that makes sense.
Alexa Carlin (07:06):
So that is how Lenovo AI Center of Excellence is helping organizations.
Rick Kreuser (07:12):
That is exactly what we do.
Alexa Carlin (07:12):
Okay.
Rick Kreuser (07:13):
We start out with a client, we understand where they are, we use the expression, "We meet you where you are," because everybody has different aspirations, budgets, timeframes, outcomes, starting points, partnerships, people skills. The list goes on and on and on, and you really have to assess it as a whole to say, "Okay, this is how this journey can work." And within the Center of Expertise, our job is to not only understand the customer side of it, but it's to understand what pieces from Lenovo or Lenovo's ecosystem or even in the marketplace can we bring to bear on that problem to get the right solution? That's our job in the Center of Expertise.
Alexa Carlin (07:52):
So it's definitely very helpful for a lot of these organizations trying to deploy and implement AI.
Rick Kreuser (07:52):
Yeah.
Alexa Carlin (07:59):
So can you share some examples of use cases that you've personally seen?
Rick Kreuser (08:04):
Sure. So I can tell you a quick story. There's one use case that we're working on, it's not completely implemented, but it's a knowledge management use case. This is a very common use case in the market today, which is an organization has all these documents and they don't know what they are, or they've got conflicting versions of documents or multiple contracts or things that were written 10 years ago that are in a filing cabinet somewhere.
Alexa Carlin (08:31):
Always a filing cabinet.
Rick Kreuser (08:33):
Always a filing cabinet. So in this case, the client we're working with is a European bank and they're having problems answering questions to regulators. So the regulator knocks on the door and says, "Hey, we'd like to understand your stance on issue X," whatever it is. Well, the bank can't find the documents. They're giving subpar answers and they continue to get fined by the European Union. And eventually, Corporate Risk said, "Guys, this is not the way we want to go forward. It's hurting our reputation, it's hurting us financially. Go find a solution." So we architected a solution which takes the documents, I won't bore you with the technicalities, but takes the documents, brings them in and cleans them.
(09:17):
And then you use an AI interface like ChatGPT to ask it questions and it gives you the answer and says, "This is the answer you should give the regulator. This is how risky that answer is," because maybe the technology feels good about it, maybe it doesn't, and it cites all the sources, which is, "These are all the documents that I," being the solution, "used to build that answer." And so now you have better answers to regulators, much less time with people doing manual research, etc. That's one use case, which is, loosely spoken, a knowledge management use case. We see other hotspots in chatbots, in code generation, whether it's making new code, for tweaking the code, documenting the code, or maintaining the code.
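The workflow Rick describes (retrieve the relevant documents, assemble an answer, report a confidence level, and cite the sources used) can be sketched in miniature. This is not Lenovo's implementation; a production system would use embeddings and a large language model rather than word overlap, and the document names below are invented for illustration.

```python
from collections import Counter
import math

def relevance(query: str, doc: str) -> float:
    """Crude cosine-style overlap between query and document word counts,
    standing in for a real embedding similarity."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum((q & d).values())
    return overlap / math.sqrt(sum(q.values()) * sum(d.values()))

def answer_with_citations(query: str, corpus: dict, top_k: int = 2) -> dict:
    """Rank documents by relevance, then return the supporting passages,
    the cited source names, and a rough confidence score (here, simply
    the top retrieval score)."""
    ranked = sorted(corpus.items(), key=lambda kv: relevance(query, kv[1]), reverse=True)
    hits = ranked[:top_k]
    return {
        "answer_context": [text for _, text in hits],
        "citations": [name for name, _ in hits],
        "confidence": round(relevance(query, hits[0][1]), 2),
    }

# Hypothetical document store for a regulator's question.
corpus = {
    "policy_2021.pdf": "Our stance on data retention is seven years for contracts.",
    "memo_2014.doc": "Holiday schedule and office closures for 2014.",
}
result = answer_with_citations("What is the bank's stance on data retention?", corpus)
print(result["citations"][0])  # the most relevant source document
```

The point of the sketch is the output shape: an answer is never returned alone, but always alongside its citations and a risk signal the human reviewer can act on.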
Alexa Carlin (10:07):
Yeah, very cool. So why is it important that organizations deploy and scale AI responsibly?
Rick Kreuser (10:16):
Wow, there's the money question literally. When you think about responsible AI, responsible AI is designed to minimize the negative consequences associated with deploying AI. And the negative consequences can take a lot of forms. If you deploy a bad solution and it causes harm, the harm could be to an individual, whether it's a customer that was offended or got a bad answer or took action on something the AI did that was inappropriate. It could be employee harm, in which case the AI and the people that are developing and using AI weren't built together, and so employee satisfaction can go down, which either can cause lawsuits or people leaving in turnover, neither of which are exciting for companies. And you can also cause harm to the public where either your brand reputation or regulators, you run afoul with them, and that can have all kinds of consequences. You could end up with a lawsuit which would cost you hundreds of thousands or millions of dollars.
Alexa Carlin (11:20):
And it's not even something you really think about right away. You're just thinking about, "Wow, this innovation is amazing. This is going to help us be more productive and efficient." But if you don't really do it the right way from the get go, it can cause a lot of problems.
Rick Kreuser (11:33):
Well, it is. I mean, let's say we're troubleshooting one of our PCs and the customer, let's say just for instance, let's say we hooked up a chatbot and the chatbot recommended that the customer troubleshoot the PC, do steps one, two, three, and four, but it omitted the step to unplug the machine. What if that person electrocuted themselves? There have to be checks and balances on the AI solutions to make sure that they're a quality solution and they're providing the right responses. And at this point, as Lenovo, for most of our solutions, we keep a human in the loop to make sure that there is somebody making sure that the solution is doing what it's supposed to do and fulfilling its purpose without undue harm.
Alexa Carlin (12:21):
I think that's really important to remember is that AI is like an assistant to help us be more productive, more efficient, but there's still the human factor that needs to come into it.
Rick Kreuser (12:34):
Absolutely. It's essential today. Now, I'm not saying in the future we won't have certain or many applications that are not human in the loop in some way. But for now, I think that's a reality today.
Alexa Carlin (12:47):
Today. So I understand Lenovo has its own set of guidelines for responsible AI. Can you walk us through some of those?
Rick Kreuser (12:54):
Absolutely. I'm really proud of this. So about five years ago, we formed a responsible AI committee at the direction of the board, and we said, "We want to have this be a reflection of how we think about AI for us and for our customers." And we established our responsible AI policy, and it's got six parts, six pillars is what it comes down to. So it starts with diversity and inclusion. The responsible AI policy includes security and privacy. A lot of companies have it the other way, but we have diversity and inclusion, security and privacy, explainability, transparency, accountability and performance, and sustainability as the six pillars, and those reflect our values. So when we go make an AI solution, every AI solution we make goes through the responsible AI committee to make sure it meets our standards and our customers' standards for those six pillars.
Alexa Carlin (13:55):
Interesting. So when you're talking about these solutions, these are part of the AI library that Lenovo has?
Rick Kreuser (14:01):
They are. They can be. So think of the AI library as accelerators. These are things that we've built for ourselves and use internally, or it could be a partner that has built it with us. But when we go to a customer and say, "What outcome, Alexa, would you like from your AI?", if we can take pieces of things that we've already built that have been through the responsible AI reviews, so we know they work, they're kind of internally certified if you want to think about it that way, we take things from the library as a starting point and we put things together on top of them to make it so we get to the outcome for the customer faster.
Alexa Carlin (14:38):
Okay. Yeah, it's a good analogy to think of it as little mini accelerators for a certain outcome.
Rick Kreuser (14:45):
We do that, and we have identified, I don't know, at this point, 12 or 14 spots where we think that we can bring value to a client more quickly than our competitors or potentially other firms.
Alexa Carlin (15:00):
So you talk about these different solutions you've created. So within that, when a customer comes to you and to Lenovo, how do you help them scale responsibly?
Rick Kreuser (15:11):
How do we help them scale responsibly? So that's interesting. I'll answer it from the responsible angle first because I think it's important. When I think about the things that are necessary to scale AI responsibly, I think of three things. The first and most important starting point is your responsible AI policy. It has to be a reflection of how the company thinks about AI. You get into a lot of cases where if a company does not have a bought-into responsible AI policy, you end up with different flavors and different interpretations all over the place, which doesn't scale. You'll end up with millions of one-offs and little things going on without any binding thing that says, "This is what we are and how we do it." So I think about responsible AI, the policy itself.
(15:57):
Then I think about a governance layer, like the responsible AI committee at Lenovo sits in the governance layer, which is we review the solutions this way, we make sure they adhere to these standards. An analogy would be like Congress makes laws, the judicial system interprets them. And then the third one I'd say sits under the governance layer. It's really an operating system that allows you to do things at scale.
(16:28):
Now, I'll give you a perfect example. If you extend the judicial analogy, you have Congress making the laws and the judicial system interpreting them and saying this is how they present themselves to the world, and then you have people like us that walk around every day saying, "Okay, how do I structure my life to stay within those laws and guidelines, but do the best job I possibly can?" The same analogy applies for corporations. So I'll give you a perfect example. In order to scale chatbots, we have a system called Cake AI, which is basically a library of common components you would use for a chatbot that address accessibility specifically, the right color schemes to be WCAG compliant, the right size fonts, and different fonts so that people in any jurisdiction, whether it's Europe or the United States, have access to it for accessibility. And we use that for everybody. So instead of 14 chatbots across Lenovo all trying their best to make their own standards, we have common libraries so we make that once and reuse it over and over and over. That's an example of how we scale.
Alexa Carlin (17:41):
That helps you scale to multiple organizations also.
Rick Kreuser (17:45):
The library does, the component libraries do, and we can apply those to clients as well.
Alexa Carlin (17:50):
So you help organizations implement responsible AI. Now, how do you help them stay responsible?
Rick Kreuser (17:56):
So actually part of the work that we do as we implement solutions is we have to think about a solution in operation. Because an AI solution in the field, I'm being generic here, results don't stay static for some of the solutions, especially in generative AI. When you feed it information, it will learn from it and give you different answers. It's incumbent upon the people making the solution and running the solution to have the appropriate controls. So we see human controls where humans inspect things and say, "That was an appropriate answer. That wasn't an appropriate answer. This isn't working well. This is working well." That's a requirement for a lot of the standards bodies like ISO and NIST for you to be certified is you actually have that human control.
(18:42):
There are also mechanical controls you can keep in place. You can build in guardrails to keep people from putting in inappropriate things or getting out inappropriate things. You can have reference models that you can compare to what's actually out there in the field. And as soon as it begins to drift or demonstrate some bias, it can trigger an automatic refresh of it so that you can go back to square one and not get the inappropriate answers. So there's mechanical ways to do things and human ways to do things. And the trick is understanding your risk tolerance and picking the right sets of things to implement for any particular solution.
Alexa Carlin (19:20):
Right, which is very important because based on the users that are inputting information, it could change.
Rick Kreuser (19:28):
You will have models that evolve over time to give you answers. And so one of the things that we do in most cases is we'll come up with a set of reference questions, which are, "Here are 20 questions," and you can do this at scale mechanically, but, "Here are 20 questions that we want it to answer the exact same way every time." And you run that periodically, whether it's hourly, daily, whatever, against the model. And if you get a different answer, it's time to change.
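The reference-question check Rick describes is essentially a regression test run on a schedule. A minimal sketch of that control might look like the following; the questions, golden answers, and model stubs here are all hypothetical, and a real deployment would call the live model and wire the report into monitoring rather than print it.

```python
def check_drift(model, reference_set: dict) -> list:
    """Run every reference question through the model and flag any
    whose answer no longer matches the recorded golden answer."""
    report = []
    for question, expected in reference_set.items():
        actual = model(question)
        if actual != expected:
            report.append({"question": question, "expected": expected, "actual": actual})
    return report

# Hypothetical golden answers recorded when the solution was approved.
golden = {
    "What is the refund window?": "30 days",
    "Which regions are supported?": "EU and US",
}

# Two stand-ins for the deployed model: one stable, one that has drifted.
stable_model = lambda q: golden[q]
drifted_model = lambda q: "60 days" if "refund" in q else golden[q]

print(check_drift(stable_model, golden))   # empty: no drift detected
print(check_drift(drifted_model, golden))  # the refund answer has changed
```

When the report is non-empty, that is the trigger Rick mentions: a signal to retrain or refresh the model before the inappropriate answers reach users.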
Alexa Carlin (19:52):
So does Lenovo help organizations create those guidelines?
Rick Kreuser (19:55):
Yes, absolutely. So it's one of the things that we offer as our services as part of advisory is we will go in and cover responsible, people, technology and process, cover all those areas to make sure that the sum total of your people, plus your technology, plus your responsible guidelines, plus the jobs that people do every day, the process work that they do, all fit to deliver that result.
Alexa Carlin (20:22):
So if I'm an organization watching this or listening to this and I like, "Yes, all that sounds great," what would you say is an action step I should take before I contact Lenovo?
Rick Kreuser (20:35):
Before you contact Lenovo, and we can actually help with this step too, but probably the biggest thing that we see is customers come to us and they've not gotten alignment internally on what they're trying to do. So you may get somebody who comes in and says, "I have a use case," but that's not really what they're trying to do. What they're trying to do is get some outcome and they think that use case is going to get there.
(20:58):
One phenomenon that was really interesting about AI is it came on so fast that nobody really budgeted for it or thought about it as a concept that the company itself had to deal with. It presented itself as a whole bunch of projects that just popped up. So when you think about getting people aligned behind a responsible AI policy, behind common component libraries, behind the common infrastructure, these were things that companies hadn't solved for before and now have to.
Alexa Carlin (21:28):
Right.
Rick Kreuser (21:29):
It's required. So you can engage a consultancy like us; we call ourselves a consultancy and a device company, but you can engage us. But the more work you do ahead of time to get aligned internally, so your security people, your data people, your CIO, your line of business, are all saying, "Yeah, that's what we want to go do," that will speed the process immensely.
Alexa Carlin (21:54):
Right. And help make it more successful earlier on this stage, okay.
Rick Kreuser (21:58):
That's the first thing we do as advisors: if we ask you what outcome you want, and then ask your colleague what outcome they want, those are often two different things... Because, I mean, I'll give you a perfect for instance. Within Lenovo, sometimes we do AI to prove a concept. Sometimes we do it to drive productivity. Sometimes we do it to compete along a different dimension. So what are we really trying to do here? And so you have to ask that question to be intellectually honest with yourselves. And if you get different answers, go figure out how to get aligned.
Alexa Carlin (22:32):
Right. And I love how Lenovo, you're using the products you're also offering.
Rick Kreuser (22:37):
Yes. So we have a concept that I think some folks have heard in the market, and we're using it more and more, called Lenovo Powers Lenovo. And within Lenovo Powers Lenovo, the whole idea is, I mean, we've got the world's 10th best supply chain, we've got an outstanding field service organization that keeps winning award after award after award, which covers things like customer service functions. So when we have an idea on how to use AI, we'll implement it internally first and prove it out. We did that with Microsoft Copilot, for instance. If you want to do Copilot adoption services, let's figure out how to use it and deploy it ourselves first. And you get to tell the story in the market and you get the bumps and bruises from having gone through it at an N-equals-one kind of level. And that's valuable, that's valuable so that other companies don't have to... They can go faster, they can get a better outcome more quickly.
Alexa Carlin (23:34):
Yeah, you take away the whole research part of it.
Rick Kreuser (23:36):
I think there's still an element that needs to go there because you still need to fit it into the organization. So if you really want to think about it, think about it like a cable company, the cable that runs under the street is the same for everybody. The last mile of getting it into your house, you might have a different port, you might have a different place that it pops up out of the ground. So the last mile in a lot of cases is different for each client and plus the security side because everybody's security is different. But really, that's a good analogy to think about it.
Alexa Carlin (24:04):
Yeah, very good. So it's kind of like Lenovo gets you 80% there.
Rick Kreuser (24:09):
Lenovo will get you 100% there, but the componentry in the library can accelerate you to 60, 70, 80% we hope. Now, is the library complete? You're probably going to have use cases that don't have something in the library today. So we would look to either a partner to accelerate it or we would build it with you, partner with you to figure out what unique value prop we want to bring for that.
Alexa Carlin (24:33):
Which is really great because you have the foundation of the use case of the solution built. And then, you said this in the beginning of the interview, every organization needs something different based on what they do, what their people are focused on. And so you've customized the solution to that need and outcome.
Rick Kreuser (24:55):
That's exactly right. So we understand that every client has different aspirations and needs. They start at a different place. But you can take a common component if you want to think about it that way, you can change, maybe you change the data that's coming in slightly to be something different. Maybe you change the output formats and the queries that you answer on the backend to be something different. Maybe you change the weights and biases in the model itself so that it gives a different tone or type of answer. But those are all things that...
(25:27):
It's almost kind of like a car, right? The chassis and the wiring are going to be pretty close to the same for everything. But what entertainment package did you get? Where are you going to take it? Do you need a gas guzzler or do you want an EV that goes short range? So there are a few really distinct choices you can make to customize your model, especially related to security because everybody's security is different. Security for a bank is going to be different than security for a startup, which is going to be different from the government. It's just different things that you consider.
Alexa Carlin (26:00):
I love all your analogies, by the way. So one final question for you. What does Smarter AI For All mean to you?
Rick Kreuser (26:09):
Smarter AI For All means that you are really finding the right solution to achieve the outcome across the entire hybrid AI ecosystem. Whether that's from a device, through the orchestration, the infrastructure layers, your cloud layer, data applications, language models, services, and you've applied them all in any way to achieve an outcome for a client. That's exciting.
Alexa Carlin (26:45):
That is very exciting. Well, thank you so much, Rick. It was great talking to you and learning from you.
Rick Kreuser (26:49):
Sounds great, Alexa, thank you so much for having me. I've enjoyed it.
Alexa Carlin (26:53):
Yeah, definitely. So again, I'd like to thank my guest, Rick Kreuser, AI Center of Excellence tech lead at Lenovo, for stopping by and talking with us today. And thank you for watching. Visit us online to learn more about how Lenovo can accelerate your AI journey on the road to Smarter AI For All.

Rick Kreuser
AI Center of Excellence Tech Lead, Lenovo
Rick Kreuser works to help customers build new things using the right investments and technical vehicles, at the right time, to achieve a set of outcomes. He spends his time on lifelong learning topics including innovation, value from technology, optimizing technology investment portfolios, and transformational programs.


Meet Your Host,
Alexa Carlin
Alexa Carlin is an in-demand public speaker, bestselling author, top content creator for women's empowerment, and the Founder of Women Empower X. Alexa has worked with Fortune Global 500 brands to create captivating and relatable content. She has been featured on the Oprah Winfrey Network, Cheddar TV, FOX, ABC, CBS, TEDx and in Entrepreneur, Glamour Magazine, and Forbes, among others.

