
AI Compliance Programs

In this week’s episode, Katherine and Anna share their thoughts on compliance programs for AI and highlight some key regulatory concerns.

Katherine Forrest: All right, good morning everyone, and welcome to our fourth episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest…

Anna Gressel: And I’m Anna Gressel.

In our first few episodes, we covered some very basic background on AI, and for our listeners, I want to let you know that you can email us to get on our email list. From there, we'll be able to give you more in-depth content on recent developments and some details on upcoming events.

Katherine Forrest: And in today's podcast, we're going to change it up and talk a little bit about some legal nuts and bolts and practice pointers for AI compliance programs, which is actually an exciting topic for both me and Anna. And I want to introduce it by saying that I'm sometimes surprised, Anna, by the number of companies that don't yet have any form of AI-specific compliance program built into their compliance structure.

Anna Gressel: Yeah, I am too. I mean, I understand it's hard to build, and I think some of the lag is just because it takes so many people internally within a company to actually get this done. But the reason it's important is that AI is such a hot topic for regulators at the state level, the federal level and internationally. It's all over the place. And we deal a lot with AI-related exams and investigations, and I'll say governance and compliance is one of the things that all of those regulators are looking at very carefully.

They want to know which tools companies are using, whether someone in the company really has their arms around that, and whether they have a defensible process in place to make sure that those regulatory expectations are being met. And so, you know, documentation of all of that is really important. We'll talk about that today. But Katherine, we talk a lot with clients about how to just set up and execute best practices in their compliance programs. What are your top practice tips?

Katherine Forrest: Well, in terms of my top practice tips, I like to start off with the “who” and then the “what” as the first two. The “who” is who in the company is tasked with managing an AI compliance program, to make sure that they (1) know the pieces that have to be part of that compliance program and (2) see that those pieces are fulfilled. And actually, it doesn't have to be a single person. Sometimes it can be a committee.

But there needs to be a central repository for that kind of effort. And I should also mention that sometimes it can sit in compliance, and sometimes we find that it sits in legal, or some combination of the two.

And the second part of that is the “what.” It is really interesting, I think, to talk to companies that know there is some kind of AI-enabled tool within the company, usually some, perhaps many, tools that are already being used across the business. And they can be peppered all over the place. What they've got to do is get their arms around who's using which tools and where they are. Are they in human resources, customer service, directed marketing? Are they being used for algorithmic trading? Almost certainly, yes. And the first task of the “who” is to find out what the “what” is, and to get their arms around that inventory or registry of AI tools.

Anna Gressel: Yeah, and you know, Katherine, often we hear: do we have to architect some sort of really sophisticated software product to track everything? Does the inventory need to be custom-built? Should that take six months or a year? I think often we actually just say, look, if you don't have anything in place, it's great to start with an Excel spreadsheet. It's just putting one foot in front of the other sometimes, and you can really accomplish a lot with a spreadsheet. What is the product? Have you licensed it in or built it yourself? What is its purpose, and what is it meant to accomplish? Who's in charge of it, and how does it need to be evaluated and by whom? That evaluation can cover cyber risk, algorithmic bias and other things.
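For readers who want a concrete starting point, here is a minimal sketch of the kind of starter inventory Anna describes, written in Python so the result opens in Excel as a CSV. The record fields, tool names and review descriptions below are illustrative assumptions, not a prescribed format.

    # Minimal sketch of an AI tool inventory written to a CSV that can be opened in Excel.
    # Field names and example entries are hypothetical illustrations only.
    import csv
    from dataclasses import dataclass, asdict, fields

    @dataclass
    class AIToolRecord:
        tool_name: str       # what the product is
        source: str          # licensed in vs. built in-house
        purpose: str         # what it is meant to accomplish
        business_owner: str  # who is in charge of it
        risk_reviews: str    # how it is evaluated and by whom (cyber, algorithmic bias, etc.)

    inventory = [
        AIToolRecord("ResumeScreener", "licensed", "HR candidate triage",
                     "HR Operations", "annual algorithmic bias review by compliance"),
        AIToolRecord("TradeSignalModel", "built in-house", "algorithmic trading signals",
                     "Quant Desk", "model validation and cyber review by model risk team"),
    ]

    with open("ai_tool_inventory.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIToolRecord)])
        writer.writeheader()
        writer.writerows(asdict(record) for record in inventory)

The point is not the tooling: a plain spreadsheet with these columns captures the same information, and the structure can be migrated into a dedicated governance platform later if needed.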

Katherine Forrest: Algorithmic bias. That's a big phrase that we're going to come back to. We haven't talked about it on this show yet, but we're going to do that in our next episode. It's a really important concept, and it's particularly important to compliance programs.

And I'm actually going to add a third practical pointer, Anna, which is that companies need to have an AI compliance policy written down someplace, so that people can understand what the process is, how they can get new tools and new use cases properly approved and properly placed into that registry, and so that there can be an overall process for usage, for testing if that's necessary, and for follow-up. So I think that's a really important pointer for people to have.

Anna Gressel: And you make sure to tell people, Katherine, it's all got to be written down someplace.

Katherine Forrest: Yeah, you know, we have what we call, and Anna, you and I have done this together, our “AI Starter Kit”, which comes from having done this now for a number of companies. We've actually got the basics that we give to companies to sort of get all this started.

Anna Gressel: Yeah, and I think that documentation is so key. As we mentioned, just taking financial services as an example from an industry perspective, there's a really high likelihood of an examination. And we mean that everywhere from asset management to insurance to banking; the regulators in all of those areas are very active. And it's really quite likely that you're going to be asked for some of this documentation by at least some relevant regulators.

Katherine Forrest: So in the time that we have left, Anna, let's just talk about a couple of the top regulatory concerns. What do you see as some of the top regulatory concerns that a compliance program should be trying to mitigate?

Anna Gressel: Yeah, I mean, I think, Katherine, you hit on one earlier: algorithmic bias. I know we're going to talk about that in another episode, but there can be discriminatory outcomes from uses of tools, and that can depend on the context. So it's important to structure a compliance program to anticipate and deal with those risks, particularly in certain highly regulated contexts.

Another one is, you know, just the risk that a company is using a model in a way that might result in changes to how clients are treated or what services are offered, without the ways in which those decisions are being made necessarily being understandable. It can be really hard to test the accuracy, and therefore demonstrate the defensibility, of that kind of a tool. That concern is sometimes called explainability, and sometimes we call it defensibility; I think it depends a little bit on the context. Those are really important concerns for regulators, who want to make sure that the companies they regulate are acting in accordance with all of their current obligations, not to mention ones that might come into play in the future.

Katherine Forrest: All right, well, that's a great start. We've given people just a really high-level overview of some of the practical pointers for a compliance program. And again, you know, feel free to go to our website or sign up for our newsletter to learn more. But folks, that's it for today. We'll see you next time. I'm Katherine Forrest…

Anna Gressel: I'm Anna Gressel.

Katherine Forrest: And we're your hosts for “Waking Up With AI,” a Paul, Weiss podcast. Thanks very much.
