
Third-Party Risk and AI

In this week’s episode, Katherine and Anna define “third-party risk” and highlight some key areas of concern at its intersection with AI.


Katherine Forrest: Good morning, everyone, and welcome to the fifth episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest…

Anna Gressel: I'm Anna Gressel, and today we're going to talk a little bit about how we think of AI and third-party risk.

Katherine Forrest: And Anna, we're not even in the same time zone. So maybe we'll just take a quick moment to talk about why I'm sitting here with a huge cup of coffee and you're off and ready to start the other end of your day.

Anna Gressel: Yeah, Katherine, I'm in London for a Leaders in Responsible AI conference. And I have to say, it's been super interesting to be on the other side of the pond, talking with all of these folks about AI regulation and governance. And it's always great to hear people in different jurisdictions get into these issues.

Katherine Forrest: Terrific. Well, let's get going today. What we're going to be talking about, as Anna said, is third-party risk and then also maybe, if we have a moment at the end, Anthropic’s release of Claude 3. So let's jump right into it.

Anna Gressel: Yeah. So I think, Katherine, do you want to start by defining third-party risk? What does that mean to you?

Katherine Forrest: Yeah, third-party risk is just what it sounds like. It's risk that a company is exposed to from a third party. And today, Anna, I thought we would talk about the third-party risks that arise when an AI tool is licensed in from a third-party business. That licensing-in can expose the company to risk, and that risk needs to be both managed and understood in all of its dimensions.

Anna Gressel: A couple of episodes ago, we talked about build versus buy. I think this is an even broader topic, but a related one. What about risks that arise when you license in third-party tools? What are some of the top things you need to think about, Katherine?

Katherine Forrest: One thing I actually talk to everybody about as a top risk is that you're being sold snake oil. That's my favorite phrase. The top risk is that what you're getting is not a robust, tested product but a very thin one, not quite vaporware, but a product built on an old foundation model, and "old" is a very relative term here; it could be less than a year old.

It might be a terrific idea, but it really isn't robust. And it may not have state-of-the-art technology. So the very first thing you want to do is figure out, are you getting snake oil versus a real product? And one of the ways you can do that is by looking hard at the history of the product, the capabilities of the product, whether the product has been tested, who the customers actually are for the product, and things of that nature.

Anna Gressel: Right, it seems like every tool out there is now being sold with AI in the title or as part of the marketing proposition. And you want to make sure that your use case, that is, what you need help with or could use, actually drives the transaction and not the other way around. But I have to say, there's a point at which you might not know what the use case is until it's shown to you. I mean, there are a lot of very innovative use cases out there, and sometimes the tools themselves can reveal them.

Katherine Forrest: I agree with you. I'm a big proponent of knowing whether or not a use case is going to actually be used, so to speak, by the business. And you can find that out by internal demos and surveys. But Anna, what are your top considerations in terms of third-party risk?

Anna Gressel: These days, a big one is where the tool fits into the regulatory scheme, if there is one for your business. So, for example, if you're in a highly regulated industry, such as insurance, financial services or pharma, you want to make sure the tool complies with your basic regulatory requirements, and you want to be aware that some of your regulators, particularly in banking and insurance, are putting out specific guidance on third-party risk considerations.

Katherine, what about you? What are some key provisions companies want to think about when they start licensing third-party tools?

Katherine Forrest: Well, one of the biggest ones that I think about is indemnification clarity. There are so many issues right now: whether a model may be vulnerable to third-party copyright claims, who's responsible for biases or inaccuracies, what happens when the model does or doesn't perform as intended, and who bears the risk in those scenarios. So looking at the contractual risk allocation is critical.

The second thing that I think is really critical is knowing whether and how your data may be used to train the model. That may have implications for the price you're going to pay. You may be perfectly happy with your data being used to train the model, but you may want to get some extra value for it. You may also be concerned about the competitive issues that come up when a third party uses your data to train a model that will then be used by your competitors.

And then the last one that I would mention is just whether or not you own the outputs. You always want to understand whether or not you own the outputs, so that if you're going to seek some kind of copyright or patent protection, and there are separate issues there that we're not going to talk about right now, you know where you stand. Any other things, Anna, that you'd be looking for with third-party risk?

Anna Gressel: Just two quick ones. The first is: do you want, and can you even get, an eyes-off environment? And by that, I mean an environment where the vendor does not have access to your data. That's not always possible, but it might be important in a competitively sensitive use case where you don't want the vendor to be able to see data from across a bunch of similarly situated companies. Another one is just what data you can put into the model, and whether you have the rights to put it in. You may have non-disclosure agreements or confidentiality agreements with various counterparties to your contracts. And if you want that data to be used in the tool, you have to take a really close look at whether that fits within your NDAs.

Katherine Forrest: All right. So let's just turn very, very briefly to Claude 3. I just wanted to say one thing about Claude 3, which is that, like some of the other Claude models, it's trained to abide by certain constitutional principles. And I think that that's very interesting. It’s trying to align around the concept of fair AI and a set of do's and don'ts. But Anna, what is Claude 3?

Anna Gressel: Claude 3, which is Anthropic's newest release, is really a family of models. There are three separate models that comprise Claude 3 – Haiku, Sonnet and Opus – and they offer different levels of complexity, intelligence and scale. Super interesting.

Katherine Forrest: All right, so that's just a little teaser on Claude 3, and that's all we've got time for today, folks. But Anna, I really want to talk about AI agents. I can hardly restrain myself, so we'll do that next time.

Anna Gressel: Same, perfect.

Katherine Forrest: All right, we're going to do it next time. Signing off now, I'm Katherine Forrest…

Anna Gressel: I'm Anna Gressel.

Katherine Forrest: And we are your hosts for “Waking Up With AI,” a Paul, Weiss podcast. Have a great day.

