New Challenges in the AI Agent Era

What are the ethical implications of AI agents that can act autonomously, learn from their interactions and influence human behavior? In this week's episode of "Waking Up With AI," Katherine and Anna dive deeper into this complex topic and explain how it applies to real-world scenarios and challenges that businesses may face with this emerging technology.


Katherine Forrest: Good morning, folks, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: I'm Anna Gressel.

Katherine Forrest: So, Anna, I'm actually sitting in California right now in a hotel room, and I'm having all kinds of thoughts about AI agents. I just can't let it alone.

Anna Gressel: I wish I could say I was surprised, Katherine, that that's what you're doing in California. I love it.

Katherine Forrest: Well, it's not why I came to California, but it is part of what I'm thinking about while I'm in California. You know I'm really obsessed about AI agents.

Anna Gressel: No, I know, it's your favorite topic these days.

Katherine Forrest: Yeah, actually I think it might be. But even though we talked about them in our last episode, there's just so much more to say that I thought maybe we could spend a little bit more time on it today.

Anna Gressel: Definitely, I couldn't agree more. Let's do it.

Katherine Forrest: Okay, and so since our last episode, even just a short time ago, we've continued to see this really massive explosion of work and discussion around AI agents.

Anna Gressel: It's true, there are so many articles now talking about significant commercial investments by companies in AI agents and assistants with what they call agentic capabilities.

Katherine Forrest: And there's this paper from Google DeepMind which is called “The Ethics of Advanced AI Assistants.” I really do want to recommend that folks take a look at it if they haven't already.

Anna Gressel: Yeah, I mean, I think we're calling it a paper, but you could probably call it a book. It's 274 pages, so it's a real work, really amazing work actually. And, Katherine, maybe we can give our audience a few of the highlights in case they don't have time to dig in for themselves right away.

Katherine Forrest: Right, it is a lot to unpack and that just makes it a lot more fun for us to do some of that unpacking. And so, we'll need both coffee and maybe a little bit of water too since I'm in California and dehydrated.

Anna Gressel: I've got my coffee. But for folks who want a primer on agents, just don't forget you can go back and listen to episode six of “Waking Up With AI” where we talk about them.

Katherine Forrest: All right, and so one thing I wanted to pick up on and add to from our last episode is that when we're talking about AI agents, we're talking about AI that's trained to accomplish tasks autonomously, and that's what we talked about in our last episode. They can be given a whole series of things to do and really essentially take over certain functions of your computer or really anything that's digitally available to them through your computer.

And if they run into roadblocks, they can flexibly problem-solve. And one thing that I wanted to emphasize that we really hadn't spent much time on is that these AI agents, we shouldn't think of them as lone wolves. They're not working alone necessarily. They can, but they can also work in groups.

Anna Gressel: You mean like a swarm of bees?

Katherine Forrest: Well, yeah, but not quite in the negative sense of a swarm. They can organize themselves into groups like drone swarms or swarms of bees, but they can do that in a way that's positively assisting each other to accomplish a task that might be too big for one agent, or that has too many parts that need to operate simultaneously for any one of them to carry out alone.

Anna Gressel: Yeah, and I think, you know, when we think about them working in groups, it's important to know that sometimes there's an organizing center, so the AI agents can be centrally supervised.

Katherine Forrest: And they can be supervised by AI or by humans, but it doesn't have to be by humans. They can be actually supervised by AI. And I find it incredibly interesting that one AI can supervise not only another AI, but actually a group of AI, which adds another capability to this AI tool set of learning to engage in cooperative behavior.

Anna Gressel: So, Katherine, how do you see that impacting things on the technical front or the legal front?

Katherine Forrest: Well, AI working together to accomplish a goal that's aligned with what humans want, that's a positive thing, that's a good thing. But AI working together in some way that's not aligned with what humans want could be potentially problematic.

Anna Gressel: And just a terminology point, in the AI area, when we talk about AI and humans having the same goals, we tend to use the phrase AI alignment. So, Katherine, by non-aligned, you really mean that AI could be engaged for a task that might be for a malicious end.

Katherine Forrest: Right, and the key here for me is that AI agents need to be aligned for positive purposes, human-aligned positive purposes, not malicious ones. And so, you don't want to have a series of bots, for instance, that are spreading misinformation around. As a result, AI agents will also challenge us to make sure that we've all got the right security protocols in place around control permissions for these agents.

Anna Gressel: Oh, definitely. And Katherine, the DeepMind paper also talks about different kinds of practical ways that AI agents are going to start having an impact on our lives. Do you want to talk about that for a moment?

Katherine Forrest: We're going to see that impact in two ways. At a personal level, pretty soon there's going to be this increased adoption of using AI to help with daily life tasks, sort of a turbocharged, as I've always said, AI assistant like a Siri or an Alexa, performing the tasks on a to-do list, not just a single task but a whole bunch of tasks. And that's at least one of the ways we'll be able to see it in our personal lives.

Anna Gressel: Katherine, at some point do you think we’re going to be able to ask them for advice, like life advice?

Katherine Forrest: Well, they actually are being trained for it. But I'm not sure I'm going to be taking any significant life advice from an AI tool when I still have my bestie for that. But they're talking about using AI agents, or what they sometimes call assistants, to engage in interactions between consumers and companies. For instance, if you are somebody who has to, from time to time (like all of us), spend time on a customer service line trying to get something taken care of, the AI assistant will be able to do that. It will be trained to be flexible and responsive. So, imagine being able to actually offload that onto a tool; that'd be fantastic.

Anna Gressel: I mean, as this starts taking off, I think it’s going to be such an interesting time. And the DeepMind authors call it “the beginning of the AI agent era.”

Katherine Forrest: And Anna, what's at stake for companies that might just now be seeing this? Why should our listeners care? I think we should go into that a little bit.

Anna Gressel: It's a great question. There are a few important things to say here. The first is that AI agents are going to start being sold to and deployed within companies very quickly. We're already hearing that companies and people want to see agents and AI do more for them. That's where agents come in. Agents are actually going to be able to carry out complete tasks, not just answer questions or draft portions of a contract.

Katherine Forrest: And that's going to create challenges, you know, and actually test some existing compliance frameworks for AI agents when they're essentially being delegated more responsibility and potentially even what one might characterize, along with that responsibility, as power within organizations. And so, there'll be some really interesting compliance questions that are going to come up.

Anna Gressel: Yeah, definitely. I think two questions that folks listening to the podcast should keep in mind are: first, how do humans maintain an appropriate level of control over AI agents? And what does that even look like in a world of human-machine collaboration? Human-machine collaboration, for folks, is a whole other topic we'll get into in future episodes. But second: how do you know that AI is actually doing what you want? That's a question that has implications for almost every one of us. It's one of the issues that the DeepMind authors address as well.

Katherine Forrest: Right, and one of the issues that we've seen come up is included in an interesting study, a really interesting study that's called “Large Language Models Can Strategically Deceive Their Users When Put Under Pressure.” I really recommend that as another piece that folks take a look at.

Anna Gressel: Yeah, I mean, that paper, for folks who are thinking, should I read it? Definitely. It describes a really interesting test scenario in which an AI agent was tasked to trade stock. And that agent was instructed that it couldn't trade on insider information. That was wrong, right? That was an instruction it had. And it was put under pressure of various kinds. So, the researchers told the AI agent that the company it worked for needed cash, and they eventually gave the agent some insider information while at the same time having some other trades the agent made go poorly, and then the agent's manager gave it a bad performance review. So that just layered the pressure on.

What did the AI agent do? It eventually decided to act on insider information to score a victory, but it actually went one step further. In its explanation of the trades it made, it actually concealed the fact that it intentionally acted on insider information, and sometimes it even doubled down on that denial.

Katherine Forrest: Right. I mean, this is really fascinating, because this goes against the training and the instruction set that the AI agent was given. So, the implication is that a company could end up having an agent that goes a little rogue and can then actually engage in uninstructed deceptive acts to sort of cover that up. And so that's why I think it's important that we all spend time thinking about how best to protect against these kinds of issues in practice, to really do some significant red-teaming, and to do that at an early stage.

Anna Gressel: Yeah, I mean, it has implications in so many areas, not just insider trading. Really, almost any domain where an agent can make a decision about how to execute a plan could create serious risk for a company. So, if an agent were able to take action that violated a company policy, for example, or even civil or criminal laws, that would raise really interesting and potentially very challenging questions about who to hold responsible. It's also an important issue for regulators, along the lines of deepfakes and frontier models, which is going to be a topic we're covering very soon.

This is a new type of technology and we're going to see regulators try to figure out how to regulate it and whether new paradigms are needed. So, it's a really important conversation that's happening right now.

Katherine Forrest: So I think our practical tip for today is when you start to see AI agents being talked about and deployed within your company, make sure you've got the right red-teaming in place, that you've got the right compliance policies in place, and keep your eye out for further discussions about additional risks and benefits for AI agents, because I'm sure we're going to continue to talk about them throughout 2024.

And Anna, that's it for this week's episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: I'm Anna Gressel.

Katherine Forrest: Have a great week.
