skip to main content

Frontier AI Models and Potential Risks

In this week’s episode of “Waking Up With AI,” Katherine and Anna introduce the concept of frontier AI models, their potential risks and the growing regulatory concerns surrounding these powerful AI systems.

  • Guests & Resources
  • Transcript

Katherine Forrest: Good morning, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: I'm Anna Gressel.

Katherine Forrest: And Anna, last week we got to spend some time on my current favorite topic of AI agents, but today it's your turn for your current obsession.

Anna Gressel: Yes, I love that. We're going to talk about a concept that is new, but I think it's going to be getting more and more attention, and that is frontier AI models.

Katherine Forrest: Frontier AI models, this is a mouthful. I think that we need to start at the beginning of this one and explain what we're talking about. Frontier models is definitely going to be new to a lot of our audience.

Anna Gressel: Totally. I mean, it's new to the world, which is part of what's interesting about it. So, a frontier model refers to an AI model that's basically so large and powerful, it's on the literal frontier of AI.

Katherine Forrest: Frontier of AI in what way?

Anna Gressel: I think what we mean is really at the edge, at the frontier in terms of cognitive abilities and speed and capabilities more broadly.

Katherine Forrest: So, it's models with such significant capabilities that they would seem like they would be catching then the attention of regulators.

Anna Gressel: Absolutely, and they have. So that's why I wanted to spend some time on this today. Regulators have already started to suggest that some protocols and regulations for these models. And the EU has been referring to them as general purpose AI models with systemic risk. It's that systemic risk piece that's pretty important.

Katherine Forrest: Right, and the EU AI Act refers to them, the White House Executive Order refers to them, and now there's a new California Senate bill. It hasn't been passed yet, but there's a bill before the California Senate that refers to them.

Anna Gressel: Exactly. That's right. So, in the White House executive order, they're called “dual-use foundation models.” It's a different term, but the same concept. And there are really a lot of powerful models out there these days. And there's a lot of debate about what defines a frontier model. And in addition, there are competing ideas about that definition and whether it should really come down to the amount of computing power used during training. But there's a lot of definitional concepts that relate to that.

So, I'm going to get technical for a moment with you, Katherine. There is a term about computing power that regulators are using that goes by the acronym of FLOPS, which is important for people to know about.

Katherine Forrest: Do I even want to know what FLOPS stands for? Do I need to know? Is that important for my life?

Anna Gressel: You know what? You may not need to know it, but I'm going to tell you anyhow. Just because I think it's fun for our audience, they're going to hear about FLOPS, just because everyone calls it that. And that stands for Floating Point Operations Per Second. So basically, FLOPS, in the AI context, are a measure of how much compute — so it's computing power — it takes to train or run a model.

And right now, researchers are arguing that models are getting better as the computing power increases, and FLOPS are the way that researchers and regulators are starting to measure that.

Katherine Forrest: So, it's sort of a quantitative measurement. And so, I've heard reference to frontier models as having been trained at  FLOPS. Is that something that you've heard?

Anna Gressel: Yes, so usually the threshold is either  or  FLOPS, depending on the regulator. And that's a quantitative measure of whether a model has reached the frontier level that regulators are starting to pay attention to or be concerned about. And right now, only a few models that we actually know about pass that threshold. But then again, there are a lot of models that don't release all of their details.

Katherine Forrest: So, let's talk about what the regulators are concerned about with these frontier models.

Anna Gressel: So, as I mentioned before, it's a concept that models can pose what we might talk about as systemic risk. And the EU AI Act refers to them as possibly having “high impact capabilities.” So, let's unpack that a little bit. There are a few things to flag here. The first regulatory concern is around cybersecurity. So, regulators are worried that frontier models, which are considered to be the most powerful models out there, can be used to dramatically decrease the skill level required by bad actors to generate malicious code.

Katherine Forrest: Okay, so let's pause on that because the concern there that you're describing is that a more powerful model will lower the amount of work that a human has to do to create, for instance, malicious code or to do any kind of mischief.

Anna Gressel: Right, that's exactly right. So, you could have dramatically more bad actors trying to compromise systems and doing a better job of it because it's actually easier for them to do. And one recent study found that GPT-4, which is one of today's frontier models, could autonomously exploit 87% of common cyber vulnerabilities. And we should mention to the audience that these things are in vulnerability databases.

Katherine Forrest: Yes, and these vulnerability databases are databases that purport to describe known security vulnerabilities. And one concern for a frontier model is that a bad actor could go to one of these databases, describe a vulnerability to a model, and then have the model do all the heavy lifting to exploit that vulnerability.

Anna Gressel: Right, yes, again, it's about the capabilities of the models just getting better and making it easier to undertake things that are dangerous. So, along the same line, regulators are also concerned that frontier models are going to be able to start actually writing more complex and capable code. And code writing is part of what generative AI models do now, so that's nothing new. But it's the complexity of the coding that actually is what is causing concern.

Katherine Forrest: And another thing that regulators are talking about now with these frontier models is a potential impact on critical infrastructure and the fear that bad actors could exploit one of these frontier models’ coding ability to attack the infrastructure like an electric grid or take down the financial system or just do a lot of mischief within either of those.

Anna Gressel: Definitely. And a third issue regulators are highlighting is a frontier model's ability to help humans acquire chemical, biological, radiological, or nuclear weapons, sometimes called CBRN or CBNRE. Some of those models have been shown to be capable of generating novel compounds, like proteins, that can be good in general. But people are worried that they might be able to instruct users in how to acquire, create weapons.

And another issue would be actually AI models that are controlling weapon systems on the battlefield, which more capable models might be better at doing.

Katherine Forrest: And we mentioned just a little while ago at the beginning, California's pending Senate bill, it's Senate Bill 1047. And we should just briefly mention what Senate Bill 1047 stands for.

Anna Gressel: Senate Bill 1047 is California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. That bill, if it were enacted, would require model developers to document specific tests undertaken to show a model doesn't have hazardous capabilities like the ones we just described. And under certain circumstances, it could allow for models to be shut down or require certain controls on them.

Katherine Forrest: Right, but it hasn't been passed yet.

Anna Gressel: No, it's still up for discussion as much AI regulation is these days. And to punctuate this, not all proposed foundation model regulation would even go as far as to requiring a model to be shut down. Other things we're beginning to see are requirements to undertake regulatory reporting on foundation models if they're planning to be created or if they are created, conducting impact assessments, and what we call red-teaming a model. Red-teaming is really when you have engineers try to break a model, find vulnerabilities, or get it to spit out harmful content so those issues can be fixed or mitigated.

Katherine Forrest: All right, so it sounds like we're going to have a lot more to say about frontier models as the year progresses. But for right now, that's all the time that we've got for today. I'm Katherine Forrest.

Anna Gressel: I'm Anna Gressel and we hope you're subscribing and if you do, you will see us in your inboxes very soon.

Apple Podcasts_podcast Spotify_podcast Google Podcasts_podcast Overcast_podcast Amazon Music_podcast Pocket Casts_podcast IHeartRadio_podcast Pandora_podcast Audible_podcast Podcast Addict_podcast Castbox_podcast YouTube Music_podcast RSS Feed_podcast
Apple Podcasts_podcast Spotify_podcast Google Podcasts_podcast Overcast_podcast Amazon Music_podcast Pocket Casts_podcast IHeartRadio_podcast Pandora_podcast Audible_podcast Podcast Addict_podcast Castbox_podcast YouTube Music_podcast RSS Feed_podcast

© 2024 Paul, Weiss, Rifkind, Wharton & Garrison LLP

Privacy Policy