Paul, Weiss Waking Up With AI
Synthetic Identity: Navigating the New Deepfake Frontier
In this episode, Katherine Forrest and Scott Caravello examine how deepfakes have evolved from static clips to adaptive, real-time impersonation driven by identity engines and behavioral replication. They explain why sensor-consistent forensics are being spoofed, the implications of the zero-trust evidence era for courts and companies, and how regulators and insurers are responding—plus concrete steps on provenance controls and incident response.
Episode Transcript
Katherine Forrest: Hey, good morning and welcome to today's episode of Paul, Weiss Waking Up with AI. I'm Katherine Forrest, and, even before you get to say you're Scott Caravello, I just have to tell you that even though we're calling this waking up with AI, I'm severely jet-lagged. I've been up for so long that this is almost my evening now, even though it's morning.
Scott Caravello: That's true, that's true. Well, welcome back. I'm Scott Caravello. I hope you had a great trip to Morocco, and I, you know, I was following your updates there from the International AI Conference. It was in Settat, right? At the university over there, Hassan First?
Katherine Forrest: Right. Right.
Scott Caravello: That’s great. Well, it looked extraordinary. But before we get into the topic, what's going on over there on the AI front?
Katherine Forrest: Well, it was a really fascinating conference. It was an international AI conference, and it was totally energizing. You know, Morocco right now, like so many countries all over the globe, is dealing with not just the technology and what the technology can do and the efficiencies it can bring and trying to perhaps even develop some of its own technology, but also what kinds of societal and legal issues are coming straight at them. And so it was very interesting. It was an international group of speakers, and I felt very honored to be asked to be one of them. And, you know, one of the issues we actually touched on at this conference was one that we're going to talk about today, which is this new generation of deep fakes and what are being called right now synthetic identity threats. And, you know, that was something that we talked about very briefly. Professor Said Raouh was my primary contact there and one of the two lead organizers of the conference. And so it was great to have the conversations that we did.
Scott Caravello: Well, that sounds very exciting. And by my count, that makes what, four or five continents that you've taken your AI speaking tour to so far. And that's great.
Katherine Forrest: Well, it's global. AI is global. This is the beauty of it.
Scott Caravello: That's great. And so today we're headed into another interesting area: changing and emerging issues with deep fakes, as you teed up, Katherine.
Today's episode isn't really about the deep fakes we talked about last year; it's about a new, more dangerous wave. And those are adaptive deep fakes, behavioral replicas, and identity engines that can basically be you online.
Katherine Forrest: Right, and I really want our listeners to understand from the start what we mean by deep fakes, which we've been talking about and everybody's been talking about now: the audio and video replicas of someone real saying things or acting in ways they never said or did, or an entirely made-up, you know, sort of avatar-type person with the voice or mannerisms of another person. But that's sort of only the very beginning of the deep fake issues. What we're talking about today is taking that and essentially alerting our audience to the fact that these things have become turbocharged. They're far more interactive and able to pivot and to imitate your voice. Let's just take you, Scott. I could be a deep fake right now that you're interacting with. I could have a conversation with you and have some back and forth. They can learn from you, adjust, engage. So you don't just have fake video and fake audio that gets put in the can. You can actually have something that is really like an avatar, but photorealistic, that looks like me or looks like you, and that can interact with a real human in real time, in actual sort of nanoseconds. That's what we're talking about: this interactive kind of deep fake.
Scott Caravello: It’s a really striking development. So then maybe we can start with the technical leap that's played into all of this, and that's what's called “sensor-consistent forensics.” You know, historically the little quirks in camera sensors have helped experts authenticate real footage and tell what's legitimate versus what's synthetic or made up. But that safeguard is evaporating here.
Katherine Forrest: Right, and that's one of the ways, you know, sensor-consistent forensics were one of the ways that you could tell a deep fake from an actual real video. But the generative models can now simulate those microscopic sensor signatures, things like lens distortion patterns or pixel-level noise or even microphone frequency textures... textures... textures. I have to say it ten times fast. You see, I'm jet-lagged, and so the word textures comes out a little funny. But once deep fakes can mirror those, which they can, the classic tests that courts and others have relied on to try and tell real from fake have become increasingly, I don't know, one might even use the word meaningless, or at least much less effective.
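For listeners who want a feel for what sensor-consistent forensics means in practice, here is a minimal, illustrative sketch of the classic approach: extract a camera's noise "fingerprint" (photo-response non-uniformity) from known-genuine frames and check whether a questioned frame's noise residual correlates with it. This is a simplified toy, not a production forensic tool; the helper names and the crude high-pass "denoiser" are our own assumptions, and, as the episode notes, generative models can now synthesize residuals that defeat tests like this.

```python
# Illustrative sketch of a PRNU-style sensor-fingerprint check (toy version).
# Assumption: frames are grayscale numpy arrays of the same resolution from one camera.
import numpy as np

def noise_residual(frame: np.ndarray) -> np.ndarray:
    """Very rough denoiser: subtract a local mean to isolate high-frequency sensor noise."""
    f = frame.astype(np.float64)
    padded = np.pad(f, 1, mode="reflect")
    blurred = sum(
        padded[i:i + f.shape[0], j:j + f.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return f - blurred

def camera_fingerprint(reference_frames: list[np.ndarray]) -> np.ndarray:
    """Average the noise residuals of known-genuine frames from the same camera."""
    return np.mean([noise_residual(f) for f in reference_frames], axis=0)

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def looks_consistent(frame: np.ndarray, fingerprint: np.ndarray, threshold: float = 0.02) -> bool:
    """True if the questioned frame's residual correlates with the camera fingerprint.
    The episode's point: synthetic media can now mimic this signature, so a pass
    is no longer decisive evidence of authenticity."""
    return correlation(noise_residual(frame), fingerprint) > threshold
```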
Scott Caravello: Totally, and that's what's pushing us into what researchers are now calling the “zero-trust evidence era.” For decades, audiovisual recordings were treated as reliable unless someone proved otherwise. And now that presumption has basically been flipped on its head.
Katherine Forrest: Right, and when that presumption flips and you think that things might be fake until proven otherwise, you don't just have evidentiary problems, you actually have problems that go to the foundation of the video itself. And so synthetic identity capabilities, they're really threatening the basic ability of institutions, not just courts, but other institutions to verify reality. And that can include news and media organizations. It can include what's put out there through social media. And that can present some issues for misinformation and even for the democratic process.
Scott Caravello: So on that note, let's talk about what different jurisdictions are actually doing in response. I mean, the listeners are always asking who's actually trying to get ahead of this.
Katherine Forrest: Right. And there are a number of jurisdictions that are trying to get ahead of this. And it's really an outgrowth of the work that had already been started with deep fakes generally, because deep fakes have been an escalating issue, and concerns have been growing for several years now, including from the last election. And so the EU AI Act includes a watermarking provision and a synthetic-media provenance provision. So you've really got to be able to trace back the provenance of synthetic media. And China is already requiring labeling of AI-generated content and real-name registration for creators. So you can't just say some kind of cartoon character is the creator; it's a real-name registration. And Brazil is debating right now a digital responsibility framework that would impose significant penalties for malicious deep fake deployment. And, you know, these are not small moves; they're really attempts to try and stabilize information ecosystems at a national or international level.
Scott Caravello: Yeah, but what I would actually add there, right, is that we mentioned very briefly last time that the EU is undertaking a simplification initiative for the digital landscape, and that includes its AI regulations. And so, as currently drafted, that watermarking and synthetic provenance provision that you mentioned would actually be pushed back by six months if the simplification initiative goes through as drafted. And so the effective date would move from August of next year to February of 2027. And then here in the US, you know, it feels a bit more like a patchwork. So what are the states doing on this front?
Katherine Forrest: Yeah, you know, actually, I'll say this, Scott: you're really the expert on the states, so you're just letting me speak to this, sort of throwing me a bone in my jet-lagged moment. But in the United States, as we all know, we don't have right now a federal set of regulations for deep fakes. And so we do have this state patchwork quilt. California right now requires political deep fakes to carry certain kinds of disclosure labels when they are within a certain time period of an election. And latent disclosures, or in other words, hidden machine-readable markings that provide provenance data... provenance data... however one pronounces that, about the content, such as its authenticity and origin, will more generally be required in California for certain AI systems starting in just about a month, in January 2026. And those AI system providers are also going to have to give users the option to include a visual watermark or label, called a manifest disclosure, as well. So we've got that in California. Then we've got Texas, which criminalizes deceptive deep fakes in political communications, and New York, which is moving toward biometric likeness protections that would essentially expand right-of-publicity laws. And then, moving outside of the United States, election regulators globally, including in India and Slovakia, are scrambling because, you know, deep fakes have started to disrupt all kinds of electoral campaigns.
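To make the "latent disclosure" idea a bit more concrete, here is a minimal, hypothetical sketch of what a machine-readable provenance manifest might look like: a small JSON record bound to the content by a hash and a signature. It is loosely inspired by content-provenance schemes such as C2PA, but it is not any statute's or standard's actual format; every field name and the toy HMAC signing key are our own assumptions.

```python
# Hypothetical sketch of a machine-readable provenance manifest for a media file.
# Standard library only; the field names are illustrative, not a real standard.
import hashlib, hmac, json, time

SIGNING_KEY = b"demo-key-held-by-the-content-creator"  # stand-in for a real signing key/cert

def make_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds manifest to the file
        "creator": creator,
        "generator_tool": tool,          # e.g. the AI system that produced the content
        "ai_generated": True,
        "created_at": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    claimed_sig = manifest.get("signature", "")
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(claimed_sig, expected)
    matches_file = body.get("content_sha256") == hashlib.sha256(media_bytes).hexdigest()
    return untampered and matches_file

if __name__ == "__main__":
    fake_video = b"...synthetic video bytes..."
    m = make_manifest(fake_video, creator="Registered Creator LLC", tool="example-video-model")
    print(verify_manifest(fake_video, m))          # True
    print(verify_manifest(fake_video + b"x", m))   # False: content no longer matches
```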
Scott Caravello: Let's anchor this, then, in what these systems can actually do today, because some of the examples are pretty astounding.
Katherine Forrest: Right, they really are. This is now, again, the level of not only incremental change but real step changes in deep fake technologies. Deep fake technologies. You gotta listen to that. Don't edit that out, because this is my true blue, you know, sort of jet-lag voice, and people need to hear it. So today we have models that can adjust your facial expressions, or the facial expression of the entity that's appearing in the deep fake, whether it be an avatar or a video of a real person, in real time. And they can correct facial expressions mid-conversation if something looks a little bit off. So imagine a video call, like a Zoom, where the person you think you're speaking to isn't just a recording that can't react to you, but a reactive system that's responding to you. So you can actually impersonate a person in a real-time, very interactive way. That's what we're talking about with today's deep fakes. It's not just recorded video; it's the ability to impersonate somebody in real time in an interactive way.
Scott Caravello: And that brings us to one of the biggest conceptual shifts here, which are the autonomous identity engines.
Katherine Forrest: Right, and those are the AI agents that are trained not only on your voice or your image, but on your behavioral patterns. Like, for instance, you see me moving my hands as I speak. And so the deep fake would actually learn that. They would learn, you know, how you negotiate something, how you pause, how you phrase uncertainty, how you laugh, how you don't laugh, whether you're awkward, whether you're not awkward. They can run Zoom meetings and maintain your style throughout. And this is no longer, again, I keep repeating this, but I find it really fascinating and really scary. It's no longer about just a digital video clip of anyone. It's about actually being able to put that person in a realistic way into an interactive situation with a human being—talk about the Turing test—a human being who will not understand or necessarily know from the typical kinds of forensics that we've been using so far that they're interacting with a deep fake. So it's about inhabiting a person's digital life.
Scott Caravello: Well, Katherine, if I ever get a Zoom invite from you and we get on and you ask me to go out and buy a bunch of iTunes gift cards, uh, you know, who knows if I'm going to be able to tell if it's real or not.
Katherine Forrest: Right, no! That one—well that will not happen. Definitely do not do that! Definitely do not do that!
Scott Caravello: But jokes aside, that real-world misuse is already here. There was the Hong Kong finance department deep fake heist that was a big wake-up call.
Katherine Forrest: Yeah, and that was actually some time ago, and the tools have gotten much better since. But just to remind our audience who may not have seen that, there was a case with a deep fake scammer call. It was a video call that was set up, I don't actually know if it was Zoom or some other platform, and it had multiple executives. One human was invited, and then a bunch of deep fake, I don't know what to call them, avatars, people. They looked like the real CEO, the CFO. It looked like real people from this organization, employees that the individual dealt with. It had their voices and it had their mannerisms, and these fake people, if you will, convinced this human employee on this video call to transfer the equivalent of $25 million from the company's accounts to another account. And the employee had no idea that, other than himself, everybody else was fake. And so today's tools make even that look a little bit primitive. And so, you know, if you go online and you see some of these interactive deep fakes, they're really extraordinary.
Scott Caravello: Well, that is wild. But in your day-to-day, how are you seeing regulators and plaintiffs' lawyers thinking about the liability here? Because this feels like a really new frontier for litigation.
Katherine Forrest: Well, you know—the FTC has already been warning about AI-enabled impersonation and talked about it as a potential violation of Section 5 of the FTC Act. And so we'll see whether or not there are cases brought there. And there are some plaintiffs who are experimenting with claims like negligent enablement of impersonation and failure to warn, and we're going to start to see a new liability landscape. It'll depend upon how we start to see these interactive deep fakes actually be used and what kinds of harms they create. And then I think the type of harm will really give rise to the type of claim or lawsuit that's brought.
Scott Caravello: Totally. And so one term that people keep hearing, though, is algorithmic defamation, and I think that could benefit from some unpacking. So what does that actually mean?
Katherine Forrest: Well, you know, algorithmic defamation is sort of what it sounds like. It's really these defamatory statements that are allegedly generated by AI, and it concerns whether or not AI outputs that are generated by a particular tool could trigger additional defamation liability or something closer to product liability. So the stakes are pretty enormous, but then you add that to a video deep fake and you've really got a very, very powerful combination.
Scott Caravello: Yeah, and then on the judicial front, judges are already seeing challenges to digital evidence based on the possibility of AI manipulation.
Katherine Forrest: You know, this is something that I've spoken to a number of judicial conferences about: the burdens of determining when a deep fake issue can be raised, how much evidence has to be presented in order to have that deep fake issue be taken seriously, and when there is a sufficient amount that you can present it to the jury. You know, we've got federal rules of evidence that were not built for synthetic identities, and certain courts now are looking at whether or not there has to be chain-of-custody metadata, certain kinds of cryptographic signatures, and certainly expert testimony when there's a challenge or when there's a particular kind of video that's being presented. So authentication issues are really rising up and taking front and center seats in the courtroom.
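As a concrete illustration of the kind of chain-of-custody metadata courts are starting to ask about, here is a minimal, hypothetical sketch of a tamper-evident custody log for a video exhibit: each entry commits to a hash of the file and a hash of the previous entry, so any later alteration breaks the chain. It is a toy example of the general technique, not any court's or vendor's actual system, and all of the field names are our own assumptions.

```python
# Toy tamper-evident chain-of-custody log for a digital exhibit.
# Each entry commits to the exhibit's hash and to the previous entry's hash.
import hashlib, json, time

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_entry(log: list, exhibit: bytes, custodian: str, action: str) -> list:
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": int(time.time()),
        "custodian": custodian,
        "action": action,                  # e.g. "collected", "copied", "produced"
        "exhibit_sha256": file_hash(exhibit),
        "prev_entry_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return log + [entry]

def chain_intact(log: list, exhibit: bytes) -> bool:
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["entry_hash"] != recomputed or entry["prev_entry_hash"] != prev:
            return False           # a log entry was altered or reordered
        if entry["exhibit_sha256"] != file_hash(exhibit):
            return False           # the file produced today is not the file that was logged
        prev = entry["entry_hash"]
    return True

if __name__ == "__main__":
    video = b"...original footage bytes..."
    log = add_entry([], video, "Investigator A", "collected")
    log = add_entry(log, video, "Paralegal B", "copied to evidence store")
    print(chain_intact(log, video))                      # True
    print(chain_intact(log, b"...edited footage..."))    # False
```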
Scott Caravello: Yeah, and then another emerging issue is behavioral replication. The idea that AI can mirror how a person thinks or decides.
Katherine Forrest: Right, I mean, that's, you know, using certain kinds of world modeling techniques, and you know how I like world models. We've talked about that before. And these systems can actually predict likely responses in unfamiliar scenarios. And that's really what a world model is designed to do, is to navigate unfamiliar environments. And so even if someone tries to test a deep fake that's interacting with questions that only you should know, the system might still answer correctly based upon probabilistic analysis of all kinds of things, including, for instance, if you're the kind of person where there's a fair amount of data about how you speak and how you talk, you know, it can do a pretty good job with that. And so when behavior can be analyzed and put into a probabilistic formula, it can be replicated.
Scott Caravello: And that just fundamentally undermines identity verification, right? I mean, just puts everyone off balance. And so the old question-and-answer methods just aren't going to work.
Katherine Forrest: Right. If an AI has enough of your data, including emails, recordings, transcripts, podcasts, articles, it can approximate your decision patterns, you know, better than even a tired, jet-lagged version of yourself. And identity then becomes something that, and get this, you could actually stop being able to possess. I don't mean that in a literal sense; I mean that, in a practical sense, your identity stops being something that you are singularly in control of. And that to me is really wild. And, again, I go back to the Turing test, because we thought of the Turing test as really being about written text, written text going back and forth. And now we're talking about interactive videos that look like humans, that sound like humans, that sound like you or me. Literally impersonating and interacting in real time. So it's pretty wild and pretty big stuff.
Scott Caravello: It is, and in a way it reminds me of, you know, the motivation behind the biometric privacy statutes when they were first passed, right? Because compared to other data points that you can change, like a password, you can't change your biometrics, you can't change your face in a meaningful way. And so once this is replicated and once this technology is really widespread, you know, there's really no putting that genie back in the bottle. So anyway, let's talk about companies. What does responsible governance look like in this new environment?
Katherine Forrest: You know, the biggest issue for any kind of governance is awareness: making sure that the individuals who are responsible for protecting the company against incursions, whether cyber threats or fraudulent incursions using AI, stay on top of this issue, and that they, and their technical people, stay on top of how you're going to distinguish between real and fake videos. Because, you know, whether it's a bigger company or a bigger event, or even a smaller company, you can actually now be confronted with fake interactive videos. And so being aware of that possibility is, I think, very, very important.
Scott Caravello: And incident response is going to look different from classic cybersecurity playbooks too. I mean, here at the firm, we have John Carlin who specializes in all of the newest cyber tricks and how to combat them. And I know his team's all over this.
Katherine Forrest: Right, right, right. Because in a world where your CEO can be deep faked on a Zoom call, you need procedures to confirm reality. And if those procedures are subject to a breach of some sort, then you've got, uh, sort of double trouble on your hands.
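To make "procedures to confirm reality" a bit more concrete, here is a minimal, hypothetical sketch of one common pattern: an out-of-band verification code, where the requester and the verifier share a secret established in advance over a separate, trusted channel, and a time-based code derived from that secret is read back before any sensitive action, like a wire transfer, goes forward. It is a standard TOTP-style construction sketched for illustration under those assumptions, not a recommendation of any particular product or a complete control.

```python
# Illustrative TOTP-style out-of-band check before acting on a video-call request.
# Assumption: the shared secret was exchanged in advance over a separate, trusted channel.
import hmac, hashlib, struct, time

def time_based_code(secret: bytes, step: int = 30, digits: int = 6, at=None) -> str:
    """Standard HOTP/TOTP-style construction (RFC 4226 / 6238), trimmed for illustration."""
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"

def caller_is_verified(secret: bytes, code_read_back_on_call: str) -> bool:
    """Accept the current or immediately previous 30-second window to allow for delay."""
    now = time.time()
    return any(
        hmac.compare_digest(code_read_back_on_call, time_based_code(secret, at=now - drift))
        for drift in (0, 30)
    )

if __name__ == "__main__":
    shared_secret = b"exchanged-in-person-last-quarter"   # hypothetical pre-shared secret
    # The "CEO" on the video call reads out their current code; finance checks it.
    spoken_code = time_based_code(shared_secret)
    print(caller_is_verified(shared_secret, spoken_code))   # True
    print(caller_is_verified(shared_secret, "123456"))      # almost certainly False
```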
Scott Caravello: Yeah, you know, whatever exact path companies take, there still needs to be a certain flexibility, different workflows that are designed to stay up to date on the technology and the threat vectors and then adapt accordingly. Because, like with all things AI, the state of the art keeps moving forward and the fraudsters are moving with it. And, you know, while we're on the topic, the insurance world is also going to be watching this closely. They have to. These are new risks, and they need to be considered from all sides.
Katherine Forrest: Right, you know, it's so true. Insurers are already discussing how to grapple with policies that did not anticipate this kind of conduct, whether there should be certain exclusions or whether there should be certain addenda. You know, and you've got the reinsurers who are also considering what kinds of policies and provisions should be dealing with synthetic identity risk, and so it affects really every business and every geography right now.
Scott Caravello: Totally, but going back to sort of the real personal element of this, there's a pretty disturbing area in which this pops up, which is emotional manipulation. And that's deep fakes of family members, loved ones, children.
Katherine Forrest: Right, you know, we've all heard by now, I think, about the audio calls: the grandparent gets a phone call in the middle of the night, and John Doe, the grandchild, is allegedly in jail and needs money for bail and, you know, gives an account that is fake, and it's actually all just an audio deep fake. That's been around. But can you imagine a situation where you actually get a deep fake video call from a person who you think is your actual loved one, and have that be fake? I mean, this is really some tough stuff.
Scott Caravello: Totally, totally. So let's look forward. What do you think is happening when adaptive deep fakes, behavioral engines, and world models all converge?
Katherine Forrest: Well, we're gonna have, I think, some very interesting technical developments that will be seeking to verify, in real time, whether something is a deep fake or not a deep fake. And that technology will become very, very valuable. It'll be done by the AI model developers. It'll be done by companies that specialize in this. But the attempt is going to be to distinguish what's real from what's not real, because you don't want a situation where reality itself can be contested. And so we're going to be looking for the right tools that will be able, in real time, to flag something with a big X across it, you know, beware of potential fraud. And there'll be some legal frameworks that will develop around different kinds of identity theft and identity harms, but trying to track down the perpetrators is going to be its own problem.
Scott Caravello: Okay, and so for listeners who want one single takeaway, what's the one thing that organizations should do now?
Katherine Forrest: Well, they should build an enterprise-wide authenticity strategy. The first piece of that is education for folks, so that if they think something is not right, they immediately raise it with whoever the designated person is. There need to be provenance controls, identity verification protocols, and, of course, AI risk mapping. If something happens, what do you do next? Where do you go next? And trust just can't be assumed anymore. You know, if you've got an era of synthetic identity, then you have to be able to find a way to reestablish trust. And so I thought, as a little teaser for our next episode, we'd say that we're gonna spend some time on where those deep fake recognition technologies stand, because it really is like whack-a-mole.
Katherine Forrest: All right, Scott, but that's all we've got time for today. I'm Katherine Forrest.
Scott Caravello: And I'm Scott Caravello. If you've liked the podcast, please don't forget to like and subscribe.