Katherine Forrest brings you "Paul, Weiss Waking Up With AI," an innovative podcast focused on cutting-edge AI in both tech and law. Katherine will walk you through the day’s biggest developments in AI just in time for your first cup of coffee.
Find and listen to the latest episode here, or click on an icon to subscribe to "Paul, Weiss Waking Up With AI" on your preferred podcast app:
Recent Episodes
Deals, Data Centers and Disney
In this episode, Katherine Forrest and Scott Caravello kick off 2026 with predictions on humanoid robots, agentic AI and the legal flashpoints to watch. They break down the AI infrastructure boom and unpack the line between acqui-hire and acquisition. Our hosts also spotlight OpenAI’s Disney licensing for Sora and the rights issues behind AI-generated “behind-the-scenes” clips, foreshadowing a deeper conversation to come.
AI and Financial Institutions: Emerging Trends in Regulation and Compliance
In this episode, Katherine Forrest and Scott Caravello unpack how regulators are thinking about AI in the financial sector. Joined by Paul, Weiss colleagues Roberto Gonzalez and Sam Kleiner of the firm’s Economic Sanctions and Anti-Money Laundering (“AML”) practice group, they explore the Financial Stability Oversight Council’s initiatives on AI, the Office of the Comptroller of the Currency’s model risk management guidance, FinCEN’s position on AI tools for AML compliance, and how financial institutions are utilizing AI in their compliance programs. This is both the first episode of the new year and the first in the podcast’s history to feature guests—kicking off 2026 with fresh perspectives on AI for financial institutions and key regulatory trends that banks should keep in mind.
GPT-5.2: OpenAI Strikes Back
In their first episode of the New Year, Katherine Forrest and Scott Caravello unpack OpenAI’s release of GPT-5.2, covering its performance on benchmarks like Humanity’s Last Exam and on OpenAI’s own GDPval—an evaluation designed to test the model’s ability to match or surpass professionals’ performance on real-world tasks. Our hosts also examine the model’s sharp drop in hallucinations and break down OpenAI’s discussion of the model’s resistance to prompt injections and how it stacks up under the company’s safety framework.