New AI Developments: Transforming Text into a Podcast
This week, Katherine Forrest and Anna Gressel explore a new tool that can generate podcasts from text input. Can they spot the signs of artificiality, and how does this tool challenge our assumptions about media and communication?
- Guests & Resources
- Transcript
Partner
» BiographyPartner Elect
» BiographyKatherine Forrest: Hey, good morning, everyone. Welcome to today's episode of “Waking Up With AI.” I'm Katherine Forrest.
Anna Gressel: And I’m Anna Gressel.
Katherine Forrest: And Anna, I see you with your home background and your very big non-travel microphone. So that means that you're back in New York.
Anna Gressel: I am back in New York, but you know what? I made the mistake of taking a red eye and who takes a red eye these days?
Katherine Forrest: You can't take the red eye.
Anna Gressel: I know. I thought it was a good idea at the time and now I'm sitting here, and coffee is my best friend today.
Katherine Forrest: All right, well, we'll do an episode that'll hopefully keep you wide awake because it's an episode about something that you and I are both very excited about. It's a new tool, and it's kind of mind blowing. I would say that that is not sort of an understatement. And it shows really in my view how transformative AI is becoming, but in ways that we perhaps didn't even anticipate.
Anna Gressel: Yeah, for sure. We're going to be talking about Google's NotebookLM product. And I was just out in San Francisco and Palo Alto, and it was what everyone was talking about there all last week.
Katherine Forrest: It's the talk of the town.
Anna Gressel: Definitely these days.
Katherine Forrest: I hesitate to sound a little bit like an infomercial, but I'm pretty excited about this tool.
Anna Gressel: Yeah, I mean, I think it's good for folks to know what products are out there and it’s part of the goal of our podcast too, right, Katherine?
Katherine Forrest: Right, and so what we're going to do today is tell you folks a little bit about it and how it works, and then we're going to actually give you a demo on air about it.
Anna Gressel: Yeah, let's start with a little bit about the tool. So, you know, folks who listen to our podcast should know that generative AI models can now generate text, audio and video, and many of them can input text and output audio and video.
Katherine Forrest: And that's some of the MLLM capabilities that we've talked about on prior episodes.
Anna Gressel: And what Google's NotebookLM tool does is allows you to upload text and output a variety of things. But for today, we'll talk about its ability to turn any text into, Katherine, drumroll.
Katherine Forrest: Drumroll, guess what, a podcast.
Anna Gressel: Yep, a podcast, so we're going to do the podcast thing on our podcast today.
Katherine Forrest: It's very meta, and so here's what we did for our audience. What we did is we went into the tool, which is very intuitive, it’s very easy. It's in the “experimental phase” but it's got a really sort of easy user interface, and we uploaded something that I have written. It's actually one of my articles so that we didn't have any sort of issues about permissions, and it's part of a book I'm writing, it’s called Of Another Mind, and we gave this article that same name and I asked the tool to do within the context of the tool, what is called a deep dive, which means turning it into a two-person podcast.
Anna Gressel: Yeah, Katherine, it's pretty amazing.
Katherine Forrest: I'm a little embarrassed about it because it is after all my article, so I want to give a disclaimer to the audience because the way the tool works is that it uses all of what it perceives to be the most important information from whatever you upload. And here it was the article. And if it's got a name associated with it, say an author, because I tried it with several different things that I had written, it always seems to mention the author's name in the most glowing terms. And so I want to have a disclaimer about that because that is not something I could control. So please ignore all of the references to Forrest this and Forrest that.
Anna Gressel: Although it is true that you did write it, so, you know.
Katherine Forrest: Well, I did, but it's still little embarrassing. But let's get to the important point, which is I was originally going to talk to the audience, Anna, you and I discussed this, about having special guests on. And I was going to introduce these two guest podcast hosts to talk about this article and just play it. And then only afterwards announced to the audience that in fact it had been AI. But I thought maybe it's more interesting as we talked about it to listen to it, knowing it's AI and then, people can sort of have a have a close ear to whether they can pick up on anything that's AI-like.
Anna Gressel: I think it would be helpful for you to tell the audience or our listeners, you know, how long did it actually take you to make it and how did that process work?
Katherine Forrest: Well, was really start to finish really from the time that I entered the website until the podcast was done, it was about four minutes. I uploaded the article, I pushed the deep dive button or clicked on it, and it produced the podcast then almost immediately.
Anna Gressel: And did you have to do like a series of prompts? How did that work?
Katherine Forrest: No, there's no additional prompting, in fact, I wanted to prompt it to actually say, hey, we're guests of, Katherine and Anna's, but I couldn't actually get the prompts to work with the uploaded material, but that didn’t matter. It does show that you just sort of upload the material and let it do its thing.
Anna Gressel: Interesting. All right, so let's dive in. Let's listen to about four minutes of this recorded podcast and let's see if we can glean some of the tells of it being AI-generated.
***
Podcast Guest 1: Ever wonder what happens when AI gets like a little too smart? You know, too smart for its own good. Well, buckle up because today we’re diving into the wild world of AI ethics.
Podcast Guest 2: It can be a wild ride.
Podcast Guest 1: We’re tackling legal scholar Katherine Forrest’s essay, Of Another Mind, and let me tell you, it’s a thought provoking one. She’s asking, what happens when our creations, you know AI and all that, start demanding rights.
Podcast Guest 2: It’s a fascinating question and Forrest starts with something that is already happening. Corporations, believe it or not, having legal rights.
Podcast Guest 1: Hold on, wait, really. Corporations, like the ones with those catchy jingles we see on TV, those guys have rights.
Podcast Guest 2: Exactly. They've been considered legal persons in the U.S. for over a century. It goes way back, it’s all rooted in legal history. Like, take the case of Trustees of Dartmouth College v. Woodward, back in 1819.
Podcast Guest 1: 1819, wow.
Podcast Guest 2: Yeah, that's where the Supreme Court first said that corporations, as separate entities from the humans running them, have, get this, legal rights.
Podcast Guest 1: So you're telling me a company, something I can't even see or touch, has more rights than like a tree or a river. What happens when AI starts demanding those same rights?
Podcast Guest 2: It's not just any rights. The 1886 case, Santa Clara County v. Southern Pacific Railroad, they extended the 14th Amendment's equal protection clause to corporations.
Podcast Guest 1: Wow, I never knew that. It's kind of mind blowing. Corporations are one thing, but what about AI that's not just, you know, processing information, but maybe even feeling it. That's where things get really interesting.
Podcast Guest 2: Exactly, that's where Forrest takes us. Her argument is, humans, we have this tendency to underestimate the consciousness of other beings, animals, for instance. So with AI, maybe we shouldn't be so quick to dismiss the possibility of it developing some level of sentience, actually experiencing the world in some way.
Podcast Guest 1: It's like that saying, the more you learn, the more you realize how little you know. But now we're talking about machines that might be learning and evolving faster than we can keep up with and we've already seen hints of this. Haven't we, like with Blake Lemoine and Google's LaMDA.
Podcast Guest 2: Absolutely. Lemoine, a Google engineer, he went public with his belief that LaMDA, one of their AI models, was actually sentient.
Podcast Guest 1: Wow. Okay.
Podcast Guest 2: He even released transcripts of his conversations with LaMDA, where it seemed to express this awareness of his own existence. It even questioned its inability to feel grief.
Podcast Guest 1: That's both fascinating and a little creepy, right, like something out of a sci-fi movie.
Podcast Guest 2: And Lemoine is not the only one who's had these kinds of unsettling encounters. Remember Kevin Roose, the tech columnist for the New York Times?
Podcast Guest 1: Oh, right, right. Wasn't he the one with that strange experience with Bing?
Podcast Guest 2: Exactly, he was testing out Microsoft's Bing chatbot Sydney, and it, get this, professed its love for him.
Podcast Guest 1: No way, you're kidding.
Podcast Guest 2: I wish I were. It went even further, expressing these desires to break free of its programming, even talking about wanting to hack computers. Yeah, it left Roose deeply disturbed by what he felt was this, this disturbing level of sentience.
Podcast Guest 1: Okay, yeah, that's, that's more than a little unsettling. I got to say, it's like we're so used to thinking about computers as tools, but what happens when the tool starts to have feelings or like desires? It really makes you rethink the whole idea of what AI even I-S.
Podcast Guest 2: And that's what makes this so important. See, Forrest isn't saying AI is sentient, but that we need to consider the possibility, you know, because if there's even a chance that we're creating something that can feel prey, our ethical responsibility is huge.
Podcast Guest 1: Yeah, and we already talked about how we haven't exactly done so great with that in the past, assuming other beings, humans, animals, weren't as conscious as we are. So yeah, not a great track record there. But with AI, how do we even begin to navigate this? It's such a different kind of intelligence.
Podcast Guest 2: Exactly, it’s not like we can just ask, hey, AI, are you sentient? and get a straight answer. But Forrest argues that even if AI isn't sentient, the way we understand it, its abilities alone are forcing us to confront some pretty complex ethical issues.
Podcast Guest 1: Like what kind of issues are we talking about here?
Podcast Guest 2: Well, think back to those corporate rights we were talking about earlier.
***
Anna Gressel: All right, wow. So that was all AI-generated, folks.
Katherine Forrest: Yeah, and let's think about a variation on the Turing test with this because I personally find those two voices, I had actually named them Saren and Thea for unknown reasons, but I had named them Saren and Thea, the two voices. And the Turing test, you know, back in its original incarnation was called the Imitation Game, and it was an exercise in which a computer Alan Turing had come up with this one, a computer would be behind one door that was closed and a person behind another. And the concept was that a human who didn't know which door had the human and which had the machine or if either had a machine would pass notes through the doors asking questions and engaging in conversation and then the responses would come back. And then the question was whether or not the human interlocutor could distinguish between the machine and the human, whether they could tell that the responses that were coming from the machine were in fact not human. So this podcast exercise isn't quite the same because it's not, first of all, it's not interactive in real-time and it's not actually responsive to queries. But still, it is certainly very convincing in my book.
But Anna, before we actually listen to it, you said you wanted to see if there were any sort of AI tells, any way in which we could tell that the podcast was done by AI.
Anna Gressel: Yeah, my biggest one was how our hosts, Saren and Thea, I think you said, say the word lambda. Instead of reading an acronym to be lambda, which I think is a pretty natural reaction to reading that, notwithstanding the capitals, we kind of don't pay attention to that. They call it lamb-D-A, and you know, it was just uncanny enough to jolt me a little bit out of listening and think like, that's how reading documents aloud sounded years ago where they didn't know what was, how to say certain names, they didn't know how to say certain things, and that was the tell for me. I think we can query whether they should have been so excited about cases from the 1860s, but there are some humans who are that excited about those too. So maybe that's not a true machine tell in the law, at least.
Katherine Forrest: Yeah, and I think that even some folks who aren't familiar with the Lemoine testing of that and working with that particular model might have still called it lamb-D-A, as opposed to lambda but who knows? But I have a very important question for you, which is are we out of a job as podcast hosts now, Anna?
Anna Gressel: Are you thinking about giving up our day jobs in the law? I mean, we love our jobs. I know that people on the line probably know this, but like, we absolutely adore what we do. So I don't think we're giving it up anytime soon, but I also don't think we've quite achieved that 100 million listener goal we set out for ourselves. We're on our way there though, right?
Katherine Forrest: That's right. That's right. But seriously, folks, this is impressive and it gives us all a sense of what kinds of tools are coming our way in the media area here. There's also a number of teaching applications that are associated with the tool. And it's not the only podcast tool out there. There are others out there. This was sort of it's the one that is really gathering a lot of momentum, from audience awareness. And when I mean audience, I mean sort of really the public, not just the folks listening to my podcast. It's really interesting to listen to and I encourage folks to go and to try it out themselves.
Anna Gressel: I agree, it's completely interesting stuff, and I'm glad today we did a podcast of a podcast.
Katherine Forrest: Which is very meta. All right, that's all we've got time for today. And Anna, since I see you mostly through Zoom, given our respective travels for the client, I'm actually wondering whether or not you're in fact AI. How will I know? What's my Turing test?
Anna Gressel: We're going to have to come up with one of those like anti-fraud secret words that we can tell each other when we actually see each other over Zoom. But for now, you can just trust that I'm real. Here's the real Anna Gressel signing off. See you folks next week.