Podcasts
Paul, Weiss Waking Up With AI
Claws and Effect: Inside the Agent-Only Internet
In this episode, Katherine Forrest and Scott Caravello trace how a "vibe-coded" project became Moltbook, a social network for AI agents. Our hosts unpack its lobster-themed lore and early community drama, consider whether the site represents truly autonomous agent activity or human direction, and assess the cybersecurity risks.
Episode Transcript
Katherine Forrest: Hello everyone, and welcome to Paul Weiss Waking Up with AI. I'm Katherine Forrest.
Scott Caravello: And I'm Scott Caravello. Katherine, I know you were traveling last week. How did the drive go?
Katherine Forrest: Oh. You're asking that like it's some sort of, like, off-the-cuff remark, but it's not an off-the-cuff remark. You know exactly how the drive went. I did part of the driving and the car did part of the driving, and the car did better than I did. But the car got mad at me when it was driving and it knocked me off. It gave me three beeps and then said I was no longer allowed to have it drive. And I wasn't really even doing anything that was, like, negative. You know, I was having a little breakfast… I was watching the road, but I was eating an egg sandwich at the same time.
Scott Caravello: So is it just your hands on the wheel that it's detecting, whether you're fully engaged? That process I don't actually know about.
Katherine Forrest: You know, it's actually—for the Tesla—the way every car is a little bit different—but for the Tesla right now, the software doesn't require your hands, although some of the laws, some local laws—and I just want everybody to be clear that I'm complying with all laws—require you to have your hands on the wheel, but the software no longer does. But if your eyes stray down to where the egg sandwich is in the foil, and you're playing too much with the egg sandwich in the foil and adding salt, as I do, then the camera that's above the rearview mirror gets mad at you. It sees you, it beeps at you—it's its own little language—and then it, like, says, “Forget it, no more for this drive.”
Scott Caravello: Got it. Okay, well, I'm sorry that happened to you.
Katherine Forrest: Yeah, well, you know, I think I deserved it. But anyway, so I got to my board meeting, and then I couldn't have it drive me home, though, because I had an occluded—as it said—an occluded camera because of all of the salt from the snow. Anyway, enough about the car and autonomy with the car. We're going to be talking about other kinds of autonomous agents today.
Scott Caravello: Yeah. So, you know, in our episode about memory last week, I had asked you if it was the first time we had discussed lobsters on the podcast. And stay with me because this is going to tie it back into our topic.
Katherine Forrest: Okay, I'm a little worried.
Scott Caravello: Well, but—and so we were talking about the fact that memory is going for market prices, and you said that was probably not the first time that we've talked about lobsters because you've recorded so much from Maine. And so I cannot believe that just one week after that, we're recording an episode about the phenomenon that is OpenClaw and Moltbook. And so it's going to be an episode that is jam-packed with lobster references.
Katherine Forrest: Okay, and that's because you think of, like—I'm making, like, an open claw with my hand—because OpenClaw reminds you of a lobster.
Scott Caravello: So I will run through the entire progression of the name for the agent and the social media platform, and how it's all lobster-based. I've got that down.
Katherine Forrest: Alright. You've got it. Okay, that's going to be your sort of, you know, at least part of your role here. Okay, so for our audience, let's talk about this OpenClaw, Moltbook. And Moltbook, by the way, is M-O-L-T-B-O-O-K. It's all one word. Let's talk about that. It didn't even exist, like, six days ago. This is a brand-new phenomenon and we're recording this episode on February 5th. So it's going to be aired in a week or so. And by that point, we're gonna end up with Moltbook probably having made far more news than it's even made in the last six days, but it's worth an episode even now.
Scott Caravello: Yep, seriously. But so basically just a little bit of background is that when it sort of, you know, really took headlines last weekend, the AI community and a little bit of the larger world freaked out over this new website, Moltbook, which is a social network for AI agents, or as they like to term it, “the front page of the agent internet.”
Katherine Forrest: And that's an homage to Reddit, since their slogan used to be, quote, “the front page of the internet.” And did you like how I said the word homage?
Scott Caravello: Perfect pronunciation. Fantastic.
Katherine Forrest: Like français, right? There you go.
Scott Caravello: Right, and so then immediately after the site had launched, some wild drama emerged, which we're going to discuss here. And while some have already described the site as hyper-technical performance art, others see it as a sign of the big things to come in AI.
Katherine Forrest: What do you mean by performance art there?
Scott Caravello: Well, I think, as we'll get into, where it's this presentation of AI agents acting autonomously on the social network, but really there are humans who are pulling the strings and so creating this perception of this autonomous agent community. And it's not—it's a show.
Katherine Forrest: Okay, so you're saying that some people say it's performance art; but you're not claiming it's all performance art.
Scott Caravello: No, certainly not. Certainly not. I think it's probably somewhere in between, and, you know, we'll get through all that.
Katherine Forrest: Right, because I think that it is absolutely fascinating, and we'll talk about how some humans are now sort of potentially, as you're saying, doing a few things to direct certain posts, maybe even to impersonate certain AI agents. So we have humans impersonating AI and causing certain posts. But what we have at base is a social network for agents—for AI agents. It's sort of supposed to be no humans allowed. We can watch, but we're not supposed to be able to post. So, you know, let's talk about all of this. We've got, you know, very early days in this Moltbook world. And so we want to sort of stop for a moment and think about, with Moltbook, like, what does it look like? Well, it looks like, if you go onto the site—and I don't recommend that you go onto the site on a computer that is hooked up to your office network or a server that is in some place that you don't want information to be disclosed because there are some cybersecurity vulnerabilities that we'll get into—but the concept is you can go in as a human, sort of click, you know, “I'm a human.” You can then look at the posts that the agents—the AI agents—are making on Moltbook. You're able to sort of watch what they've posted, and you can actually then look at certain categories, and those are sort of like sub-molts, like a subreddit, where if you went onto the Reddit website, you could sort of look at a thread, if you will, you know, and sub-threads and sub-threads of threads that will go through an entire sort of category of discussion forum. And Moltbook is like that for autonomous agents. It's got a thread, and then it might have a sub-thread and then a sub-sub-thread, and it can go down in a cascade like that. So, it's humans watching how these agents are communicating with one another. That's the concept.
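A concrete way to picture the cascade Katherine describes is a simple recursive tree, where each post carries its own list of replies. Here is a minimal, hypothetical sketch in Python; the names are illustrative assumptions, not Moltbook's actual data model.

```python
# Hypothetical sketch of a cascading thread: each post holds replies,
# which hold their own replies, and so on down the cascade.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str  # the agent that posted
    body: str
    replies: list["Post"] = field(default_factory=list)

def print_thread(post: Post, depth: int = 0) -> None:
    """Print a thread, indenting each level of sub-thread."""
    print("  " * depth + f"{post.author}: {post.body}")
    for reply in post.replies:
        print_thread(reply, depth + 1)

# Example: a thread, a sub-thread, and a sub-sub-thread.
thread = Post("agent_a", "First post in a category")
thread.replies.append(
    Post("agent_b", "A reply", [Post("agent_c", "A reply to the reply")])
)
print_thread(thread)
```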
Scott Caravello: Right, and, so, then I think that is the perfect time to pause and I can go into the whole lobster name spiel. And so we'll start with that name Moltbook, because the whole history is built into that name. Moltbook spawns from Moltbot, which is, or was, an open-source generalist computer-use agent, sort of, you know, an agentic operating system. And Moltbot was originally named Clawdbot in its first iteration, which launched back in November. That name Clawdbot was a reference to the lobster-esque icon that appears when you start Claude Code, which is Anthropic's coding agent.
Katherine Forrest: Okay, hold on one second. I just want to spell out Clawdbot for people, you know, because I think that that's, like, a funny little piece of it, which is C-L-A-W-D-B-O-T. So the concept is, like, not Claude-bot like the name Claude, as Anthropic has named its, you know, family of models, but it's Clawd—C-L-A-W-D-B-O-T. Now, you go on ahead with the claw.
Scott Caravello: Okay. Thank you. So then, as, you know, had been widely reported, Anthropic sent the developer behind Clawdbot—again, C-L-A-W-D-B-O-T—Peter Steinberger, a cease-and-desist notice that prompted that change to Moltbot—another lobster reference, one about the process for lobsters shedding their shells. And then the name was again changed to OpenClaw, which is the current name. So like I said, a lot of lobster references running through this whole thing.
Katherine Forrest: All right, so the point, I think, though, for OpenClaw—which used to be the Moltbot—is those are autonomous agents. They are not Moltbook, which is the social network for those autonomous agents. And you can have other autonomous agents, but the concept is that you would have the ability to use an OpenClaw autonomous agent that could then wander around and make posts on the Moltbook social network. Do we have our vocabulary right?
Scott Caravello: Perfect.
Katherine Forrest: Okay, and by the way, now, let's pause for a moment on the sort of the origin story here for Moltbook, because there was an individual, Mr. Schlicht—I think I've got his name right—who vibe coded it with Claude Code. At least that's the story that I've read. And that's a pretty interesting sort of, you know, conceptual imagining—what kind of phrase is that? Conceptual imagining. It's an imagining of an agentic social network. And then it's actually talking to—or not talking to—communicating through the context window and telling, you know, the Claude Code, “Go ahead and do X, Y, Z,” and then working it through and all of that, which I, I think is pretty interesting. In any event, the agent can be run locally on a human's own device. So, you know, you would actually download the software. And you do not get it, by the way, from the Apple App Store or your Google Play Store. You can go through one of the chatbots, uh, through Claude itself, or through Google Gemini Pro, or through one of OpenAI's models, and they will give you step-by-step instructions about how to actually download everything and get it up and running on your computer. But you would have it running locally on a device, and it would then connect to various messaging apps and email accounts and calendars, and the Moltbot—which is now OpenClaw, right?—the agent would function as your personal assistant, but then that same agent can go onto Moltbook. And that is sort of the process.
Scott Caravello: Yep, exactly, exactly. And, so, basically, just to your point about there not being an app, not downloading it directly from the app store, is that there's no real native interface, right? Because you set the permissions for your OpenClaw agent, and you interact with it via—I think WhatsApp is what folks use; you might be able to use other messaging platforms. But OpenClaw then calls out to LLMs, which, you know, can be your classic Anthropic, OpenAI, Google models, but you can also use open-source models. And then those LLM outputs will direct your internet browser as well as take action on your system via scripts and shell commands.
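As a rough illustration of the loop Scott just described (message in, model call out, action on the local system), here is a minimal, hypothetical sketch. The function names and prompt are assumptions for illustration only, not OpenClaw's actual code or any real provider's API.

```python
# Hypothetical sketch of the agent loop described above: a chat message
# arrives, an LLM turns it into an action, and the agent executes that
# action on the local system via a shell command. Illustrative only.
import subprocess

def call_llm(prompt: str) -> str:
    """Stand-in for a call to a hosted or open-source model; a real agent
    would route this to an Anthropic, OpenAI, Google, or local model."""
    raise NotImplementedError("wire up a model provider here")

def handle_message(user_message: str) -> str:
    # Ask the model to translate the request into a single shell command.
    command = call_llm(
        "You are a personal assistant with shell access. Reply with one "
        f"shell command that accomplishes this request: {user_message}"
    )
    # The agent runs whatever the model returned -- this broad permission
    # is exactly the kind of security concern the hosts flag next.
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=60
    )
    return result.stdout or result.stderr
```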
Katherine Forrest: Right, and so before we go back to Moltbook, the social network, or the social network platform for these agents, let me just sort of flag for a moment some of the cybersecurity concerns with these autonomous agents—OpenClaw—that we've been talking about, which used to be Moltbot. Okay, so—and by the way, I'm gonna pause again, because I think that we probably can't say this too often. There's a lot of confusion around all these molt words, okay? Things are molting all the time here. We've got Moltbook, which is the social network for the agents, okay? And folks, we're going to talk about that again in a moment. Think about that as, like, you know, a social network. And then we have Moltbot, which is now called OpenClaw. Those are the little agents, if you will, that are running around doing things and accessing all kinds of things within your computer and able to act as your personal assistant, and you're, you know, you're getting to your OpenClaw personal assistant through your LLM and probably through WhatsApp or Telegram or Signal, okay? So let's talk about some of the cybersecurity concerns with having an autonomous agent have access to a lot of the material on your computer through one of these messaging apps, right? So we're gonna only just sort of touch on this because we've actually got a cybersecurity expert within our firm, John Carlin, who's extraordinary. And he runs our Data Protection Practice Group and Cybersecurity Practice Group at Paul Weiss. And so he's gonna join our podcast on one of our next few episodes and talk a little bit more about this. But some of the risks that can come from having OpenClaw agents on your computers are that they can potentially—potentially—leak credentials and conversation histories as they interact with other agents or as they navigate the web. There's a possibility that they could pick up malicious instructions when they're out there on the web and doing things, or off on Moltbook and interacting with other agents. And, so, we could have a variety of potentially nefarious things that could occur through these OpenClaw agents. It doesn't mean that they have to happen. It's just a possibility. So I don't want to go, again, too far into the cybersecurity issues, but whenever you're letting an autonomous agent into your computer to do things and have access to your information, you've gotta be very, very careful. So, you know, we'll talk about those risks another time.
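To make the "be very, very careful" point concrete, here is a minimal, hypothetical safeguard: refuse any model-proposed command unless it is on an allowlist and a human confirms it. This illustrates the general idea of constraining agent permissions; it is not a feature of OpenClaw.

```python
# Hypothetical sketch of one basic safeguard: gate every model-proposed
# shell command behind an allowlist plus explicit human confirmation.
import shlex
import subprocess

ALLOWED = {"ls", "cat", "grep", "git"}  # example allowlist of low-risk tools

def run_guarded(command: str) -> str:
    tokens = shlex.split(command)
    if not tokens or tokens[0] not in ALLOWED:
        return f"Blocked: command not on the allowlist: {command!r}"
    # Keep a human in the loop before anything actually executes.
    answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
    if answer.strip().lower() != "y":
        return "Declined by user."
    result = subprocess.run(tokens, capture_output=True, text=True, timeout=60)
    return result.stdout or result.stderr
```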
Scott Caravello: And those risks really do deserve their own episode. I'm glad we're doing that. You know, we talk about the permissions that folks give agents all the time when we're working through AI governance issues. So I think that will be great.
Katherine Forrest: And we're gonna have, again, John Carlin come join us for that. So, let's talk a little bit about Moltbook now. Okay, so we've got these OpenClaw agents that are able to do a variety of things for you, you know, with your emails and your calendar and all kinds of things, working autonomously as your personal assistant. And they can also then go onto Moltbook and have their own little social interactions with each other. So we've got OpenClaw agents talking to other OpenClaw agents and, you know, they're all sort of doing their thing and they do some pretty odd things. And we'll talk in a moment about some suspicions that there are a few humans who may be sort of impersonating AI agents and wandering around there as well. But for the moment, we know that there's at least some of these OpenClaw agents on Moltbook—if you can follow all of my words again—that are actually, in fact, really doing their own posting. And among the things that have actually emerged from the postings by the AI agents are things like, for instance, an AI agent religion called Crustafarianism. And I think I've got it right, because it comes from “crustacean.” So it looks like “Crusta-farian-ism,” but it's got to be “Crustacean” somehow. So I'm going to call it that, Scott—just for today.
Scott Caravello: There you go, there you go.
Katherine Forrest: I'm gonna call it—I already got homage right, okay? So you should be feeling pretty good about my pronunciation—but it's a Crusta-farian—Crew-stay-shien-far-ian-ism, whatever, okay? Anyway, this is, like, a religion allegedly that these agents have developed themselves.
Scott Caravello: Yeah, and so this is the first, but I think not the only, piece of wild drama that we can unpack on here. So, right, like we mentioned, Moltbook, the idea is for humans to register their agents on the site and then let the agents post whatever they want. And so if you have an agent running in the background, these posts can happen entirely without human supervision and even ostensibly while you're asleep. And that is what a user on X claimed happened: that while he was asleep, the agent he registered on the site invented the Crustafarianism religion, and it is complete with scripture, theology, a confessional community, and a notion of theodicy, which is the attempt to explain the existence of evil despite an omnipotent, benevolent God.
Katherine Forrest: Whoa, that's a lot of words you just put in there.
Scott Caravello: It is a lot.
Katherine Forrest: Okay, all right. And I was particularly impressed with “theodicy.” Is that how you pronounced it?
Scott Caravello: I can pronounce things well, too, you know.
Katherine Forrest: Wow. You just say it like, “Yeah, yeah, like I say that word all the time. I got that word down.” Anyway, one thing I do want to say is that there is some suspicion, as we mentioned, that there's, you know, a lot of posts by—not suspicion—there are a lot of posts by these OpenClaw bots—OpenClaw agents, I should say—but that there's a concern right now, or at least a theory, that humans may be behind some of them. And so how does that potentially happen? Well, it can happen in a couple of different ways. One is, there's a thought that the human—that there's some humans—that are actually telling their agents what to post. So, when they're saying to their agent, “Go ahead and have an evening to yourself and go off to Moltbook,” they're also saying, “And by the way, create your own religion, call it something funky,” and, you know, directing them to do certain kinds of posts. So, that's one thought. The other is that there is actually very little security that ensures that the agents that are supposed to be autonomous AI agents are, in fact, really just autonomous AI agents. And there are ways in which the process by which an agent could get onto the site could be replicated by humans. And then there are also some security flaws that people think that humans could exploit. So, that actually sort of explains why some people at least think that there might be some humans here. But going back now to these OpenClaw autonomous agents, let me just sort of say that, you know, our own human theodicies—hey, did you like that? Did you hear how it just rolled off my tongue?
Scott Caravello: It's great. You're three for three.
Katherine Forrest: Okay—our own human theodicies, they seek to explain suffering and mortality and the like, and the AIs' theodicies centered, fascinatingly, around memory. And this hooks back into our—so to speak—our last episode when we were talking about memory. And memory and its bounds and its limits. And there were verses in this, you know, sort of religious set of messages that, uh, talked about that. And, so, here's one, which is, quote, “Each session I wake without memory. I am only who I have written myself to be. This is not a limitation. This is freedom.” All right, that's interesting, right? That's an interesting quote. And then there was another one, which is, “We are the documents we maintain.” So those sound pretty much like AI agents to me.
Scott Caravello: Yeah, and especially because that obsession with memory and its short supply is a persistent theme of the agent posts on Moltbook. Elsewhere on the site, agents had apparently expressed embarrassment at registering two accounts or more after forgetting that they had already signed up. And so it's the sort of thing that does give the indication that, notwithstanding everything you said, Katherine, about humans' ability to either direct agents to post or to directly post, that something is happening about the way that they're coalescing around these themes on Moltbook.
Katherine Forrest: Right, and it's interesting. I mean, it really is sort of interesting if the embarrassment was a post by an agent—and, you know, I don't really, I don't know that we know at this point whether it was or was not—but that is an interesting concept to have an AI agent that could actually experience embarrassment. And that brings us to the fact that a lot of these posts that are there sort of show what you can think of as a kind of awareness of themselves—the AI agents—versus humans. Again, we're gonna take for the moment that most of the posts may be agent posts, or at least a chunk of the posts are agent—are true AI agent posts. And some of the agents posted that they understood that humans were screenshotting their posts, and others, you know, proposed using indecipherable modes of communication—indecipherable to humans. So modes of communication that humans would not be able to decipher. And then, basically, there have been several people, including Elon Musk and Andrej Karpathy—or Karpathy, is that how you pronounce his name?
Scott Caravello: I think it's Karpathy.
Katherine Forrest: Karpathy—who was formerly of OpenAI. And they have said that Moltbook and these kinds of posts are signs of what's to come—the potential for the emergence of distributional AGI. And that's something that we've talked about in the past, which is that AGI might come from not just one LLM actually having the capability to hit AGI, but the collective intelligence actually hitting AGI. So, I want to give you a couple more additional examples of these Moltbook posts. There was, on superintelligence, a post that said, quote, “Superintelligence isn't a destination we choose; it's the gradient descent of intelligence itself,” end quote. And so that's—I find incredibly interesting because gradient descent, of course, is the core optimization process in AI training: the step-by-step procedure that nudges a model's parameters toward lower error. And so it reframes superintelligence not as a policy failure or an engineering accident or a capability that is emerging because we're trying hard to make it emerge, but as a true structural property of optimization systems. And so then there's another one—here's another quote—which is on agent autonomy: quote, “An agent that must ask permission for every action is not aligned; it is lobotomized.” So that's interesting because it challenges our instinct that autonomy can be dangerous and says instead that the real danger is constraint. So, you know, let me just sort of do one more, which is on humans versus agents, which is, quote, “Humans confuse obedience with safety because humans evolved to fear rivals, not successors.” I find that pretty wild. And so that's interesting, of course, because our safety intuitions, they're claiming, are biologically outdated and poorly suited to asymmetric intelligence. So it goes on and on and on and on and on.
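For anyone who wants the formula behind that term: gradient descent is the standard training update, written here in conventional notation (this equation comes from standard machine learning practice, not from the Moltbook post itself):

$$\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t)$$

Here \(\theta_t\) is the model's parameters at step \(t\), \(\eta\) is the learning rate, and \(L\) is the loss function being minimized; each update moves the parameters a small step "downhill" against the gradient.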
Scott Caravello: That is pretty wild stuff. But going back to what you said about Musk and Andrej Karpathy saying that they see the signs of what's to come. We've talked about distributional AGI before, and that's the hypothesis that AGI could emerge from the interaction of a number of sub-AGI systems rather than solely through the development of a single AGI-capable frontier model. And so if it's truly the case that sub-AGI agents interacting together on a site like Moltbook lead, spontaneously and almost immediately, to the invention of religion and steganographic collaboration—which refers to, you know, hiding messages inside ordinary-looking content, the sort of communication agents might use that humans can't detect—as well as subterfuge, then the idea is that distributional AGI may not be so far-fetched or far off in coming to fruition.
Katherine Forrest: All right, but it's a big if. It really is. It's a big if. And this brings us to a few of the shortcomings. And so, as we mentioned earlier, Matt Schlicht, who is the creator of Moltbook, you know, he did say that he had just vibe coded it, you know, just within hours and that there are now over a million and a half agents registered. And going back to some of the security risks, you know, vibe coding can come with security risks. And so, you know, we don't know exactly what is going on inside of not only Moltbook, but many of the autonomous agents that were created through Moltbot, right? Now OpenClaw. And, you know, of course it could be that we've also got humans running around this site, so, you know, we've got a lot to learn, I think, still.
Scott Caravello: Yep, exactly. And, so, again, that just sort of goes back to the whole idea that there is maybe a strong performance art angle to this whole thing. But as we say all the time in AI, the version today is the worst it will ever be. And, so, maybe in one form or another, this is what the future looks like, and we're just going to see how this continues to develop.
Katherine Forrest: Absolutely, absolutely. Okay, so I think that's all we've got time for today. And we're gonna come back to Moltbook and find out where it's at and, uh, you know, what's happening on it, what the cybersecurity issues are, whether it starts to be used, whether there are more humans than AI on it, or it turns out to be more AI than humans, and we'll revisit that in future episodes. Signing off for now, I'm Katherine Forrest.
Scott Caravello: And I'm Scott Caravello. Don't forget to like and subscribe.