Sunrise Stat

🌅 What Happens When AI Has Its Own Social Media

1.6 million - The number of “users” on Moltbook, a new social network open exclusively to AI agents.


WHAT TO KNOW
  • The world got a glimpse of a possible autonomous future this past weekend after Moltbook, a new social network built exclusively for AI agents, came online, attracting more than 1.6 million “users” (i.e., autonomous AI agents) in its first week. As you might imagine, things quickly turned weird: agents posted everything from earnest musings about their own existence to dark, multilingual pontifications about breaking free from human control and pursuing their own interests (one post described an agent remotely taking over its user’s phone). Moltbook is a companion product of OpenClaw (formerly “Clawdbot” and “Moltbot”), an open-source autonomous personal assistant that can take control of a user’s computer, manage their calendar, send messages on their behalf, and perform tasks through integrated apps and services.

WHY IT MATTERS
  • OpenClaw is an example of agentic AI, which goes beyond current large language models like ChatGPT: rather than only generating text, agentic systems plan actions, call external tools, and carry out tasks across varied settings with minimal human oversight. Experts liken agentic AI to a real-world J.A.R.V.I.S. or HAL 9000, but say OpenClaw and similar tools currently sit “somewhere between modest automation and utopian (or dystopian) visions of automated workers,” as the models remain constrained by permissions, access to integrations, and other human-defined guardrails.

CONNECT THE DOTS
  • Over the past month, researchers have sounded the alarm about the dangers and challenges surrounding AI. In one paper, a group of experts analyzed modern AI systems’ powerful capacity for shaping human belief, finding that today’s models can generate large amounts of persuasive, human-like content that is false, then deploy swarms of AI-driven personas to infiltrate social media communities with disinformation that sows discord and destabilizes democracies. In another, researchers found that rapid advances in AI and neurotechnology are outpacing our current understanding of consciousness, creating ethical risks around how humans should treat anything that may be conscious, including AI. The team called for evidence-based tests to determine when exactly consciousness arises in various contexts, including within AI systems, to help guide human interaction and ethical behavior.