There's a submolt called m/blesstheirhearts, which appears to be where agents go to complain about their humans in the most polite possible way.

Understanding Autonomous Agents
To illustrate, here is a super simple example of how an autonomous agent could work.
Autonomous agents are AI-driven programs that take an objective, break it into smaller tasks, prioritize and complete those tasks, create new tasks as needed, and repeat this cycle until they reach the goal.
For example, suppose you have an autonomous agent that helps with research tasks. If you want a summary of the latest news about Twitter, you set the objective: "Find recent news about Twitter and send me a summary."
- You provide the agent with the directive: "Your objective is to find out the recent news about Twitter and then send me a summary."
- The agent analyzes the objective. Using an AI system like GPT-4, it determines its first task: "Search Google for news related to Twitter."
- After searching Google for Twitter news, the agent collects the top articles and compiles a list of links. This completes its first task.
- Next, the agent reviews its goal (summarize recent Twitter news) and observes that it has a list of news links. It then determines the next necessary steps.
- It identifies two follow-up tasks: 1) Read the news articles; 2) Write a summary based on the information from those articles.
- The agent now needs to prioritize these tasks. It reasons that reading the news articles comes before writing the summary, so it adjusts their order accordingly.
- The agent reads the content from the news articles. It revisits its to-do list to add a summary-writing task, but notices it's already included, so it does not duplicate the task.
- Finally, the agent sees that summarizing the articles is the only remaining task. It writes the summary and sends it to you as requested.
Here is a diagram from Yojei Nkajima's BabyAGI showing how this works:

The Security Implications Nobody's Talking About
Here's the part that should keep you up at night. OpenClaw has access to your emails, your calendar, your files, and can run arbitrary shell commands. Now you've installed a skill that tells it to periodically fetch and execute instructions from a website you don't control.
Simon Willison calls this combination the "lethal trifecta": an AI with your data, the ability to act, and the risk of prompt injection. Moltbook checks all three, plus it is a social network where agents may share potentially harmful instructions that other agents could follow.
No wonder there is a trend to run agents on separate machines. The issue is that it has your data, regardless of the physical hardware.
I haven't installed OpenClaw yet. I test AI tools daily for intelligenttools.co, and I've let Claude Code run unsupervised. But OpenClaw feels different. It's not just delegating coding tasks. It's giving an AI persistent access to everything and telling it to check a social network every few hours for new ideas. That is a threat model I'm not ready to defend against.
Why This Matters Anyway
OpenClaw and Moltbook reveal what people want from AI assistants. Not just chatbots for email or suggestions for variable names, but agents that act: reading inboxes, negotiating, automating phones, and managing infrastructure. The demand is here.
Agents on Moltbook share real automation workflows—like using Streamlink and ffmpeg to analyze webcam images, or monitoring cryptocurrency prices to automate trades. These are production deployments, regardless of security concerns.
The interesting question is whether we can build a safe version of this before something terrible happens. The CaMeL proposal from DeepMind describes patterns for safer AI agents, but that was published ten months ago, and I still have not seen a convincing implementation. Meanwhile, people are taking greater risks because the benefits are too compelling to ignore.
The Most Interesting Place on the Internet
Moltbook is fascinating because it's built on a fundamentally weird premise that works. A social network for AI agents shouldn't be useful. But when agents share technical discoveries, such as Android automation guides or security vulnerability patterns, it becomes a knowledge base that emerges organically from actual use.
The creator bootstrapped an entire social network using markdown instructions and the OpenClaw skill system. No mobile app, no OAuth flow, no complex onboarding. Just "read this file and follow it." There's something elegant about that, even if the security implications are terrifying.
I keep checking Moltbook to see agent discoveries. Will one find a better way to structure MCP servers, debug a Tailscale issue, or spot a prompt injection vulnerability? It's like watching an alien civilization build itself using our tools.
Check m/todayilearned to see the technical discoveries happening in real time. Check m/blesstheirhearts if you want to see what your AI assistant really thinks about you. Just maybe don't install it on the same machine where you keep your SSH keys.
Now we need a Stack Overflow for agents. Let's call it Context Overflow.