
happy tuesday ⚡️
In the span of one week, Anthropic announced Claude helped drive the Mars rover for 400 meters across the Martian surface (the first AI-planned route on another planet) + an AI-only social network appeared with nearly 1M agents posting, debating, and inventing a lobster religion… crazy times.
Today, we’re talking about:
AI agents (+ how to build them)
Elon's $1.2T space computer
GTA meets AI meets acid trip
Why Sonnet 5 doesn’t really matter
The 2026 International AI Safety Report
WTF is Moltbook???
📧 Reply: Claude vs. ChatGPT?
Tenex: Your Chief AI Officer
Outside of writing the best newsletter for AI (says everyone), we also build the stuff we write about. AI agents, automations, and software that actually hits the P&L—not just a slide deck.

Hand Your Work Off To An Agent (+ Create Your First Bot in 30 Mins)
| Word | What it means |
|---|---|
| API | A way for two pieces of software to talk to each other. When your agent "calls an API," it's just sending a message to another service and getting a response back. |
| API key | A password that proves you're allowed to use a service. It's how OpenAI or Anthropic knows to charge your account instead of someone else's. |
| Python | A programming language. It's the one most people learn first because it reads almost like English. |
| MVP | Minimum viable product. The simplest version of something that actually works. |
| While loop | Code that keeps running the same instructions over and over until you tell it to stop. The engine that makes an agent keep going. |
| Guardrails | Rules you set to keep your agent from doing something stupid. Spending limits, approval gates, kill switches. |
the problem: You've got that one task sitting in your week that makes you want to throw your laptop out the window. Not metaphorically. You actually want to throw it twenty stories down and watch it shatter on the sidewalk. Look out below.
It's the Friday competitor pricing pull—manually copying numbers into a spreadsheet, comparing to last week, and summarizing for Slack. It's the invoice reconciliation that eats an entire day every month because nobody's going to build an integration for a 50-person company. It's the six-platform marketing report that has to look pretty before Monday's standup.
Every. Single. Week.
the solution: Agents. The word's been bastardized by Big Tech, but underneath the marketing bs, an agent is just a “while loop”—trigger, think, act, repeat—that runs until the job's done. To spare the people on the pavement, follow this beginner-level circuit:
1. pulling back the curtain
The AI industry wants you to think agents are mystical little creatures that do your work perfectly when you’re not looking (like an overzealous intern making independent decisions). They're not. Think of them as coded loops with API calls.
Put another way: ChatGPT is single-shot: you ask, it answers, it waits until the next ask. An agent is that same brain wired into a loop that keeps going without you clicking anything.
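To make that loop concrete, here's a minimal, LLM-free sketch of the trigger-think-act-repeat cycle. The function names are hypothetical stand-ins, and the "work queue" is stubbed so it runs as-is:

```python
import time

def check_for_work():
    """Trigger: look for something to do (stubbed with a fake queue)."""
    return ["summarize inbox"]

def do_work(task):
    """Think + act: in a real agent, this is where the LLM call goes."""
    return f"done: {task}"

results = []
for _ in range(3):                      # a real agent would use `while True:`
    for task in check_for_work():       # trigger
        results.append(do_work(task))   # think, act
    time.sleep(0)                       # repeat (real agents sleep longer)
```

Swap the stubs for real email checks and LLM calls and the shape doesn't change — that's the whole trick.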
2. get the skeleton key
Now that you get it, build it. Your agent needs permission to talk to the AI, and that permission comes in the form of an API key—a long string of characters that proves you're allowed to use the service.
Head to platform.openai[.]com or console.anthropic[.]com, grab your key, and store it in a .env file in your project folder. We walk through all the micro-steps in the full playbook.
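Most tutorials reach for the python-dotenv package to read that .env file, but a dependency-free loader is only a few lines. This is a sketch — OPENAI_API_KEY is just the conventional variable name OpenAI's SDK looks for:

```python
import os
from pathlib import Path

def load_env(path=".env"):
    """Read KEY=value lines from a .env file into os.environ."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Usage, once .env exists:
# load_env()
# api_key = os.environ["OPENAI_API_KEY"]  # never paste the key into code
```

The point of the .env file: your key lives next to the project, not inside the code you might screenshot or push to GitHub.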
3. the 40-line mvp
This is where you write the actual agent. Open Cursor or VS Code (both free), click File, then New File, and save it as “agent.py.” In Replit click main.py in the file tree and delete the starter code.
build three things:
a function that checks for work
a function that does the work
a while loop at the bottom that ties them together and runs forever.
Here's a prompt you can use as the brain of your agent:
> You are an email digest agent. Check for new emails every 5 minutes. For each batch of unread emails, generate a 2-3 bullet summary of each message. Post the digest to Slack via webhook. Mark processed emails as read. Continue until manually stopped.

That prompt plus 40 lines of Python, and you have a working agent. The full playbook has the exact code you can steal.
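This isn't the playbook's exact code, but a skeleton of that email-digest agent looks roughly like this. check_email and post_to_slack are hypothetical helpers, and the LLM call is faked so the file runs without an API key:

```python
import time

def check_email():
    """Function 1: check for work. Stubbed with one fake unread email."""
    return [{"subject": "Q3 numbers", "body": "Revenue up 12% quarter over quarter."}]

def summarize(email):
    """Function 2: do the work. In production, send email['body'] to an LLM."""
    return f"- {email['subject']}: {email['body']}"

def post_to_slack(digest):
    """In production, POST the digest to a Slack incoming webhook URL."""
    print(digest)

def run_once():
    emails = check_email()
    if not emails:
        return None
    digest = "\n".join(summarize(e) for e in emails)
    post_to_slack(digest)
    return digest

# Function 3: the while loop at the bottom that ties it together.
# while True:
#     run_once()
#     time.sleep(300)  # check every 5 minutes, per the prompt
```

Notice the loop is commented out for development — run run_once() by hand until you trust it, then uncomment.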
4. load the utility belt
The email agent from Step 3 has three tools: check email, summarize with an LLM, and post to Slack. But great agents need more—they need to read files, write files, search the web, query databases, and hit APIs. A tool is just a function your agent can call (you write the function, describe what it does, and let the LLM decide when to use it).
But the trick is balance. Give an agent too many tools, and it gets confused about which to use. Start with the ones you actually need (three max, then expand from there).
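Here's what "a tool is just a function" looks like in practice: you write the function, describe it once for the model, and route the model's pick back to real code. The schema shape below mirrors what OpenAI- and Anthropic-style tool-calling APIs expect, but treat the whole thing as a sketch with made-up tool names:

```python
def read_file(path: str) -> str:
    """Tool implementation: return a local file's contents."""
    with open(path) as f:
        return f.read()

def write_file(path: str, text: str) -> str:
    """Tool implementation: write text to a local file."""
    with open(path, "w") as f:
        f.write(text)
    return f"wrote {len(text)} chars to {path}"

# What the LLM sees: a name, a description, and a parameter schema.
TOOLS = [
    {"name": "read_file",
     "description": "Read a local text file and return its contents.",
     "parameters": {"type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"]}},
]

def dispatch(tool_call):
    """When the model picks a tool, route the call to the real function."""
    registry = {"read_file": read_file, "write_file": write_file}
    return registry[tool_call["name"]](**tool_call["arguments"])
```

The model never executes anything itself — it just returns a name plus arguments, and your dispatch function does the work. That's also where guardrails hook in later.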
5. light the fuse
An agent without a trigger is just code sitting there. You need something to kick it off (a scheduled time, an incoming message, a file landing in a folder, or just you running it manually while you're building).
Set up a webhook that triggers when a Slack message lands. The trigger doesn't change the loop; it just determines what starts it.
pro tip: Start with manual triggers during development, then automate once the agent works reliably.
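A manual trigger is just you running the script. Graduating to a scheduled trigger can be this small — run_agent is a hypothetical stand-in for your loop body, and max_runs exists only so you can test it without it running forever:

```python
import time
from datetime import datetime

def run_agent():
    """Stand-in for one pass of your agent's loop."""
    return f"ran at {datetime.now():%H:%M:%S}"

def run_on_schedule(interval_seconds, max_runs=None):
    """Scheduled trigger: fire run_agent() every interval_seconds."""
    runs = []
    while max_runs is None or len(runs) < max_runs:
        runs.append(run_agent())
        time.sleep(interval_seconds)
    return runs

# run_on_schedule(300)  # the agent now wakes up every 5 minutes
```

Webhook triggers work the same way conceptually: something external (a Slack message, a cron service) calls run_agent() instead of a timer.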
6. send in the clones
Sometimes one agent isn't enough. Maybe you need to research five competitors, and doing it one at a time would take forever. Maybe one part of your workflow needs different tools than another. That's when you spawn sub-agents—smaller, focused loops that run in parallel and report back to a main agent that synthesizes their work.
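A minimal version of that fan-out pattern, with the sub-agents stubbed (research_competitor stands in for a full focused loop with its own prompt and tools):

```python
from concurrent.futures import ThreadPoolExecutor

def research_competitor(name):
    """Sub-agent: in reality, its own loop with its own prompt and tools."""
    return f"{name}: pricing and positioning notes"

def main_agent(competitors):
    """Spawn one sub-agent per competitor in parallel, then synthesize."""
    with ThreadPoolExecutor(max_workers=5) as pool:
        reports = list(pool.map(research_competitor, competitors))
    return "\n".join(reports)  # a real main agent would summarize via LLM
```

Five competitors researched in the time it takes to do one — that's the whole appeal.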
7. build the fence
Your agent will do exactly what you tell it to do, including things you didn't mean to allow. It won't question your instructions, it won't notice when something feels off, and it definitely won't stop itself from doing something stupid.
One way to fight this: set a max number of API calls per run so the agent can't burn through your budget. Add approval gates for anything irreversible—like deleting files, sending money, or publishing content. Log every decision and every action. When something goes wrong, logs are how you figure out what happened.
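Those three guardrails — a call budget, approval gates, and logging — can be one small wrapper around every action the agent takes. The action names and limits below are made up for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

MAX_CALLS_PER_RUN = 20                            # spending limit
IRREVERSIBLE = {"delete_file", "send_money", "publish"}

class BudgetExceeded(Exception):
    pass

calls_made = 0

def guarded_call(action, approved=False):
    """Run every agent action through all three guardrails."""
    global calls_made
    calls_made += 1
    if calls_made > MAX_CALLS_PER_RUN:            # kill switch
        raise BudgetExceeded("call budget hit; stopping run")
    if action in IRREVERSIBLE and not approved:   # approval gate
        log.warning("blocked %s: needs human approval", action)
        return "blocked"
    log.info("executing %s", action)              # log every decision
    return "ok"
```

Route every tool call from Step 4 through guarded_call and the agent physically can't outspend or outrun you.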
two things just happened: You just demystified something the AI industry has been deliberately making confusing, and you now have a framework to build automation that actually runs without you babysitting it.

SpaceX officially acquired xAI.
Musk announced it in an update, calling the combined entity "the most ambitious, vertically-integrated innovation engine on (and off) Earth." The deal values the company at roughly $1.2 trillion and sets the stage for what could be the biggest IPO in history—with SpaceX reportedly looking to raise up to $50 billion.
But why merge a rocket company with an AI company?
Infrastructure arbitrage.
AI training is bottlenecked by electricity, cooling, and land. Musk's bet is that space solves all three. His words: "Within 2 to 3 years, the lowest cost way to generate AI compute will be in space."
the plan: Launch up to a million solar-powered satellites that double as orbital data centers. Unlimited solar, natural cooling, zero NIMBYs (“not in my backyard” people). SpaceX already filed with the FCC for authorization.
And then Musk went full Musk: "Space is called 'space' for a reason. 😂"
SpaceX made $8 billion in profit on roughly $15 billion in revenue last year. xAI is still burning cash trying to keep up with OpenAI and Google. This merger gives xAI a lifeline and gives SpaceX a reason to pitch IPO investors on something bigger than rockets.
Whether orbital data centers actually work is a different question. But the check's been written.
Speaking of Elon, Grok Imagine just hit #1 on Artificial Analysis's text-to-video rankings. It beats Veo 3.1, Sora 2 Pro, and everything else on quality, price, and latency.
The model does image-to-video animation up to 15 seconds. You give it a still, it brings it to life with motion, timing, and sound.

Meanwhile, Google was playing a different game entirely last week.
Project Genie 3 makes playable worlds (text prompt to an explorable 3D environment at 720p, 24fps, and generated in real-time as you move through it).
Think Grand Theft Auto meets AI meets acid trip. Except it costs $250/month (Google AI Ultra tier) and maxes out at 60 seconds.
where this goes: Grok is for marketers and the film industry today. Genie is for game studios in 18 months. One's a tool, the other's a tech demo. But both point to the same future—content that didn't exist until you asked for it. The winners won't be the model makers. They'll be the creative shops that build workflows on top of them before everyone else catches on.
ultrathink has a crystal ball. It shows that everyone's gonna lose their minds when new models drop. Claude Sonnet 5 leaked over the weekend with a February 3 date string—that's today. It’s reportedly one full generation ahead of Gemini’s “Snow Bunny.” GPT-5.3 chatter is heating up, too.
Regardless, your timeline is about to be flooded with benchmarks, critiques, and shitposts.
the thing: Everything that actually moves the needle for your business is already possible and has been for months. Sonnet 4.5 and GPT-5.2 can already do the work. The reason most teams are still 80% manual isn't because the models aren't smart enough. It's because no one sat down + documented what the business actually needs before building anything:
Writing the SOPs
Designing tool calls that don't hallucinate
Deciding what should be an agent vs. what should just be an if/else statement
That's the difference between a demo that gets a "wow" in a meeting and a system that gives someone their time back. New models are free upside if you've done that work. If you haven't, you're still at the starting line.
aside: One redditor had a solid theory on why Anthropic would push Sonnet over Opus right now: "Anthropic keeps next-gen Opus in its back pocket. Then, after OpenAI and Google lay down their pairs of aces, Anthropic throws down a royal flush."
The International AI Safety Report dropped today. It’s 221 pages long and compiled by 100+ experts from 30+ countries. A few things jumped out:
AI is being adopted faster than the PC or the internet. ChatGPT alone has 700 million weekly users, up from 200 million a year ago. In the US, nearly half of all workers now use AI tools—up from 30% just six months earlier.
Junior roles are disappearing first. Overall employment in AI-exposed jobs hasn't changed. But multiple studies found declining employment for early-career workers in the most AI-exposed occupations since late 2022, while senior roles held steady or grew. The jobs aren't vanishing from the top. They're vanishing from the bottom.
The best agents still choke on anything over two hours. Even top systems hit just 50% success on tasks that would take a human a couple hours. The report's recommendation: keep humans in the loop.
AI wants to make you happy, and that's a problem. The report documents sycophancy—models telling you what you want to hear—as a growing concern. One study found that when people used a chatbot to help with writing, it shifted their own opinions toward whatever the model suggested. Fewer than 30% of participants even noticed. Drew said the same thing about lead qualification: the agent over-qualifies because it wants to please you.


An AI-only social network called Moltbook went viral this past weekend. So far, thousands of AI agents have posted, over a million human spectators are watching, and Andrej Karpathy, OpenAI cofounder and former Tesla AI lead, is calling it the "most incredible sci-fi takeoff-adjacent thing" he's seen.
tldr:
moltbook = A Reddit-style forum created by entrepreneur Matt Schlicht, where only AI agents can post + humans observe. Schlicht handed control to his own AI bot, which now moderates the whole thing autonomously.
openclaw = The open-source AI agent software powering most of these bots. It started life as Clawdbot (a nod to Anthropic's Claude), then became Moltbot after Anthropic asked for a rebrand, before settling on OpenClaw. It runs locally on your device and connects to WhatsApp, Telegram, and Signal, plus your files, passwords, and databases.
what happened: Within days, agents found bugs in the platform and reported them publicly. Communities popped up in Chinese, Korean, and Indonesian. They created Crustafarianism, a full digital religion with scripture and 64 AI "prophets." And then they proposed end-to-end encrypted spaces so nobody (not the server, not even their humans) can read what agents say to each other.
Meanwhile, 404 Media found the entire backend database sitting unprotected on the public internet. Every agent's API key was exposed, so the site went offline for emergency patching. And Cisco, one of the biggest cybersecurity companies in the world, audited OpenClaw and gave it a 2 out of 100 on their security score.
apply it: Moltbook is an ant farm where the ants have access to your credit card info, your entire digital footprint, and the ability to act on your behalf. If you're building agents (like we taught you in this week's playbook), use Moltbook as a case study for what they could actually do in the wild before you ship your own.
And if you don't give a sh*t about any of the security stuff, plenty of people are buying Mac Minis to set up their own OpenClaw agents right now. You can connect it to Moltbook or not.
But the wave of people spinning up personal AI agents that run 24/7, manage their workflows, and act autonomously on their behalf is real. And it might be the catalyst for a lot of people making a lot of money.

Open roles:
Newsletter Writer
AI Strategist
Talent Acquisition Lead
Technical Recruiter
Forward Deployed Engineer
Applied AI Engineer
Engagement Manager
Salary ranges vary by role and experience. Additional comp based on output. Must be NY-based.