
Happy Tuesday ⚡️
OpenAI closed a $122 billion funding round at an $852 billion valuation this week, the largest private fundraise in history, and on the same day published a 13-page policy paper proposing robot taxes and a four-day workweek. Meanwhile: Anthropic hit $30 billion in annualized revenue, up from $19 billion in February, and signed its biggest compute deal yet — multiple gigawatts of next-generation TPU capacity with Google and Broadcom.
Also Anthropic: cut off third-party tools like OpenClaw, then told rate-limited users to stop using Opus and extended thinking — the features they'd been selling.
Today, we're talking about:
What IKEA found when they looked at the cases their AI couldn't close
How to set up Claude Cowork Projects as your multi-agent HQ (full setup guide)
The full OpenClaw story, OpenAI's blueprint for redistributing AI gains, and the Box CEO on vibe-coding limits

The Part of AI Nobody Is Reading
The way most companies are measuring AI right now: automation rate goes up, support costs go down, report looks good. What almost none of them are systematically looking at is the failure data — what the customers who didn't get an answer were actually trying to do.
IKEA's chatbot, Billie, handled 3.2 million customer service interactions and resolved 47% of them without a human. That's a solid win. Most companies would've filed that as a successful AI pilot, logged the cost savings, and moved on.
IKEA looked at the other 53%.
Almost all of it was interior design. Customers asking for help with room layouts weren't a support failure — they were signaling demand for a service IKEA didn't offer yet. So IKEA reskilled 8,500 call center employees as remote interior design consultants, gave them AI tools to work with, and launched a new service line. €1 billion in new revenue in the first year.
Part of why this doesn't happen more often is that AI dashboards are built around what the system handled successfully. Failures get escalated to humans, logged as overhead, and treated as a problem to eliminate by making the AI smarter. The question "what is this failure data telling us about what customers want?" is almost nobody's job.
For most companies, the AI playbook so far has been: automate the routine stuff, measure the savings, trim headcount where you can. That playbook also means treating every unresolved ticket as a cost to minimize — and some of those tickets are pointing to a business that doesn't exist yet.
IKEA treated the failure data as a product roadmap and built a €1 billion service line from it.
Need help building AI into your engineering and growth workflows?
Tenex is the team behind this awesome newsletter. We embed with your team to design, build, and ship AI systems that actually work—from agentic engineering pipelines to AI-powered growth engines.

Claude Cowork Projects: Your Multi-Agent HQ Is Here
If you've been following our Cowork guides, you're probably already getting real traction with it. Anthropic just launched Projects, and it makes the whole setup much better.
The frustrating thing about running Claude on real work has always been: every conversation starts from zero. You finish a session, come back the next day, open a new thread, and Claude has no idea what you built yesterday. Re-explain the project, re-paste the context, re-brief everything. Cowork has been chipping away at this problem for a while — local files, claude.md, folder structure — and each piece helped. Projects is the one that actually closes the gap.
The biggest unlock: every chat inside a Project shares memory with every other chat. Before, even if two threads were pulling from the same project files, they had no awareness of each other — each one started cold. Now they share everything: your instructions, your files, your skills, all of it carried across the whole workspace.
Here's how the pieces stack:
Your global instructions come first. This is the behavior file that travels with Claude into every conversation across every project — how you want it to write, what it should never do, how to handle ambiguity. It's your standing operating procedure. Always on, always in the background.
Then you build your workspace map. A context.md file sits in the project folder and tells Claude how to navigate your workspace: what files are here, what each one is for, which skills exist and when to use them. It's the difference between Claude reading the folder cold and Claude reading it like someone who's been working there for a month. Every new thread picks this up automatically.
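If you're starting a workspace map from scratch, here's a minimal sketch of what a context.md might look like. The file names, folder layout, and skill names below are hypothetical placeholders for illustration, not Cowork requirements — the point is just to give Claude a plain-markdown map it can read at the start of every thread:

```markdown
# Workspace map — newsletter project (example layout)

## Files
- `drafts/` — working drafts, one file per issue; the latest is `drafts/current.md`
- `style-guide.md` — voice and formatting rules; read before any writing task
- `sources.md` — running list of links and notes for upcoming issues

## Skills
- `research-brief` — use when asked to summarize a topic; notes go in `sources.md`
- `copy-edit` — use only for final passes on `drafts/current.md`

## Conventions
- Never edit `style-guide.md` unless explicitly asked
- When a draft ships, move it to `drafts/archive/` and start a fresh `current.md`
```

Keep it short and declarative — a map, not a manual. Anything that changes often belongs in the files themselves, not in the map.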
Then you layer in scheduled tasks. Because your instructions, your workspace map, and your project files are all there, a scheduled task can do real work against the actual context of this project — not just pull generic info. A weekly research task that runs overnight and has your briefing doc waiting. A nightly task that reviews what happened in the project that day and rewrites your context.md to reflect it. Your workspace map stays current without you touching it, and every morning you open a project that already knows what it did the day before.
Running multiple agents is where Projects gets genuinely fun. The UX here is different. All your threads live in the main Projects view at once — one drafting, one researching, one editing — and you're moving between them, redirecting, reassigning. It doesn't feel like managing a chatbot — more like orchestrating a team. Every agent is pulling from the same foundation, so nobody's stepping on each other and you're not re-briefing anyone. You're just directing.
Full walkthrough (27 min) available here.

Cursor 3: what's actually new — Beyond the announcement, the build includes a hybrid agent system: cloud agents run in parallel for heavy lifting, desktop agents for direct code editing, and you manage both from a single sidebar. There's also a Design Mode that lets you click UI elements and describe changes in plain English. Full breakdown
Anthropic hits $30B ARR — Up from $19 billion in February — $11 billion added in a single month. The detail worth sitting with: 1,000+ enterprise customers are now spending $1M+ annually, a number that doubled in under two months. The Google + Broadcom compute deal (multiple gigawatts of TPU capacity, launching 2027) is what they're building to keep pace with it. Read the announcement
Anthropic cut OpenClaw from Claude subscriptions — On April 4, Anthropic blocked flat-rate Claude subscribers from using third-party tools like OpenClaw, citing "unsustainable demand." Users now need an API key or a usage bundle. Notably: the creator of OpenClaw joined OpenAI in February. Full story
Why Claude Code users were hitting limits so fast — Lydia Hallie's investigation found peak-hour throttling and 1M-context sessions were the main culprits. Her fixes: switch from Opus to Sonnet (Opus burns through limits ~2x faster), turn off extended thinking unless you actually need it, and don't resume sessions that have been idle for over an hour. The frustration: Anthropic set Opus as the default, 1M context as standard, and sold extended thinking as a flagship feature. Read the thread
Box CEO Aaron Levie on why he wouldn't vibe-code his ERP — "The billions of transactions going through that ERP system, you cannot take for granted." He's bullish on AI but draws a sharp line between where it earns trust and where the stakes are too high to experiment with. Watch the clip
OpenAI's blueprint for the AI economy — "Industrial Policy for the Intelligence Age" is a 13-page document released the same week as the $122B raise. It proposes robot taxes on automated labor, a public wealth fund seeded by AI companies that gives every American citizen a stake in AI growth, and auto-triggering safety nets that expand unemployment benefits when displacement hits preset thresholds. Sam Altman compared the moment to the Progressive Era. OpenAI stands to be one of the biggest beneficiaries of the gains they're proposing to redistribute — which is exactly why you should read it. Read the report
50 lessons through Claude Code's source — Someone built a full architecture course from the recent source code leak: 8 chapters, 1,902 annotated files, covering everything from the boot sequence to unreleased features. If you've ever wanted to know what's actually running under the hood — or what's coming — this is the rabbit hole. Dive in (free)

Open roles:
AI Strategist
Forward Deployed Engineer
Applied AI Engineer
Engagement Manager
Salary ranges vary by role and experience. Additional comp based on output. Must be NY-based.