⚡️ happy tuesday.

Aye Siri, what’s good? Alphabet just crossed $4 trillion—weeks after briefly passing Apple for the first time since 2019. The shift: Apple announced Gemini will power the next version of Siri. Google stock is up roughly 65% over the past year, capping a run that flipped it from AI snail to the market’s AI bellwether.

Today, we’re talking about:

  • Claude Code for fools

  • Dr. Claude vs. Dr. ChatGPT

  • $30B bank opts for Copilot

  • The Emmys address genAI

  • WTF are evals??

  • Curing your AI trust issues (join live)

📧 If you have a high-ROI AI use case + want to get in front of an audience of 45,000 biz executives and tech leaders, share it with us here.

Engineers Have Been Hoarding Claude Code (Now, The Normies Are Taking Over)

The word "engineer" belongs to two radically different kinds of people. Those who smell bad, speak in vocabulary no one understands, and haven't seen sunlight in weeks. And those who build things that change how you live, make millions, and dress way cooler than you.

You've steered clear of both because the word "code" is a verbal canker sore.

And while you've been fiddlefaddling with ChatGPT like a fancy search bar, engineers have been running Claude Code—spinning up five AI workers in parallel, shipping in hours what used to take days.

Claude Code is essentially ChatGPT that actually does stuff—organizes, plans, builds tools, and turns you into Iron Man. The thing is, once you buy it, as a non-techie, you won’t know what the f*** to do.

the fix: In a Robin Hood-esque move, Arman Hezarkhani (Carnegie Mellon CS, ex-Google, Tenex co-founder) + Alex Lieberman (definitely not an engineer) are sweeping in to show you a playbook that closes the mental gap.

By the end, you'll have Claude Code installed, basic terminal skills, a finished competitive intelligence brief covering your five biggest business opponents—and the ability to code whatever you want in your native tongue.

1. code 101: Terminal is just a text box on your computer where you type commands instead of clicking around. Unpopular opinion: it’s pretty easy to use when you know the right words to say. This is where Claude Code lives, which makes it even easier to code.

On Mac, press Cmd + Space and type "Terminal." On Windows, hit the Windows key and type "PowerShell." A window opens with a blinking cursor.

Claude Code needs one prerequisite installed (Node.js), then it's two commands to get running. The playbook walks you through both, including what to do if you hit a permissions error.
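The playbook has the exact steps, but the setup is roughly two commands once Node.js is installed (package name per Anthropic's npm distribution; the permissions workaround shown is one common fix, not the only one):

```shell
# Install Node.js first (from nodejs.org or your package manager), then:
npm install -g @anthropic-ai/claude-code

# Launch it -- this opens the Claude Code interface right in your terminal:
claude

# If npm fails with a permissions (EACCES) error, one common workaround is
# installing global packages to a folder you own, then adding it to PATH:
# npm config set prefix ~/.npm-global
```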

pro tip: Pay for the Claude Max plan. Arman says, "The easiest way to save money in your work is to pay for the most expensive models." $200/month sounds like a lot until you realize it saves you hours per week.

2. set up your project: Claude Code only sees the folder you launch it from. Anything it creates—files, research notes, drafts—ends up there. So before you start, you need a project folder.

Create a folder called "competitive-intel" on your desktop. Then open Terminal, navigate to that folder, and launch Claude Code from inside it by typing claude. The concept is simple: Claude Code lives in a box. You decide what's in the box.
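In terminal terms, the whole step is three lines (folder name taken from the playbook; `mkdir -p` just means "create it if it doesn't exist yet"):

```shell
# Create the project folder on your desktop, then move into it
mkdir -p ~/Desktop/competitive-intel
cd ~/Desktop/competitive-intel

# Launch Claude Code from inside the folder -- everything it creates lands here:
# claude
```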

3. connect to Notion: At this point in the process, Claude Code can research your competitors, but it has no way to send the finished brief anywhere. That's what the Notion MCP does—it gives Claude Code permission to create pages directly in your workspace.

You'll create an API key in Notion (takes 2 mins), run one command, and share your target page with the integration. Then Claude Code can drop deliverables straight into Notion—formatted, shareable, no copy-pasting.
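For the curious, the "one command" looks something like this. The package name, flag syntax, and placeholder key below are assumptions based on the community Notion MCP server; follow the playbook's exact steps if your version of Claude Code differs:

```shell
# Register the Notion MCP server with Claude Code, passing your Notion API key.
# Replace the placeholder with the key you created in Notion's settings.
claude mcp add notion \
  -e NOTION_TOKEN=your_api_key_here \
  -- npx -y @notionhq/notion-mcp-server
```

Once it's registered, Claude Code can create and edit pages in any Notion page you've shared with the integration.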

Don't use Notion? Skip this. Your brief still gets created; you'll just copy it manually at the end. The playbook also mentions MCPs for Slack, Gmail, Google Drive, Linear, and dozens of other tools.

4. run the research: Paste the prompt below and Claude Code will spin up five AI employees—called subagents—each researching a different competitor simultaneously. Just replace the bracketed competitors with real ones. Hit enter. Watch it work.

human-verified prompt

Here's my goal: create a competitive intelligence brief on my top 5 competitors.

Spin up at least 5 subagents:
- One to research [Competitor 1]: their product, pricing, positioning, recent news
- One to research [Competitor 2]: their product, pricing, positioning, recent news
- One to research [Competitor 3]: their product, pricing, positioning, recent news
- One to research [Competitor 4]: their product, pricing, positioning, recent news
- One to research [Competitor 5]: their product, pricing, positioning, recent news

Once all subagents complete, synthesize the findings into a competitive brief with:
- Executive summary (3 bullets max)
- Competitor comparison table (features, pricing, positioning)
- Key differentiators for each competitor
- Threats and opportunities for us

Finally, create this brief as a page in Notion.

5. review + refine: When the subagents finish, Claude synthesizes everything into a draft brief. Don't just accept the first version. Type feedback in plain English:

  • "The exec summary is too long. Cut it to three bullets."

  • "Dig deeper on [Competitor 3]'s pricing—I need a tier-by-tier breakdown."

  • "Add a section on their recent product launches."

Repeat until you're happy. But if Claude's responses start feeling off after some back-and-forth, type /compact. It clears the conversation clutter while keeping your project context intact.

6. get your deliverable: Open Notion. Your competitive intelligence brief includes an executive summary, a comparison table, differentiators, threats, and opportunities. Formatted and ready to share.

two things happened: You built a competitive intel brief that would've cost you a day of expensive outsourcing, and you're now on Claude Code with the skills to build whatever you want next—research agents, tools, content workflows, data pipelines, etc.

Grab the complete CC:101 playbook (for free)—for every terminal command, the exact MCP setup steps, troubleshooting when things break, and the complete walkthrough from zero to shipped.

was this still too technical? Check out Claude’s newest product below.👇

Production AI Needs Adults in the Room: The Evals Playbook (How Teams Decide What to Trust)

  • Guests: Austen Allred (CEO) + Ashalesh Tilawat (CTO) of Gauntlet AI

  • Day: Wednesday, Jan 14

  • Time: 4:00 PM - 5:00 PM EST

Vercel’s v0: AI Use Cases That Can 10x Your Org

  • Guest: Vercel

  • Day: Wednesday, Jan 21

  • Time: 4:00 PM - 5:00 PM EST

Speaking of… “Claude, do my entire job for me. Reply ‘sounds good’ when someone pings me. Also, don’t make any mistakes.”

Anthropic just dropped Cowork, which is basically Claude Code for everything that isn’t coding. It’s a more consumer-friendly UX than Claude Code, serving as a Trojan horse for Anthropic’s technology.

It runs in your browser, accesses a user-selected folder to read, edit, or create files—handling tasks like reorganizing screenshots into spreadsheets, prepping to-do lists, or summarizing meetings—and connects to all your Claude data sources. It’s now available to Max subscribers in the macOS app.

What it can do:

  • Vacation research

  • Slide decks

  • Email cleanup

  • Organize files

  • Cancel subscriptions

  • Recover wedding photos from a dead hard drive… sigh

crazy fact: Anthropic built + shipped Cowork in a week and a half.

We’re going to need a second opinion. OpenAI + Anthropic both launched healthcare products within days of each other. One could say the AI health wars are on.

  • Anthropic's Claude for Healthcare is for enterprises and hospitals. It's HIPAA-ready with connectors to CMS databases and ICD-10 codes, but the killer app is prior authorization—those reviews take hours and delay patient care. Claude can now pull coverage requirements, cross-reference clinical guidelines, and draft determinations on the spot. Novo Nordisk used it to cut clinical docs from 12 weeks to 10 minutes.

  • OpenAI's ChatGPT Health is for consumers managing their own health. It connects your med records, Apple Health, and MyFitnessPal to pull all your scattered health data into one place—so you can understand test results, prep for appointments, and compare insurance options based on your real patterns. Your chats are walled off and separate from ChatGPT training.

Here’s a building vs. buying AI case study for companies on the fence: One of Europe's biggest banks just gave up on building its own AI.

Societe Generale (SocGen)—for American readers, think JPMorgan-scale—spent years developing an in-house AI assistant. Last week, they scrapped it entirely + switched to Microsoft Copilot.

Why they switched up:

  • cost: Running LLMs in production is expensive

  • complexity: Keeping up with the pace of model improvements is a full-time job

  • talent wars: Recruiting ML engineers means bidding against big dogs like Google

An aside for entertainment leaders: The Television Academy dropped its 2026 rule changes and buried the lede: AI is now officially on their radar. The new language says the Academy "reserves the right to inquire about the use of AI in submissions," but doubles down that "the core of our recognition remains centered on human storytelling, regardless of the tools used."

translation: Use AI if you want, but be ready to explain yourself to the powers that be come award season.

eli5: You test the AI on your actual work before you let it loose on anything that matters. You hand it real tasks from your business + see how often it nails them.

the analogy: Hiring someone based on their resume vs. giving them a paid trial project. Evals are the trial project. You see how they actually perform on your specific work before you commit.

why should you care? The labs building frontier models run thousands of evals before shipping. You should be doing the same thing. Evals are game film. They show you what happens on your actual data, your weird edge cases—not the five examples the vendor cherry-picked for the demo.

Whether you're buying a tool or building with APIs, evals tell you the thing you actually need to know: does this work for us?

A few things you might test:

  • accuracy: Does it get the right answer?

  • consistency: Same input, same output—or chaos?

  • edge cases: What happens when you throw it something weird?
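Here's a minimal sketch of what that looks like in practice. The `ask_model` function is a stand-in for whatever tool or API you're evaluating, and the tasks are made-up examples:

```python
def ask_model(prompt: str) -> str:
    # Stand-in: swap in a real call to the AI tool or API you're testing.
    canned = {
        "What tier includes SSO?": "Enterprise",
        "Refund window in days?": "30",
    }
    return canned.get(prompt, "I don't know")

def run_evals(cases, trials=3):
    """Score each task for accuracy (right answer) and consistency
    (same answer every run)."""
    results = []
    for prompt, expected in cases:
        answers = [ask_model(prompt) for _ in range(trials)]
        results.append({
            "prompt": prompt,
            "accurate": answers[0] == expected,
            "consistent": len(set(answers)) == 1,
        })
    return results

# Real tasks from your business, with the answers you know are right
cases = [
    ("What tier includes SSO?", "Enterprise"),
    ("Refund window in days?", "30"),
    ("Do we support on-prem?", "No"),  # an edge case the stand-in misses
]

report = run_evals(cases)
accuracy = sum(r["accurate"] for r in report) / len(report)
print(f"accuracy: {accuracy:.0%}")
```

Even ten cases like this tell you more than any vendor demo: you see exactly which of your tasks it nails, which it fumbles, and whether it gives the same answer twice.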

apply it: Before you buy any AI tool, ask the vendor: "Can I run my own evals?" If they squirm, that's your answer. We're hosting a free virtual workshop tomorrow about how to conduct these tests—link here.

Open roles:

  • Tech Recruiter

  • AI Strategist

  • Forward Deployed Engineer

  • Applied AI Engineer

Paid on output. Must be NY-based.

We’ll be your Chief AI Officer

Tenex is the modern-day Bell Labs (and the engineering team behind this newsletter). We ship software and AI that move your P&L.

Keep Reading