Zuckerberg's $1B bet on AI mercenaries
July 2, 2025 3:46PM GMT+00:00

Welcome back. Silicon Valley investor Vinod Khosla predicts AI will replace 80% of jobs by 2030 and trigger the "fastest rate of demise of Fortune 500 companies" we've ever seen. The venture capitalist also forecasts that by 2040 "the need to work will go away" and "almost everybody in the 2030s will have a humanoid robot at home."
In today’s newsletter:
🎒 AI for Good: Sensors that spot fires and blackouts before they happen
❎ No free rides for AI: Cloudflare takes a stand
💰 Zuckerberg's $1 billion bet on AI mercenaries
🎒 AI for Good: Sensors that spot fires and blackouts before they happen

Source: Midjourney v7
A small device that resembles a nightlight is helping prevent house fires and predict major blackouts hours before they occur. Whisker Labs' AI-powered sensors detected warning signs three hours before Spain's massive April blackout that left 50 million people in the dark.
The company's "Ting" devices plug into home outlets and monitor electrical voltage 24/7. When paired with AI, they can spot dangerous electrical faults that often lead to house fires or grid failures.
What happened: Whisker Labs has deployed over one million sensors in American homes, creating an unprecedented real-time view of electrical grid health. The AI processes 30 trillion voltage measurements per second, detecting dangerous electrical arcing and voltage fluctuations that are invisible to utilities.
The system proved its worth during recent disasters. It identified grid problems hours before the deadly Maui wildfires and spotted concerning power fluctuations before the Los Angeles Eaton Fire. In Spain, voltage oscillations appeared at 9:30 AM, three hours before the grid collapsed at 12:33 PM.
The company estimates it has helped identify and correct 20,000 fire hazards over five years
About 35% of detected problems stem from utility equipment failures
AI can pinpoint the source of electrical issues within 30 minutes, dispatching electricians before fires start
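As a rough illustration of the idea, anomaly detection over a stream of voltage samples can be sketched in a few lines. This is a toy rolling-window detector, not Whisker Labs' actual algorithm; the window size, threshold, and sample data are all hypothetical:

```python
from collections import deque

def detect_anomalies(samples, window=60, threshold=3.0):
    """Flag sample indices that deviate sharply from the recent rolling
    mean -- a toy stand-in for the arcing/oscillation signatures the
    article describes (hypothetical logic, not Whisker Labs' method)."""
    recent = deque(maxlen=window)
    alerts = []
    for i, v in enumerate(samples):
        if len(recent) == window:
            mean = sum(recent) / window
            std = (sum((x - mean) ** 2 for x in recent) / window) ** 0.5
            # Alert when the new reading sits far outside recent variation
            if std > 0 and abs(v - mean) > threshold * std:
                alerts.append(i)
        recent.append(v)
    return alerts

# A nominal 120 V feed with slight ripple and one sudden sag at index 70:
feed = [120.0 + 0.1 * ((i % 3) - 1) for i in range(80)]
feed[70] = 104.0
print(detect_anomalies(feed))  # [70]
```

The real system works at vastly larger scale and with far richer signal analysis, but the core principle is the same: learn what "normal" looks like for each circuit, then flag deviations early.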
Why it matters: Traditional grid monitoring focuses on large infrastructure but misses dangerous conditions at the residential level, where most electrical fires begin. By crowdsourcing data from millions of homes, AI can now predict catastrophic failures that result in loss of life and billions of dollars in damage.

Workshop: Unpack OWASP Top 10 LLMs with Snyk
Join Snyk and OWASP Leader Vandana Verma Sehgal on Tuesday, July 15 at 11:00AM ET for a live session covering:
✓ The top LLM vulnerabilities
✓ Proven best practices for securing AI-generated code
✓ How Snyk’s AI-powered tools automate and scale secure development
❎ No free rides for AI: Cloudflare takes a stand

Source: Midjourney v7
Cloudflare has become the first internet infrastructure provider to block AI crawlers by default, requiring them to obtain permission or offer compensation before scraping content. The move flips the script from "opt-out" to "opt-in" and could fundamentally change how AI companies access training data.
For years, search engines drove traffic back to websites after indexing their content. AI crawlers break that cycle — they scrape articles and images to generate answers without sending users to the original source, leaving creators without clicks or revenue.
What's happening: Cloudflare now blocks AI bots by default across its network, which powers 20% of the internet.
Website owners can choose which crawlers to allow and for what specific purpose
AI companies must clearly state their intent before accessing any data
The change affects new domains automatically, with existing sites able to opt in
One million sites have already used Cloudflare's earlier crawler-blocking tools
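The shift from opt-out to opt-in can be pictured as a default-deny access check. This sketch is purely illustrative (the names and structure are hypothetical, not Cloudflare's actual API):

```python
# Hypothetical sketch of a default-deny crawler policy: AI bots are
# blocked unless the site owner has explicitly granted that crawler
# a declared purpose. Illustrative only -- not Cloudflare's real API.

DEFAULT_POLICY = "deny"

# Grants a site owner has configured: (domain, crawler) -> allowed purpose
site_grants = {
    ("example.com", "SearchIndexBot"): "search_indexing",
}

def allow_crawler(domain: str, user_agent: str, declared_purpose: str) -> bool:
    """Opt-in check: a crawler passes only if the owner granted it
    this exact purpose; everything else falls through to deny."""
    granted = site_grants.get((domain, user_agent))
    if granted is not None and granted == declared_purpose:
        return True
    return DEFAULT_POLICY == "allow"  # False under default-deny

print(allow_crawler("example.com", "SearchIndexBot", "search_indexing"))  # True
print(allow_crawler("example.com", "LLMTrainingBot", "model_training"))   # False
```

The key inversion is in the fall-through: previously, an unrecognized crawler would be allowed unless blocked; now it is denied unless a grant exists.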
Why it matters: The timing is crucial, as AI companies face growing pressure over the quality and availability of training data. Rather than taking content freely, AI firms will now need to negotiate access or offer compensation for it. This creates the first major infrastructure-level protection for creators.
Leaders from Condé Nast, Dotdash Meredith, USA Today Network, Pinterest and Reddit support the change. "This opens the door to sustainable innovation built on permission and partnership," said Condé Nast CEO Roger Lynch.
What's next: Cloudflare is developing protocols for improved bot identification and providing creators with tools to manage crawler access. The goal is a transparent ecosystem where AI development and the open web can coexist without exploitation.

Try the CRM built to help small teams close big deals
Starter Suite is the easiest way for small businesses to get started with the world’s #1 CRM.
Designed for fast-growing teams, Starter helps you:
Effortlessly create effective email campaigns with pre-built templates and actionable analytics
Speed up your sales process with guided deal management
Deliver better service with built-in case resolution tools
Make confident decisions with real-time dashboards
You don’t need IT support. You don’t need to install anything. And you don’t even need a credit card to try it.
Try it free for 30 days — and get 40% off when you’re ready to purchase.


Elon’s xAI just raised $10B in its biggest move yet to take on OpenAI
Grammarly is buying Superhuman to supercharge its AI writing platform
RFK Jr thinks AI will speed up drug approvals at the FDA very (very) quickly
Orcas are smarter than we thought, as scientists say they’re making tools now
Microsoft and the Premier League signed a five-year AI deal to transform football
Anthropic reaches $4B in annualized revenue
OpenAI leans into consulting?
Chinese students are using AI to beat AI detectors


💰 Zuckerberg's $1 billion bet on AI mercenaries

Source: Midjourney v7
Mark Zuckerberg officially unveiled Meta Superintelligence Labs Monday, a new AI organization built around some of Silicon Valley's most expensive hires. The team brings together massive egos under intense pressure, creating perfect conditions for internal combustion.
Sources tell WIRED that Zuckerberg offered top-tier research talent up to $300 million over four years, with more than $100 million vesting immediately in the first year. Meta made at least 10 such offers to OpenAI staffers, including pitching one high-ranking researcher for a chief scientist role, which was turned down.
The all-star roster centers on three leaders whose backgrounds suggest potential friction ahead. Alexandr Wang becomes Meta's Chief AI Officer after Meta's $14.3 billion investment in Scale AI. Former GitHub CEO Nat Friedman joins as a "partner," and veteran AI investor Daniel Gross is expected to complete the triumvirate.
The leadership puzzle: Wang, at 28, is a puzzling choice for Chief AI Officer. Scale AI never developed foundation models; instead, it handled data-labeling grunt work. The Information's profile described him as extremely ambitious but saddled with a "polarizing reputation." What recommended Wang wasn't technical expertise but his political acumen as Zuckerberg's AI advisor.
Friedman brings 25 years of experience, including running GitHub under Microsoft. Zuckerberg originally wanted Friedman to lead Meta's AI efforts, but Friedman declined and suggested Wang as an alternative. Initial reports had Friedman reporting to Wang, but Monday's announcement described them as "partners" — a notable shift suggesting unresolved power dynamics.
Gross adds another layer of complexity. The Safe Superintelligence co-founder may join the team, though he wasn't mentioned in Monday's memo, and his potential role remains undefined.
The talent raid: Meta's 14 researcher hires target specific technical gaps across three key areas:
Multimodal specialists (6 researchers)
Reasoning and post-training experts (7 researchers)
Infrastructure (1 researcher): Joel Pobar (from Anthropic), focused on inference optimization
Rather than broad AI research, Zuckerberg is targeting the exact capabilities where Meta has struggled. Multimodal AI represents a massive opportunity given Meta's billions of users across Instagram, WhatsApp and Facebook. Getting people to talk to AI assistants might be easier when they're already in Meta's apps.
Meta disputes the compensation figures, with spokesperson Andy Stone calling them "misrepresented" and CTO Andrew Bosworth telling employees, "the market's hot. It's not that hot." But the raids stung OpenAI leadership. Chief Research Officer Mark Chen said it felt "as if someone has broken into our home and stolen something," according to WIRED.
Altman fired back in his own leaked memo, dismissing Meta's recruiting tactics as "distasteful" and claiming they "had to go quite far down their list" after failing to recruit top talent. "Missionaries will beat mercenaries," he wrote, arguing that OpenAI's mission-focused culture would outlast Meta's spending spree. He predicted Meta would eventually move on to "their next flavor of the week" while OpenAI remained focused on building AGI.
Industry insiders estimate that Meta's total compensation bill approaches $1 billion — enough to buy entire AI startups rather than individuals — including Wang's package tied to Scale's valuation, Friedman's likely nine-figure deal, and the reported packages of the OpenAI defectors.

It's a recipe for drama. Wang, who's never built AI models, nominally leading researchers who've created some of the world's most advanced systems. Friedman occupying an ambiguous "partner" role. Gross potentially waiting with unclear responsibilities. And 14 highly paid refugees working alongside existing Meta researchers who have watched their company stumble repeatedly.
The spectacle has become internet fodder. Social media users are creating trading card-style graphics of the poached researchers, complete with stats and special abilities. The whole affair feels less like serious talent acquisition and more like fantasy sports for AI nerds.
Mercenary teams rarely produce breakthrough innovation. The magic comes from years of shared mission and a collaborative culture, rather than assembling expensive free agents. Meta's AI organization has undergone repeated upheavals.
Based on the personalities involved and pressure they're under, smart money says this ends with at least one high-profile departure within six months. Zuckerberg is hoping $1 billion can paper over the cracks in Meta's AI culture — but money rarely fixes fundamental organizational dysfunction.
Read the full memo here.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“Not convinced by the light falling from the sun at different point of the picture”
“Something about the older lady's posture on [the other image] one was off. Initially, [this image] looked AI as people weren't paying attention to the sunset. Maybe a lucky guess today!!”
Selected Image 2 (Right):
“Looks less ‘staged’”
“The ‘real’ image had humans standing on each others shoulders which isn’t real common and the lighting was obviously filtered which skews the test”
💭 Poll results
Here’s your view on “Which level of physician involvement do you prefer?”
AI alone with audit trail (10%)
“Insurance companies will not be pleased with healthier patients. Think about it like this. For insurance companies which is better: 1 cargo ship sinking or 500 cargo ships sinking in a year? You would think 1 right? But if 500 sink into the ocean insurance companies will actually make more money because far more cargo companies will want to be insured. It's the same with medical insurance. They actually profit from more sick people. The insurance industry will be fighting AI in medicine.”
AI + brief tele-consult (14%)
“During the past 5+ years I have had three primary care "doctors". The current guy is my 2nd rookie with one year "experience" who I will meet this Thursday, two days from now. The previous "care giver" was in place for two years as he finished his residency at Samaritan Hospital in Corvallis, OR. There have been too many poor decisions to list here, including popular medications that caused more problems (accelerated kidney failure) than they solved. Maybe AI ... MAI-DxO will do a better job and solve the United States for profit medical system mantra: "Die quickly.” ”
AI suggests, doctor confirms (41%)
“This might depend on how much quality research is available for the diagnosis, which is why it's important to still have a human doctor present as well. Women, trans people, and non-white patients do not receive the same medical treatment as men and white patients - symptoms and pain are more likely to be dismissed, and also less researched in symptom presentation, and more likely to have inaccurate diagnoses for years. (I'm a therapist, and this occurs within the mental health system as well.) Having Both a properly trained AI system and a specialist doctor can help.”
“AI may think of things that the MD might not. But the MD should also be able to give AI prompts to look for things that it didn't consider.”
Doctor leads, AI assists (24%)
“With proper training, doctors can augment their processes to incorporate AI as a powerful resource and tool! I still like the personal touch of a doctor that I know and trust. If an unknown doctor, I might lean more towards AI suggests, doctor confirms.”
No AI at all (10%)
“I trust doctor's with my medical care? AI? Nope.”
The Deep View is written by Faris Kojok, Chris Bibey and The Deep View crew. Please reply with any feedback.
Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
P.P.S. If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.