
⚙️ OpenAI introduces Codex

Good morning. Nvidia CEO Jensen Huang's trip to Taiwan, after visiting the Middle East with Trump, has sparked "Jensanity" as adoring fans mob him for autographs on books, posters, and even baseballs. The Taiwan-born billionaire — whose company is now selling official Jensen-branded merch at a pop-up store — has prompted confusion among his US-based colleagues, since back in the US he walks around fairly unnoticed.

— The Deep View Crew

In today’s newsletter:

  • 🩸 AI for Good: AI spots blood clots before they strike 
  • 🤖 Penn reimagines research with AI at its core
  • 🧠 OpenAI introduces Codex

🩸 AI for Good: AI spots blood clots before they strike 

Source: ChatGPT 4o

For heart patients, the first sign of a dangerous clot is often a heart attack or stroke. Now, researchers at the University of Tokyo have unveiled an AI-powered microscope that can watch clots form in a routine blood sample – no catheter needed.

The new system uses a high-speed "frequency-division multiplexed" microscope – essentially a super-fast camera – to capture thousands of blood cell images each second. An AI algorithm then analyzes those images in real time to spot when platelets start piling into clumps, like a traffic jam forming in the bloodstream.
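
At its core this is a streaming classify-and-count pattern: each imaged cell passes through a classifier, and the fraction flagged as aggregates becomes the readout. The Python sketch below illustrates only that pattern — the Frame type and the toy classifier are invented placeholders, not the published system's code:

```python
# Hypothetical sketch of the classify-and-count pattern described above.
# The Frame type and the toy classifier are invented placeholders; the real
# system uses images from the frequency-division multiplexed microscope and
# a trained model.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Frame:
    """One single-cell image captured by the high-speed microscope."""
    pixels: list[float]  # flattened grayscale intensities (placeholder representation)

def clump_fraction(frames: Iterable[Frame],
                   classify: Callable[[Frame], bool]) -> float:
    """Return the fraction of imaged events flagged as platelet aggregates."""
    total = flagged = 0
    for frame in frames:
        total += 1
        flagged += classify(frame)  # bool counts as 0 or 1
    return flagged / total if total else 0.0

if __name__ == "__main__":
    # Toy usage: a brightness threshold stands in for the trained classifier.
    frames = [Frame(pixels=[0.1] * 64), Frame(pixels=[0.9] * 64)]
    toy_classifier = lambda f: sum(f.pixels) / len(f.pixels) > 0.5
    print(f"Aggregate fraction: {clump_fraction(frames, toy_classifier):.2f}")
```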

In tests on over 200 patients with coronary artery disease, those with acute coronary syndrome – a dangerous flare-up of heart disease – had far more platelet clumps than patients with stable conditions. Just as importantly, an ordinary arm-vein blood draw yielded virtually the same platelet data as blood taken directly from the heart’s arteries via catheter.

Why it matters: This AI tool could make personalized treatment easier and safer:

  • Traditional platelet monitoring relies on invasive or indirect methods
  • The AI tool analyzes blood from a basic arm draw
  • Real-time imaging allows doctors to observe platelet clumping directly
  • The method may reduce reliance on catheter-based procedures

The team of researchers published its findings this week in Nature Communications.

✂️ Cut your QA cycles down from hours to minutes

If slow QA processes and flaky tests are a bottleneck for your engineering team, you need QA Wolf.

QA Wolf's AI-native platform supports both web and mobile apps, delivering 80% automated test coverage in weeks and helping teams ship 5x faster by reducing QA cycles to minutes.

With QA Wolf, you get:

✅ Unlimited parallel test runs

✅ 15-min QA cycles

✅ 24-hour maintenance and on-demand test creation

✅ Zero-flake guarantee

The result? Drata’s team of 80+ engineers saw 4x more test cases and 86% faster QA cycles.

No flakes, no delays, just better QA — that’s QA Wolf.

Schedule a demo to learn more

🤖 Penn reimagines research with AI at its core

Source: UPenn

The University of Pennsylvania has quietly built a human collider for AI.

Launched this spring by cosmologist Bhuvnesh Jain and computer scientist René Vidal, the AI x Science Fellowship unites more than 20 postdoctoral researchers from physics, linguistics, chemistry, engineering and medicine. Each fellow receives two faculty mentors, a modest research budget and campus-wide access to labs and high-performance computing. Weekly Tuesday lunches double as idea exchanges, while open seminars pull in curious researchers from every school.

The fellowship grew out of a 2021 data-science pilot in Arts & Sciences and now spans Engineering and Penn Medicine, with Wharton fellows due in the fall. Jain and Vidal—co-chairs of Penn’s AI Council—plan to scale it into a university-wide Penn AI Fellowship and create a “data-science hub” where roaming AI specialists spend a fifth of their time parachuting into other labs.

Why it matters: As AI research moves rapidly into the private sector, this initiative encourages collaboration on AI research questions that don’t yet have commercial applications. Industry labs chase near-term products. Penn is betting that open-ended, ethically grounded questions—trustworthy AI, machine learning for dark-matter hunts—still belong in academia. The fellowship gives young scientists a network, résumé-ready collaborations and a sandbox for ideas too early or risky for corporate funding.

The Fastest LLM Guardrails Are Now Available For Free

Fast, secure and free: prevent LLM application toxicity and jailbreak attempts with <100ms latency.

Fiddler Guardrails are up to 6x cheaper than alternatives and deploy in your secure environment.

Connect your LLM app today and run free guardrails.

  • Google's AI mode replaces iconic ‘I’m Feeling Lucky’ button
  • Satya Nadella ditches podcasts for AI-powered chatbot conversations
  • Moonvalley raises $53M to expand ethical AI video tools
  • Alibaba and Tencent boost shopping with AI-powered advertising
  • CarPlay Ultra rolls out with next-gen features
  • Tesla: AI Research Engineer, Model Scaling, Self-Driving
  • Microsoft: Director - Responsible AI
  • Together AI: A fast and efficient way to launch AI models
  • Talently AI: A conversational AI interview platform (no more manual screening)
  • RevRag: Automated sales via AI calling, email, chat, and WhatsApp

🧠 OpenAI introduces Codex

Source: OpenAI

Vibe coding might be all the rage – the trend of non-coders building apps through AI – but OpenAI's latest release is pointedly not for the casual "build me a website" crowd. The company just launched Codex, a cloud-based software engineering agent built to assist professional developers with real production code.

"This is definitely not for vibe coding. I will say it's more for actual engineers working in prod, and sort of throwing all the annoying tasks you don't want to do," noted Pietro Schirano, one early user, capturing the tool's intent in plain terms.

OpenAI is rolling out Codex as a research preview to ChatGPT subscribers (initially Pro, Team, and Enterprise, with Plus users to follow). Here’s Sam Altman’s tweet in response to the rollout so far.

What makes Codex unique is that it spins up a remote development environment in OpenAI's cloud – complete with your repository, files, and a command line – and can carry out complex coding jobs independently before reporting back. Once enabled via the ChatGPT sidebar, you assign Codex a task with a prompt (for example, "Scan my project for a bug in the last five commits and fix it").

Under the hood, Codex uses a specialized new model called codex-1, derived from OpenAI's latest reasoning model, o3, but tuned specifically for code work. Key capabilities include:

  • Multi-step autonomy: Codex can write new features, answer questions about the codebase, fix bugs, and propose code changes via pull request – all by itself
  • Parallel agents: You can spawn multiple Codex agents working concurrently (the launch demo showed several fixing different parts of a codebase in parallel).
  • Test-driven verification: Codex repeatedly runs the project's test suite until the code passes or it runs out of ideas, and it provides verifiable logs and citations of what it did.
  • Configurable via AGENTS.md: You can drop an AGENTS.md file in your repo to guide the AI. This file tells Codex about project-specific conventions, how to run the build or tests, which parts of the codebase matter most, etc. Early users report this dramatically helps Codex avoid rookie mistakes.
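
As a rough illustration of the AGENTS.md idea, here is what a minimal file might look like. The file is free-form instructions; everything in this sketch (the project layout, commands, and conventions) is invented for the example rather than taken from any real repo or from OpenAI's documentation:

```markdown
# AGENTS.md — hypothetical example for illustration only

## Project layout
- api/ — FastAPI service; most feature work happens here.
- billing/ — legacy module; do not refactor without an explicit request.

## Build and test
- Install dependencies with `pip install -e ".[dev]"`.
- Run the full suite with `pytest -q`; proposed changes must keep it green.

## Conventions
- Match the existing type hints and run `ruff check .` before proposing a diff.
- Open one pull request per task and summarize what was verified in the description.
```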

OpenAI has been testing Codex with several early design partners to prove its value in actual development teams:

  • Temporal uses Codex to debug issues, write and execute tests, and refactor large codebases, letting Codex handle tedious background tasks so human developers can stay "in flow" on core logic.
  • Superhuman is leveraging Codex to tackle small, repetitive tasks, and has found that PMs (non-engineers) can use Codex to contribute lightweight code changes.
  • Kodiak Robotics has Codex working on their self-driving codebase, writing debugging tools and improving test coverage.

The big picture: All this comes amid a broader frenzy to build agentic AI developers. Just months ago, startup Cognition released "Devin," branding it "the first AI software engineer." We immediately subscribed to the $500/month service when it launched to the public, drawn in by promises that it could write entire apps in minutes and solve complex coding issues with minimal help. However, we canceled within the first month after finding it didn't live up to the hyped announcements – a common theme in the current AI landscape where capabilities often lag behind marketing claims.

Cognition raised $21 million for Devin despite its early performance on the SWE-Bench coding challenge (an industry benchmark for fixing real GitHub issues) being modest – it solved about 13.9% of test tasks on its own. Hot on its heels, researchers at Princeton built SWE-Agent, an open-source autonomous coder using a GPT-4 backend that scored 12.3% on the same benchmark – nearly matching the venture-backed startup's AI dev agent with a fraction of the resources.

Big tech isn't sitting idle. Google is expected to unveil a major AI coding tool at tomorrow's I/O developer conference, and GitHub Copilot, the incumbent AI assistant, is evolving rapidly as Microsoft folds it into a broader Copilot X vision with chat and voice features inside the IDE.

It's becoming clear that in this new landscape, the advantage of simply owning a big codebase is evaporating. We previously dubbed this "the no-moat era" in our analysis – when an indie dev with AI tools can reimplement a competitor's core features over a weekend, traditional software moats based on headcount start to crumble.

AI agents succeed when they’re scoped, sandboxed, and verifiable. Devin over-promised, under-specified, and hit a wall. Codex under-promises (no “build me Instagram”), gives the agent a test harness, and documents every step. That mindset — treat the AI like a junior dev who must show their work — is how agentic coding will stick in the short-to-mid term future.
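
To make "scoped, sandboxed, and verifiable" concrete, here is a deliberately simple Python sketch of that discipline: a wrapper that applies an agent's proposed patch, reruns the project's test suite, and only accepts the change when the suite is green. propose_patch and apply_patch are hypothetical hooks, not OpenAI's (or anyone's) actual API:

```python
# A minimal sketch of "scoped, sandboxed, verifiable": apply the agent's patch,
# rerun the tests, and accept only when the suite is green. propose_patch and
# apply_patch are hypothetical hooks, not a real API.
import subprocess

def run_tests(repo_dir: str) -> bool:
    """Run the project's test suite; a green run is the acceptance criterion."""
    result = subprocess.run(["pytest", "-q"], cwd=repo_dir,
                            capture_output=True, text=True)
    print(result.stdout[-2000:])  # keep a verifiable log of what happened
    return result.returncode == 0

def supervised_fix(repo_dir: str, task: str, propose_patch, apply_patch,
                   max_attempts: int = 5) -> bool:
    """Iterate: propose a patch, apply it, and stop once the tests pass."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        patch = propose_patch(task, feedback)   # hypothetical agent call
        apply_patch(repo_dir, patch)            # hypothetical patch helper
        print(f"Attempt {attempt}: patch applied, running tests")
        if run_tests(repo_dir):
            return True                          # accept only on a green suite
        feedback = "Tests are still failing; revise the patch."
    return False                                 # hand back to a human reviewer
```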

Expect pricing to migrate toward “pay per compute” rather than all-you-can-eat. By year’s end, we’d expect every IDE, CI pipeline and repo host to surface “spawn agent” buttons. And expect the winners to be the dev teams that invest in good tests, clear docs, and tight review loops.

Software engineering just got yet another teammate. It works fast, never complains, and absolutely needs a code review. Use it wisely. Buyer beware.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The reflections on the fuselage of the airplane in [the other image] seemed out of place, and the motion blur with the propellers didn't feel correct.”
  • “I think this one is real because I have seen images like this in realtime on occasion. I have seen the moon during the day and I have seen it with an aircraft too. In Florida, especially, cloud formations are common and seeing all three have happened before.”

Selected Image 2 (Right):

  • “Landing gear was open in the other pic, which put me off.”
  • “[The other image’s] tail stabilizer is too high for an aircraft with underwing engines”

💭 A poll before you go

Will you let OpenAI's Codex into your codebase?


Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

TOP STORIES

May 21, 2025
Jakob Grønberg

⚙️ Sam Altman’s copyright defense is that GenAI is basically human


Good morning. The U.S. and China have agreed to slash tariffs from 125% to 10% for 90 days, sending markets soaring and Treasury Secretary Scott Bessent gushing about "the equanimity" of Swiss scenery.

In today’s newsletter:

🌿 AI for Good: Filling the gaps in biodiversity knowledge

🧱 LegoGPT brings endless designs to the forefront

🦜 Klarna and Duolingo learn the limits of going AI first

🔮 Google enters the competition for equity in AI startups

🌿 AI for Good: Filling the gaps in biodiversity knowledge

Source: McGill University

AI could close five of the seven largest blind spots in global biodiversity knowledge, a review led by Laura Pollock, a biologist at McGill University, and computer scientist David Rolnick finds. Existing tools tackle only two gaps, leaving questions on species traits, interactions and evolution mostly unanswered. “It was also surprising to see just how narrowly AI is being applied when it has so much potential to address many of these shortfalls,” Rolnick notes.

Key findings

Scope – Fewer than one in 10 biodiversity papers that cite AI go beyond distribution mapping or trait detection.

Potential – Models blending remote sensing and eDNA can map ranges, infer food webs and flag extinction risk in near real time.

Equity risk – Temperate-region data dominate, so bias-correction methods must accompany model rollout.

Next steps – Open data standards, algorithm transparency and safeguards for Indigenous knowledge can keep new tools from widening research gaps.

Why it matters:
Without baseline data on where species live and how they interact, conservation strategies remain guesswork. AI can sift satellite imagery, camera-trap photos and environmental-DNA records at scales fieldwork cannot match, accelerating risk assessments for the world’s most threatened ecosystems. Most of these capabilities are underused. Pollock and Rolnick emphasize the need for better data-sharing, algorithmic transparency and ethical safeguards to avoid reinforcing scientific and geographic inequities.

Learn Million Dollar AI Strategies & Tools in this 3 hour AI Workshop. Join now for $0

Everyone tells you to learn AI but no one tells you where.

We have partnered with Growthschool to bring this ChatGPT & AI Workshop to our readers. It is usually $199, but free for you because you are our loyal readers 🎁

Register here for free – valid for next 24 hours only!

This workshop has been taken by 1 Million people across the globe, who have been able to:

Build businesses that make $10,000 by just using AI tools

Make quick & smarter decisions using AI-led data insights

Write emails, content & more in seconds using AI

Solve complex problems, research 10x faster & save 16 hours every week

You’ll wish you knew about this FREE AI Training sooner (Btw, it’s rated at 9.8/10 ⭐)

Save your seat for $0 now! (Valid for 100 people only)

🧱 LegoGPT brings endless designs to the forefront

Source: arXiv

A new generative-AI model called LegoGPT can create LEGO structures that you can build at home from natural language prompts. It goes beyond generating creative designs by making sure each structure is physically stable through physics-aware modeling.

Trained on a dataset of over 47,000 human-designed LEGO builds, LegoGPT produces realistic constructions that pass stability checks before being rendered. Unlike previous models that generate visually appealing but unstable results, LegoGPT prioritizes functional, buildable outputs.

How it works:

Prompt-to-Design Generation: A transformer-based architecture generates 3D LEGO models from natural language descriptions.

Layer-by-Layer Placement: It builds models one layer at a time, mirroring how humans construct physical LEGO sets.

Stability Simulation: Generated structures are run through a physics simulator that tests for mechanical stability; unstable outputs are discarded (see the sketch after this list).

Token-Level Brick Planning: Each “token” in the model corresponds to a brick’s position, color, and type, ensuring fine-grained control and coherence.
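
One way to picture the pipeline is as a propose-then-verify loop: sample bricks layer by layer, run the candidate through the stability check, and keep only builds that pass. The Python sketch below is a toy illustration of that loop — sample_next_brick and is_stable are invented stand-ins, not the paper's model or simulator:

```python
# Toy propose-then-verify loop: sample bricks layer by layer, then keep only
# builds that pass a stability check. sample_next_brick and is_stable are
# invented stand-ins, not the paper's model or simulator.
import random
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Brick:
    x: int
    y: int
    layer: int
    kind: str   # e.g. "2x4"
    color: str

def sample_next_brick(partial_build: list[Brick], layer: int) -> Brick:
    """Stand-in for the transformer sampling one brick 'token' at a time."""
    return Brick(x=random.randint(0, 9), y=random.randint(0, 9),
                 layer=layer, kind="2x4", color="red")

def is_stable(build: list[Brick]) -> bool:
    """Stand-in for the physics simulator's mechanical stability check."""
    return len(build) > 0  # placeholder: a real check simulates forces on each brick

def generate_build(num_layers: int = 5, bricks_per_layer: int = 4,
                   max_tries: int = 10) -> Optional[list[Brick]]:
    """Propose whole builds and return the first one that passes the check."""
    for _ in range(max_tries):
        build: list[Brick] = []
        for layer in range(num_layers):
            for _ in range(bricks_per_layer):
                build.append(sample_next_brick(build, layer))
        if is_stable(build):
            return build   # keep only physically buildable outputs
    return None            # every candidate failed the stability check
```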

Why it matters:
Models and assistants are starting to crop up in CAD software like Autodesk’s Fusion, Zoo and many others. LegoGPT is an early example of physics-aware AI design. Rather than relying on rules of thumb or human intervention, it embeds stability checks into the generation loop itself. If software can learn the laws of motion, tomorrow’s design tools won’t just imagine what’s possible, they’ll help get those designs into your hands.

ACI.dev: The Only MCP Server Your AI Agents Need

ACI.dev’s Unified MCP Server turns every API your AI agents will need into two simple MCP tools on one server—search and execute. One connection unlocks 600+ integrations.

Plug & Play – Framework-agnostic, works with any architecture.

Secure – Tenant isolation for your agent’s end users.

Smart – Dynamic intent search finds the right function for each task.

Reliable – Sub-API permission boundaries to improve agent reliability.

Fully Open Source – Backend, dev portal, library, MCP server implementation.

Skip months of infra plumbing; ship the agent features that matter.

Try it and contribute—drop us a ⭐ on GitHub.

Join us on Discord

Saudi crown prince launches new company to develop AI technologies.

Abu Dhabi’s Mubadala pours $10B into TWG Global.

Why an AI data center on the Prairie is sitting empty.

Argentina hopes to attract Big Tech with nuclear-powered AI data centers.

👨🏻‍🔬 OpenAI - Enterprise Security Engineer

💭 Captions - Software Engineer, Full-Stack

🐼 Sanctuary - Executive Assistant to the CEO

Granola: A great notetaker I use just released an iOS version

Vapi: The place to build AI voice agents

Runway ML: A now classic that I think does a great job of video gen… maybe we start exploring more mediums for “AI or Not”??

🦜 Klarna and Duolingo learn the limits of going AI first

Source: ChatGPT 4o

Klarna’s gamble on replacing customer support staff with AI is being walked back. CEO Sebastian Siemiatkowski said the Stockholm fintech will start hiring again so customers can “always have the option to speak to a live representative.” He did not give head-count targets but told Bloomberg Klarna will recruit students and rural talent to rebuild its support ranks after boasting last year that AI handled the work of 700 agents.

Duolingo, which shifted to an AI-first model last month, is facing a social media revolt rather than a staffing crunch. TikTok users have flooded the language app’s comment section with complaints such as “Mama, may I have real people running the company” after jumping on the “Mama, may I have a cookie” trend. Critics accuse the firm of firing contractors to pad margins while undermining education quality.

A Duolingo spokesperson said the Pittsburgh company is not replacing learning experts, calling AI “a tool they use to make Duolingo better.” Shares remain near record highs after the company raised its 2025 sales forecast, but the backlash underscores consumer unease. A World Economic Forum survey found 40% of employers plan to cut jobs as automation spreads, while nearly half of Gen Z job seekers fear AI is devaluing their degrees.

The big picture: Klarna’s retreat and Duolingo’s blowback show that moving too quickly to an AI-first model can bruise customer trust and brand image, even when the technology promises lower costs.

🔮 Google enters the competition for equity in AI startups

Source: ChatGPT 4o

Google unveiled the AI Futures Fund on May 12, an always-open program that writes equity checks (size undisclosed) and gives startups early access to DeepMind’s latest large models, plus Google Cloud credits and direct collaboration with Google researchers and designers. There are no cohorts or deadlines; the team invests whenever a company fits its thesis. Here’s what startups in the fund get: early access to Gemini, Imagen and Veo; embedded Google Labs/DeepMind staff; six-figure Cloud credits; stage-agnostic equity.

Google Labs executive Jonathan Silber is listed as “Co-Founder and Director” and so far, 12 startups have been announced through the program. The full list can be found here. A few highlights:

Toonsutra – an Indian webtoon and comic platform using Gemini to auto-translate across multiple Indian languages.

Viggle – an AI-powered meme generator leveraging Gemini, Imagen and Veo to experiment with new video formats.

Rooms – a collaborative 3D space creation platform that’s prototyping richer avatar and content experiences using Gemini APIs.

Google has tried this approach before, but not with full model access. In 2017 Google launched Gradient Ventures, an in-house VC fund that took minority stakes and offered AI mentorship, but it didn’t bundle DeepMind models or cloud credits. The new fund fuses Gradient’s investing with an accelerator-style services stack, giving Google tighter product alignment with each company.

There’s a growing number of companies spinning up investment funds targeting these AI startups. A few examples:

OpenAI – Startup Fund: $175M evergreen VC vehicle (plus SPVs). Sweeteners: equity + priority GPT-4/API access.

Anthropic – Anthology Fund (with Menlo Ventures): $100M, Menlo-financed. Sweeteners: equity, $25K Claude credits, safety mentorship.

Microsoft – Founders Hub: non-equity; up to $150K Azure + $2.5K GPT-4 credits. Sweeteners: 1-on-1 Azure AI advisers.

Amazon AWS – Generative AI Accelerator: 10-week, non-equity; up to $300K AWS credits. Sweeteners: mentors, GTM support with Bedrock & Trainium.

Meta – AI Startup Program (Station F): 5-startup European accelerator. Sweeteners: FAIR mentoring, free Scaleway compute, open-source Llama stack.

Each firm also makes ad-hoc bets (e.g., OpenAI in Harvey, Figure, Anysphere and many others).

The startup credit war is intensifying. AWS has issued >$6B in credits over a decade, while Microsoft pushes GPT-4 via Azure, and Google just earmarked an unspecified – but presumably large – sum for AI Futures Fund. The strategy is identical: subsidize compute today to secure long-term platform rents.

Go deeper: Equity + infra ties could leave tomorrow’s unicorns dependent on a handful of cloud providers. The U.S. FTC is already probing whether free credits create an unfair moat in AI infrastructure. Without a disclosed size or check-range, it’s unclear how many startups Google can realistically back. Google is also a major investor in Anthropic. How will conflicts be managed when both arms chase the same deal?

Big Tech has traded acquisition sprees for “capital plus models plus compute” bundles. The prize isn’t just financial return; it’s ecosystem capture. Whoever supplies the brains, GPUs and distribution rails for new AI companies will skim value from every downstream success. Google’s AI Futures Fund is a response to Microsoft-OpenAI’s head start – blending its world-class research bench with a Google-sized checkbook. If founders flock to Big-Model-as-a-Service deals, the next wave of AI unicorns may look less independent than the last: brilliant, well-funded, yet forever plugged into the cloud that raised them.

And the money keeps coming. Sovereign-wealth giants from Riyadh, Abu Dhabi and Singapore, plus multibillion-dollar VC megafunds, are chasing the same few generative-AI bets. With hundreds of billions in “dry powder” hunting unicorns, capital is plentiful – but differentiated access to compute and distribution is scarce. That imbalance only amplifies the leverage of platforms like Google.

Which image is real?
⬆️ Image 1
⬇️ Image 2

🤔 Your thought process:
Selected Image 1 (Left):
“Always look at the hands. The monkey in the [other] image has an extra finger on his lower hand. ”

“In [the other] image the monkey's right arm seemed to be growing out of his rib cage!”

Selected Image 2 (Right):
“The monkey [in the other image] doesn't look like it is really taking a bit of the banana and didn't like it was truly in the environment it was shown in.”

“[The other image] is almost completely in focus throughout the frame which would not be the case in a photographic image with depth of field challenges ”

Would you like to see more AI or Not mediums?
Yes
Video
Voice
Text
Other (share more)

Thank you 🙂
Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

May 21, 2025
Jakob Grønberg

⚙️ A description, prediction and prescription: AI as a normal technology



May 21, 2025
Jakob Grønberg

⚙️ Your AI strategy needs industry-specific orchestration



May 20, 2025
Jakob Grønberg

⚙️ Report: How AI will shape the future of energy
