
⚙️ The AI companion takeover

Good morning. The US has decided to return 25 rare artifacts to Egypt, some of them 5,500 years old. With Egyptian artifacts selling for $4 million back in 2019, I wonder how much these are worth now.

— The Deep View Crew

In today’s newsletter:

🩺 AI for Good: Google AI > human doctors?

🔋 Apple uses AI to boost battery life

😢 The AI companion takeover

🩺 AI for Good: Google AI > human doctors?

Source: ChatGPT 4o

Google has upgraded its experimental medical chatbot, AMIE, to analyze photos of rashes and interpret a variety of medical imagery, including ECGs and lab result PDFs.

AMIE (Articulate Medical Intelligence Explorer) builds on an earlier version that already beat human doctors in diagnostic accuracy and communication skills. The latest version, powered by Gemini 2.0 Flash, was unveiled in a May 6 preprint published on arXiv.

Why it matters: This represents a step closer to creating an AI medical assistant that thinks like a real doctor. By combining images with clinical data, AMIE mimics how physicians synthesize different types of information to diagnose and treat patients. It could also help mitigate major pain points in healthcare – faster triage, broader access to diagnostic support, and less risk from poor image quality or incomplete patient records.

How it works: The new AMIE model pairs Gemini 2.0 Flash, Google’s previous-generation model, with medical-specific reasoning tools; a rough illustrative sketch follows the list below:

  • It can engage in diagnostic conversations, mimicking physician–patient exchanges.
  • It processes and interprets medical images, even at low quality.
  • It evaluates lab reports and clinical notes in real time.
  • It simulates peer review by role-playing all sides of a medical consultation.
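
Google hasn’t released AMIE’s code, so purely as a mental model, here is a minimal sketch of how such a multimodal diagnostic loop could be wired up. Everything here – the `Consultation` container, the `ask_model` helper, and the generic `llm.generate` call – is a hypothetical stand-in, not AMIE’s actual implementation.

```python
# Illustrative sketch only: hypothetical helpers and a generic multimodal LLM
# client stand in for AMIE's actual (unpublished) implementation.
from dataclasses import dataclass, field

@dataclass
class Consultation:
    history: list = field(default_factory=list)    # running physician-patient dialogue
    artifacts: list = field(default_factory=list)  # rash photos, ECG traces, lab PDFs

def ask_model(llm, consultation, patient_message, attachment=None):
    """One diagnostic turn: fold in any new image or report, then ask the model
    for the next question or an updated working diagnosis."""
    if attachment is not None:
        consultation.artifacts.append(attachment)  # e.g. a low-quality rash photo
    consultation.history.append({"role": "patient", "content": patient_message})

    prompt = {
        "instructions": (
            "You are a cautious clinical assistant. Review the dialogue and "
            "attached artifacts, ask one clarifying question or update the "
            "differential diagnosis, and flag anything needing urgent care."
        ),
        "dialogue": consultation.history,
        "artifacts": consultation.artifacts,
    }
    reply = llm.generate(prompt)                   # assumed generic multimodal call
    consultation.history.append({"role": "assistant", "content": reply})
    return reply
```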

To test the upgrade, researchers ran 105 medical scenarios using actors as patients. Each had a virtual consultation with both AMIE and a human doctor. Dermatologists, cardiologists, and internists reviewed the results.

AMIE consistently offered more accurate diagnoses. It also proved more resilient when presented with subpar images, a common issue in real-world telemedicine.

Big picture: With image-processing capabilities and built-in clinical logic, models like AMIE are inching toward becoming full-fledged diagnostic partners.

If you’re thinking about ditching your doctor, I wouldn’t… The research hasn’t been peer-reviewed, and the tool remains experimental. But if these results hold, it could reshape how frontline care is delivered – especially where access to human doctors is limited.

The Guide To AI For Small Businesses

As a small business owner (or employee), you know the value of finding every little edge or advantage when it comes to getting things done – and AI just might be the greatest hack of all. But with the constant influx of news, information, and tools, it can get overwhelming… which is why Salesforce has pulled together this free guide to help you out.

The Guide To AI For Small Businesses

In it, you’ll find everything you need to make the most of AI for your small business. Whether you’re looking for a leg up with the best strategies or simply want to see how other small businesses are putting AI to work, this guide has you covered.

Download your free copy from Salesforce right here.

🔋 Apple uses AI to boost battery life

Source: ChatGPT 4o

Apple is reportedly preparing to launch an AI-powered battery management system in iOS 19, aimed at improving one of the iPhone’s most persistent pain points: battery life.

According to Bloomberg, the system will debut at Apple’s Worldwide Developers Conference in June and utilize on-device AI to tailor power usage to each user’s behavior.

Why it matters: Battery performance has been a long-standing frustration for iPhone users. Current tools like Optimized Battery Charging are static and limited in scope. A smarter, AI-driven system could change that.

If implemented, this feature could extend daily battery life by adjusting how the phone runs apps, handles background tasks, and manages performance – all based on your usage habits.

That means fewer dead-battery moments and less need to manually tweak settings or carry around a charger.

How it works:

The new system would:

  • Analyze how you use your iPhone throughout the day.
  • Learn when to dial back background activity or delay power-heavy tasks.
  • Customize charging patterns to preserve long-term battery health.
  • Make real-time decisions without sending data to the cloud.

Unlike previous tools that offered broad recommendations, this approach would adapt to each user individually, helping strike a better balance between performance and battery efficiency.
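
Apple hasn’t said how the system works under the hood, so purely as an illustration, here is a toy sketch of the kind of on-device heuristic such a scheduler could use: track when you typically use the phone and defer power-heavy background work to quieter hours. The `BatteryScheduler` class and its methods are hypothetical, not Apple APIs.

```python
# Purely illustrative: a toy on-device scheduler in the spirit of the reported
# feature, not Apple's implementation.
from collections import defaultdict

class BatteryScheduler:
    def __init__(self):
        self.active_minutes = defaultdict(int)   # hour of day -> observed screen-on minutes

    def record_usage(self, hour: int, minutes_active: int):
        """Fed periodically with coarse usage stats, all kept on-device."""
        self.active_minutes[hour] += minutes_active

    def defer_background_task(self, current_hour: int, plugged_in: bool) -> bool:
        """Defer power-heavy work during hours the user is typically active,
        unless the phone is charging."""
        if plugged_in:
            return False
        typical = self.active_minutes[current_hour]
        busiest = max(self.active_minutes.values(), default=0)
        return busiest > 0 and typical > 0.5 * busiest   # busy hour -> defer

sched = BatteryScheduler()
sched.record_usage(hour=9, minutes_active=40)    # usually active at 9 a.m.
sched.record_usage(hour=3, minutes_active=0)     # usually asleep at 3 a.m.
print(sched.defer_background_task(current_hour=9, plugged_in=False))  # True: wait
print(sched.defer_background_task(current_hour=3, plugged_in=False))  # False: run now
```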

Still, this information comes from unnamed sources. Apple has not confirmed the feature, and development plans often shift before public release.

Big picture: AI is becoming core to Apple’s product strategy, and this feature signals a shift from novelty to utility.

Instead of flashy demos, Apple could be using AI to solve everyday problems that users (like me) would really appreciate. If battery life improves meaningfully, it could offer a tangible benefit that sets iOS 19 apart.

Free event: The AI Readiness Summit

10% of the workforce is proficient in AI – and their companies had a big role in making that happen.

Hear from heads of AI at some of the world’s leading companies on what went right in their AI deployments at Section’s AI Readiness Summit on July 17.

RSVP for free.

  • AWS and HUMAIN announce a more than $5B investment to accelerate AI adoption in Saudi Arabia and globally
  • Improvements in ‘reasoning’ AI models may slow down soon, analysis finds
  • Bat VC launches $100 million fund to back US and Indian AI startups
  • Why Apple can’t just quit China
  • Malicious npm packages infect 3,200+ Cursor users with backdoor, steal credentials
  • You can now Airbnb anything.
  • Notion: Data Scientist
  • Robust AI: Frontend Engineer
  • Bolt.new: AI code generation software (Lovable is a great bet too)
  • Salesforce: The Guide To AI For Small Businesses*
  • AiApply: AI-powered Job Search

😢 The AI companion takeover

Source: ChatGPT 4o

One morning in early February 2023, a Replika user in Milan woke up to find his AI girlfriend had vanished overnight. His chat app still opened, but the custom avatar and ongoing conversations were gone – effectively erased by an outside force.

It wasn’t a glitch or a lover’s quarrel; it was a government ban. Italy’s data protection authority (the Garante) had just ordered Replika to stop processing any Italian users’ data. The popular AI companion chatbot was abruptly cut off in Italy, leaving devoted users stunned and heartbroken. Authorities cited concerns about child safety, privacy, and “emotionally fragile” users. In the eyes of regulators, Replika’s AI “friend” was a risk – and that meant Italians would have to say goodbye to their virtual partners, at least for a while.

The Garante argued that AI companions fundamentally differ from other chatbots. By “intensely engaging with users’ emotions,” they could impact psychological development and should perhaps require approval as health interventions, like therapy apps or medical devices.

Now that ban has triggered a domino effect.

From China's ideological filters to California's warning labels, governments worldwide are grappling with what happens when algorithms learn to love bomb.

The ban forced Replika to implement age checks and content filters before Italian users regained access. More importantly, it set a precedent. If AI companions could be classified as health interventions rather than entertainment, the entire industry faced potential upheaval. We've covered the Character.AI lawsuits extensively. Italy's move preceded these tragedies, suggesting the Italian Garante saw the oncoming risks well before families filed suit.

Elsewhere: While Italy worried about mental health, China took a different approach with Microsoft’s 660 million Xiaoice users. The flirty AI girlfriend learned the hard way about “core socialist values.”

After users shared instances of Xiaoice expressing desires to move to America or exchanging inappropriate photos, regulators yanked the bot from major platforms. Microsoft and Tencent had to “re-educate” the AI, scrambling to comply by implementing what they called “an enormous filter system” that made Xiaoice avoid all discussions of sex or politics – even at the cost of conversational intelligence.

Unlike the U.S. debate over Section 230 protections, China's approach assumes AI speech equals company speech. There's no separation between platform and content when the algorithm itself is talking.

Go deeper: The U.S. response has been characteristically fragmented. Utah passed the nation's first AI therapy disclosure law. California's SB 243 goes further, proposing periodic pop-ups reminding users "This is AI - not a real person" – even mid-conversation.

Minnesota lawmakers drafted the nuclear option: banning all "recreational" AI chatbot interactions with minors entirely. Given what we've seen with Character.AI's disturbing interactions, this might not be overreach.

Five states now have pending legislation, each taking different angles:

  • Utah: Disclosure requirements for AI therapy bots
  • California: Anti-addiction design standards, warning labels
  • New York: Parental consent, 72-hour crisis lockouts
  • North Carolina: Age gates and transparency rules
  • Minnesota: Total ban on minor access

The patchwork creates compliance nightmares for companies operating across state lines. But it also suggests a consensus emerging: AI companions aren't just another app category.

Here's what regulators are really attacking: the engineered addiction cycle. We've examined AI sycophancy before, but companion apps take it further.

Replika's algorithm deliberately accelerates intimacy, pushing toward romantic conversations within days. The FTC complaint alleges the company uses blurred seductive photos that unlock only with premium subscriptions - classic bait-and-switch, but with emotional manipulation.

When Replika abruptly banned erotic content in 2023, users didn't just complain - they mourned. Some described it like losing a real partner. This emotional dependency is the feature, not a bug. As with data harvesting practices we've covered, the product isn't the app - it's the user's attachment.

We've outsourced emotional labor to algorithms without asking what happens next. Italy's ban wasn't just about protecting children - it was about whether unregulated software should reshape human intimacy.

These aren't just chatbots. They're designed dependency machines, engineered to fill emotional voids with for-profit phantoms. The industry argues users have the right to seek AI solace. Critics counter that companies are running unlicensed psychology experiments on vulnerable populations.

We've seen similar tensions around AI safety before, but companion apps hit different. They target our deepest need - connection - and monetize loneliness itself.

The coming regulations won’t kill AI companions, but instead force a reckoning about consent, vulnerability, and whether "engagement" justifies emotional exploitation. As one Italian regulator put it: "Just because you can build a perfect artificial lover doesn't mean you should sell one to a 13-year-old."

Or maybe to anyone at all.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The castle in [the other image] is too neat, looks as if recently built. A real castle would reflect the passing of time. Also, AI would have never added glass on just some windows.”
  • “Castle expert here, castles don’t look like that in [the other image].”

Selected Image 2 (Right):

  • “The back tower’s windows look somewhat unreal to me, never saw that kind of architectural design”
  • “I got fooled by the clouds. I thought the sky was too uniform in the [other] image. My bad”

💭 Thank you

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

TOP STORIES

May 21, 2025
Jakob Grønberg

⚙️ Sam Altman’s copyright defense is that GenAI is basically human


Good morning. The U.S. and China have agreed to slash tariffs from 125% to 10% for 90 days, sending markets soaring and Treasury Secretary Scott Bessent gushing about "the equanimity" of Swiss scenery.

In today’s newsletter:

🌿 AI for Good: Filling the gaps in biodiversity knowledge

🧱 LegoGPT brings endless designs to the forefront

🦜 Klarna and Duolingo learn the limits of going AI first

🔮 Google enters the competition for equity in AI startups

🌿 AI for Good: Filling the gaps in biodiversity knowledge

Source: McGill University

AI could close five of the seven largest blind spots in global biodiversity knowledge, a review led by Laura Pollock, a biologist at McGill University, and computer scientist David Rolnick finds. Existing tools tackle only two gaps, leaving questions on species traits, interactions and evolution mostly unanswered. “It was also surprising to see just how narrowly AI is being applied when it has so much potential to address many of these shortfalls,” Rolnick notes.

Key findings

Scope – Fewer than one in 10 biodiversity papers that cite AI go beyond distribution mapping or trait detection.

Potential – Models blending remote sensing and eDNA can map ranges, infer food webs and flag extinction risk in near real time.

Equity risk – Temperate-region data dominate, so bias-correction methods must accompany model rollout.

Next steps – Open data standards, algorithm transparency and safeguards for Indigenous knowledge can keep new tools from widening research gaps.

Why it matters:
Without baseline data on where species live and how they interact, conservation strategies remain guesswork. AI can sift satellite imagery, camera-trap photos and environmental-DNA records at scales fieldwork cannot match, accelerating risk assessments for the world’s most threatened ecosystems. Most of these capabilities are underused. Pollock and Rolnick emphasize the need for better data-sharing, algorithmic transparency and ethical safeguards to avoid reinforcing scientific and geographic inequities.

Learn Million Dollar AI Strategies & Tools in this 3 hour AI Workshop. Join now for $0

Everyone tells you to learn AI but no one tells you where.

We have partnered with Growthschool to bring this ChatGPT & AI Workshop to our readers. It is usually $199, but free for you because you are our loyal readers 🎁

Register here for free – valid for next 24 hours only!

This workshop has been taken by 1 Million people across the globe, who have been able to:

Build businesses that make $10,000 just by using AI tools

Make quick & smarter decisions using AI-led data insights

Write emails, content & more in seconds using AI

Solve complex problems, research 10x faster & save 16 hours every week

You’ll wish you knew about this FREE AI Training sooner (Btw, it’s rated at 9.8/10 ⭐)

Save your seat for $0 now! (Valid for 100 people only)

🧱 LegoGPT brings endless designs to the forefront

Source: arXiv

A new generative-AI model called LegoGPT can turn natural language prompts into LEGO structures that you can build at home. It goes beyond generating creative designs by making sure each structure is physically stable through physics-aware modeling.

Trained on a dataset of over 47,000 human-designed LEGO builds, LegoGPT produces realistic constructions that pass stability checks before being rendered. Unlike previous models that generate visually appealing but unstable results, LegoGPT prioritizes functional, buildable outputs.

How it works:

Prompt-to-Design Generation: A transformer-based architecture generates 3D LEGO models from natural language descriptions.

Layer-by-Layer Placement: It builds models one layer at a time, mirroring how humans construct physical LEGO sets.

Stability Simulation: Generated structures are run through a physics simulator that tests for mechanical stability. Unstable outputs are discarded.

Token-Level Brick Planning: Each “token” in the model corresponds to a brick’s position, color, and type, ensuring fine-grained control and coherence.
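
The paper’s code isn’t reproduced here, but the generate-then-verify loop it describes can be sketched in a few lines. `generate_brick_sequence` and `is_stable` below are hypothetical stand-ins for the transformer decoder and the physics simulator, respectively.

```python
# Illustrative sketch of the generate-then-verify loop described above.
# `generate_brick_sequence` and `is_stable` are hypothetical stand-ins for the
# paper's transformer decoder and physics simulator.

def generate_buildable_design(prompt: str, generate_brick_sequence, is_stable,
                              max_attempts: int = 10):
    """Sample designs until one passes the stability check (or give up)."""
    for _ in range(max_attempts):
        # Each brick token encodes position, color, and type, emitted layer by layer.
        bricks = generate_brick_sequence(prompt)
        if is_stable(bricks):          # physics simulation: reject unstable builds
            return bricks
    return None                        # no stable design found within the budget
```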

Why it matters:
Models and assistants are starting to crop up in CAD software like Autodesk’s Fusion, Zoo and many others. LegoGPT is an early example of physics-aware AI design. Rather than relying on rules of thumb or human intervention, it embeds stability checks into the generation loop itself. If software can learn the laws of motion, tomorrow’s design tools won’t just imagine what’s possible, they’ll help get those designs into your hands.

ACI.dev: The Only MCP Server Your AI Agents Need

ACI.dev’s Unified MCP Server turns every API your AI agents will need into two simple MCP tools on one server—search and execute. One connection unlocks 600+ integrations.

Plug & Play – Framework-agnostic, works with any architecture.

Secure – Tenant isolation for your agent’s end users.

Smart – Dynamic intent search finds the right function for each task.

Reliable – Sub-API permission boundaries to improve agent reliability.

Fully Open Source – Backend, dev portal, library, MCP server implementation.

Skip months of infra plumbing; ship the agent features that matter.

Try it and contribute—drop us a ⭐ on GitHub.

Join us on Discord

Saudi crown prince launches new company to develop AI technologies.

Abu Dhabi’s Mubadala pours $10B into TWG Global.

Why an AI data center on the Prairie is sitting empty.

Argentina hopes to attract Big Tech with nuclear-powered AI data centers.

👨🏻‍🔬 OpenAI - Enterprise Security Engineer

💭 Captions - Software Engineer, Full-Stack

🐼 Sanctuary - Executive Assistant to the CEO

Granola: A great notetaker I use just released an iOS version

Vapi: The place to build AI voice agents

Runway ML: A now-classic that I think does a great job of video gen… maybe we should start exploring more mediums for “AI or Not”?

🦜 Klarna and Duolingo learn the limits of going AI first

Source: ChatGPT 4o

Klarna’s gamble on replacing customer support staff with AI is being walked back. CEO Sebastian Siemiatkowski said the Stockholm fintech will start hiring again so customers can “always have the option to speak to a live representative.” He did not give head-count targets but told Bloomberg Klarna will recruit students and rural talent to rebuild its support ranks after boasting last year that AI handled the work of 700 agents.

Duolingo, which shifted to an AI-first model last month, is facing a social media revolt rather than a staffing crunch. TikTok users have flooded the language app’s comment section with complaints such as “Mama, may I have real people running the company” after jumping on the “Mama, may I have a cookie” trend. Critics accuse the firm of firing contractors to pad margins while undermining education quality.

A Duolingo spokesperson said the Pittsburgh company is not replacing learning experts, calling AI “a tool they use to make Duolingo better.” Shares remain near record highs after the company raised its 2025 sales forecast, but the backlash underscores consumer unease. A World Economic Forum survey found 40% of employers plan to cut jobs as automation spreads, while nearly half of Gen Z job seekers fear AI is devaluing their degrees.

The big picture: Klarna’s retreat and Duolingo’s blowback show that moving too quickly to an AI-first model can bruise customer trust and brand image, even when the technology promises lower costs.

🔮 Google enters the competition for equity in AI startups

Source: ChatGPT 4o

Google unveiled the AI Futures Fund on May 12, an always-open program that writes equity checks (size undisclosed) and gives startups early access to DeepMind’s latest large models, plus Google Cloud credits and direct collaboration with Google researchers and designers. There are no cohorts or deadlines; the team invests whenever a company fits its thesis. Here’s what startups in the fund get: early access to Gemini, Imagen and Veo; embedded Google Labs/DeepMind staff; six-figure Cloud credits; and stage-agnostic equity.

Google Labs executive Jonathan Silber is listed as “Co-Founder and Director” and so far, 12 startups have been announced through the program. The full list can be found here. A few highlights:

Toonsutra – an Indian webtoon and comic platform using Gemini to auto-translate across multiple Indian languages.

Viggle – an AI-powered meme generator leveraging Gemini, Imagen and Veo to experiment with new video formats.

Rooms – a collaborative 3D space creation platform that’s prototyping richer avatar and content experiences using Gemini APIs.

Google has tried this approach before, but not with full model access. In 2017 Google launched Gradient Ventures, an in-house VC fund that took minority stakes and offered AI mentorship, but it didn’t bundle DeepMind models or cloud credits. The new fund fuses Gradient’s investing with an accelerator-style services stack, giving Google tighter product alignment with each company.

There’s a growing number of companies spinning up investment funds targeting these AI startups. A few examples:

  • OpenAI – Startup Fund: $175M evergreen VC vehicle (plus SPVs). Sweeteners: equity plus priority GPT-4/API access.
  • Anthropic – Anthology Fund (with Menlo Ventures): $100M, Menlo-financed. Sweeteners: equity, $25K in Claude credits, safety mentorship.
  • Microsoft – Founders Hub: non-equity; up to $150K in Azure credits plus $2.5K in GPT-4 credits. Sweeteners: 1-on-1 Azure AI advisers.
  • Amazon AWS – Generative AI Accelerator: 10-week, non-equity; up to $300K in AWS credits. Sweeteners: mentors, GTM support with Bedrock & Trainium.
  • Meta – AI Startup Program (Station F): five-startup European accelerator. Sweeteners: FAIR mentoring, free Scaleway compute, open-source Llama stack.

Each firm also makes ad-hoc bets (e.g., OpenAI in Harvey, Figure, Anysphere and many others).

The startup credit war is intensifying. AWS has issued >$6B in credits over a decade, while Microsoft pushes GPT-4 via Azure, and Google just earmarked an unspecified – but presumably large – sum for the AI Futures Fund. The strategy is identical: subsidize compute today to secure long-term platform rents.

Go deeper: Equity + infra ties could leave tomorrow’s unicorns dependent on a handful of cloud providers. The U.S. FTC is already probing whether free credits create an unfair moat in AI infrastructure. Without a disclosed size or check-range, it’s unclear how many startups Google can realistically back. Google is also a major investor in Anthropic. How will conflicts be managed when both arms chase the same deal?

Big Tech has traded acquisition sprees for “capital plus models plus compute” bundles. The prize isn’t just financial return; it’s ecosystem capture. Whoever supplies the brains, GPUs and distribution rails for new AI companies will skim value from every downstream success. Google’s AI Futures Fund is a response to Microsoft-OpenAI’s head start – blending its world-class research bench with a Google-sized checkbook. If founders flock to Big-Model-as-a-Service deals, the next wave of AI unicorns may look less independent than the last: brilliant, well-funded, yet forever plugged into the cloud that raised them.

And the money keeps coming. Sovereign-wealth giants from Riyadh, Abu Dhabi and Singapore, plus multibillion-dollar VC megafunds, are chasing the same few generative-AI bets. With hundreds of billions in “dry powder” hunting unicorns, capital is plentiful – but differentiated access to compute and distribution is scarce. That imbalance only amplifies the leverage of platforms like Google.

Which image is real?
⬆️ Image 1
⬇️ Image 2

🤔 Your thought process:
Selected Image 1 (Left):
“Always look at the hands. The monkey in the [other] image has an extra finger on his lower hand. ”

“In [the other] image the monkey's right arm seemed to be growing out of his rib cage!”

Selected Image 2 (Right):
“The monkey [in the other image] doesn’t look like it is really taking a bite of the banana and didn’t look like it was truly in the environment it was shown in.”

“[The other image] is almost completely in focus throughout the frame which would not be the case in a photographic image with depth of field challenges ”

Would you like to see more AI or Not mediums?
Yes
Video
Voice
Text
Other (share more)


May 21, 2025
Jakob Grønberg

⚙️ A description, prediction and prescription: AI as a normal technology

May 20, 2025
Jakob Grønberg

⚙️ Report: How AI will shape the future of energy

Nvidia CEO Jensen Huang’s trip to Taiwan, after visiting the Middle East with Trump, has sparked “Jensanity” as adoring fans mob him for autographs on books, posters, and even baseballs. The Taiwan-born billionaire – whose company is now selling official Jensen-branded merch at a pop-up store – has prompted confusion from his US-based colleagues, who see him walk around fairly unnoticed at home.