NEWSLETTER ARCHIVES
⚙️ Microsoft is building the "open agentic web"
Good morning. Swedish buy-now-pay-later giant Klarna is now making nearly $1 million in revenue per employee (up from $575K) after replacing 700 customer service workers with AI chatbots. The efficiency push comes just as the company filed for its much-anticipated US IPO... only to promptly postpone it after Washington’s tariff announcement sent markets into a tailspin.

⚙️ OpenAI introduces Codex
Good morning. Nvidia CEO Jensen Huang's trip to Taiwan, after visiting the Middle East with Trump, has sparked "Jensanity" as adoring fans mob him for autographs on books, posters, and even baseballs. The Taiwan-born billionaire — whose company is now selling official Jensen-branded merch at a pop-up store — prompted confusion from his US-based colleagues (where he walks around fairly unnoticed).

⚙️ Will AI double your lifespan?
Good morning and Happy Friday! Karen Hao's explosive Atlantic excerpt reveals the chaos behind Sam Altman's brief 2023 ouster, including how OpenAI's chief scientist once discussed building "bunkers" before releasing AGI. The $300 billion company that began as an idealistic nonprofit is now the centerpiece of an "empire of AI".

⚙️ Robots can now feel
Good morning. Silicon Valley is queuing up Theranos II as Billy Evans – Elizabeth Holmes’ husband – works the Sand Hill circuit for $50 million to fund Haemanthus, a laser-powered blood-tester that pricks pets today and people tomorrow. Investors, kindly place your cash (and skepticism) in the centrifuge before proceeding.
⚙️ Does AI have a role in education?
Good morning. Earnings report season is upon us. CoreWeave smashed Q1 earnings with $982M in revenue (Wall Street expected $853M), sending shares up 11% after hours before they cooled off on news that the company plans to invest up to $23B in AI data centers.
— The Deep View Crew
In today’s newsletter:
- 🔬 AI for Good: AI is speeding up drug development
- ✈️ Air Force opens AI Center of Excellence
- 🧠 Does AI have a place in education?
🔬 AI for Good: AI is speeding up drug development
Source: ChatGPT 4o
AI is helping pharmaceutical researchers find new treatments faster and cheaper by surfacing promising compounds buried deep in massive datasets. Dotmatics, an R&D software company recently acquired by Siemens for $5.1B, is applying AI to identify potential drug candidates in a fraction of the time it used to take.
Phil Mounteney, VP of Science and Technology at Dotmatics, explains it like this: “The art of drug discovery is really finding drugs in these massive haystacks of data. AI is like a supercharged magnet that helps us sort through those haystacks and find the needle way more efficiently than before.”
Why it matters: Drug development is notoriously long and expensive. It can take up to 10 years and cost between $2 and $6 billion to bring a single drug to market. Of that, roughly six years are spent on early discovery—just identifying the compound that might work. Dotmatics is using AI to cut that phase down to as little as two years.
Faster discovery means earlier trials, quicker regulatory paths and lower costs for companies and patients alike. The company believes that AI could reduce the full research and clinical timeline by as much as 50 percent.
How it works: Dotmatics combines AI with scientific data platforms to accelerate each step of the R&D process:
- It scans huge chemical libraries to identify overlooked or repurposable compounds (a toy screening sketch follows this list).
- It models how drug candidates interact with target proteins or diseases.
- It automates lab workflows that used to take researchers weeks.
- It pulls from historic datasets to inform present-day projects.
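For a flavor of what that first bullet can look like in practice, here is a minimal, hypothetical sketch of AI-assisted virtual screening: a model trained on compounds with known activity scores an unscreened library so that chemists only review the top-ranked candidates. The features, data and model are stand-ins, not Dotmatics’ actual pipeline.

```python
# Hypothetical sketch of AI-assisted virtual screening (not Dotmatics' pipeline).
# A model trained on compounds with measured activity scores a large unscreened
# library, so researchers review only the highest-ranked candidates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in features: in practice these would be molecular fingerprints/descriptors.
known_compounds = rng.random((500, 128))    # compounds with measured activity
known_activity = rng.random(500)            # e.g. binding affinity (toy values)
library = rng.random((100_000, 128))        # large unscreened chemical library

model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(known_compounds, known_activity)

scores = model.predict(library)             # predicted activity for every compound
top = np.argsort(scores)[::-1][:50]         # shortlist the 50 most promising
print("Candidate indices to review first:", top[:10])
```

The value is in the triage: the model doesn’t replace lab assays, it just decides which needles to pull from the haystack first.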
Mounteney says AI played a key role in accelerating the COVID mRNA vaccine rollout by leveraging years of stored research and rapidly analyzing it to guide development.
Big picture: Drug discovery may be one of the most direct ways AI can improve human health. Tools like Dotmatics are not replacing scientists but instead giving them the speed and precision to find answers faster. With over $300 million in projected revenue for 2025, the company is betting that faster cures can also mean a stronger business case.
Transform DevEx with AI & Platform Engineering – Join the Developer Experience Summit!
AI and Platform Engineering aren’t just buzzwords—they’re the key to unlocking developer productivity and satisfaction.
That’s why Harness, a leader in modern software delivery, is hosting the Developer Experience Summit on May 21st—a free virtual event designed to help you transform DevEx with AI and platform engineering.
Join top industry leaders as they share insights on navigating DevEx changes, optimizing work efficiency, and leading your DevEx future.
Featured speakers include:
- Prem Dhayalan – Sr. Distinguished Engineer at Capital One
- Blake Canaday – Director of Engineering at CrowdStrike
- Hasith Kalpage – Director, Platform Engineering & Innovation Division at Cisco
- Andrew Boyagi – DevOps Advocate at Atlassian
- James Governor – Analyst & Co-founder at RedMonk
- Nathen Harvey – DORA Lead and Developer Advocate at Google Cloud
- And more!
Can’t make it live? Register now, and we’ll send you the on-demand recording after the event.
✈️ Air Force opens AI Center of Excellence
Source: ChatGPT 4o
The Air Force just gave its scattered AI projects a home address. Announced by outgoing CIO Venice Goodwine at AFCEA’s TechNet Cyber on May 7, the new Department of the Air Force “Artificial Intelligence Center of Excellence” will expand on existing partnerships with MIT, Stanford and Microsoft.
Chief Data and AI Officer Susan Davenport will run the show, expanding on the service’s MIT accelerator and Stanford AI studio that recently put test pilots through an autonomous-systems boot camp. The center’s built on Microsoft’s secure Innovation Landing Zone, already field-tested by Air Force Cyberworx for rapid prototyping. Translation: teams can push an idea from laptop to live mission network without the usual procurement drag.
Why it matters: The Air Force bankrolls dozens of AI skunkworks – from predictive-maintenance bots to dogfighting algorithms – but commanders still complain they can’t find, scale or accredit finished tools. Centralizing budgets, data and cloud access is meant to clear that bottleneck and prove AI actually moves sorties, satellites and supply chains.
How it works: The center will serve as a hub for AI collaboration, resource-sharing and deployment.
- It connects academic partners with military use cases, like autonomous aircraft and satellite operations.
- It gives contractors a clear entry point to test and scale AI tools within Air Force infrastructure.
- It consolidates current investments in AI and DevSecOps through Microsoft’s cloud systems.
- It supports applied training, such as Stanford’s 10-day course for AI test pilots.
Goodwine, delivering her valedictory, challenged contractors to ditch one-off demos and practice “extreme teaming” across land, sea, air and space. With budgets tightening, only tech that ships fleet-wide will survive.
AI Video Repurposing Tool: Turn One Video Into a Content Engine
Turn your videos into a content engine powerhouse with Goldcast’s Content Lab.
With Content Lab, you can automatically turn your long-form content (think podcasts, YouTube videos, webinars, and events) into a robust library of snackable clips, social posts, blogs, and more.
See why marketing teams at OpenAI, Hootsuite, Workday, and Intercom are using this AI video repurposing tool.
The best part?
It’s free to get started, so try Goldcast’s Content Lab for yourself right here.
- AI models are starting to talk like humans without being told how
- The Turing test might be broken and no one knows what to do next
- Harvey AI is chasing a $5 billion valuation to take over legal work
- Scientists may have actually turned lead into gold by accident
- US close to letting UAE import millions of Nvidia's AI chips
- The trade war is delaying the future of humanoid robot workers
- 🏠 Zillow: Senior Machine Learning Engineer - Decision Engine AI
- 📊 Amplitude: Staff AI Engineer, AI Tools
🧠 Does AI have a place in education?
Source: ChatGPT 4o
Billionaire philanthropist Bill Gates walked out of a Newark, N.J., classroom piloting Khanmigo and said the experience felt like “catching a glimpse of the future.” Meanwhile, Northeastern senior Ella Stapleton demanded an $8,000 refund after spotting AI-written lecture notes, even as her professor banned students from using the same technology. One scene brims with optimism, the other with outrage, and together they capture the crossroads facing U.S. education as AI moves from novelty to necessity.
On April 23, President Donald Trump signed Advancing Artificial Intelligence Education for American Youth, an executive order that mandates the "appropriate integration of AI into education" to ensure the U.S. remains a global leader in the technology revolution. Its primary goals: teach K-12 students about AI and train teachers to use AI tools to boost educational outcomes.
What’s new: A White House Task Force on AI Education will launch public-private partnerships with tech companies to develop free online AI learning resources for schools. The Education Department is directed to reallocate funding toward AI-driven educational projects, from creating teaching materials to scaling "high-impact tutoring" programs using AI tutors.
While some educators applaud the focus, questions remain about implementation. As Beth Rabbitt, CEO of an education nonprofit, noted, the dawn of generative AI is "a bit like the arrival of electricity" – it could transform the world for the better, but "if we're not careful... it could spark fires."
Many schools began experimenting with AI before any executive orders. In some districts, AI-powered tutoring and writing assistants already supplement daily lessons.
Go deeper: Public-private partnerships are driving K-12 AI integration. The AI Education Project (aiEDU), backed by AT&T, Google, OpenAI and Microsoft, offers free AI curricula to public schools. It has partnered with districts serving 1.5 million low-income students, reaching 100,000 kids with introductory AI lessons.
Some educators have replaced take-home essays with in-class writing to prevent AI copying. As of January 2025, 25 states had issued official guidance on using AI in K-12 schools; most stress protecting student data privacy, promoting equity, and ensuring AI assists rather than replaces teachers.
In higher education, students have embraced AI at remarkable rates. Estimates suggest over four-fifths of university students use some form of AI for schoolwork – from brainstorming to essay drafting.
Yes, but: Pushback is emerging, especially when educators over-rely on AI while restricting student use. The Northeastern case exemplifies this tension. Business major Ella Stapleton filed a formal complaint after discovering her professor used ChatGPT to generate class materials while the syllabus banned students from using AI. She spotted telltale signs:
- Oddly worded paragraphs
- AI-generated images with extra limbs
- An unedited AI prompt reading "expand on all areas. Be more detailed and specific."
"He's telling us not to use it and then he's using it himself," Stapleton told The New York Times. Though the university denied her refund request, the incident sparked nationwide debate about consistency in AI policies.
A recent study found college students who used ChatGPT heavily for assignments ended up procrastinating more, remembering less, and earning lower grades on average. Yet 51% of college students say using AI on assignments is cheating, while about 1 in 5 admit they've done it anyway.
In the big picture, the turbulent introduction of AI into American education may prove to be a historic turning point – perhaps even more impactful than the arrival of computers or the internet in the classroom. Yes, the past two years have seen plenty of missteps and valid concerns: cheating facilitated on an unprecedented scale, teachers and students alike occasionally abdicating effort to an automated helper and institutions caught flat-footed without policies in place.
However, it would be a profound mistake to focus only on the downsides and lose sight of the enormous opportunity at hand. I’d argue that education is not just another sector that AI will disrupt – it is possibly the most promising and crucial application of AI in the long run.
Why such optimism? Well, consider the challenge of providing truly personalized learning; human teachers, as dedicated as they are, can only do so much in a class of 25 or a lecture hall of 200. AI tutors offer the tantalizing prospect of 1-on-1 instruction for every student, anytime and on any subject – essentially democratizing the luxury of a personal tutor that was once available only to the wealthy.
The students in school today will graduate into a world pervaded by AI – in their workplaces, civic and personal lives. It is in our collective interest to ensure the next generation is AI-literate and AI-savvy.
The lesson plan for all of us is clear: proceed with care, but keep our minds – and classroom doors – open to the potential of AI.
Which video is real?
🤔 Your thought process:
Selected Image 1 (Left):
- “It has that ‘film look’ of 35mm color negative (Kodak process C-41) camera film; and the resolution is too low to be medium format (120/220) film.”
- “This was mostly a guess, but the water movement in the fake one seemed off and the extended arm too long.”
Selected Image 2 (Right):
- “The water droplets in [the other image] seemed like something AI would add for realism. Give my regards to the photographer!”
- “I thought the water spray would put the position of the camera at an impossible position between the boat and surfer.”
💭 Thank you!
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
⚙️ Will AI double your lifespan?
Good morning and Happy Friday! Karen Hao's explosive Atlantic excerpt reveals the chaos behind Sam Altman's brief 2023 ouster, including how OpenAI's chief scientist once discussed building "bunkers" before releasing AGI. The $300 billion company that began as an idealistic nonprofit is now the centerpiece of an "empire of AI".
— The Deep View Crew
In today’s newsletter:
- 🌧️ AI for Good: AI-powered local weather forecasting model
- 🤯 Another week, another Google AI drop
- 🧠 Could AI double human lifespan by 2030?
🌧️ AI for Good: AI-powered local weather forecasting model
Source: YingLong
AI is helping forecast local weather faster and more precisely with a new model called YingLong.
Built on high-resolution hourly data from the HRRR system, YingLong predicts surface-level weather like temperature, pressure, humidity and wind speed at 3-kilometer resolution (each grid cell covers roughly 3 km × 3 km). It runs significantly faster than traditional forecasting models and has shown strong accuracy in predicting wind across test regions in North America.
Dr. Jianjun Liu, a researcher on the project, explains that “traditional weather forecasting solves complex equations and takes time. YingLong skips the equations and learns directly from past data. It’s like giving the model intuition about what’s likely to happen next.”
Why it matters: Local weather forecasting requires more precision than broad national models can offer. That’s where limited area models (LAMs) come in. While most AI research has focused on global weather systems, YingLong brings that power to cities and counties in a faster, more focused way.
- Traditional weather models can take hours or days to compute.
- YingLong delivers accurate local forecasts in much less time.
- Faster forecasts help cities and agencies respond to storms and plan ahead with greater confidence.
YingLong combines high-resolution local data with boundary information from a global AI model called Pangu-Weather. It focuses its predictions on a smaller inner zone to reduce computing power and improve speed. It predicts 24 weather variables with hourly updates and performs especially well in surface wind speed forecasts. Improvements in temperature and pressure forecasts are underway using refined boundary inputs.
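To make the limited-area idea concrete, here is a minimal, hypothetical sketch of how such a forecast could be rolled out hour by hour: a learned model advances the local grid, while the boundary ring is overwritten each step with values from a global model (Pangu-Weather, in YingLong’s case). The grid sizes and the predict function are illustrative assumptions, not YingLong’s actual code.

```python
# Toy rollout loop for a limited-area forecast model (illustrative only).
# state: (variables, height, width) grid over the local region; the outer ring
# of cells is refreshed from a coarser global forecast at every step.
import numpy as np

N_VARS, H, W, RING = 24, 128, 128, 8   # 24 surface variables, assumed grid sizes

def predict_next_hour(state: np.ndarray) -> np.ndarray:
    """Stand-in for the trained neural net; here just persistence plus noise."""
    return state + 0.01 * np.random.randn(*state.shape)

def apply_boundary(state: np.ndarray, global_forecast: np.ndarray) -> np.ndarray:
    """Overwrite the boundary ring with values taken from the global model."""
    out = state.copy()
    out[:, :RING, :] = global_forecast[:, :RING, :]
    out[:, -RING:, :] = global_forecast[:, -RING:, :]
    out[:, :, :RING] = global_forecast[:, :, :RING]
    out[:, :, -RING:] = global_forecast[:, :, -RING:]
    return out

state = np.zeros((N_VARS, H, W))                   # initial analysis of the inner zone
for hour in range(24):                             # 24-hour forecast, hourly steps
    global_forecast = np.zeros((N_VARS, H, W))     # would come from Pangu-Weather
    state = predict_next_hour(state)
    state = apply_boundary(state, global_forecast)
print("forecast grid shape:", state.shape)
```

The design point is that the expensive physics lives in the global model’s boundary conditions, while the fast learned model only has to handle the small inner zone.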
Big picture: AI models like YingLong won’t fully replace traditional forecasting yet, but they’re already making forecasting faster and more efficient. By offering high-resolution predictions without the usual computing demands, these tools can help more people make better decisions about weather so you don’t get rained out at the next Taylor Swift concert.
Seamlessly connect your AI agents with external tools
Not-so-fun fact: Less than two-fifths of AI projects go into production.
Why? Simple. Because building real-world AI agents is hard – and that’s before you even start worrying about things like bespoke tool integrations. Lucky for you, there’s a simple and powerful solution… Outbound Apps from Descope.
- Connect your AI agent with 50+ external tools using prebuilt integration templates
- Request data and scopes from third-party tools on users’ behalf
- Store multiple tokens per user with different scopes, calling each token as needed
And best of all, it requires no heavy lifting from your developers. Start using Outbound Apps right here when you create a free Descope account – no credit card required.
🤯 Another week, another Google AI drop
Source: Google
Google marked Global Accessibility Awareness Day by rolling out new AI-powered accessibility features across Android and Chrome. The updates bring Google’s latest Gemini AI model into everyday tools.
- TalkBack + Gemini — Ask your screen reader what’s in an image and get an answer on the spot.
- Expressive Captions — Live Caption now captures stretched-out sounds like “gooooal” in a sports clip and notes background noises like whistling.
- Page Zoom — A slider scales text up to 300% in Chrome on Android without wrecking layouts.
- Scanned‑PDF OCR — Chrome desktop automatically reads text in scanned PDFs so screen readers can copy or search it.
Google is expanding its work with Project Euphonia by open-sourcing tools and datasets on GitHub. These tools help developers train models for diverse and non-standard speech. In Africa, Google.org is supporting the Centre for Digital Language Inclusion to create new speech datasets in 10 African languages and support inclusive AI development.
In other Google news, Google’s DeepMind research lab has unveiled AlphaEvolve, a Gemini-powered AI agent that autonomously evolves and tests code. The system combines Gemini 2.0 Flash and 2.0 Pro with automated code evaluation to iteratively improve algorithms. AlphaEvolve has already boosted the efficiency of Google’s data centers and chip design processes, and even discovered a faster method for matrix multiplication – solving a math problem untouched since 1969.
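The loop behind that kind of system is simple to sketch, even if the real thing is far more sophisticated: a model proposes variants of a program, an automated evaluator scores each one, and the best survivors seed the next round. In the toy version below, a random string mutator stands in for Gemini and a trivial fitness function stands in for the benchmark; it illustrates the evolutionary pattern, not DeepMind’s implementation.

```python
# Minimal evolve-and-evaluate loop (illustrative of the pattern, not AlphaEvolve).
# "mutate" stands in for the LLM that proposes code changes; "evaluate" stands
# in for the automated benchmark that scores each candidate program.
import random

TARGET = "sorted"  # pretend the evaluator rewards candidates resembling this string

def mutate(candidate: str) -> str:
    """Stand-in for an LLM proposing an edited program."""
    chars = list(candidate)
    pos = random.randrange(len(chars))
    chars[pos] = random.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def evaluate(candidate: str) -> int:
    """Stand-in for running benchmarks/tests; higher is better."""
    return sum(a == b for a, b in zip(candidate, TARGET))

population = ["aaaaaa"] * 8
for generation in range(200):
    # Propose variants, score them, and keep the best to seed the next round.
    candidates = population + [mutate(p) for p in population]
    candidates.sort(key=evaluate, reverse=True)
    population = candidates[:8]
    if evaluate(population[0]) == len(TARGET):
        break

print(f"best candidate after {generation + 1} generations: {population[0]}")
```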
The continuous flow of announcements over the last couple of weeks underscores how deeply Google, a roughly $2 trillion company, is weaving AI into its entire product lineup.
Could This Company Do for Housing What Tesla Did for Cars?
Most car factories like Ford or Tesla reportedly build one car per minute. Isn’t it time we do that for houses?
BOXABL believes they have the potential to disrupt a massive and outdated trillion dollar building construction market by bringing assembly line automation to the home industry.
Since securing their initial prototype order from SpaceX and a subsequent project order of 156 homes from the Department of Defense, BOXABL has made substantial strides in streamlining their manufacturing and order process. BOXABL is now delivering to developers and consumers. And they just reserved the ticker symbol BXBL on Nasdaq*
BOXABL has raised over $170M from over 40,000 investors since 2020. They recently achieved a significant milestone: raising over 50% of their Reg A+ funding limit!
BOXABL is now only accepting investment on their website until the Reg A+ is full.
Invest now before it’s too late
- Philips turns to Nvidia to build AI model for MRI
- AI and genetics are changing the way farmers grow corn
- AI twins have the potential to solve many problems
- Hedra lands $32M to build digital character foundation models
- Huawei’s newest watch has several must-see features
- Howie: Email based assistant to handle your calendar (in beta)
- Goldcast: Marketers are sitting on a goldmine of untapped content. Goldcast’s Content Lab helps you turn one video into 30+ assets—blogs, clips, posts, and more. Try it free*
- Aomni: Agents that help with sales
- Supermemory: Give your AI ALL the info it needs
- Lex: Cursor, but for writing
The right hires make the difference.
Scale your AI capabilities with vetted engineers, scientists, and builders—delivered with enterprise rigor.
- AI-powered candidate matching + human vetting.
- Deep talent pools across LatAm, Africa, SEA.
- Zero upfront fees—pay only when you hire.
🧠 Could AI double human lifespan by 2030?
Source: ChatGPT 4o
In 1824, the average American lived just over 40 years. Two centuries later, that number has nearly doubled. The leap in life expectancy was driven mostly by reduced infant mortality and breakthroughs in public health and medicine. But even with antibiotics, vaccines, and surgery, the idea of living to 150 still sounds like science fiction. Now, a wave of researchers believes AI could make that fiction real.
One of the boldest voices is Dario Amodei, CEO of the AI company Anthropic. In October 2024, Amodei published a blog post predicting that AI would help double human lifespans to 150 by the end of this decade. Just three months later, he doubled down on stage at the World Economic Forum in Davos, claiming AI could deliver the breakthrough in just five years.
His reasoning? Humans already know of drugs that extend rat lifespans by 25 to 50 percent. Some animals, like certain turtles, live more than 200 years. If AI can discover and optimize therapies faster than any human team could before, why not us? Amodei believes once we hit 150, we could reach “longevity escape velocity” – the point where life-extending treatments advance faster than we age. In theory, that could allow people to live as long as they choose (better start a retirement plan for that second century of life).
He is not alone. Futurist Ray Kurzweil has made similar claims, predicting AI could halt aging by 2032. He points to two pathways. First, AI-designed nanobots that patrol the body to repair cells and deliver drugs. Second, the ability to upload the human brain into the cloud, preserving identity beyond biology. Kurzweil has long predicted the coming of a technological singularity. Longevity, in his view, may be the first step.
Yes, but…
Even believers admit these ideas are speculative. Many scientists are calling for caution. S. Jay Olshansky, a leading aging researcher and professor at the University of Illinois Chicago, says there is simply no evidence that AI can slow or stop the biological process of aging. Around the same time Amodei released his blog, Olshansky published a rebuttal in Nature Aging, arguing that enthusiasm is racing ahead of science.
“The longevity game we’re playing today is quite different from the one we played a century ago,” Olshansky wrote. “Now aging gets in the way, and this process is currently immutable.” He warns that claims about radical lifespan extension are not supported by evidence and are, in many ways, indistinguishable from pseudoscience.
Go deeper: AI is already helping improve human health. Researchers are using large models to develop drugs, predict protein structures, and model complex disease systems. Projects like DeepMind’s AlphaFold and Insilico Medicine are promising early examples. But increasing the healthspan – the number of years someone stays healthy – is not the same as increasing the lifespan. So far, no AI system has proven it can delay or reverse aging in humans.
The next leap may depend not on medicine alone but on machines. It is tempting to believe that AI will uncover the secrets of longevity. But believing and proving that are two very different things.
The search for longer, healthier lives is one of the noblest goals of science. AI could very well accelerate drug discovery, unlock hidden mechanisms of disease, and give every person access to high-quality health advice. That alone would be a transformative legacy.
Maybe the real question isn’t whether AI can help us live to 150. It’s whether we’d want to live that long (I don’t think I want to live to 150…) – and if we’re willing to put in the decades of work to find out.
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
- “There are real 'faults' in the grass patterns in [this] video. In the [other] video the arc of the horizon does not look correct”
- “Wow. Video is very hard! I picked [this video] because the detail of the reflections through the trees and off the roof of the car as the camera moved seemed accurate - and like something AI wouldn't have totally nailed.”
Selected Image 2 (Right):
- “There was an odd vertical shadow in the road of the spinning camera view, that made it look like it had a gap where an AI forgot to render the yellow dotted line. But I've become too cynical - this was the real video!”
- “Shadow in the [other] one put me off”
💭 Thank you!
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
*Indicates sponsored content
*Boxabl Disclosure: This is a paid advertisement for BOXABL’s Regulation A offering. Please read the offering circular here. This is a message from BOXABL
*Reserving a Nasdaq ticker does not guarantee a future listing on Nasdaq or indicate that BOXABL meets any of Nasdaq's listing criteria to do so.
⚙️ OpenAI introduces Codex
Good morning. Nvidia CEO Jensen Huang's trip to Taiwan, after visiting the Middle East with Trump, has sparked "Jensanity" as adoring fans mob him for autographs on books, posters, and even baseballs. The Taiwan-born billionaire — whose company is now selling official Jensen-branded merch at a pop-up store — prompted confusion from his US-based colleagues (where he walks around fairly unnoticed).
— The Deep View Crew
In today’s newsletter:
- 🩸 AI for Good: AI spots blood clots before they strike
- 🤖 Penn reimagines research with AI at its core
- 🧠 OpenAI introduces Codex
🩸 AI for Good: AI spots blood clots before they strike
Source: ChatGPT 4o
For heart patients, the first sign of a dangerous clot is often a heart attack or stroke. Now, researchers at the University of Tokyo have unveiled an AI-powered microscope that can watch clots form in a routine blood sample – no catheter needed.
The new system uses a high-speed "frequency-division multiplexed" microscope – essentially a super-fast camera – to capture thousands of blood cell images each second. An AI algorithm then analyzes those images in real time to spot when platelets start piling into clumps, like a traffic jam forming in the bloodstream.
In tests on over 200 patients with coronary artery disease, those with acute coronary syndrome – a dangerous flare-up of heart disease – had far more platelet clumps than patients with stable conditions. Just as importantly, an ordinary arm-vein blood draw yielded virtually the same platelet data as blood taken directly from the heart’s arteries via catheter.
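As a rough mental model of the pipeline, here is a hypothetical sketch: frames stream in from the microscope, a classifier flags platelet aggregates, and the per-second aggregate rate becomes the clotting-activity signal. The frame source and classifier below are placeholders, not the Tokyo team’s model.

```python
# Illustrative frame-by-frame analysis loop (placeholder model, not the real system).
# Frames stream from the high-speed microscope; a classifier flags platelet
# aggregates, and the per-second aggregate rate serves as the clotting signal.
import numpy as np

FRAMES_PER_SECOND = 2000  # the real microscope captures thousands of frames/sec

def frame_stream(n_frames: int):
    """Placeholder for the camera feed: yields small grayscale images."""
    rng = np.random.default_rng(1)
    for _ in range(n_frames):
        yield rng.random((64, 64))

def looks_like_aggregate(frame: np.ndarray) -> bool:
    """Placeholder classifier; a trained image model would go here."""
    return frame.mean() > 0.51   # toy threshold on image brightness

aggregates = 0
for frame in frame_stream(FRAMES_PER_SECOND):
    aggregates += looks_like_aggregate(frame)

rate = aggregates / FRAMES_PER_SECOND
print(f"aggregate-positive frames: {rate:.1%} of one second of imaging")
```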
Why it matters: This AI tool could make personalized treatment easier and safer:
- Traditional platelet monitoring relies on invasive or indirect methods
- The AI tool analyzes blood from a basic arm draw
- Real-time imaging allows doctors to observe platelet clumping directly
- The method may reduce reliance on catheter-based procedures
The team of researchers published its findings this week in Nature Communications.
✂️ Cut your QA cycles down from hours to minutes
If slow QA processes and flaky tests are a bottleneck for your engineering team, you need QA Wolf.
QA Wolf's AI-native platform supports both web and mobile apps, delivering 80% automated test coverage in weeks and helping teams ship 5x faster by reducing QA cycles to minutes.
With QA Wolf, you get:
✅ Unlimited parallel test runs
✅ 15-min QA cycles
✅ 24-hour maintenance and on-demand test creation
✅ Zero-flake guarantee
The result? Drata’s team of 80+ engineers saw 4x more test cases and 86% faster QA cycles.
No flakes, no delays, just better QA — that’s QA Wolf.
🤖 Penn reimagines research with AI at its core
Source: UPenn
The University of Pennsylvania has quietly built a human collider for AI.
Launched this spring by cosmologist Bhuvnesh Jain and computer scientist René Vidal, the AI x Science Fellowship unites more than 20 postdoctoral researchers from physics, linguistics, chemistry, engineering and medicine. Each fellow receives two faculty mentors, a modest research budget and campus-wide access to labs and high-performance computing. Weekly Tuesday lunches double as idea exchanges, while open seminars pull in curious researchers from every school.
The fellowship grew out of a 2021 data-science pilot in Arts & Sciences and now spans Engineering and Penn Medicine, with Wharton fellows due in the fall. Jain and Vidal—co-chairs of Penn’s AI Council—plan to scale it into a university-wide Penn AI Fellowship and create a “data-science hub” where roaming AI specialists spend a fifth of their time parachuting into other labs.
Why it matters: As AI research moves rapidly into the private sector, this initiative encourages collaboration on AI research questions that don’t yet have commercial applications. Industry labs chase near-term products. Penn is betting that open-ended, ethically grounded questions—trustworthy AI, machine learning for dark-matter hunts—still belong in academia. The fellowship gives young scientists a network, résumé-ready collaborations and a sandbox for ideas too early or risky for corporate funding.
The Fastest LLM Guardrails Are Now Available For Free
Fast, secure and free: prevent LLM application toxicity and jailbreak attempts with <100ms latency.
Fiddler Guardrails are up to 6x cheaper than alternatives and deploy in your secure environment.
Connect your LLM app today and run free guardrails.
- Google's AI mode replaces iconic ‘I’m Feeling Lucky’ button
- Satya Nadella ditches podcasts for AI-powered chatbot conversations
- Moonvalley raises $53M to expand ethical AI video tools
- Alibaba and Tencent boost shopping with AI-powered advertising
- CarPlay Ultra rolls out with next-gen features
- Tesla: AI Research Engineer, Model Scaling, Self-Driving
- Microsoft: Director - Responsible AI
- Together AI: A fast and efficient way to launch AI models
- Talently AI: A conversational AI interview platform (no more manual screening)
- RevRag: Automated sales via AI calling, email, chat, and WhatsApp
🧠 OpenAI introduces Codex
Source: OpenAI
Vibe coding might be all the rage – the trend of non-coders building apps through AI – but OpenAI's latest release is pointedly not for the casual "build me a website" crowd. The company just launched Codex, a cloud-based software engineering agent built to assist professional developers with real production code.
"This is definitely not for vibe coding. I will say it's more for actual engineers working in prod, and sort of throwing all the annoying tasks you don't want to do," noted Pietro Schirano, one early user, capturing the tool's intent in plain terms.
OpenAI is rolling out Codex as a research preview to ChatGPT subscribers (initially Pro, Team, and Enterprise, with Plus users to follow). Here’s Sam Altman’s tweet on the response to the rollout so far.
What makes Codex unique is that it spins up a remote development environment in OpenAI's cloud – complete with your repository, files, and a command line – and can carry out complex coding jobs independently before reporting back. Once it's enabled via the ChatGPT sidebar, you assign Codex a task with a prompt (for example, "Scan my project for a bug in the last five commits and fix it").
Under the hood, Codex uses a specialized new model called codex-1, derived from OpenAI's latest reasoning model, o3, but tuned specifically for code work. Key capabilities include:
- Multi-step autonomy: Codex can write new features, answer questions about the codebase, fix bugs, and propose code changes via pull request – all by itself
- Parallel agents: You can spawn multiple Codex agents working concurrently (the launch demo showed several fixing different parts of a codebase in parallel).
- Test-driven verification: Codex repeatedly runs the project's test suite until the code passes or it runs out of ideas, and it provides verifiable logs and citations of what it did.
- Configurable via AGENTS.md: You can drop an AGENTS.md file in your repo to guide the AI. This file tells Codex about project-specific conventions, how to run the build or tests, which parts of the codebase matter most, etc. Early users report this dramatically helps Codex avoid rookie mistakes.
OpenAI has been testing Codex with several early design partners to prove its value in actual development teams:
- Temporal uses Codex to debug issues, write and execute tests, and refactor large codebases, letting Codex handle tedious background tasks so human developers can stay "in flow" on core logic.
- Superhuman is leveraging Codex to tackle small, repetitive tasks, and has found that PMs (non-engineers) can use Codex to contribute lightweight code changes.
- Kodiak Robotics has Codex working on their self-driving codebase, writing debugging tools and improving test coverage.
The big picture: All this comes amid a broader frenzy to build agentic AI developers. Just months ago, startup Cognition released "Devin," branding it "the first AI software engineer." We immediately subscribed to the $500/month service when it launched to the public, drawn in by promises that it could write entire apps in minutes and solve complex coding issues with minimal help. However, we canceled within the first month after finding it didn't live up to the hyped announcements – a common theme in the current AI landscape where capabilities often lag behind marketing claims.
Cognition raised $21 million for Devin despite its early performance on the SWE-Bench coding challenge (an industry benchmark for fixing real GitHub issues) being modest – it solved about 13.9% of test tasks on its own. Hot on its heels, researchers at Princeton built SWE-Agent, an open-source autonomous coder using a GPT-4 backend that scored 12.3% on the same benchmark – nearly matching the venture-backed startup's AI dev agent with a fraction of the resources.
Big tech isn't sitting idle. Google is expected to unveil a major AI coding tool at tomorrow's I/O developer conference, and GitHub Copilot, the incumbent AI assistant, is evolving rapidly as Microsoft folds it into a broader Copilot X vision with chat and voice features inside the IDE.
It's becoming clear that in this new landscape, the advantage of simply owning a big codebase is evaporating. We previously dubbed this "the no-moat era" in our analysis – when an indie dev with AI tools can reimplement a competitor's core features over a weekend, traditional software moats based on headcount start to crumble.
AI agents succeed when they’re scoped, sandboxed, and verifiable. Devin over-promised, under-specified, and hit a wall. Codex under-promises (no “build me Instagram”), gives the agent a test harness, and documents every step. That mindset — treat the AI like a junior dev who must show their work — is how agentic coding will stick in the short-to-mid term future.
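That “show your work” discipline is easy to picture in code. Below is a minimal, hypothetical sketch of the pattern: an agent proposes a patch, the project’s test suite is the gate, and every attempt leaves a log a human reviewer can check. The propose_patch stub stands in for the model call; this is not OpenAI’s Codex implementation.

```python
# Minimal test-gated agent loop (a sketch of the pattern, not OpenAI's Codex).
# The agent proposes a change, the project's test suite decides whether it lands,
# and every attempt leaves a log a human reviewer can check.
import subprocess

MAX_ATTEMPTS = 5

def propose_patch(task: str, attempt: int, last_log: str) -> None:
    """Stand-in for the model call that edits files in the working tree."""
    # In a real agent this would apply an LLM-generated diff to the repo.
    pass

def run_tests() -> tuple[bool, str]:
    """Run the repo's test suite and return (passed, combined output)."""
    result = subprocess.run(
        ["python", "-m", "pytest", "-q"], capture_output=True, text=True
    )
    return result.returncode == 0, result.stdout + result.stderr

log = ""
for attempt in range(1, MAX_ATTEMPTS + 1):
    propose_patch("fix the bug from the last five commits", attempt, log)
    passed, log = run_tests()
    print(f"attempt {attempt}: {'tests passed' if passed else 'tests failed'}")
    if passed:
        break  # only now is the change worth turning into a pull request
else:
    print("giving up; leaving logs for human review")
```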
Expect pricing to migrate toward “pay per compute” rather than all-you-can-eat. By year’s end, I would expect every IDE, CI pipeline and repo host to surface “spawn agent” buttons. And expect the winners to be the dev teams that invest in good tests, clear docs, and tight review loops.
Software engineering just got yet another teammate. It works fast, complains never, and absolutely needs a code review. Use it wisely. Buyer beware.
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
- “The reflections on the fuselage of the airplane in [the other image] seemed out of place, and the motion blur with the propellers didn't feel correct.”
- “I think this one is real because I have seen images like this in realtime on occasion. I have seen the moon during the day and I have seen it with an aircraft too. In Florida, especially, cloud formations are common and seeing all three have happened before.”
Selected Image 2 (Right):
- “Landing gear was open in the other pic, which put me off.”
- “[The other image’s] tail stabilizer is too high for an aircraft with underwing engines”
💭 A poll before you go
Will you let OpenAI's Codex into your codebase?
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
⚙️ Microsoft is building the "open agentic web"
Good morning. Swedish buy-now-pay-later giant Klarna is now making nearly $1 million in revenue per employee (up from $575K) after replacing 700 customer service workers with AI chatbots. The efficiency push comes just as the company filed for its much-anticipated US IPO... only to promptly postpone it after Washington’s tariff announcement sent markets into a tailspin.
— The Deep View Crew
In today’s newsletter:
- ♋ AI for Good: AI doing big things for equitable cancer care
- 📓 New NotebookLM app helps you understand anything, anywhere
- 📈 Microsoft ditches Bing to build the open agentic web
♋ AI for Good: AI doing big things for equitable cancer care
Source: CancerNetwork
Up to 30% of breast-cancer cases flagged in screening are ultimately overdiagnosed, sending patients through surgery and chemo they never needed. Researchers say AI can shrink that number by spotting subtler tumor patterns and matching them to precision-medicine profiles.
Medical student Viviana Cortiana and physician Yan Leyfman lay out the roadmap in their April 2025 ONCOLOGY review, calling for population-specific models trained on diverse, high-quality data to curb false positives and tailor treatment, especially in low- and middle-income countries.
Their framework for ethical cancer AI rests on four pillars:
- Data privacy and security
- Clinical validation
- Transparency in model design
- Fairness through bias checks
Early roll-outs show the concept works:
- India’s Telangana state has begun an AI pilot across three districts to screen for oral, breast and cervical cancers, with instant triage to specialists—an approach aimed at easing its radiologist shortage.
- AstraZeneca + Qure.ai have processed five million chest X-rays in 20 countries, flagging nearly 50,000 high-risk lung-cancer cases and proving AI triage can scale in resource-strained settings.
“AI has the potential to fundamentally change how we detect, treat and monitor cancer, but realizing that promise… will require collaboration, validation, thoughtful implementation and a commitment to leaving no patient behind,” Leyfman said.
Big picture: Bringing these tools to scale will require collaboration. Health systems can supply de-identified scans, tech firms refine algorithms, NGOs underwrite training and governments streamline approvals. If those players sync, AI could deliver the same diagnostic confidence enjoyed in top clinics to every community, easing overtreatment costs and catching deadly cancers earlier, resulting in smarter care for all.
Master AI Tools, Set Automations & Build Agents – all in 16 hours (for free)
AI isn’t a buzzword anymore. It’s the skill that gets you hired, helps you earn more, and keeps you future-ready.
Join the 2-Day Free AI Upskilling Sprint by Outskill — a hands-on bootcamp designed to make you AI-smart in just 16 hours.
📅23rd May- Kick Off Call & Session 1
🧠Live sessions- 24th & 25th May
🕜11AM EST to 7PM EST
Originally priced at $499, but the first 100 of you get in for completely FREE! Claim your spot now for $0! 🎁
Inside the sprint, you'll learn:
✅ AI tools to automate tasks & save time
✅ Generative AI & LLMs for smarter outputs
✅ AI-generated content (images, videos, carousels)
✅ Automation workflows to replace manual work
✅ CustomGPTs & agents that work while you sleep
Taught by experts from Microsoft, Google, Meta & more.
🎁 You will also unlock $3,000+ in AI bonuses: 💬 Slack community access, 🧰 top AI tools, and ⚙️ ready-to-use workflows — all free when you attend!
Join in now, (we have limited free seats! 🚨)
📓 New NotebookLM app helps you understand anything, anywhere
Source: Google
NotebookLM just went mobile, giving you a smarter way to learn, organize, and listen. Anytime, anywhere.
After months of user feedback, NotebookLM is now available as a mobile app on both Android and iOS. The app brings key features from the desktop version to your phone or tablet, allowing you to interact with complex information on the go. Early users are already praising its ability to help with research, review, and multitasking in real time.
Whether you’re a student, professional, or knowledge enthusiast, NotebookLM now fits right in your pocket.
The details: NotebookLM is no longer tied to your desktop. With its new mobile release, Google is giving users more flexibility in how they process and interact with information.
- Available now on iOS 17+ and Android 10+
- Listen to Audio Overviews offline or in the background
- Ask questions by tapping "Join" while listening
- Share content directly from other apps into NotebookLM
- Ideal for managing information while commuting or multitasking
Google says this is just the start. More updates are on the way, including expanded file support and tighter integration with other Google products. Additional source types will be supported in future updates, and annotation tools and editing options are expected soon. Feedback is being collected on X and Discord, and future releases may include deeper AI customization and smarter summaries.
Here’s a great video from a couple of weeks ago talking about NotebookLM turning everything into a podcast. Check it out.
Big picture: NotebookLM is evolving into more than a research tool. By going mobile, it becomes a personal learning assistant you can use wherever inspiration hits. The shift is not just about convenience—it is about making high-level thinking mobile. Whether you’re reviewing documents on the train or summarizing sources between meetings, this update turns passive reading into active understanding.
Train and Deploy AI Models in Minutes with RunPod
What if you could access enterprise-grade GPU infrastructure—without the enterprise-grade price tag or complexity?
With RunPod, you can. Our platform gives developers, engineers, and startups instant access to cloud GPUs for everything from model training to real-time inference. No devops degree required.
Build, train, and deploy your own custom AI workflows at a fraction of the cost—without waiting in line for compute.
Launch your first GPU in 30 seconds → RunPod.io
- A new study reveals that most AI models still can’t tell time or read calendars
- Nvidia’s CEO calls Chinese AI researchers “world class” in a nod to global innovation
- Leaked specs point to a lightweight iPhone 17 Air with a 2,800mAh battery
- Why AI advancement doesn’t have to come at the expense of marginalized workers
- China begins assembling its supercomputer in space
- Google: Machine Learning Engineer, LLM, Personal AI, Google Pixel
- Deloitte US: AI Engineering Manager/Solutions Architect - SFL Scientific
- Drift: AI-powered chatbots that qualify leads and book meetings automatically.
- Regie.ai: AI tool for sales outreach, creating entire email sequences and call scripts in seconds
- Tiledesk: Combines live chat and conversational AI to automate customer support across multiple channels
📈 Microsoft ditches Bing to build the open agentic web
Source: Microsoft
If you can’t beat them, host them. At Microsoft Build, Microsoft announced partnerships to host third-party AI models from xAI, Meta, Mistral, and others directly on Azure, treating these former rivals as first-class citizens alongside OpenAI’s ChatGPT models. Developers will be able to mix and match models via Azure’s API and tooling, all with Microsoft’s reliability guarantees and security wrapper:
- Meta’s Llama series – the open-source family of large language models from Meta, known for being adaptable and efficient.
- xAI’s Grok 3 (and Grok 3 Mini) – the new LLM from Elon Musk’s startup xAI, which Microsoft is now hosting in Azure in a notable alliance (Musk once co-founded OpenAI, now he’s indirectly back on Microsoft’s platform).
- Mistral – models from the French startup focused on smaller, high-performance LLMs.
- Black Forest – models from Black Forest Labs, a German AI firm.
This brings Azure’s catalog to 1,900+ models available for customers. Microsoft CEO Satya Nadella, who spoke via hologram with Elon Musk during the keynote, touted the multi-model approach as “just a game-changer in terms of how you think about models and model provisioning.” “It’s exciting for us as developers to be able to mix and match and use them all,” he said. In effect, Microsoft is positioning itself as an impartial arms dealer in the AI race – happy to rent you any model you want, so long as you run it on Azure.
Go deeper: Microsoft introduced an automatic model selection system inside Azure (part of the new Azure AI Foundry updates). Dubbed the Model Router, it routes each AI query to the “best” model available based on the task and user preferences. This behind-the-scenes dispatcher can optimize for speed, cost, or quality – for example, sending a quick question to a smaller, cheaper model, but a complex query to a more powerful (and expensive) model. It also handles fallbacks and load balancing if one model is busy. For developers, it promises easier scalability and performance without manual model wrangling.
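Conceptually, such a router is a dispatcher sitting in front of a model catalog. The sketch below shows one hypothetical way the routing could work: estimate how hard a request is, then pick the cheapest available model that clears the quality bar. The model names, pricing and heuristics are invented for illustration; this is not Azure’s actual Model Router.

```python
# Hypothetical model-routing sketch (illustrative, not Azure's Model Router).
# Requests are scored for difficulty, then dispatched to the cheapest model
# that clears the quality bar, with a fallback if nothing qualifies.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    quality: int        # rough capability score (made-up scale)
    cost_per_1k: float  # made-up pricing
    available: bool = True

CATALOG = [
    Model("small-fast", quality=3, cost_per_1k=0.05),
    Model("mid-tier", quality=6, cost_per_1k=0.50),
    Model("frontier", quality=9, cost_per_1k=3.00),
]

def difficulty(prompt: str) -> int:
    """Toy heuristic: longer, multi-step prompts need a stronger model."""
    score = 3 if len(prompt) > 400 else 1
    score += 3 if any(k in prompt.lower() for k in ("prove", "refactor", "plan")) else 0
    return score

def route(prompt: str) -> Model:
    needed = difficulty(prompt)
    live = [m for m in CATALOG if m.available]
    eligible = [m for m in live if m.quality >= needed]
    # Cheapest model that clears the bar; otherwise fall back to the strongest live model.
    return min(eligible, key=lambda m: m.cost_per_1k) if eligible else max(live, key=lambda m: m.quality)

print(route("What's the capital of France?").name)                      # -> small-fast
print(route("Plan and refactor this 2,000-line module, add tests.").name)  # -> mid-tier
```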
Yes, but: The catch? All this convenience further ties developers into Microsoft’s ecosystem. The Model Router makes Azure the brain that decides which model handles your requests – a useful service, but one that subtly increases dependency on Microsoft’s cloud. By making multiple models available under one roof (and even one API), Microsoft reduces any incentive for customers to shop around elsewhere. Choice is abundant – as long as Azure is the one providing it.
Another standout Build announcement was NLWeb, an open-source initiative aimed at turning every website into a model-callable endpoint that can talk back in plain language. Microsoft’s CTO Kevin Scott introduced NLWeb as essentially the HTML for the AI era.
The idea: with a few lines of NLWeb code, website owners can expose their content to natural language queries. In practice, it means any site could function like a mini-ChatGPT trained on its own data – your data – rather than ceding all search and Q&A traffic to external bots.
Each NLWeb-enabled site runs as a Model Context Protocol (MCP) server, making its content discoverable to AI assistants that speak the protocol. In one demo, food site Serious Eats answered conversational questions about “spicy, crunchy appetizers for Diwali (vegetarian)” and generated tailored recipe links – all via NLWeb and a language model, without an external search engine in the middle. Microsoft is pitching this as an “agentic web” future where AI agents seamlessly interact with websites and online services on our behalf.
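To see the shape of the exchange from a site owner’s perspective, imagine the endpoint an NLWeb-style setup would expose: an assistant sends a question, the site retrieves matching items from its own catalog, and a language model phrases the reply with links back to the source pages. The keyword retrieval and stubbed answer step below are illustrative only, not the NLWeb or MCP APIs.

```python
# Toy "site answers questions over its own data" endpoint (shape only; not NLWeb/MCP).
# A real deployment would expose this behind the Model Context Protocol so any
# compatible assistant could call it.
RECIPES = [
    {"title": "Crispy Masala Papdi", "tags": {"spicy", "crunchy", "vegetarian"}, "url": "/recipes/papdi"},
    {"title": "Paneer Tikka Skewers", "tags": {"spicy", "vegetarian"}, "url": "/recipes/paneer-tikka"},
    {"title": "Honey-Glazed Wings", "tags": {"crunchy"}, "url": "/recipes/wings"},
]

def retrieve(question: str, top_k: int = 3) -> list[dict]:
    """Toy retrieval: rank recipes by how many question words match their tags."""
    words = set(question.lower().replace(",", " ").split())
    scored = sorted(RECIPES, key=lambda r: len(r["tags"] & words), reverse=True)
    return [r for r in scored[:top_k] if r["tags"] & words]

def answer(question: str) -> str:
    """Stub for the LLM step: turn retrieved items into a conversational reply."""
    hits = retrieve(question)
    if not hits:
        return "Nothing on this site matches that, sorry."
    links = ", ".join(f"{r['title']} ({r['url']})" for r in hits)
    return f"Based on this site's recipes, try: {links}"

print(answer("spicy, crunchy vegetarian appetizers for Diwali"))
```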
In other Microsoft news, GitHub Copilot is graduating from autocomplete to autonomous agent. At Build, Microsoft previewed a new Copilot capability (a “coding agent”) that can take on full software tasks by itself. We talked about these AI powered dev tools in yesterday’s edition.
Microsoft is betting big on becoming the infrastructure layer for AI. After last week’s layoff of about 6,000 workers—the firm’s second-biggest cut ever—the company is plowing cash into GPUs, data centers and a catalog of 1,900+ models. The new Model Router lets Azure decide which model handles each query, tightening the lock-in loop.
Bing’s near-absence says it all. Search got only a footnote—mainly news that the standalone Bing Search APIs will be retired this summer, folded into Azure “grounding” services for agents. Microsoft doesn’t need to win consumer search if it can own the pipes every AI request flows through.
Agents stole the Build spotlight, but many reporters we’ve spoken to (for a role we’re hiring… click here to apply if you’re smart and like to write about AI :) call agent hype overblown. Microsoft is leaning in anyway—because agents will need a home, and Azure already has the keys.
Up next: Google I/O is happening today, and it’s a safe bet Sundar Pichai and team will have their own AI twists and turns to announce. We’ll cover how Google’s vision stacks up in our next edition. Stay tuned.
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
- “I spent 5 minutes thinking about how donkeys/mules walk and decided that the guy in [the other Image] would have had both legs on the right hand side moving in the same direction, not oppositionally.”
- “The grass in the foreground looks duplicated and the tree line in the distance looks too uniform and obviously fake. I went with [this image] because the color cast is consistent throughout and not Ai optimized.”
Selected Image 2 (Right):
- “Oof, this was hard. It looked like it had more details, but the other one was a better picture.”
- “the donkey in [the other image]… what happened to his ear and the background is too distorted for the type of shot taken. Even though I am having trouble with the saddle sash on the first [this] one I still think the [other] one is AI.”
⚙️ Report: How AI will shape the future of energy
Nvidia CEO Jensen Huang's trip to Taiwan, after visiting the Middle East with Trump, has sparked "Jensanity" as adoring fans mob him for autographs on books, posters, and even baseballs. The Taiwan-born billionaire — whose company is now selling official Jensen-branded merch at a pop-up store — prompted confusion from his US-based colleagues (where he walks around fairly unnoticed).
Science
VIEW ALL⚙️ Does AI have a role in education?
Good morning. Earnings report season is among us. CoreWeave smashed Q1 earnings with $982M in revenue (wall street expected $853 M), causing an 11% after-hours jump, quickly followed by a cool off after announcing plans to invest up to $23B into AI data centers.
— The Deep View Crew
In today’s newsletter:
- 🔬 AI for Good: AI is speeding up drug development
- ✈️ Air Force opens AI Center of Excellence
- 🧠 Does AI have a place in education?
🔬 AI for Good: AI is speeding up drug development
Source: ChatGPT 4o
AI is helping pharmaceutical researchers find new treatments faster and cheaper by surfacing promising compounds buried deep in massive datasets. Dotmatics, a R&D software company, recently acquired by Siemens for $5.1B, is applying AI to identify potential drug candidates in a fraction of the time it used to take.
Phil Mounteney, VP of Science and Technology at Dotmatics, explains it like this: “The art of drug discovery is really finding drugs in these massive haystacks of data. AI is like a supercharged magnet that helps us sort through those haystacks and find the needle way more efficiently than before.”
Why it matters: Drug development is notoriously long and expensive. It can take up to 10 years and cost between $2 and $6 billion to bring a single drug to market. Of that, roughly six years are spent on early discovery—just identifying the compound that might work. Dotmatics is using AI to cut that phase down to as little as two years.
Faster discovery means earlier trials, quicker regulatory paths and lower costs for companies and patients alike. The company believes that AI could reduce the full research and clinical timeline by as much as 50 percent.
How it works: Dotmatics combines AI with scientific data platforms to accelerate each step of the R&D process:
- It scans huge chemical libraries to identify overlooked or repurposable compounds.
- It models how drug candidates interact with target proteins or diseases.
- It automates lab workflows that used to take researchers weeks.
- It pulls from historic datasets to inform present-day projects.
Mounteney says AI played a key role in accelerating the COVID mRNA vaccine rollout by leveraging years of stored research and rapidly analyzing it to guide development.
Big picture: Drug discovery may be one of the most direct ways AI can improve human health. Tools like Dotmatics are not replacing scientists but instead giving them the speed and precision to find answers faster. With over $300 million in projected revenue for 2025, the company is betting that faster cures can also mean a stronger business case.
Transform DevEx with AI & Platform Engineering – Join the Developer Experience Summit!
AI and Platform Engineering aren’t just buzzwords—they’re the key to unlocking developer productivity and satisfaction.
That’s why Harness, a leader in modern software delivery, is hosting the Developer Experience Summit on May 21st—a free virtual event designed to help you transform DevEx with AI and platform engineering.
Join top industry leaders as they share insights on navigating DevEx changes, optimizing work efficiency, and leading your DevEx future.
Featured speakers include:
- Prem Dhayalan – Sr. Distinguished Engineer at Capital One
- Blake Canaday – Director of Engineering at CrowdStrike
- Hasith Kalpage – Director, Platform Engineering & Innovation Division at Cisco
- Andrew Boyagi – DevOps Advocate at Atlassian
- James Governor – Analyst & Co-founder at RedMonk
- Nathen Harvey – DORA Lead and Developer Advocate at Google Cloud
- And more!
Can’t make it live? Register now, and we’ll send you the on-demand recording after the event.
✈️ Air Force opens AI Center of Excellence
Source: ChatGPT 4o
The Air Force just gave its scattered AI projects a home address. Announced by outgoing CIO Venice Goodwine at AFCEA’s TechNet Cyber on May 7, the new Department of the Air Force “Artificial Intelligence Center of Excellence” will expand on existing partnerships with MIT, Stanford and Microsoft.
Chief Data and AI Officer Susan Davenport will run the show, expanding on the service’s MIT accelerator and Stanford AI studio that recently put test pilots through an autonomous-systems boot camp. The center’s built on Microsoft’s secure Innovation Landing Zone, already field-tested by Air Force Cyberworx for rapid prototyping. Translation: teams can push an idea from laptop to live mission network without the usual procurement drag.
Why it matters: The Air Force bankrolls dozens of AI skunkworks – from predictive-maintenance bots to dogfighting algorithms – but commanders still complain they can’t find, scale or accredit finished tools. Centralising budgets, data and cloud access is meant to clear that bottleneck and prove AI actually moves sorties, satellites and supply chains.
How it works: The center will serve as a hub for AI collaboration, resource-sharing and deployment.
- It connects academic partners with military use cases, like autonomous aircraft and satellite operations.
- It gives contractors a clear entry point to test and scale AI tools within Air Force infrastructure.
- It consolidates current investments in AI and DevSecOps through Microsoft’s cloud systems.
- It supports applied training, such as Stanford’s 10-day course for AI test pilots.
Goodwine, delivering her valedictory, challenged contractors to ditch one-off demos and practice “extreme teaming” across land, sea, air and space. With budgets tightening, only tech that ships fleet-wide will survive.
AI Video Repurposing Tool: Turn One Video Into a Content Engine
Turn your videos into a content engine powerhouse with Goldcast’s Content Lab.
With Content Lab, you can automatically turn your long-form content (think podcasts, YouTube videos, webinars, and events) into a robust library of snackable clips, social posts, blogs, and more.
See why marketing teams at OpenAI, Hootsuite, Workday, and Intercom are using this AI video repurposing tool.
The best part?
It’s free to get started, so try Goldcast’s Content Lab for yourself right here.
- AI models are starting to talk like humans without being told how
- The Turing test might be broken and no one knows what to do next
- Harvey AI is chasing a $5 billion valuation to take over legal work
- Scientists may have actually turned lead into gold by accident
- US close to letting UAE import millions of Nvidia's AI chips
- The trade war is delaying the future of humanoid robot workers
- 🏠 Zillow: Senior Machine Learning Engineer - Decision Engine AI
- 📊 Amplitude: Staff AI Engineer, AI Tools
🧠 Does AI have a place in education?
Source: ChatGPT 4o
Billionaire philanthropist Bill Gates walked out of a Newark, N.J., classroom piloting Khanmigo and said the experience felt like “catching a glimpse of the future.” Meanwhile, in Boston, Northeastern senior Ella Stapleton demanded an $8,000 refund after spotting AI-written lecture notes, even as her professor banned students from using the same technology. One scene brims with optimism, the other with outrage, and together they capture the crossroads facing U.S. education as AI moves from novelty to necessity.
On April 23, President Donald Trump signed Advancing Artificial Intelligence Education for American Youth, an executive order that mandates the "appropriate integration of AI into education" to ensure the U.S. remains a global leader in the technology revolution. Its primary goals: teach K-12 students about AI and train teachers to use AI tools to boost educational outcomes.
What’s new: A White House Task Force on AI Education will launch public-private partnerships with tech companies to develop free online AI learning resources for schools. The Education Department is directed to reallocate funding toward AI-driven educational projects, from creating teaching materials to scaling "high-impact tutoring" programs using AI tutors.
While some educators applaud the focus, questions remain about implementation. As Beth Rabbitt, CEO of an education nonprofit, noted, the dawn of generative AI is "a bit like the arrival of electricity" – it could transform the world for the better, but "if we're not careful... it could spark fires."
Many schools began experimenting with AI before any executive orders. In some districts, AI-powered tutoring and writing assistants already supplement daily lessons.
Go deeper: Public-private partnerships are driving K-12 AI integration. The AI Education Project (aiEDU), backed by AT&T, Google, OpenAI and Microsoft, offers free AI curricula to public schools. It has partnered with districts serving 1.5 million low-income students, reaching 100,000 kids with introductory AI lessons.
Some educators have replaced take-home essays with in-class writing to prevent AI copying. As of January 2025, 25 states have issued official guidance on using AI in K-12 schools; most stress protecting student data privacy, promoting equity, and ensuring AI assists rather than replaces teachers.
In higher education, students have embraced AI at remarkable rates. Estimates suggest over four-fifths of university students use some form of AI for schoolwork – from brainstorming to essay drafting.
Yes, but: Pushback is emerging, especially when educators over-rely on AI while restricting student use. The Northeastern case exemplifies this tension. Business major Ella Stapleton filed a formal complaint after discovering her professor used ChatGPT to generate class materials while the syllabus banned students from using AI. She spotted telltale signs:
- Oddly worded paragraphs
- AI-generated images with extra limbs
- An unedited AI prompt reading "expand on all areas. Be more detailed and specific."
"He's telling us not to use it and then he's using it himself," Stapleton told The New York Times. Though the university denied her refund request, the incident sparked nationwide debate about consistency in AI policies.
A recent study found college students who used ChatGPT heavily for assignments ended up procrastinating more, remembering less, and earning lower grades on average. Yet 51% of college students say using AI on assignments is cheating, while about 1 in 5 admit they've done it anyway.
In the big picture, the turbulent introduction of AI into American education may prove to be a historic turning point – perhaps even more impactful than the arrival of computers or the internet in the classroom. Yes, the past two years have seen plenty of missteps and valid concerns: cheating facilitated on an unprecedented scale, teachers and students alike occasionally abdicating effort to an automated helper and institutions caught flat-footed without policies in place.
However, it would be a profound mistake to focus only on the downsides and lose sight of the enormous opportunity at hand. I’d argue that education is not just another sector that AI will disrupt – it is possibly the most promising and crucial application of AI in the long run.
Why such optimism? Well, consider the challenge of providing truly personalized learning; human teachers, as dedicated as they are, can only do so much in a class of 25 or a lecture hall of 200. AI tutors offer the tantalizing prospect of 1-on-1 instruction for every student, anytime and on any subject – essentially democratizing the luxury of a personal tutor that was once available only to the wealthy.
The students in school today will graduate into a world pervaded by AI – in their workplaces, civic and personal lives. It is in our collective interest to ensure the next generation is AI-literate and AI-savvy.
The lesson plan for all of us is clear: proceed with care, but keep our minds – and classroom doors – open to the potential of AI.
Which video is real?
Login or Subscribe to participate in polls.
🤔 Your thought process:
Selected Image 1 (Left):
- “It has that “film look” of 35mm color negative (Kodak process C-41) camera film; and the resolution is too low to be medium format (120/220) film.”
- “This was mostly a guess, but the water movement in the fake one seemed off and the extended arm too long.”
Selected Image 2 (Right):
- “The water droplets in [the other image] seemed like something AI would add for realism. Give my regards to the photographer!”
- “I thought the water spray would put the position of the camera at an impossible position between the boat and surfer.”
💭 Thank you!
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
⚙️ Will AI double your lifespan?
Good morning and Happy Friday! Karen Hao's explosive Atlantic excerpt reveals the chaos behind Sam Altman's brief 2023 ouster, including how OpenAI's chief scientist once discussed building "bunkers" before releasing AGI. The $300 billion company that began as an idealistic nonprofit is now the centerpiece of an "empire of AI".
— The Deep View Crew
In today’s newsletter:
- 🌧️ AI for Good: AI-powered local weather forecasting model
- 🤯 Another week, another Google AI drop
- 🧠 Could AI double human lifespan by 2030?
🌧️ AI for Good: AI-powered local weather forecasting model
Source: YingLong
AI is helping forecast local weather faster and more precisely with a new model called YingLong.
Built on high-resolution hourly data from the HRRR system, YingLong predicts surface-level weather like temperature, pressure, humidity and wind speed at 3-kilometer resolution (each grid cell covers roughly 3 km x 3 km). It runs significantly faster than traditional forecasting models and has shown strong accuracy in predicting wind across test regions in North America.
Dr. Jianjun Liu, a researcher on the project, explains that “traditional weather forecasting solves complex equations and takes time. YingLong skips the equations and learns directly from past data. It’s like giving the model intuition about what’s likely to happen next.”
Why it matters: Local weather forecasting requires more precision than broad national models can offer. That’s where limited area models (LAMs) come in. While most AI research has focused on global weather systems, YingLong brings that power to cities and counties in a faster, more focused way.
- Traditional weather models can take hours or days to compute.
- YingLong delivers accurate local forecasts in much less time.
- Faster forecasts help cities and agencies respond to storms and plan ahead with greater confidence.
YingLong combines high-resolution local data with boundary information from a global AI model called Pangu-Weather. It focuses its predictions on a smaller inner zone to reduce computing power and improve speed. It predicts 24 weather variables with hourly updates and performs especially well in surface wind speed forecasts. Improvements in temperature and pressure forecasts are underway using refined boundary inputs.
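For readers who like to see the moving parts, here is a schematic sketch of the limited-area pattern described above: a coarse global model supplies boundary conditions while a learned local model advances the high-resolution inner zone hour by hour. The shapes, function names and toy "physics" are invented for illustration and are not YingLong's code.

```python
# Schematic limited-area forecasting loop (illustrative; not YingLong's implementation).
import numpy as np

GRID = 64          # hypothetical inner-zone grid (GRID x GRID cells at ~3 km spacing)
N_VARS = 24        # surface variables, e.g. temperature, pressure, humidity, wind

def global_boundary(hour: int) -> np.ndarray:
    """Stand-in for boundary conditions taken from a global model such as Pangu-Weather."""
    rng = np.random.default_rng(hour)
    return rng.normal(size=(N_VARS, GRID, GRID))

def local_step(state: np.ndarray, boundary: np.ndarray) -> np.ndarray:
    """Stand-in for the learned model: advance the inner zone one hour,
    imposing the global boundary field along the edges."""
    next_state = state.copy()
    next_state[:, 0, :] = boundary[:, 0, :]
    next_state[:, -1, :] = boundary[:, -1, :]
    next_state[:, :, 0] = boundary[:, :, 0]
    next_state[:, :, -1] = boundary[:, :, -1]
    # Toy "physics": relax the interior slightly toward the boundary-driven field.
    next_state[:, 1:-1, 1:-1] = 0.9 * state[:, 1:-1, 1:-1] + 0.1 * boundary[:, 1:-1, 1:-1]
    return next_state

state = np.zeros((N_VARS, GRID, GRID))
for hour in range(24):                           # 24 hourly updates
    state = local_step(state, global_boundary(hour))
print("forecast field shape:", state.shape)      # (24 variables, 64 x 64 inner zone)
```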
Big picture: AI models like YingLong won’t fully replace traditional forecasting yet, but they’re already making forecasting faster and more efficient. By offering high-resolution predictions without the usual computing demands, these tools can help more people make better decisions about weather so you don’t get rained out at the next Taylor Swift concert.
Seamlessly connect your AI agents with external tools
Not-so-fun fact: Less than two-fifths of AI projects go into production.
Why? Simple. Because building real-world AI agents is hard – and that’s before you even start worrying about things like bespoke tool integrations. Lucky for you, there’s a simple and powerful solution… Outbound Apps from Descope.
- Connect your AI agent with 50+ external tools using prebuilt integration templates
- Request data and scopes from third-party tools on users’ behalf
- Store multiple tokens per user with different scopes, calling each token as needed
And best of all, it requires no heavy lifting from your developers. Start using Outbound Apps right here when you create a free Descope account – no credit card required.
🤯 Another week, another Google AI drop
Source: Google
Google marked Global Accessibility Awareness Day by rolling out new AI-powered accessibility features across Android and Chrome. The updates bring Google’s latest Gemini AI model into everyday tools.
- TalkBack + Gemini — Ask your screen reader what’s in an image and get an answer on the spot.
- Expressive Captions — Live Caption now supports stretched-out sounds like “gooooal” in a sports clip and notes background noises like whistling.
- Page Zoom — A slider scales text up to 300% in Chrome on Android without wrecking layouts.
- Scanned‑PDF OCR — Chrome desktop automatically reads text in scanned PDFs so screen readers can copy or search it.
Google is expanding its work with Project Euphonia by open-sourcing tools and datasets on GitHub. These tools help developers train models for diverse and non-standard speech. In Africa, Google.org is supporting the Centre for Digital Language Inclusion to create new speech datasets in 10 African languages and support inclusive AI development.
In other Google news, Google’s DeepMind research lab has unveiled AlphaEvolve, a Gemini-powered AI agent that autonomously evolves and tests code. The system combines Gemini 2.0 Flash and 2.0 Pro with automated code evaluation to iteratively improve algorithms. AlphaEvolve has already boosted the efficiency of Google’s data centers and chip design processes, and even discovered a faster method for matrix multiplication – solving a math problem untouched since 1969.
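DeepMind hasn't released AlphaEvolve's code, but the description implies a familiar generate-evaluate-select loop: propose candidate programs, score them automatically, keep the best, repeat. The toy sketch below captures that pattern, with a random mutation standing in for an LLM's code proposals; it is a conceptual illustration, not AlphaEvolve's actual system.

```python
# Toy generate-evaluate-select loop (illustrative; not AlphaEvolve's actual system).
import random

def evaluate(candidate: list[float]) -> float:
    """Automated scorer: higher is better. Here, closeness to a hidden target vector."""
    target = [3.0, -1.0, 2.5]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate: list[float]) -> list[float]:
    """Stand-in for an LLM proposing a modified program: small random edits."""
    return [c + random.gauss(0, 0.1) for c in candidate]

def evolve(generations: int = 500, population: int = 20) -> list[float]:
    pool = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=evaluate, reverse=True)        # keep the best-scoring candidates
        survivors = pool[: population // 2]
        pool = survivors + [mutate(random.choice(survivors)) for _ in range(population - len(survivors))]
    return max(pool, key=evaluate)

best = evolve()
print("best candidate:", [round(x, 2) for x in best], "score:", round(evaluate(best), 4))
```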
The continuous flow of announcements over the last couple of weeks underscores how aggressively Google is weaving AI into its entire product portfolio, a business valued at roughly $2T.
Could This Company Do for Housing What Tesla Did for Cars?
Most car factories like Ford or Tesla reportedly build one car per minute. Isn’t it time we do that for houses?
BOXABL believes they have the potential to disrupt a massive and outdated trillion dollar building construction market by bringing assembly line automation to the home industry.
Since securing their initial prototype order from SpaceX and a subsequent project order of 156 homes from the Department of Defense, BOXABL has made substantial strides in streamlining their manufacturing and order process. BOXABL is now delivering to developers and consumers. And they just reserved the ticker symbol BXBL on Nasdaq*
BOXABL has raised over $170M from over 40,000 investors since 2020. They recently achieved a significant milestone: raising over 50% of their Reg A+ funding limit!
BOXABL is now only accepting investment on their website until the Reg A+ is full.
Invest now before it’s too late
- Philips turns to Nvidia to build AI model for MRI
- AI and genetics are changing the way farmers grow corn
- AI twins have the potential to solve many problems
- Hedra lands $32M to build digital character foundation models
- Huawei’s newest watch has several must-see features
- Howie: Email-based assistant to handle your calendar (in beta)
- Goldcast: Marketers are sitting on a goldmine of untapped content. Goldcast’s Content Lab helps you turn one video into 30+ assets—blogs, clips, posts, and more. Try it free*
- Aomni: Agents that help with sales
- Supermemory: Give your AI ALL the info it needs
- Lex: Cursor, but for writing
The right hires make the difference.
Scale your AI capabilities with vetted engineers, scientists, and builders—delivered with enterprise rigor.
- AI-powered candidate matching + human vetting.
- Deep talent pools across LatAm, Africa, SEA.
- Zero upfront fees—pay only when you hire.
🧠 Could AI double human lifespan by 2030?
Source: ChatGPT 4o
In 1824, the average American lived just over 40 years. Two centuries later, that number has nearly doubled. The leap in life expectancy was driven mostly by reduced infant mortality and breakthroughs in public health and medicine. But even with antibiotics, vaccines, and surgery, the idea of living to 150 still sounds like science fiction. Now, a wave of researchers believes AI could make that fiction real.
One of the boldest voices is Dario Amodei, CEO of the AI company Anthropic. In October 2024, Amodei published a blog post predicting that AI would help double human lifespans to 150 by the end of this decade. Just three months later, he doubled down on stage at the World Economic Forum in Davos, claiming AI could deliver the breakthrough in just five years.
His reasoning? Humans already know of drugs that extend rat lifespans by 25 to 50 percent. Some animals, like certain turtles, live more than 200 years. If AI can discover and optimize therapies faster than any human team could before, why not us? Amodei believes once we hit 150, we could reach “longevity escape velocity” – the point where life-extending treatments advance faster than we age. In theory, that could allow people to live as long as they choose (better start a retirement plan for that second century of life).
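The escape-velocity claim is, at bottom, an arithmetic threshold: if treatments add more than one year of remaining life expectancy per calendar year, the finish line keeps receding. The toy loop below only illustrates that threshold; the numbers are made up and carry no biological meaning.

```python
# Toy illustration of the "longevity escape velocity" threshold (made-up numbers).
def years_until_death(start_age: int, remaining: float, gain_per_year: float, horizon: int = 200) -> float:
    """Each calendar year, one year of remaining life is spent but `gain_per_year`
    years are added by (hypothetical) new treatments."""
    age, left = start_age, remaining
    for _ in range(horizon):
        left = left - 1 + gain_per_year
        age += 1
        if left <= 0:
            return age - start_age
    return float("inf")   # remaining life never hits zero within the horizon

print(years_until_death(50, remaining=30, gain_per_year=0.5))   # gains < 1 per year: death arrives after 60 years
print(years_until_death(50, remaining=30, gain_per_year=1.1))   # gains > 1 per year: it recedes indefinitely (inf)
```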
He is not alone. Futurist Ray Kurzweil has made similar claims, predicting AI could halt aging by 2032. He points to two pathways. First, AI-designed nanobots that patrol the body to repair cells and deliver drugs. Second, the ability to upload the human brain into the cloud, preserving identity beyond biology. Kurzweil has long predicted the coming of a technological singularity. Longevity, in his view, may be the first step.
Yes, but…
Even believers admit these ideas are speculative. Many scientists are calling for caution. S. Jay Olshansky, a leading aging researcher and professor at the University of Illinois Chicago, says there is simply no evidence that AI can slow or stop the biological process of aging. Around the same time Amodei released his blog, Olshansky published a rebuttal in Nature Aging, arguing that enthusiasm is racing ahead of science.
“The longevity game we’re playing today is quite different from the one we played a century ago,” Olshansky wrote. “Now aging gets in the way, and this process is currently immutable.” He warns that claims about radical lifespan extension are not supported by evidence and are, in many ways, indistinguishable from pseudoscience.
Go deeper: AI is already helping improve human health. Researchers are using large models to develop drugs, predict protein structures, and model complex disease systems. Projects like DeepMind’s AlphaFold and Insilico Medicine are promising early examples. But increasing the healthspan – the number of years someone stays healthy – is not the same as increasing the lifespan. So far, no AI system has proven it can delay or reverse aging in humans.
The next leap may depend not on medicine alone but on machines. It is tempting to believe that AI will uncover the secrets of longevity. But believing and proving that are two very different things.
The search for longer, healthier lives is one of the noblest goals of science. AI could very well accelerate drug discovery, unlock hidden mechanisms of disease, and give every person access to high-quality health advice. That alone would be a transformative legacy.
Maybe the real question isn’t whether AI can help us live to 150. It’s whether we’d want to live that long (I don’t think I want to live to 150…) – and if we’re willing to put in the decades of work to find out.
Which image is real?
Login or Subscribe to participate in polls.
🤔 Your thought process:
Selected Image 1 (Left):
- “There are real 'faults' in the grass patterns in [this] video. In the [other] video the arc of the horizon does not look correct”
- “Wow. Video is very hard! I picked [this video] because the detail of the reflections through the trees and off the roof of the car as the camera moved seemed accurate - and like something AI wouldn't have totally nailed.”
Selected Image 2 (Right):
- “There was an odd vertical shadow in the road of the spinning camera view, that made it look like it had a gap where an AI forgot to render the yellow dotted line. But I've become too cynical - this was the real video!”
- “Shadow in the [other] one put me off”
💭 Thank you!
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
*Indicates sponsored content
*Boxabl Disclosure: This is a paid advertisement for BOXABL’s Regulation A offering. Please read the offering circular here. This is a message from BOXABL
*Reserving a Nasdaq ticker does not guarantee a future listing on Nasdaq or indicate that BOXABL meets any of Nasdaq's listing criteria to do so.
⚙️ OpenAI introduces Codex
Good morning. Nvidia CEO Jensen Huang's trip to Taiwan, after visiting the Middle East with Trump, has sparked "Jensanity" as adoring fans mob him for autographs on books, posters, and even baseballs. The Taiwan-born billionaire — whose company is now selling official Jensen-branded merch at a pop-up store — prompted confusion from his US-based colleagues (where he walks around fairly unnoticed).
— The Deep View Crew
In today’s newsletter:
- 🩸 AI for Good: AI spots blood clots before they strike
- 🤖 Penn reimagines research with AI at its core
- 🧠 OpenAI introduces Codex
🩸 AI for Good: AI spots blood clots before they strike
Source: ChatGPT 4o
For heart patients, the first sign of a dangerous clot is often a heart attack or stroke. Now, researchers at the University of Tokyo have unveiled an AI-powered microscope that can watch clots form in a routine blood sample – no catheter needed.
The new system uses a high-speed "frequency-division multiplexed" microscope – essentially a super-fast camera – to capture thousands of blood cell images each second. An AI algorithm then analyzes those images in real time to spot when platelets start piling into clumps, like a traffic jam forming in the bloodstream.
In tests on over 200 patients with coronary artery disease, those with acute coronary syndrome – a dangerous flare-up of heart disease – had far more platelet clumps than patients with stable conditions. Just as importantly, an ordinary arm-vein blood draw yielded virtually the same platelet data as blood taken directly from the heart’s arteries via catheter.
Why it matters: This AI tool could make personalized treatment easier and safer:
- Traditional platelet monitoring relies on invasive or indirect methods
- The AI tool analyzes blood from a basic arm draw
- Real-time imaging allows doctors to observe platelet clumping directly
- The method may reduce reliance on catheter-based procedures
The team of researchers published its findings this week in Nature Communications.
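The study is about optics and machine learning rather than published code, but the pipeline it describes (a high-speed camera streaming frames into a real-time classifier that counts platelet aggregates) has a simple generic shape. In the sketch below, the frame source, classifier and alert threshold are hypothetical placeholders, not the Tokyo team's model.

```python
# Generic real-time frame-classification sketch (hypothetical; not the Tokyo team's code).
import random
from collections import Counter

def frame_stream(n_frames: int):
    """Stand-in for the high-speed microscope: yields one 'frame' per iteration."""
    for i in range(n_frames):
        yield {"frame_id": i, "pixels": [random.random() for _ in range(64)]}

def classify(frame: dict) -> str:
    """Stand-in for a trained image classifier labelling each frame."""
    brightness = sum(frame["pixels"]) / len(frame["pixels"])
    return "platelet_aggregate" if brightness > 0.55 else "single_cells"

def monitor(n_frames: int = 10_000, alert_fraction: float = 0.2) -> Counter:
    """Count labels across the stream and flag samples with an unusually high aggregate rate."""
    counts = Counter(classify(frame) for frame in frame_stream(n_frames))
    if counts["platelet_aggregate"] / n_frames > alert_fraction:
        print("elevated clumping detected; flag for clinician review")
    return counts

print(monitor())
```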
✂️ Cut your QA cycles down from hours to minutes
If slow QA processes and flaky tests are a bottleneck for your engineering team, you need QA Wolf.
QA Wolf's AI-native platform supports both web and mobile apps, delivering 80% automated test coverage in weeks and helping teams ship 5x faster by reducing QA cycles to minutes.
With QA Wolf, you get:
✅ Unlimited parallel test runs
✅ 15-min QA cycles
✅ 24-hour maintenance and on-demand test creation
✅ Zero-flake guarantee
The result? Drata’s team of 80+ engineers saw 4x more test cases and 86% faster QA cycles.
No flakes, no delays, just better QA — that’s QA Wolf.
🤖 Penn reimagines research with AI at its core
Source: UPenn
The University of Pennsylvania has quietly built a human collider for AI.
Launched this spring by cosmologist Bhuvnesh Jain and computer scientist René Vidal, the AI x Science Fellowship unites more than 20 postdoctoral researchers from physics, linguistics, chemistry, engineering and medicine. Each fellow receives two faculty mentors, a modest research budget and campus-wide access to labs and high-performance computing. Weekly Tuesday lunches double as idea exchanges, while open seminars pull in curious researchers from every school.
The fellowship grew out of a 2021 data-science pilot in Arts & Sciences and now spans Engineering and Penn Medicine, with Wharton fellows due in the fall. Jain and Vidal—co-chairs of Penn’s AI Council—plan to scale it into a university-wide Penn AI Fellowship and create a “data-science hub” where roaming AI specialists spend a fifth of their time parachuting into other labs.
Why it matters: As AI research moves rapidly into the private sector, this initiative encourages collaboration on AI research questions that don’t yet have commercial applications. Industry labs chase near-term products. Penn is betting that open-ended, ethically grounded questions—trustworthy AI, machine learning for dark-matter hunts—still belong in academia. The fellowship gives young scientists a network, résumé-ready collaborations and a sandbox for ideas too early or risky for corporate funding.
The Fastest LLM Guardrails Are Now Available For Free
Fast, secure and free: prevent LLM application toxicity and jailbreak attempts with <100ms latency.
Fiddler Guardrails are up to 6x cheaper than alternatives and deploy in your secure environment.
Connect your LLM app today and run free guardrails.
- Google's AI mode replaces iconic ‘I’m Feeling Lucky’ button
- Satya Nadella ditches podcasts for AI-powered chatbot conversations
- Moonvalley raises $53M to expand ethical AI video tools
- Alibaba and Tencent boost shopping with AI-powered advertising
- CarPlay Ultra rolls out with next-gen features
- Tesla: AI Research Engineer, Model Scaling, Self-Driving
- Microsoft: Director - Responsible AI
- Together AI: A fast and efficient way to launch AI models
- Talently AI: A conversational AI interview platform (no more manual screening)
- RevRag: Automated sales via AI calling, email, chat, and WhatsApp
🧠 OpenAI introduces Codex
Source: OpenAI
Vibe coding might be all the rage – the trend of non-coders building apps through AI – but OpenAI's latest release is pointedly not for the casual "build me a website" crowd. The company just launched Codex, a cloud-based software engineering agent built to assist professional developers with real production code.
"This is definitely not for vibe coding. I will say it's more for actual engineers working in prod, and sort of throwing all the annoying tasks you don't want to do," noted Pietro Schirano, one early user, capturing the tool's intent in plain terms.
OpenAI is rolling out Codex as a research preview to ChatGPT subscribers (initially Pro, Team, and Enterprise, with Plus users to follow). Here’s Sam Altman’s tweet responding to the rollout so far.
What makes Codex unique is that it spins up a remote development environment in OpenAI's cloud – complete with your repository, files, and a command line – and can carry out complex coding jobs independently before reporting back. Once enabled via the ChatGPT sidebar, you assign Codex a task with a prompt (for example, "Scan my project for a bug in the last five commits and fix it").
Under the hood, Codex uses a specialized new model called codex-1, derived from OpenAI's latest reasoning model, o3, but tuned specifically for code work. Key capabilities include:
- Multi-step autonomy: Codex can write new features, answer questions about the codebase, fix bugs, and propose code changes via pull request – all by itself
- Parallel agents: You can spawn multiple Codex agents working concurrently (the launch demo showed several fixing different parts of a codebase in parallel).
- Test-driven verification: Codex repeatedly runs the project's test suite until the code passes, or until it exhausts its ideas and provides verifiable logs and citations of what it did.
- Configurable via AGENTS.md: You can drop an AGENTS.md file in your repo to guide the AI. This file tells Codex about project-specific conventions, how to run the build or tests, which parts of the codebase matter most, etc. Early users report this dramatically helps Codex avoid rookie mistakes.
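Per the description above, AGENTS.md is free-form guidance: project conventions, how to run builds and tests, and which parts of the codebase matter most. A hypothetical example (project names, commands and paths invented for illustration) might look like this:

```markdown
# AGENTS.md (hypothetical example)

## Project conventions
- Python 3.11, formatted with black; type-check with mypy before proposing changes.
- Never edit files under `vendor/` or generated code in `api/schemas/`.

## How to build and test
- Install deps: `pip install -e ".[dev]"`
- Run the full suite: `pytest -q` (unit tests live in `tests/`; slow tests are marked `@pytest.mark.slow`)

## Where to look first
- Core business logic: `src/billing/`
- Known flaky area: `src/sync/worker.py` (check the issue tracker before "fixing" retries)
```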
OpenAI has been testing Codex with several early design partners to prove its value in actual development teams:
- Temporal uses Codex to debug issues, write and execute tests, and refactor large codebases, letting Codex handle tedious background tasks so human developers can stay "in flow" on core logic.
- Superhuman is leveraging Codex to tackle small, repetitive tasks, and has found that PMs (non-engineers) can use Codex to contribute lightweight code changes.
- Kodiak Robotics has Codex working on their self-driving codebase, writing debugging tools and improving test coverage.
The big picture: All this comes amid a broader frenzy to build agentic AI developers. Just months ago, startup Cognition released "Devin," branding it "the first AI software engineer." We immediately subscribed to the $500/month service when it launched to the public, drawn in by promises that it could write entire apps in minutes and solve complex coding issues with minimal help. However, we canceled within the first month after finding it didn't live up to the hyped announcements – a common theme in the current AI landscape where capabilities often lag behind marketing claims.
Cognition raised $21 million for Devin despite its early performance on the SWE-Bench coding challenge (an industry benchmark for fixing real GitHub issues) being modest – it solved about 13.9% of test tasks on its own. Hot on its heels, researchers at Princeton built SWE-Agent, an open-source autonomous coder using a GPT-4 backend that scored 12.3% on the same benchmark – nearly matching the venture-backed startup's AI dev agent with a fraction of the resources.
Big tech isn't sitting idle. Google is expected to unveil a major AI coding tool at tomorrow's I/O developer conference, and GitHub Copilot, the incumbent AI assistant, is evolving rapidly as Microsoft folds it into a broader Copilot X vision with chat and voice features inside the IDE.
It's becoming clear that in this new landscape, the advantage of simply owning a big codebase is evaporating. We previously dubbed this "the no-moat era" in our analysis – when an indie dev with AI tools can reimplement a competitor's core features over a weekend, traditional software moats based on headcount start to crumble.
AI agents succeed when they’re scoped, sandboxed, and verifiable. Devin over-promised, under-specified, and hit a wall. Codex under-promises (no “build me Instagram”), gives the agent a test harness, and documents every step. That mindset — treat the AI like a junior dev who must show their work — is how agentic coding will stick in the short-to-mid term future.
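That "junior dev who must show their work" mindset reduces to a small control loop: propose a patch, run the tests in a sandbox, log everything, and only keep changes that pass. A minimal sketch of such a harness follows; the propose/apply callables are hypothetical stand-ins, not Codex's internals.

```python
# Minimal "propose, test, log" harness sketch (hypothetical; not Codex's internals).
import subprocess

def run_tests(workdir: str) -> tuple[bool, str]:
    """Run the project's test suite in a sandboxed checkout and capture the log."""
    result = subprocess.run(
        ["pytest", "-q"], cwd=workdir, capture_output=True, text=True
    )
    return result.returncode == 0, result.stdout + result.stderr

def review_loop(propose_patch, apply_patch, workdir: str, max_attempts: int = 3) -> bool:
    """Ask the agent for a patch, apply it, and keep the change only if tests pass."""
    for attempt in range(1, max_attempts + 1):
        patch = propose_patch(attempt)          # e.g., a call out to a coding agent
        apply_patch(workdir, patch)
        passed, log = run_tests(workdir)
        print(f"attempt {attempt}: {'PASS' if passed else 'FAIL'}\n{log[-500:]}")
        if passed:
            return True                         # human review still happens before merge
    return False
```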
Expect pricing to migrate toward “pay per compute” rather than all-you-can-eat. By year’s end, I would expect every IDE, CI pipeline and repo host to surface “spawn agent” buttons. And expect the winners to be the dev teams that invest in good tests, clear docs, and tight review loops.
Software engineering just got yet another teammate. It works fast, complains never, and absolutely needs a code review. Use it wisely. Buyer beware.
Which image is real?
Login or Subscribe to participate in polls.
🤔 Your thought process:
Selected Image 1 (Left):
- “The reflections on the fuselage of the airplane in [the other image] seemed out of place, and the motion blur with the propellers didn't feel correct.”
- “I think this one is real because I have seen images like this in realtime on occasion. I have seen the moon during the day and I have seen it with an aircraft too. In Florida, especially, cloud formations are common and seeing all three have happened before.”
Selected Image 2 (Right):
- “Landing gear was open in the other pic, which put me off.”
- “[The other image’s] tail stabilizer is too high for an aircraft with underwing engines”
💭 A poll before you go
Will you let OpenAI's Codex in your codebase?
Login or Subscribe to participate in polls.
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
⚙️ Microsoft is building the "open agentic web"
Good morning. Swedish buy-now-pay-later giant Klarna is now making nearly $1 million in revenue per employee (up from $575K) after replacing 700 customer service workers with AI chatbots. The efficiency push comes just as the company filed for its much-anticipated US IPO... only to promptly postpone it after Washington’s tariff announcement sent markets into a tailspin.
— The Deep View Crew
In today’s newsletter:
- ♋ AI for Good: AI doing big things for equitable cancer care
- 📓 New NotebookLM app helps you understand anything, anywhere
- 📈 Microsoft ditches Bing to build the open agentic web
♋ AI for Good: AI doing big things for equitable cancer care
Source: CancerNetwork
Up to 30% of breast-cancer cases flagged in screening are ultimately overdiagnosed, sending patients through surgery and chemo they never needed. Researchers say AI can shrink that number by spotting subtler tumor patterns and matching them to precision-medicine profiles.
Medical student Viviana Cortiana and physician Yan Leyfman lay out the roadmap in their April 2025 ONCOLOGY review, calling for population-specific models trained on diverse, high-quality data to curb false positives and tailor treatment, especially in low- and middle-income countries.
Their framework for ethical cancer AI rests on four pillars:
- Data privacy and security
- Clinical validation
- Transparency in model design
- Fairness through bias checks
Early roll-outs show the concept works:
- India’s Telangana state has begun an AI pilot across three districts to screen for oral, breast and cervical cancers, with instant triage to specialists—an approach aimed at easing its radiologist shortage.
- AstraZeneca + Qure.ai have processed five million chest X-rays in 20 countries, flagging nearly 50,000 high-risk lung-cancer cases and proving AI triage can scale in resource-strained settings.
“AI has the potential to fundamentally change how we detect, treat and monitor cancer, but realizing that promise… will require collaboration, validation, thoughtful implementation and a commitment to leaving no patient behind,” Leyfman said.
Big picture: Bringing these tools to scale will require collaboration. Health systems can supply de-identified scans, tech firms refine algorithms, NGOs underwrite training and governments streamline approvals. If those players sync, AI could deliver the same diagnostic confidence enjoyed in top clinics to every community, easing overtreatment costs and catching deadly cancers earlier, resulting in smarter care for all.
Master AI Tools, Set Automations & Build Agents – all in 16 hours (for free)
AI isn’t a buzzword anymore. It’s the skill that gets you hired, helps you earn more, and keeps you future-ready.
Join the 2-Day Free AI Upskilling Sprint by Outskill — a hands-on bootcamp designed to make you AI-smart in just 16 hours.
📅23rd May- Kick Off Call & Session 1
🧠Live sessions- 24th & 25th May
🕜11AM EST to 7PM EST
Originally priced at $499, but the first 100 of you get in for completely FREE! Claim your spot now for $0! 🎁
Inside the sprint, you'll learn:
✅ AI tools to automate tasks & save time
✅ Generative AI & LLMs for smarter outputs
✅ AI-generated content (images, videos, carousels)
✅ Automation workflows to replace manual work
✅ CustomGPTs & agents that work while you sleep
Taught by experts from Microsoft, Google, Meta & more.
🎁 You will also unlock $3,000+ in AI bonuses: 💬 Slack community access, 🧰 top AI tools, and ⚙️ ready-to-use workflows — all free when you attend!
Join in now, (we have limited free seats! 🚨)
📓 New NotebookLM app helps you understand anything, anywhere
Source: Google
NotebookLM just went mobile, giving you a smarter way to learn, organize, and listen. Anytime, anywhere.
After months of user feedback, NotebookLM is now available as a mobile app on both Android and iOS. The app brings key features from the desktop version to your phone or tablet, allowing you to interact with complex information on the go. Early users are already praising its ability to help with research, review, and multitasking in real time.
Whether you’re a student, professional, or knowledge enthusiast, NotebookLM now fits right in your pocket.
The details: NotebookLM is no longer tied to your desktop. With its new mobile release, Google is giving users more flexibility in how they process and interact with information.
- Available now on iOS 17+ and Android 10+
- Listen to Audio Overviews offline or in the background
- Ask questions by tapping "Join" while listening
- Share content directly from other apps into NotebookLM
- Ideal for managing information while commuting or multitasking
Google says this is just the start. More updates are on the way, including expanded file support and tighter integration with other Google products; additional source types, annotation tools and editing options are expected soon. Feedback is being collected on X and Discord, and future releases may include deeper AI customization and smarter summaries.
Here’s a great video from a couple of weeks ago about NotebookLM turning everything into a podcast. Check it out.
Big picture: NotebookLM is evolving into more than a research tool. By going mobile, it becomes a personal learning assistant you can use wherever inspiration hits. The shift is not just about convenience—it is about making high-level thinking mobile. Whether you’re reviewing documents on the train or summarizing sources between meetings, this update turns passive reading into active understanding.
Train and Deploy AI Models in Minutes with RunPod
What if you could access enterprise-grade GPU infrastructure—without the enterprise-grade price tag or complexity?
With RunPod, you can. Our platform gives developers, engineers, and startups instant access to cloud GPUs for everything from model training to real-time inference. No devops degree required.
Build, train, and deploy your own custom AI workflows at a fraction of the cost—without waiting in line for compute.
Launch your first GPU in 30 seconds → RunPod.io
- A new study reveals that most AI models still can’t tell time or read calendars
- Nvidia’s CEO calls Chinese AI researchers “world class” in a nod to global innovation
- Leaked specs point to a lightweight iPhone 17 Air with a 2,800mAh battery
- Why AI advancement doesn’t have to come at the expense of marginalized workers
- China begins assembling its supercomputer in space
- Google: Machine Learning Engineer, LLM, Personal AI, Google Pixel
- Deloitte US: AI Engineering Manager/Solutions Architect - SFL Scientific
- Drift: AI-powered chatbots that qualify leads and book meetings automatically.
- Regie.ai: AI tool for sales outreach, creating entire email sequences and call scripts in seconds
- Tiledesk: Combines live chat and conversational AI to automate customer support across multiple channels
📈 Microsoft ditches Bing to build the open agentic web
Source: Microsoft
If you can’t beat them, host them. At Microsoft Build, Microsoft announced partnerships to host third-party AI models from xAI, Meta, Mistral, and others directly on Azure, treating these former rivals as first-class citizens alongside OpenAI’s ChatGPT models. Developers will be able to mix and match models via Azure’s API and tooling, all with Microsoft’s reliability guarantees and security wrapper:
- Meta’s Llama series – the open-source family of large language models from Meta, known for being adaptable and efficient.
- xAI’s Grok 3 (and Grok 3 Mini) – the new LLM from Elon Musk’s startup xAI, which Microsoft is now hosting on Azure in a notable alliance (Musk co-founded OpenAI; now he’s indirectly back on Microsoft’s platform).
- Mistral – a French startup’s model focusing on smaller, high-performance LLMs.
- Black Forest – models from Black Forest Labs, a German AI firm.
This brings Azure’s catalog to 1,900+ models available for customers. Microsoft CEO Satya Nadella, who spoke via hologram with Elon Musk during the keynote, called the multi-model approach “just a game-changer in terms of how you think about models and model provisioning. It’s exciting for us as developers to be able to mix and match and use them all.” In effect, Microsoft is positioning itself as an impartial arms dealer in the AI race – happy to rent you any model you want, so long as you run it on Azure.
Go deeper: Microsoft introduced an automatic model selection system inside Azure (part of the new Azure AI Foundry updates). Dubbed the Model Router, it routes each AI query to the “best” model available based on the task and user preferences. This behind-the-scenes dispatcher can optimize for speed, cost, or quality – for example, sending a quick question to a smaller, cheaper model, but a complex query to a more powerful (and expensive) model. It also handles fallbacks and load balancing if one model is busy. For developers, it promises easier scalability and performance without manual model wrangling.
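Azure's Model Router is a managed service whose internals aren't public, but the routing idea is easy to picture: classify each request, then dispatch it to the cheapest available model whose capabilities cover the task, with a fallback when that model is unavailable. The sketch below uses invented model names and heuristics purely for illustration.

```python
# Conceptual model-routing sketch (invented names/heuristics; not Azure's Model Router).
MODELS = {
    "small-fast": {"cost_per_1k": 0.1, "good_for": {"chitchat", "lookup"}},
    "large-reasoning": {"cost_per_1k": 2.0, "good_for": {"analysis", "code", "chitchat", "lookup"}},
}

def classify_task(prompt: str) -> str:
    """Very rough task classifier standing in for a learned one."""
    if any(kw in prompt.lower() for kw in ("prove", "debug", "analyze", "refactor")):
        return "analysis"
    return "chitchat"

def route(prompt: str, unavailable=frozenset()) -> str:
    """Pick the cheapest available model whose capabilities cover the task."""
    task = classify_task(prompt)
    candidates = [
        name for name, spec in MODELS.items()
        if task in spec["good_for"] and name not in unavailable
    ]
    if not candidates:
        raise RuntimeError("no model available for this task")
    return min(candidates, key=lambda name: MODELS[name]["cost_per_1k"])

print(route("What's the capital of France?"))                  # -> small-fast
print(route("Debug this race condition in my scheduler"))      # -> large-reasoning
print(route("What's the capital of France?", {"small-fast"}))  # fallback -> large-reasoning
```

In production the classifier would itself be a model and the cost/quality tradeoff a tunable policy, but the dispatch logic stays this simple.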
Yes, but: The catch? All this convenience further ties developers into Microsoft’s ecosystem. The Model Router makes Azure the brain that decides which model handles your requests – a useful service, but one that subtly increases dependency on Microsoft’s cloud. By making multiple models available under one roof (and even one API), Microsoft reduces any incentive for customers to shop around elsewhere. Choice is abundant – as long as Azure is the one providing it.
Another standout Build announcement was NLWeb, an open-source initiative aimed at turning every website into a model-callable endpoint that can talk back in plain language. Microsoft’s CTO Kevin Scott introduced NLWeb as essentially the HTML for the AI era.
The idea: with a few lines of NLWeb code, website owners can expose their content to natural language queries. In practice, it means any site could function like a mini-ChatGPT trained on its own data – your data – rather than ceding all search and Q&A traffic to external bots.
Each NLWeb-enabled site runs as a Model Context Protocol (MCP) server, making its content discoverable to AI assistants that speak the protocol. In one demo, food site Serious Eats answered conversational questions about “spicy, crunchy appetizers for Diwali (vegetarian)” and generated tailored recipe links – all via NLWeb and a language model, without an external search engine in the middle. Microsoft is pitching this as an “agentic web” future where AI agents seamlessly interact with websites and online services on our behalf.
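NLWeb's repository and the MCP spec are the authoritative references here; purely to make the shape of the idea concrete, below is a generic, standard-library sketch of a site exposing its own content to natural-language queries over HTTP. It is not NLWeb code and not a spec-compliant MCP server.

```python
# Generic "ask this site a question" endpoint sketch. This is NOT NLWeb and NOT a
# spec-compliant MCP server; it only illustrates the shape of the idea.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SITE_CONTENT = {
    "spicy-crunchy-appetizers": "Crispy chickpeas and masala papad work well for Diwali snacking.",
    "weeknight-curry": "A 30-minute vegetarian curry built around canned tomatoes and coconut milk.",
}

def answer(query: str) -> list[dict]:
    """Toy retrieval: return content whose text shares words with the query.
    A real deployment would run a language model over the retrieved passages."""
    words = set(query.lower().split())
    return [
        {"id": key, "text": text}
        for key, text in SITE_CONTENT.items()
        if words & set(text.lower().split())
    ]

class AskHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        results = answer(body.get("query", ""))
        payload = json.dumps({"results": results}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AskHandler).serve_forever()
```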
In other Microsoft news, GitHub Copilot is graduating from autocomplete to autonomous agent. At Build, Microsoft previewed a new Copilot capability (a “coding agent”) that can take on full software tasks by itself. We talked about these AI powered dev tools in yesterday’s edition.
Microsoft is betting big on becoming the infrastructure layer for AI. After last week’s layoff of about 6,000 workers—the firm’s second-biggest cut ever—the company is plowing cash into GPUs, data centers and a catalog of 1,900+ models. The new Model Router lets Azure decide which model handles each query, tightening the lock-in loop.
Bing’s near-absence says it all. Search got only a footnote—mainly news that the standalone Bing Search APIs will be retired this summer, folded into Azure “grounding” services for agents. Microsoft doesn’t need to win consumer search if it can own the pipes every AI request flows through.
Agents stole the Build spotlight, but many reporters we’ve spoken to (for a role we’re hiring… click here to apply if you’re smart and like to write about AI) call agent hype overblown. Microsoft is leaning in anyway—because agents will need a home, and Azure already has the keys.
Up next: Google I/O is happening today, and it’s a safe bet Sundar Pichai and team will have their own AI twists and turns to announce. We’ll cover how Google’s vision stacks up in our next edition. Stay tuned.
Which image is real?
Login or Subscribe to participate in polls.
🤔 Your thought process:
Selected Image 1 (Left):
- “I spent 5 minutes thinking about how donkeys/mules walk and decided that the guy in [the other Image] would have had both legs on the right hand side moving in the same direction, not oppositionally.”
- “The grass in the foreground looks duplicated and the tree line in the distance looks too uniform and obviously fake. I went with [this image] because the color cast is consistent throughout and not Ai optimized.”
Selected Image 2 (Right):
- “Oof, this was hard. It looked like it had more details, but the other one was a better picture.”
- “the donkey in [the other image]… what happened to his ear and the background is too distorted for the type of shot taken. Even though I am having trouble with the saddle sash on the first [this] one I still think the [other] one is AI.”
⚙️ Report: How AI will shape the future of energy
Nvidia CEO Jensen Huang's trip to Taiwan, after visiting the Middle East with Trump, has sparked "Jensanity" as adoring fans mob him for autographs on books, posters, and even baseballs. The Taiwan-born billionaire — whose company is now selling official Jensen-branded merch at a pop-up store — prompted confusion from his US-based colleagues (where he walks around fairly unnoticed).
Climate
VIEW ALL⚙️ Does AI have a role in education?
Good morning. Earnings report season is among us. CoreWeave smashed Q1 earnings with $982M in revenue (wall street expected $853 M), causing an 11% after-hours jump, quickly followed by a cool off after announcing plans to invest up to $23B into AI data centers.
— The Deep View Crew
In today’s newsletter:
- 🔬 AI for Good: AI is speeding up drug development
- ✈️ Air Force opens AI Center of Excellence
- 🧠 Does AI have a place in education?
🔬 AI for Good: AI is speeding up drug development
Source: ChatGPT 4o
AI is helping pharmaceutical researchers find new treatments faster and cheaper by surfacing promising compounds buried deep in massive datasets. Dotmatics, a R&D software company, recently acquired by Siemens for $5.1B, is applying AI to identify potential drug candidates in a fraction of the time it used to take.
Phil Mounteney, VP of Science and Technology at Dotmatics, explains it like this: “The art of drug discovery is really finding drugs in these massive haystacks of data. AI is like a supercharged magnet that helps us sort through those haystacks and find the needle way more efficiently than before.”
Why it matters: Drug development is notoriously long and expensive. It can take up to 10 years and cost between $2 and $6 billion to bring a single drug to market. Of that, roughly six years are spent on early discovery—just identifying the compound that might work. Dotmatics is using AI to cut that phase down to as little as two years.
Faster discovery means earlier trials, quicker regulatory paths and lower costs for companies and patients alike. The company believes that AI could reduce the full research and clinical timeline by as much as 50 percent.
How it works: Dotmatics combines AI with scientific data platforms to accelerate each step of the R&D process:
- It scans huge chemical libraries to identify overlooked or repurposable compounds.
- It models how drug candidates interact with target proteins or diseases.
- It automates lab workflows that used to take researchers weeks.
- It pulls from historic datasets to inform present-day projects.
Mounteney says AI played a key role in accelerating the COVID mRNA vaccine rollout by leveraging years of stored research and rapidly analyzing it to guide development.
Big picture: Drug discovery may be one of the most direct ways AI can improve human health. Tools like Dotmatics are not replacing scientists but instead giving them the speed and precision to find answers faster. With over $300 million in projected revenue for 2025, the company is betting that faster cures can also mean a stronger business case.
Transform DevEx with AI & Platform Engineering – Join the Developer Experience Summit!
AI and Platform Engineering aren’t just buzzwords—they’re the key to unlocking developer productivity and satisfaction.
That’s why Harness, a leader in modern software delivery, is hosting the Developer Experience Summit on May 21st—a free virtual event designed to help you transform DevEx with AI and platform engineering.
Join top industry leaders as they share insights on navigating DevEx changes, optimizing work efficiency, and leading your DevEx future.
Featured speakers include:
- Prem Dhayalan – Sr. Distinguished Engineer at Capital One
- Blake Canaday – Director of Engineering at CrowdStrike
- Hasith Kalpage – Director, Platform Engineering & Innovation Division at Cisco
- Andrew Boyagi – DevOps Advocate at Atlassian
- James Governor – Analyst & Co-founder at RedMonk
- Nathen Harvey – DORA Lead and Developer Advocate at Google Cloud
- And more!
Can’t make it live? Register now, and we’ll send you the on-demand recording after the event.
✈️ Air Force opens AI Center of Excellence
Source: ChatGPT 4o
The Air Force just gave its scattered AI projects a home address. Announced by outgoing CIO Venice Goodwine at AFCEA’s TechNet Cyber on May 7, the new Department of the Air Force “Artificial Intelligence Center of Excellence” will expand on existing partnerships with MIT, Stanford and Microsoft.
Chief Data and AI Officer Susan Davenport will run the show, expanding on the service’s MIT accelerator and Stanford AI studio that recently put test pilots through an autonomous-systems boot camp. The center’s built on Microsoft’s secure Innovation Landing Zone, already field-tested by Air Force Cyberworx for rapid prototyping. Translation: teams can push an idea from laptop to live mission network without the usual procurement drag.
Why it matters: The Air Force bankrolls dozens of AI skunkworks – from predictive-maintenance bots to dogfighting algorithms – but commanders still complain they can’t find, scale or accredit finished tools. Centralising budgets, data and cloud access is meant to clear that bottleneck and prove AI actually moves sorties, satellites and supply chains.
How it works: The center will serve as a hub for AI collaboration, resource-sharing and deployment.
- It connects academic partners with military use cases, like autonomous aircraft and satellite operations.
- It gives contractors a clear entry point to test and scale AI tools within Air Force infrastructure.
- It consolidates current investments in AI and DevSecOps through Microsoft’s cloud systems.
- It supports applied training, such as Stanford’s 10-day course for AI test pilots.
Goodwine, delivering her valedictory, challenged contractors to ditch one-off demos and practice “extreme teaming” across land, sea, air and space. With budgets tightening, only tech that ships fleet-wide will survive.
AI Video Repurposing Tool: Turn One Video Into a Content Engine
Turn your videos into a content engine powerhouse with Goldcast’s Content Lab.
With Content Lab, you can automatically turn your long-form content (think podcasts, YouTube videos, webinars, and events) into a robust library of snackable clips, social posts, blogs, and more.
See why marketing teams at OpenAI, Hootsuite, Workday, and Intercom are using this AI video repurposing tool.
The best part?
It’s free to get started, so try Goldcast’s Content Lab for yourself right here.
- AI models are starting to talk like humans without being told how
- The Turing test might be broken and no one knows what to do next
- Harvey AI is chasing a $5 billion valuation to take over legal work
- Scientists may have actually turned lead into gold by accident
- US close to letting UAE import millions of Nvidia's AI chips
- The trade war is delaying the future of humanoid robot workers
- 🏠 Zillow: Senior Machine Learning Engineer - Decision Engine AI
- 📊 Amplitude: Staff AI Engineer, AI Tools
🧠 Does AI have a place in education?
Source: ChatGPT 4o
Billionaire philanthropist Bill Gates walked out of a Newark, N.J., classroom piloting Khanmigo and said the experience felt like “catching a glimpse of the future.” Across town, Northeastern senior Ella Stapleton demanded an $8,000 refund after spotting AI-written lecture notes, even as her professor banned students from using the same technology. One scene brims with optimism, the other with outrage, and together they capture the crossroads facing U.S. education as AI moves from novelty to necessity.
On April 23, President Donald Trump signed Advancing Artificial Intelligence Education for American Youth, an executive order that mandates the "appropriate integration of AI into education" to ensure the U.S. remains a global leader in the technology revolution. Its primary goals: teach K-12 students about AI and train teachers to use AI tools to boost educational outcomes.
What’s new: A White House Task Force on AI Education will launch public-private partnerships with tech companies to develop free online AI learning resources for schools. The Education Department is directed to reallocate funding toward AI-driven educational projects, from creating teaching materials to scaling "high-impact tutoring" programs using AI tutors.
While some educators applaud the focus, questions remain about implementation. As Beth Rabbitt, CEO of an education nonprofit, noted, the dawn of generative AI is "a bit like the arrival of electricity" – it could transform the world for the better, but "if we're not careful... it could spark fires."
Many schools began experimenting with AI before any executive orders. In some districts, AI-powered tutoring and writing assistants already supplement daily lessons.
Go deeper: Public-private partnerships are driving K-12 AI integration. The AI Education Project (aiEDU), backed by AT&T, Google, OpenAI and Microsoft, offers free AI curricula to public schools. It has partnered with districts serving 1.5 million low-income students, reaching 100,000 kids with introductory AI lessons.
Some educators have replaced take-home essays with in-class writing to prevent AI copying. As of January 2025, 25 states have issued official guidance on using AI in K-12 school, most stress protecting student data privacy, promoting equity, and ensuring AI assists rather than replaces teachers.
In higher education, students have embraced AI at remarkable rates. Estimates suggest over four-fifths of university students use some form of AI for schoolwork – from brainstorming to essay drafting.
Yes, but: Pushback is emerging, especially when educators over-rely on AI while restricting student use. The Northeastern case exemplifies this tension. Business major Ella Stapleton filed a formal complaint after discovering her professor used ChatGPT to generate class materials while the syllabus banned students from using AI. She spotted telltale signs:
- Oddly worded paragraphs
- AI-generated images with extra limbs
- An unedited AI prompt reading "expand on all areas. Be more detailed and specific."
"He's telling us not to use it and then he's using it himself," Stapleton told The New York Times. Though the university denied her refund request, the incident sparked nationwide debate about consistency in AI policies.
A recent study found college students who used ChatGPT heavily for assignments ended up procrastinating more, remembering less, and earning lower grades on average. Yet 51% of college students say using AI on assignments is cheating, while about 1 in 5 admit they've done it anyway.
In the big picture, the turbulent introduction of AI into American education may prove to be a historic turning point – perhaps even more impactful than the arrival of computers or the internet in the classroom. Yes, the past two years have seen plenty of missteps and valid concerns: cheating facilitated on an unprecedented scale, teachers and students alike occasionally abdicating effort to an automated helper and institutions caught flat-footed without policies in place.
However, it would be a profound mistake to focus only on the downsides and lose sight of the enormous opportunity at hand. I’d argue that education is not just another sector that AI will disrupt – it is possibly the most promising and crucial application of AI in the long run.
Why such optimism? Well, consider the challenge of providing truly personalized learning; human teachers, as dedicated as they are, can only do so much in a class of 25 or a lecture hall of 200. AI tutors offer the tantalizing prospect of 1-on-1 instruction for every student, anytime and on any subject – essentially democratizing the luxury of a personal tutor that was once available only to the wealthy.
The students in school today will graduate into a world pervaded by AI – in their workplaces, civic and personal lives. It is in our collective interest to ensure the next generation is AI-literate and AI-savvy.
The lesson plan for all of us is clear: proceed with care, but keep our minds – and classroom doors – open to the potential of AI.
Which video is real?
Login or Subscribe to participate in polls.
🤔 Your thought process:
Selected Image 1 (Left):
- “It has that “film look” of 35mm color negative (Kodak process C-41) camera film; and the resolution is too low to be medium format (120/220) film.”
- “This was mostly a guess, but the water movement in the fake one seemed off and the extended arm too long.”
Selected Image 2 (Right):
- “The water droplets in [the other image] seemed like something AI would add for realism. Give my regards to the photographer!”
- “I thought the water spray would put the position of the camera at an impossible position between the boat and surfer.”
💭 Thank you!
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
⚙️ Will AI double your lifespan?
Good morning and Happy Friday! Karen Hao's explosive Atlantic excerpt reveals the chaos behind Sam Altman's brief 2023 ouster, including how OpenAI's chief scientist once discussed building "bunkers" before releasing AGI. The $300 billion company that began as an idealistic nonprofit is now the centerpiece of an "empire of AI".
— The Deep View Crew
In today’s newsletter:
- 🌧️ AI for Good: AI-powered local weather forecasting model
- 🤯 Another week, another Google AI drop
- 🧠 Could AI double human lifespan by 2030?
🌧️ AI for Good: AI-powered local weather forecasting model
Source: YingLong
AI is helping forecast local weather faster and more precisely with a new model called YingLong.
Built on high-resolution hourly data from the HRRR system, YingLong predicts surface-level weather like temperature, pressure, humidity and wind speed on a 3-kilometer grid (each cell covers roughly 3 km x 3 km). It runs significantly faster than traditional forecasting models and has shown strong accuracy in predicting wind across test regions in North America.
Dr. Jianjun Liu, a researcher on the project, explains that “traditional weather forecasting solves complex equations and takes time. YingLong skips the equations and learns directly from past data. It’s like giving the model intuition about what’s likely to happen next.”
Why it matters: Local weather forecasting requires more precision than broad national models can offer. That’s where limited area models (LAMs) come in. While most AI research has focused on global weather systems, YingLong brings that power to cities and counties in a faster, more focused way.
- Traditional weather models can take hours or days to compute.
- YingLong delivers accurate local forecasts in much less time.
- Faster forecasts help cities and agencies respond to storms and plan ahead with greater confidence.
YingLong combines high-resolution local data with boundary information from a global AI model called Pangu-Weather. It focuses its predictions on a smaller inner zone to reduce computing power and improve speed. It predicts 24 weather variables with hourly updates and performs especially well in surface wind speed forecasts. Improvements in temperature and pressure forecasts are underway using refined boundary inputs.
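For readers who like to see the plumbing, here is a minimal, hypothetical sketch of how a limited-area model of this kind is typically driven: an inner high-resolution grid is stepped forward hour by hour while a global model (Pangu-Weather, in YingLong's case) supplies the lateral boundary values. The grid size, variable count and the `toy_step` function below are placeholder assumptions, not YingLong's actual code.

```python
import numpy as np

# Conceptual sketch of a limited-area forecast loop (not YingLong's actual model).
# A learned network would replace `toy_step`; here it just persists the interior
# state and overwrites the edge cells with values supplied by the global model.

GRID = 128   # hypothetical 128 x 128 inner grid at ~3 km spacing
VARS = 24    # YingLong reportedly predicts 24 surface variables
HOURS = 24

def toy_step(state: np.ndarray, boundary: np.ndarray) -> np.ndarray:
    """Stand-in for the learned model step: keep the interior, refresh the edges."""
    nxt = state.copy()
    nxt[:, 0, :], nxt[:, -1, :] = boundary[:, 0, :], boundary[:, -1, :]
    nxt[:, :, 0], nxt[:, :, -1] = boundary[:, :, 0], boundary[:, :, -1]
    return nxt

state = np.zeros((VARS, GRID, GRID))              # initial high-resolution analysis
forecast = []
for hour in range(HOURS):
    boundary = np.random.randn(VARS, GRID, GRID)  # placeholder for Pangu-Weather output
    state = toy_step(state, boundary)             # autoregressive hourly update
    forecast.append(state)
```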
Big picture: AI models like YingLong won’t fully replace traditional forecasting yet, but they’re already making forecasting faster and more efficient. By offering high-resolution predictions without the usual computing demands, these tools can help more people make better decisions about the weather, so you don’t get rained out at the next Taylor Swift concert.
Seamlessly connect your AI agents with external tools
Not-so-fun fact: Less than two-fifths of AI projects go into production.
Why? Simple. Because building real-world AI agents is hard – and that’s before you even start worrying about things like bespoke tool integrations. Lucky for you, there’s a simple and powerful solution… Outbound Apps from Descope.
- Connect your AI agent with 50+ external tools using prebuilt integration templates
- Request data and scopes from third-party tools on users’ behalf
- Store multiple tokens per user with different scopes, calling each token as needed
And best of all, it requires no heavy lifting from your developers. Start using Outbound Apps right here when you create a free Descope account – no credit card required.
🤯 Another week, another Google AI drop
Source: Google
Google marked Global Accessibility Awareness Day by rolling out new AI-powered accessibility features across Android and Chrome. The updates bring Google’s latest Gemini AI model into everyday tools.
- TalkBack + Gemini — Ask your screen reader what’s in an image and get an answer on the spot.
- Expressive Captions — Live Caption now conveys stretched-out sounds like “gooooal” in a sports clip and notes background noises like whistling.
- Page Zoom — A slider scales text up to 300% in Chrome on Android without wrecking layouts.
- Scanned‑PDF OCR — Chrome on desktop automatically recognizes the text in scanned PDFs so screen readers can copy or search it.
Google is expanding its work with Project Euphonia by open-sourcing tools and datasets on GitHub. These tools help developers train models for diverse and non-standard speech. In Africa, Google.org is supporting the Centre for Digital Language Inclusion to create new speech datasets in 10 African languages and support inclusive AI development.
In other Google news, Google’s DeepMind research lab has unveiled AlphaEvolve, a Gemini-powered AI agent that autonomously evolves and tests code. The system combines Gemini 2.0 Flash and 2.0 Pro with automated code evaluation to iteratively improve algorithms. AlphaEvolve has already boosted the efficiency of Google’s data centers and chip design processes, and even discovered a faster method for matrix multiplication – improving on an approach that had stood since 1969.
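The underlying loop is easier to picture with a toy example: generate a candidate program, score it automatically, keep the best, repeat. The sketch below is our own illustration of that evolve-evaluate-select pattern, with a stub standing in for the Gemini mutation step; it is not DeepMind's code.

```python
import random, time

# Illustrative evolve-evaluate-select loop (not DeepMind's AlphaEvolve code).
# Candidates are tiny programs that must sort a list; in the real system an LLM
# proposes code mutations and a problem-specific evaluator scores them.

CANDIDATES = [
    "def solve(xs): return sorted(xs)",
    "def solve(xs): return sorted(xs, reverse=True)",   # deliberately wrong variant
    "def solve(xs):\n    ys = list(xs)\n    ys.sort()\n    return ys",
]

def evaluate(src: str) -> float:
    """Score a candidate: huge penalty if incorrect, otherwise reward speed."""
    ns = {}
    exec(src, ns)                                  # compile the candidate program
    data = [random.random() for _ in range(5000)]
    start = time.perf_counter()
    out = ns["solve"](data)
    elapsed = time.perf_counter() - start
    return -1e9 if out != sorted(data) else -elapsed

def propose_variant(best_src: str) -> str:
    """Stub for the LLM step: in AlphaEvolve, Gemini would rewrite the best program."""
    return random.choice(CANDIDATES)

best = max(CANDIDATES, key=evaluate)
for generation in range(10):
    child = propose_variant(best)
    if evaluate(child) > evaluate(best):           # keep only improvements
        best = child
print(best)
```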
The continuous flow of announcements over the last couple of weeks underscores how deeply Google is weaving AI into the product lineup that underpins its roughly $2 trillion business.
Could This Company Do for Housing What Tesla Did for Cars?
Most car factories like Ford or Tesla reportedly build one car per minute. Isn’t it time we do that for houses?
BOXABL believes they have the potential to disrupt a massive and outdated trillion dollar building construction market by bringing assembly line automation to the home industry.
Since securing their initial prototype order from SpaceX and a subsequent project order of 156 homes from the Department of Defense, BOXABL has made substantial strides in streamlining their manufacturing and order process. BOXABL is now delivering to developers and consumers. And they just reserved the ticker symbol BXBL on Nasdaq*
BOXABL has raised over $170M from over 40,000 investors since 2020. They recently achieved a significant milestone: raising over 50% of their Reg A+ funding limit!
BOXABL is now only accepting investment on their website until the Reg A+ is full.
Invest now before it’s too late
- Philips turns to Nvidia to build AI model for MRI
- AI and genetics are changing the way farmers grow corn
- AI twins have the potential to solve many problems
- Hedra lands $32M to build digital character foundation models
- Huawei’s newest watch has several must-see features
- Howie: Email based assistant to handle your calendar (in beta)
- Goldcast: Marketers are sitting on a goldmine of untapped content. Goldcast’s Content Lab helps you turn one video into 30+ assets—blogs, clips, posts, and more. Try it free*
- Aomni: Agents that help with sales
- Supermemory: Give your AI ALL the info it needs
- Lex: Cursor, but for writing
The right hires make the difference.
Scale your AI capabilities with vetted engineers, scientists, and builders—delivered with enterprise rigor.
- AI-powered candidate matching + human vetting.
- Deep talent pools across LatAm, Africa, SEA.
- Zero upfront fees—pay only when you hire.
🧠 Could AI double human lifespan by 2030?
Source: ChatGPT 4o
In 1824, the average American lived just over 40 years. Two centuries later, that number has nearly doubled. The leap in life expectancy was driven mostly by reduced infant mortality and breakthroughs in public health and medicine. But even with antibiotics, vaccines, and surgery, the idea of living to 150 still sounds like science fiction. Now, a wave of researchers believes AI could make that fiction real.
One of the boldest voices is Dario Amodei, CEO of the AI company Anthropic. In October 2024, Amodei published a blog post predicting that AI would help double human lifespans to 150 by the end of this decade. Just three months later, he doubled down on stage at the World Economic Forum in Davos, claiming AI could deliver the breakthrough in just five years.
His reasoning? Humans already know of drugs that extend rat lifespans by 25 to 50 percent. Some animals, like certain turtles, live more than 200 years. If AI can discover and optimize therapies faster than any human team could before, why not us? Amodei believes once we hit 150, we could reach “longevity escape velocity” – the point where life-extending treatments advance faster than we age. In theory, that could allow people to live as long as they choose (better start a retirement plan for that second century of life).
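The “escape velocity” idea rests on simple arithmetic: if research adds at least one year of remaining life expectancy per calendar year, remaining life stops shrinking. A toy calculation (our own, with an assumed constant annual gain, and obviously not a biological model) makes the threshold visible:

```python
# Toy arithmetic behind "longevity escape velocity" (illustration only, not a
# biological model): you lose one year of remaining life expectancy per calendar
# year, while medical progress adds `gain` years back each year.

def remaining_life(start: float, gain: float, years: int) -> list[float]:
    """Track remaining life expectancy under a constant annual gain."""
    r, path = start, []
    for _ in range(years):
        r = r - 1 + gain      # ageing vs. progress
        path.append(round(r, 1))
    return path

print(remaining_life(40, 0.3, 5))   # gain < 1: remaining life keeps shrinking
print(remaining_life(40, 1.2, 5))   # gain >= 1: "escape velocity", it grows
```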
He is not alone. Futurist Ray Kurzweil has made similar claims, predicting AI could halt aging by 2032. He points to two pathways. First, AI-designed nanobots that patrol the body to repair cells and deliver drugs. Second, the ability to upload the human brain into the cloud, preserving identity beyond biology. Kurzweil has long predicted the coming of a technological singularity. Longevity, in his view, may be the first step.
Yes, but…
Even believers admit these ideas are speculative. Many scientists are calling for caution. S. Jay Olshansky, a leading aging researcher and professor at the University of Illinois Chicago, says there is simply no evidence that AI can slow or stop the biological process of aging. Around the same time Amodei released his blog, Olshansky published a rebuttal in Nature Aging, arguing that enthusiasm is racing ahead of science.
“The longevity game we’re playing today is quite different from the one we played a century ago,” Olshansky wrote. “Now aging gets in the way, and this process is currently immutable.” He warns that claims about radical lifespan extension are not supported by evidence and are, in many ways, indistinguishable from pseudoscience.
Go deeper: AI is already helping improve human health. Researchers are using large models to develop drugs, predict protein structures, and model complex disease systems. Projects like DeepMind’s AlphaFold and Insilico Medicine are promising early examples. But increasing the healthspan – the number of years someone stays healthy – is not the same as increasing the lifespan. So far, no AI system has proven it can delay or reverse aging in humans.
The next leap may depend not on medicine alone but on machines. It is tempting to believe that AI will uncover the secrets of longevity. But believing and proving that are two very different things.
The search for longer, healthier lives is one of the noblest goals of science. AI could very well accelerate drug discovery, unlock hidden mechanisms of disease, and give every person access to high-quality health advice. That alone would be a transformative legacy.
Maybe the real question isn’t whether AI can help us live to 150. It’s whether we’d want to live that long (I don’t think I want to live to 150…) – and if we’re willing to put in the decades of work to find out.
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
- “There are real 'faults' in the grass patterns in [this] video. In the [other] video the arc of the horizon does not look correct”
- “Wow. Video is very hard! I picked [this video] because the detail of the reflections through the trees and off the roof of the car as the camera moved seemed accurate - and like something AI wouldn't have totally nailed.”
Selected Image 2 (Right):
- “There was an odd vertical shadow in the road of the spinning camera view, that made it look like it had a gap where an AI forgot to render the yellow dotted line. But I've become too cynical - this was the real video!”
- “Shadow in the [other] one put me off”
💭 Thank you!
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
*Indicates sponsored content
*Boxabl Disclosure: This is a paid advertisement for BOXABL’s Regulation A offering. Please read the offering circular here. This is a message from BOXABL
*Reserving a Nasdaq ticker does not guarantee a future listing on Nasdaq or indicate that BOXABL meets any of Nasdaq's listing criteria to do so.
⚙️ OpenAI introduces Codex
Good morning. Nvidia CEO Jensen Huang's trip to Taiwan, after visiting the Middle East with Trump, has sparked "Jensanity" as adoring fans mob him for autographs on books, posters, and even baseballs. The Taiwan-born billionaire — whose company is now selling official Jensen-branded merch at a pop-up store — prompted confusion from his US-based colleagues (where he walks around fairly unnoticed).
— The Deep View Crew
In today’s newsletter:
- 🩸 AI for Good: AI spots blood clots before they strike
- 🤖 Penn reimagines research with AI at its core
- 🧠 OpenAI introduces Codex
🩸 AI for Good: AI spots blood clots before they strike
Source: ChatGPT 4o
For heart patients, the first sign of a dangerous clot is often a heart attack or stroke. Now, researchers at the University of Tokyo have unveiled an AI-powered microscope that can watch clots form in a routine blood sample – no catheter needed.
The new system uses a high-speed "frequency-division multiplexed" microscope – essentially a super-fast camera – to capture thousands of blood cell images each second. An AI algorithm then analyzes those images in real time to spot when platelets start piling into clumps, like a traffic jam forming in the bloodstream.
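Conceptually, the real-time analysis is a stream-processing loop: classify each incoming frame as “single platelets” or “aggregate” and keep a rolling aggregate rate. The sketch below is a hypothetical stand-in for that loop, using a crude brightness heuristic where the Tokyo team’s trained classifier would sit.

```python
from collections import deque
import numpy as np

# Minimal sketch of real-time aggregate counting (not the Tokyo group's pipeline).
# `classify_frame` is a stand-in for their trained image classifier: here a frame
# counts as an "aggregate" if a large bright region exceeds a size cutoff.

WINDOW = 1000  # rolling window of recent frames

def classify_frame(frame: np.ndarray, size_cutoff: int = 50) -> bool:
    """Return True if the frame looks like a platelet aggregate (toy heuristic)."""
    bright = frame > frame.mean() + 2 * frame.std()
    return bright.sum() > size_cutoff

recent = deque(maxlen=WINDOW)

def ingest(frame: np.ndarray) -> float:
    """Process one microscope frame and return the rolling aggregate fraction."""
    recent.append(classify_frame(frame))
    return sum(recent) / len(recent)

# Simulated stream: mostly plain frames, with an occasional bright "clump"
for i in range(200):
    frame = np.random.rand(64, 64)
    if i % 20 == 0:
        frame[20:35, 20:35] += 3.0   # synthetic clump
    rate = ingest(frame)
```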
In tests on over 200 patients with coronary artery disease, those with acute coronary syndrome – a dangerous flare-up of heart disease – had far more platelet clumps than patients with stable conditions. Just as importantly, an ordinary arm-vein blood draw yielded virtually the same platelet data as blood taken directly from the heart’s arteries via catheter.
Why it matters: This AI tool could make personalized treatment easier and safer:
- Traditional platelet monitoring relies on invasive or indirect methods
- The AI tool analyzes blood from a basic arm draw
- Real-time imaging allows doctors to observe platelet clumping directly
- The method may reduce reliance on catheter-based procedures
The team of researchers published its findings this week in Nature Communications.
✂️ Cut your QA cycles down from hours to minutes
If slow QA processes and flaky tests are a bottleneck for your engineering team, you need QA Wolf.
QA Wolf's AI-native platform supports both web and mobile apps, delivering 80% automated test coverage in weeks and helping teams ship 5x faster by reducing QA cycles to minutes.
With QA Wolf, you get:
✅ Unlimited parallel test runs
✅ 15-min QA cycles
✅ 24-hour maintenance and on-demand test creation
✅ Zero-flake guarantee
The result? Drata’s team of 80+ engineers saw 4x more test cases and 86% faster QA cycles.
No flakes, no delays, just better QA — that’s QA Wolf.
🤖 Penn reimagines research with AI at its core
Source: UPenn
The University of Pennsylvania has quietly built a human collider for AI.
Launched this spring by cosmologist Bhuvnesh Jain and computer scientist René Vidal, the AI x Science Fellowship unites more than 20 postdoctoral researchers from physics, linguistics, chemistry, engineering and medicine. Each fellow receives two faculty mentors, a modest research budget and campus-wide access to labs and high-performance computing. Weekly Tuesday lunches double as idea exchanges, while open seminars pull in curious researchers from every school.
The fellowship grew out of a 2021 data-science pilot in Arts & Sciences and now spans Engineering and Penn Medicine, with Wharton fellows due in the fall. Jain and Vidal—co-chairs of Penn’s AI Council—plan to scale it into a university-wide Penn AI Fellowship and create a “data-science hub” where roaming AI specialists spend a fifth of their time parachuting into other labs.
Why it matters: As AI research moves rapidly into the private sector, this initiative encourages collaboration on AI research questions that don’t yet have commercial applications. Industry labs chase near-term products. Penn is betting that open-ended, ethically grounded questions—trustworthy AI, machine learning for dark-matter hunts—still belong in academia. The fellowship gives young scientists a network, résumé-ready collaborations and a sandbox for ideas too early or risky for corporate funding.
The Fastest LLM Guardrails Are Now Available For Free
Fast, secure and free: prevent LLM application toxicity and jailbreak attempts with <100ms latency.
Fiddler Guardrails are up to 6x cheaper than alternatives and deploy in your secure environment.
Connect your LLM app today and run free guardrails.
- Google's AI mode replaces iconic ‘I’m Feeling Lucky’ button
- Satya Nadella ditches podcasts for AI-powered chatbot conversations
- Moonvalley raises $53M to expand ethical AI video tools
- Alibaba and Tencent boost shopping with AI-powered advertising
- CarPlay Ultra rolls out with next-gen features
- Tesla: AI Research Engineer, Model Scaling, Self-Driving
- Microsoft: Director - Responsible AI
- Together AI: A fast and efficient way to launch AI models
- Talently AI: A conversational AI interview platform (no more manual screening)
- RevRag: Automated sales via AI calling, email, chat, and WhatsApp
🧠 OpenAI introduces Codex
Source: OpenAI
Vibe coding might be all the rage – the trend of non-coders building apps through AI – but OpenAI's latest release is pointedly not for the casual "build me a website" crowd. The company just launched Codex, a cloud-based software engineering agent built to assist professional developers with real production code.
"This is definitely not for vibe coding. I will say it's more for actual engineers working in prod, and sort of throwing all the annoying tasks you don't want to do," noted Pietro Schirano, one early user, capturing the tool's intent in plain terms.
OpenAI is rolling out Codex as a research preview to ChatGPT subscribers (initially Pro, Team, and Enterprise, with Plus users to follow). Here’s Sam Altman’s tweet responding to the rollout so far.
What makes Codex unique is that it spins up a remote development environment in OpenAI's cloud – complete with your repository, files, and a command line – and can carry out complex coding jobs independently before reporting back. Once enabled via the ChatGPT sidebar, you assign Codex a task with a prompt (for example, "Scan my project for a bug in the last five commits and fix it").
Under the hood, Codex uses a specialized new model called codex-1, derived from OpenAI's latest reasoning model, o3, but tuned specifically for code work. Key capabilities include:
- Multi-step autonomy: Codex can write new features, answer questions about the codebase, fix bugs, and propose code changes via pull request – all by itself
- Parallel agents: You can spawn multiple Codex agents working concurrently (the launch demo showed several fixing different parts of a codebase in parallel).
- Test-driven verification: Codex repeatedly runs the project's test suite until the code passes (or it runs out of ideas), and it provides verifiable logs and citations of what it did (a minimal sketch of this loop follows the list below).
- Configurable via AGENTS.md: You can drop an AGENTS.md file in your repo to guide the AI. This file tells Codex about project-specific conventions, how to run the build or tests, which parts of the codebase matter most, etc. Early users report this dramatically helps Codex avoid rookie mistakes.
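To make the test-driven loop concrete, here is a rough sketch of the “run the tests, patch, retry” pattern described above. The `propose_patch` and `apply_patch` helpers are hypothetical stand-ins for the model call and the sandboxed file edits; nothing here is OpenAI’s actual implementation.

```python
import subprocess

# Illustrative "run the tests until they pass" loop, in the spirit of the
# test-driven verification described above. This is not OpenAI's Codex code.

MAX_ATTEMPTS = 5

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and capture the log."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def propose_patch(failure_log: str) -> str | None:
    """Stand-in for the coding model: return a diff, or None to give up."""
    return None

def apply_patch(diff: str) -> None:
    """Stand-in for applying the model's diff inside the sandboxed repo."""
    raise NotImplementedError

logs = []
for attempt in range(MAX_ATTEMPTS):
    passed, log = run_tests()
    logs.append(log)                  # keep verifiable logs of every run
    if passed:
        break
    diff = propose_patch(log)
    if diff is None:                  # the agent is out of ideas
        break
    apply_patch(diff)
```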
OpenAI has been testing Codex with several early design partners to prove its value in actual development teams:
- Temporal uses Codex to debug issues, write and execute tests, and refactor large codebases, letting Codex handle tedious background tasks so human developers can stay "in flow" on core logic.
- Superhuman is leveraging Codex to tackle small, repetitive tasks, and have found that PMs (non-engineers) can use Codex to contribute lightweight code changes.
- Kodiak Robotics has Codex working on their self-driving codebase, writing debugging tools and improving test coverage.
The big picture: All this comes amid a broader frenzy to build agentic AI developers. Just months ago, startup Cognition released "Devin," branding it "the first AI software engineer." We immediately subscribed to the $500/month service when it launched to the public, drawn in by promises that it could write entire apps in minutes and solve complex coding issues with minimal help. However, we canceled within the first month after finding it didn't live up to the hyped announcements – a common theme in the current AI landscape where capabilities often lag behind marketing claims.
Cognition raised $21 million for Devin despite its early performance on the SWE-Bench coding challenge (an industry benchmark for fixing real GitHub issues) being modest – it solved about 13.9% of test tasks on its own. Hot on its heels, researchers at Princeton built SWE-Agent, an open-source autonomous coder using a GPT-4 backend that scored 12.3% on the same benchmark – nearly matching the venture-backed startup's AI dev agent with a fraction of the resources.
Big tech isn't sitting idle. Google is expected to unveil a major AI coding tool at tomorrow's I/O developer conference, and GitHub Copilot, the incumbent AI assistant, is evolving rapidly as Microsoft folds it into a broader Copilot X vision with chat and voice features inside the IDE.
It's becoming clear that in this new landscape, the advantage of simply owning a big codebase is evaporating. We previously dubbed this "the no-moat era" in our analysis – when an indie dev with AI tools can reimplement a competitor's core features over a weekend, traditional software moats based on headcount start to crumble.
AI agents succeed when they’re scoped, sandboxed, and verifiable. Devin over-promised, under-specified, and hit a wall. Codex under-promises (no “build me Instagram”), gives the agent a test harness, and documents every step. That mindset — treat the AI like a junior dev who must show their work — is how agentic coding will stick over the short to medium term.
Expect pricing to migrate toward “pay per compute” rather than all-you-can-eat. By year’s end, I would expect every IDE, CI pipeline and repo host to surface “spawn agent” buttons. And expect the winners to be the dev teams that invest in good tests, clear docs, and tight review loops.
Software engineering just got yet another teammate. It works fast, complains never, and absolutely needs a code review. Use it wisely. Buyer beware.
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
- “The reflections on the fuselage of the airplane in [the other image] seemed out of place, and the motion blur with the propellers didn't feel correct.”
- “I think this one is real because I have seen images like this in realtime on occasion. I have seen the moon during the day and I have seen it with an aircraft too. In Florida, especially, cloud formations are common and seeing all three have happened before.”
Selected Image 2 (Right):
- “Landing gear was open in the other pic, which put me off.”
- “[The other image’s] tail stabilizer is too high for an aircraft with underwing engines”
💭 A poll before you go
Will you let OpenAI's Codex in your codebase?
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
⚙️ Microsoft is building the "open agentic web"
Good morning. Swedish buy-now-pay-later giant Klarna is now making nearly $1 million in revenue per employee (up from $575K) after replacing 700 customer service workers with AI chatbots. The efficiency push comes just as the company filed for its much-anticipated US IPO... only to promptly postpone it after Washington’s tariff announcement sent markets into a tailspin.
— The Deep View Crew
In today’s newsletter:
- ♋ AI for Good: AI doing big things for equitable cancer care
- 📓 New NotebookLM app helps you understand anything, anywhere
- 📈 Microsoft ditches Bing to build the open agentic web
♋ AI for Good: AI doing big things for equitable cancer care
Source: CancerNetwork
Up to 30% of breast-cancer cases flagged in screening are ultimately overdiagnosed, sending patients through surgery and chemo they never needed. Researchers say AI can shrink that number by spotting subtler tumor patterns and matching them to precision-medicine profiles.
Medical student Viviana Cortiana and physician Yan Leyfman lay out the roadmap in their April 2025 ONCOLOGY review, calling for population-specific models trained on diverse, high-quality data to curb false positives and tailor treatment, especially in low- and middle-income countries.
Their framework for ethical cancer AI rests on four pillars:
- Data privacy and security
- Clinical validation
- Transparency in model design
- Fairness through bias checks
Early roll-outs show the concept works:
- India’s Telangana state has begun an AI pilot across three districts to screen for oral, breast and cervical cancers, with instant triage to specialists—an approach aimed at easing its radiologist shortage.
- AstraZeneca + Qure.ai have processed five million chest X-rays in 20 countries, flagging nearly 50,000 high-risk lung-cancer cases and proving AI triage can scale in resource-strained settings.
“AI has the potential to fundamentally change how we detect, treat and monitor cancer, but realizing that promise… will require collaboration, validation, thoughtful implementation and a commitment to leaving no patient behind,” Leyfman said.
Big picture: Bringing these tools to scale will require collaboration. Health systems can supply de-identified scans, tech firms can refine algorithms, NGOs can underwrite training, and governments can streamline approvals. If those players sync, AI could deliver the same diagnostic confidence enjoyed in top clinics to every community, easing overtreatment costs and catching deadly cancers earlier, resulting in smarter care for all.
Master AI Tools, Set Automations & Build Agents – all in 16 hours (for free)
AI isn’t a buzzword anymore. It’s the skill that gets you hired, helps you earn more, and keeps you future-ready.
Join the 2-Day Free AI Upskilling Sprint by Outskill — a hands-on bootcamp designed to make you AI-smart in just 16 hours.
📅23rd May- Kick Off Call & Session 1
🧠Live sessions- 24th & 25th May
🕜11AM EST to 7PM EST
Originally priced at $499, but the first 100 of you get in for completely FREE! Claim your spot now for $0! 🎁
Inside the sprint, you'll learn:
✅ AI tools to automate tasks & save time
✅ Generative AI & LLMs for smarter outputs
✅ AI-generated content (images, videos, carousels)
✅ Automation workflows to replace manual work
✅ CustomGPTs & agents that work while you sleep
Taught by experts from Microsoft, Google, Meta & more.
🎁 You will also unlock $3,000+ in AI bonuses: 💬 Slack community access, 🧰 top AI tools, and ⚙️ ready-to-use workflows — all free when you attend!
Join in now, (we have limited free seats! 🚨)
📓 New NotebookLM app helps you understand anything, anywhere
Source: Google
NotebookLM just went mobile, giving you a smarter way to learn, organize, and listen. Anytime, anywhere.
After months of user feedback, NotebookLM is now available as a mobile app on both Android and iOS. The app brings key features from the desktop version to your phone or tablet, allowing you to interact with complex information on the go. Early users are already praising its ability to help with research, review, and multitasking in real time.
Whether you’re a student, professional, or knowledge enthusiast, NotebookLM now fits right in your pocket.
The details: NotebookLM is no longer tied to your desktop. With its new mobile release, Google is giving users more flexibility in how they process and interact with information.
- Available now on iOS 17+ and Android 10+
- Listen to Audio Overviews offline or in the background
- Ask questions by tapping "Join" while listening
- Share content directly from other apps into NotebookLM
- Ideal for managing information while commuting or multitasking
Google says this is just the start. More updates are on the way, including expanded file support and tighter integration with other Google products. Additional source types will be supported in future updates, and annotation tools and editing options are expected soon. Feedback is being collected on X and Discord, and future releases may include deeper AI customization and smarter summaries.
Here’s a great video from a couple of weeks ago talking about NotebookLM turning everything into a podcast. Check it out.
Big picture: NotebookLM is evolving into more than a research tool. By going mobile, it becomes a personal learning assistant you can use wherever inspiration hits. The shift is not just about convenience—it is about making high-level thinking mobile. Whether you’re reviewing documents on the train or summarizing sources between meetings, this update turns passive reading into active understanding.
Train and Deploy AI Models in Minutes with RunPod
What if you could access enterprise-grade GPU infrastructure—without the enterprise-grade price tag or complexity?
With RunPod, you can. Our platform gives developers, engineers, and startups instant access to cloud GPUs for everything from model training to real-time inference. No devops degree required.
Build, train, and deploy your own custom AI workflows at a fraction of the cost—without waiting in line for compute.
Launch your first GPU in 30 seconds → RunPod.io
- A new study reveals that most AI models still can’t tell time or read calendars
- Nvidia’s CEO calls Chinese AI researchers “world class” in a nod to global innovation
- Leaked specs point to a lightweight iPhone 17 Air with a 2,800mAh battery
- Why AI advancement doesn’t have to come at the expense of marginalized workers
- China begins assembling its supercomputer in space
- Google: Machine Learning Engineer, LLM, Personal AI, Google Pixel
- Deloitte US: AI Engineering Manager/Solutions Architect - SFL Scientific
- Drift: AI-powered chatbots that qualify leads and book meetings automatically.
- Regie.ai: AI tool for sales outreach, creating entire email sequences and call scripts in seconds
- Tiledesk: Combines live chat and conversational AI to automate customer support across multiple channels
📈 Microsoft ditches Bing to build the open agentic web
Source: Microsoft
If you can’t beat them, host them. At Microsoft Build, Microsoft announced partnerships to host third-party AI models from xAI, Meta, Mistral, and others directly on Azure, treating these former rivals as first-class citizens alongside OpenAI’s ChatGPT models. Developers will be able to mix and match models via Azure’s API and tooling, all with Microsoft’s reliability guarantees and security wrapper:
- Meta’s Llama series – the open-source family of large language models from Meta, known for being adaptable and efficient.
- xAI’s Grok 3 (and Grok 3 Mini) – the new LLM from Elon Musk’s startup xAI, which Microsoft is now hosting in Azure in a notable alliance (Musk co-founded OpenAI; now he’s indirectly back on Microsoft’s platform).
- Mistral – models from the French startup focused on smaller, high-performance LLMs.
- Black Forest – models from Black Forest Labs, a German AI firm.
This brings Azure’s catalog to 1,900+ models available for customers. Microsoft CEO Satya Nadella, who spoke via hologram with Elon Musk during the keynote, touted the multi-model approach as “just a game-changer in terms of how you think about models and model provisioning.” “It’s exciting for us as developers to be able to mix and match and use them all,” he said. In effect, Microsoft is positioning itself as an impartial arms dealer in the AI race – happy to rent you any model you want, so long as you run it on Azure.
Go deeper: Microsoft introduced an automatic model selection system inside Azure (part of the new Azure AI Foundry updates). Dubbed the Model Router, it routes each AI query to the “best” model available based on the task and user preferences. This behind-the-scenes dispatcher can optimize for speed, cost, or quality – for example, sending a quick question to a smaller, cheaper model, but a complex query to a more powerful (and expensive) model. It also handles fallbacks and load balancing if one model is busy. For developers, it promises easier scalability and performance without manual model wrangling.
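A toy dispatcher makes the idea tangible: estimate how demanding a query is, then pick the cheapest model that clears that bar (or the best one, if the caller prefers quality). The model names, prices and heuristic below are illustrative assumptions, not Azure’s actual Model Router logic.

```python
# Toy sketch of the model-routing idea (not Azure's actual Model Router).
# Model names, prices and the complexity heuristic are illustrative assumptions.

MODELS = {
    "small":  {"cost_per_1k_tokens": 0.0002, "quality": 1},
    "medium": {"cost_per_1k_tokens": 0.002,  "quality": 2},
    "large":  {"cost_per_1k_tokens": 0.01,   "quality": 3},
}

def estimate_complexity(prompt: str) -> int:
    """Crude heuristic: longer prompts and 'reasoning' keywords need bigger models."""
    score = 1
    if len(prompt) > 500:
        score += 1
    if any(k in prompt.lower() for k in ("prove", "analyze", "refactor", "multi-step")):
        score += 1
    return score

def route(prompt: str, optimize_for: str = "cost") -> str:
    """Pick the cheapest model that meets the required quality level."""
    needed = estimate_complexity(prompt)
    candidates = [n for n, m in MODELS.items() if m["quality"] >= needed]
    key = "cost_per_1k_tokens" if optimize_for == "cost" else "quality"
    reverse = optimize_for != "cost"
    return sorted(candidates, key=lambda n: MODELS[n][key], reverse=reverse)[0]

print(route("What's the capital of France?"))           # -> small
print(route("Refactor this multi-step data pipeline"))  # -> a bigger model
```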
Yes, but: The catch? All this convenience further ties developers into Microsoft’s ecosystem. The Model Router makes Azure the brain that decides which model handles your requests – a useful service, but one that subtly increases dependency on Microsoft’s cloud. By making multiple models available under one roof (and even one API), Microsoft reduces any incentive for customers to shop around elsewhere. Choice is abundant – as long as Azure is the one providing it.
Another standout Build announcement was NLWeb, an open-source initiative aimed at turning every website into a model-callable endpoint that can talk back in plain language. Microsoft’s CTO Kevin Scott introduced NLWeb as essentially the HTML for the AI era.
The idea: with a few lines of NLWeb code, website owners can expose their content to natural language queries. In practice, it means any site could function like a mini-ChatGPT trained on its own data – your data – rather than ceding all search and Q&A traffic to external bots.
Each NLWeb-enabled site runs as a Model Context Protocol (MCP) server, making its content discoverable to AI assistants that speak the protocol. In one demo, food site Serious Eats answered conversational questions about “spicy, crunchy appetizers for Diwali (vegetarian)” and generated tailored recipe links – all via NLWeb and a language model, without an external search engine in the middle. Microsoft is pitching this as an “agentic web” future where AI agents seamlessly interact with websites and online services on our behalf.
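Stripped to its essence, an NLWeb-style endpoint does two things per question: retrieve relevant items from the site’s own structured data, then let a language model phrase the answer. The sketch below illustrates that flow with a keyword retriever and a stubbed model call; it does not use the real NLWeb SDK or the MCP wire format.

```python
# Conceptual sketch of the NLWeb idea: a site answers natural-language questions
# from its own structured content. This is NOT the NLWeb SDK or MCP protocol;
# `ask_llm` is a stub where a language model would normally sit.

RECIPES = [
    {"title": "Crispy Masala Papdi", "tags": ["spicy", "crunchy", "vegetarian", "diwali"]},
    {"title": "Paneer Tikka Skewers", "tags": ["spicy", "vegetarian"]},
    {"title": "Butter Chicken", "tags": ["rich", "non-vegetarian"]},
]

def retrieve(question: str) -> list[dict]:
    """Keyword retrieval over the site's own data (real systems would use embeddings)."""
    words = set(question.lower().split())
    return [r for r in RECIPES if words & set(r["tags"])]

def ask_llm(question: str, context: list[dict]) -> str:
    """Stub for the model call that would turn retrieved items into an answer."""
    titles = ", ".join(r["title"] for r in context) or "nothing matching"
    return f"For '{question}', the site suggests: {titles}."

def handle_query(question: str) -> str:
    """What an NLWeb-style endpoint conceptually does for each incoming question."""
    return ask_llm(question, retrieve(question))

print(handle_query("spicy crunchy vegetarian appetizers for diwali"))
```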
In other Microsoft news, GitHub Copilot is graduating from autocomplete to autonomous agent. At Build, Microsoft previewed a new Copilot capability (a “coding agent”) that can take on full software tasks by itself. We talked about these AI powered dev tools in yesterday’s edition.
Microsoft is betting big on becoming the infrastructure layer for AI. After last week’s layoff of about 6,000 workers—the firm’s second-biggest cut ever—the company is plowing cash into GPUs, data centers and a catalog of 1,900+ models. The new Model Router lets Azure decide which model handles each query, tightening the lock-in loop.
Bing’s near-absence says it all. Search got only a footnote—mainly news that the standalone Bing Search APIs will be retired this summer, folded into Azure “grounding” services for agents. Microsoft doesn’t need to win consumer search if it can own the pipes every AI request flows through.
Agents stole the Build spotlight, but many reporters we’ve spoken to (for a role we’re hiring… click here to apply if you’re smart and like to write about AI :) call agent hype overblown. Microsoft is leaning in anyway—because agents will need a home, and Azure already has the keys.
Up next: Google I/O is happening today, and it’s a safe bet Sundar Pichai and team will have their own AI twists and turns to announce. We’ll cover how Google’s vision stacks up in our next edition. Stay tuned.
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
- “I spent 5 minutes thinking about how donkeys/mules walk and decided that the guy in [the other Image] would have had both legs on the right hand side moving in the same direction, not oppositionally.”
- “The grass in the foreground looks duplicated and the tree line in the distance looks too uniform and obviously fake. I went with [this image] because the color cast is consistent throughout and not Ai optimized.”
Selected Image 2 (Right):
- “Oof, this was hard. It looked like it had more details, but the other one was a better picture.”
- “the donkey in [the other image]… what happened to his ear and the background is too distorted for the type of shot taken. Even though I am having trouble with the saddle sash on the first [this] one I still think the [other] one is AI.”
Maybe the real question isn’t whether AI can help us live to 150. It’s whether we’d want to live that long (I don’t think I want to live to 150…) – and if we’re willing to put in the decades of work to find out.
Which image is real?
Login or Subscribe to participate in polls.
🤔 Your thought process:
Selected Image 1 (Left):
- “There are real 'faults' in the grass patterns in [this] video. In the [other] video the arc of the horizon does not look correct”
- “Wow. Video is very hard! I picked [this video] because the detail of the reflections through the trees and off the roof of the car as the camera moved seemed accurate - and like something AI wouldn't have totally nailed.”
Selected Image 2 (Right):
- “There was an odd vertical shadow in the road of the spinning camera view, that made it look like it had a gap where an AI forgot to render the yellow dotted line. But I've become too cynical - this was the real video!”
- “Shadow in the [other] one put me off”
💭 Thank you!
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
*Indicates sponsored content
*Boxabl Disclosure: This is a paid advertisement for BOXABL’s Regulation A offering. Please read the offering circular here. This is a message from BOXABL
*Reserving a Nasdaq ticker does not guarantee a future listing on Nasdaq or indicate that BOXABL meets any of Nasdaq's listing criteria to do so.
⚙️ OpenAI introduces Codex
Good morning. Nvidia CEO Jensen Huang's trip to Taiwan, after visiting the Middle East with Trump, has sparked "Jensanity" as adoring fans mob him for autographs on books, posters, and even baseballs. The Taiwan-born billionaire — whose company is now selling official Jensen-branded merch at a pop-up store — prompted confusion from his US-based colleagues (where he walks around fairly unnoticed).
— The Deep View Crew
In today’s newsletter:
- 🩸 AI for Good: AI spots blood clots before they strike
- 🤖 Penn reimagines research with AI at its core
- 🧠 OpenAI introduces Codex
🩸 AI for Good: AI spots blood clots before they strike
Source: ChatGPT 4o
For heart patients, the first sign of a dangerous clot is often a heart attack or stroke. Now, researchers at the University of Tokyo have unveiled an AI-powered microscope that can watch clots form in a routine blood sample – no catheter needed.
The new system uses a high-speed "frequency-division multiplexed" microscope – essentially a super-fast camera – to capture thousands of blood cell images each second. An AI algorithm then analyzes those images in real time to spot when platelets start piling into clumps, like a traffic jam forming in the bloodstream.
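As a rough illustration of the real-time analysis step (not the Tokyo team’s actual pipeline; classify_frame stands in for their trained image model), the logic amounts to a rolling tally over the frame stream:

```python
from collections import deque
from typing import Callable, Iterable, Iterator

def monitor_clumping(frames: Iterable, classify_frame: Callable,
                     window: int = 1000, alert_fraction: float = 0.05) -> Iterator[str]:
    """Toy monitor: flag when the share of recent frames showing
    platelet aggregates exceeds a threshold.

    frames         -- stream of images from the high-speed microscope
    classify_frame -- model returning True if a frame contains an aggregate
    """
    recent = deque(maxlen=window)  # rolling window of recent classifications
    for frame in frames:
        recent.append(bool(classify_frame(frame)))
        if len(recent) == window and sum(recent) / window > alert_fraction:
            yield "elevated platelet aggregation"  # would feed a clinician-facing readout
```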
In tests on over 200 patients with coronary artery disease, those with acute coronary syndrome – a dangerous flare-up of heart disease – had far more platelet clumps than patients with stable conditions. Just as importantly, an ordinary arm-vein blood draw yielded virtually the same platelet data as blood taken directly from the heart’s arteries via catheter.
Why it matters: This AI tool could make personalized treatment easier and safer:
- Traditional platelet monitoring relies on invasive or indirect methods
- The AI tool analyzes blood from a basic arm draw
- Real-time imaging allows doctors to observe platelet clumping directly
- The method may reduce reliance on catheter-based procedures
The team of researchers published its findings this week in Nature Communications.
✂️ Cut your QA cycles down from hours to minutes
If slow QA processes and flaky tests are a bottleneck for your engineering team, you need QA Wolf.
QA Wolf's AI-native platform supports both web and mobile apps, delivering 80% automated test coverage in weeks and helping teams ship 5x faster by reducing QA cycles to minutes.
With QA Wolf, you get:
✅ Unlimited parallel test runs
✅ 15-min QA cycles
✅ 24-hour maintenance and on-demand test creation
✅ Zero-flake guarantee
The result? Drata’s team of 80+ engineers saw 4x more test cases and 86% faster QA cycles.
No flakes, no delays, just better QA — that’s QA Wolf.
🤖 Penn reimagines research with AI at its core
Source: UPenn
The University of Pennsylvania has quietly built a human collider for AI.
Launched this spring by cosmologist Bhuvnesh Jain and computer scientist René Vidal, the AI x Science Fellowship unites more than 20 postdoctoral researchers from physics, linguistics, chemistry, engineering and medicine. Each fellow receives two faculty mentors, a modest research budget and campus-wide access to labs and high-performance computing. Weekly Tuesday lunches double as idea exchanges, while open seminars pull in curious researchers from every school.
The fellowship grew out of a 2021 data-science pilot in Arts & Sciences and now spans Engineering and Penn Medicine, with Wharton fellows due in the fall. Jain and Vidal—co-chairs of Penn’s AI Council—plan to scale it into a university-wide Penn AI Fellowship and create a “data-science hub” where roaming AI specialists spend a fifth of their time parachuting into other labs.
Why it matters: As AI research moves rapidly into the private sector, this initiative encourages collaboration on AI research questions that don’t yet have commercial applications. Industry labs chase near-term products. Penn is betting that open-ended, ethically grounded questions—trustworthy AI, machine learning for dark-matter hunts—still belong in academia. The fellowship gives young scientists a network, résumé-ready collaborations and a sandbox for ideas too early or risky for corporate funding.
The Fastest LLM Guardrails Are Now Available For Free
Fast, secure and free: prevent LLM application toxicity and jailbreak attempts with <100ms latency.
Fiddler Guardrails are up to 6x cheaper than alternatives and deploy in your secure environment.
Connect your LLM app today and run free guardrails.
- Google's AI mode replaces iconic ‘I’m Feeling Lucky’ button
- Satya Nadella ditches podcasts for AI-powered chatbot conversations
- Moonvalley raises $53M to expand ethical AI video tools
- Alibaba and Tencent boost shopping with AI-powered advertising
- CarPlay Ultra rolls out with next-gen features
- Tesla: AI Research Engineer, Model Scaling, Self-Driving
- Microsoft: Director - Responsible AI
- Together AI: A fast and efficient way to launch AI models
- Talently AI: A conversational AI interview platform (no more manual screening)
- RevRag: Automated sales via AI calling, email, chat, and WhatsApp
🧠 OpenAI introduces Codex
Source: OpenAI
Vibe coding might be all the rage – the trend of non-coders building apps through AI – but OpenAI's latest release is pointedly not for the casual "build me a website" crowd. The company just launched Codex, a cloud-based software engineering agent built to assist professional developers with real production code.
"This is definitely not for vibe coding. I will say it's more for actual engineers working in prod, and sort of throwing all the annoying tasks you don't want to do," noted Pietro Schirano, one early user, capturing the tool's intent in plain terms.
OpenAI is rolling out Codex as a research preview to ChatGPT subscribers (initially Pro, Team, and Enterprise, with Plus users to follow). Here’s Sam Altman’s tweet on the response to the rollout so far.
What makes Codex unique is that it spins up a remote development environment in OpenAI's cloud – complete with your repository, files, and a command line – and can carry out complex coding jobs independently before reporting back. Once Codex is enabled via the ChatGPT sidebar, you assign it a task with a prompt (for example, “Scan my project for a bug in the last five commits and fix it”).
Under the hood, Codex uses a specialized new model called codex-1, derived from OpenAI's latest reasoning model, o3, but tuned specifically for code work. Key capabilities include:
- Multi-step autonomy: Codex can write new features, answer questions about the codebase, fix bugs, and propose code changes via pull request – all by itself
- Parallel agents: You can spawn multiple Codex agents working concurrently (the launch demo showed several fixing different parts of a codebase in parallel).
- Test-driven verification: Codex repeatedly runs the project's test suite until the code passes (or until it exhausts its ideas), and it provides verifiable logs and citations of what it did.
- Configurable via AGENTS.md: You can drop an AGENTS.md file in your repo to guide the AI. This file tells Codex about project-specific conventions, how to run the build or tests, which parts of the codebase matter most, etc. Early users report this dramatically helps Codex avoid rookie mistakes.
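To make that concrete, here’s a hypothetical sketch of what an AGENTS.md might contain for a Python project; the commands, paths, and conventions below are illustrative assumptions, not a format OpenAI mandates.

```markdown
# AGENTS.md (hypothetical example)

## Setup
- Install dependencies with `pip install -e ".[dev]"`

## Testing
- Run `pytest -q` and make sure it passes before proposing a change
- Never edit files under `vendor/` or regenerate lockfiles

## Conventions
- Match the existing type hints and run `ruff check .` before finishing
- Keep each task to a single focused pull request and list the tests you ran
```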
OpenAI has been testing Codex with several early design partners to prove its value in actual development teams:
- Temporal uses Codex to debug issues, write and execute tests, and refactor large codebases, letting Codex handle tedious background tasks so human developers can stay "in flow" on core logic.
- Superhuman is leveraging Codex to tackle small, repetitive tasks and has found that PMs (non-engineers) can use Codex to contribute lightweight code changes.
- Kodiak Robotics has Codex working on their self-driving codebase, writing debugging tools and improving test coverage.
The big picture: All this comes amid a broader frenzy to build agentic AI developers. Just months ago, startup Cognition released "Devin," branding it "the first AI software engineer." We immediately subscribed to the $500/month service when it launched to the public, drawn in by promises that it could write entire apps in minutes and solve complex coding issues with minimal help. However, we canceled within the first month after finding it didn't live up to the hyped announcements – a common theme in the current AI landscape where capabilities often lag behind marketing claims.
Cognition raised $21 million for Devin despite its early performance on the SWE-Bench coding challenge (an industry benchmark for fixing real GitHub issues) being modest – it solved about 13.9% of test tasks on its own. Hot on its heels, researchers at Princeton built SWE-Agent, an open-source autonomous coder using a GPT-4 backend that scored 12.3% on the same benchmark – nearly matching the venture-backed startup's AI dev agent with a fraction of the resources.
Big tech isn't sitting idle. Google is expected to unveil a major AI coding tool at tomorrow's I/O developer conference, and GitHub Copilot, the incumbent AI assistant, is evolving rapidly as Microsoft folds it into a broader Copilot X vision with chat and voice features inside the IDE.
It's becoming clear that in this new landscape, the advantage of simply owning a big codebase is evaporating. We previously dubbed this "the no-moat era" in our analysis – when an indie dev with AI tools can reimplement a competitor's core features over a weekend, traditional software moats based on headcount start to crumble.
AI agents succeed when they’re scoped, sandboxed, and verifiable. Devin over-promised, under-specified, and hit a wall. Codex under-promises (no “build me Instagram”), gives the agent a test harness, and documents every step. That mindset — treat the AI like a junior dev who must show their work — is how agentic coding will stick in the short to medium term.
Expect pricing to migrate toward “pay per compute” rather than all-you-can-eat. By year’s end, I would expect every IDE, CI pipeline and repo host to surface “spawn agent” buttons. And expect the winners to be the dev teams that invest in good tests, clear docs, and tight review loops.
Software engineering just got yet another teammate. It works fast, complains never, and absolutely needs a code review. Use it wisely. Buyer beware.
Which image is real?
Login or Subscribe to participate in polls.
🤔 Your thought process:
Selected Image 1 (Left):
- “The reflections on the fuselage of the airplane in [the other image] seemed out of place, and the motion blur with the propellers didn't feel correct.”
- “I think this one is real because I have seen images like this in realtime on occasion. I have seen the moon during the day and I have seen it with an aircraft too. In Florida, especially, cloud formations are common and seeing all three have happened before.”
Selected Image 2 (Right):
- “Landing gear was open in the other pic, which put me off.”
- “[The other image’s] tail stabilizer is too high for an aircraft with underwing engines”
💭 A poll before you go
Will you let OpenAI's Codex in your codebase?
Login or Subscribe to participate in polls.
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
⚙️ Microsoft is building the "open agentic web"
Good morning. Swedish buy-now-pay-later giant Klarna is now making nearly $1 million in revenue per employee (up from $575K) after replacing 700 customer service workers with AI chatbots. The efficiency push comes just as the company filed for its much-anticipated US IPO... only to promptly postpone it after Washington’s tariff announcement sent markets into a tailspin.
— The Deep View Crew
In today’s newsletter:
- ♋ AI for Good: AI doing big things for equitable cancer care
- 📓 New NotebookLM app helps you understand anything, anywhere
- 📈 Microsoft ditches Bing to build the open agentic web
♋ AI for Good: AI doing big things for equitable cancer care
Source: CancerNetwork
Up to 30% of breast-cancer cases flagged in screening are ultimately overdiagnosed, sending patients through surgery and chemo they never needed. Researchers say AI can shrink that number by spotting subtler tumor patterns and matching them to precision-medicine profiles.
Medical student Viviana Cortiana and physician Yan Leyfman lay out the roadmap in their April 2025 ONCOLOGY review, calling for population-specific models trained on diverse, high-quality data to curb false positives and tailor treatment, especially in low- and middle-income countries.
Their framework for ethical cancer AI rests on four pillars:
- Data privacy and security
- Clinical validation
- Transparency in model design
- Fairness through bias checks
Early roll-outs show the concept works:
- India’s Telangana state has begun an AI pilot across three districts to screen for oral, breast and cervical cancers, with instant triage to specialists—an approach aimed at easing its radiologist shortage.
- AstraZeneca + Qure.ai have processed five million chest X-rays in 20 countries, flagging nearly 50,000 high-risk lung-cancer cases and proving AI triage can scale in resource-strained settings.
“AI has the potential to fundamentally change how we detect, treat and monitor cancer, but realizing that promise… will require collaboration, validation, thoughtful implementation and a commitment to leaving no patient behind,” Leyfman said.
Big picture: Bringing these tools to scale will require collaboration. Health systems can supply de-identified scans, tech firms refine algorithms, NGOs underwrite training and governments streamline approvals. If those players sync, AI could deliver the same diagnostic confidence enjoyed in top clinics to every community, easing overtreatment costs and catching deadly cancers earlier, resulting in smarter care for all.
Master AI Tools, Set Automations & Build Agents – all in 16 hours (for free)
AI isn’t a buzzword anymore. It’s the skill that gets you hired, helps you earn more, and keeps you future-ready.
Join the 2-Day Free AI Upskilling Sprint by Outskill — a hands-on bootcamp designed to make you AI-smart in just 16 hours.
📅 23rd May – Kick-off Call & Session 1
🧠 Live sessions – 24th & 25th May
🕜 11 AM EST to 7 PM EST
Originally priced at $499, but the first 100 of you get in for completely FREE! Claim your spot now for $0! 🎁
Inside the sprint, you'll learn:
✅ AI tools to automate tasks & save time
✅ Generative AI & LLMs for smarter outputs
✅ AI-generated content (images, videos, carousels)
✅ Automation workflows to replace manual work
✅ CustomGPTs & agents that work while you sleep
Taught by experts from Microsoft, Google, Meta & more.
🎁 You will also unlock $3,000+ in AI bonuses: 💬 Slack community access, 🧰 top AI tools, and ⚙️ ready-to-use workflows — all free when you attend!
Join in now, (we have limited free seats! 🚨)
📓 New NotebookLM app helps you understand anything, anywhere
Source: Google
NotebookLM just went mobile, giving you a smarter way to learn, organize, and listen. Anytime, anywhere.
After months of user feedback, NotebookLM is now available as a mobile app on both Android and iOS. The app brings key features from the desktop version to your phone or tablet, allowing you to interact with complex information on the go. Early users are already praising its ability to help with research, review, and multitasking in real time.
Whether you’re a student, professional, or knowledge enthusiast, NotebookLM now fits right in your pocket.
The details: NotebookLM is no longer tied to your desktop. With its new mobile release, Google is giving users more flexibility in how they process and interact with information.
- Available now on iOS 17+ and Android 10+
- Listen to Audio Overviews offline or in the background
- Ask questions by tapping "Join" while listening
- Share content directly from other apps into NotebookLM
- Ideal for managing information while commuting or multitasking
Google says this is just the start. More updates are on the way, including expanded file support and tighter integration with other Google products. Additional source types will be supported in future updates, and annotation tools and editing options are expected soon. Feedback is being collected on X and Discord, and future releases may include deeper AI customization and smarter summaries.
Here’s a great video from a couple of weeks ago talking about NotebookLM turning everything into a podcast. Check it out.
Big picture: NotebookLM is evolving into more than a research tool. By going mobile, it becomes a personal learning assistant you can use wherever inspiration hits. The shift is not just about convenience—it is about making high-level thinking mobile. Whether you’re reviewing documents on the train or summarizing sources between meetings, this update turns passive reading into active understanding.
Train and Deploy AI Models in Minutes with RunPod
What if you could access enterprise-grade GPU infrastructure—without the enterprise-grade price tag or complexity?
With RunPod, you can. Our platform gives developers, engineers, and startups instant access to cloud GPUs for everything from model training to real-time inference. No devops degree required.
Build, train, and deploy your own custom AI workflows at a fraction of the cost—without waiting in line for compute.
Launch your first GPU in 30 seconds → RunPod.io
- A new study reveals that most AI models still can’t tell time or read calendars
- Nvidia’s CEO calls Chinese AI researchers “world class” in a nod to global innovation
- Leaked specs point to a lightweight iPhone 17 Air with a 2,800mAh battery
- Why AI advancement doesn’t have to come at the expense of marginalized workers
- China begins assembling its supercomputer in space
- Google: Machine Learning Engineer, LLM, Personal AI, Google Pixel
- Deloitte US: AI Engineering Manager/Solutions Architect - SFL Scientific
- Drift: AI-powered chatbots that qualify leads and book meetings automatically.
- Regie.ai: AI tool for sales outreach, creating entire email sequences and call scripts in seconds
- Tiledesk: Combines live chat and conversational AI to automate customer support across multiple channels
📈 Microsoft ditches Bing to build the open agentic web
Source: Microsoft
If you can’t beat them, host them. At Microsoft Build, Microsoft announced partnerships to host third-party AI models from xAI, Meta, Mistral, and others directly on Azure, treating these former rivals as first-class citizens alongside OpenAI’s ChatGPT models. Developers will be able to mix and match models via Azure’s API and tooling, all with Microsoft’s reliability guarantees and security wrapper:
- Meta’s Llama series – the open-source family of large language models known for being adaptable and efficient.
- xAI’s Grok 3 (and Grok 3 Mini) – the new LLMs from Elon Musk’s startup xAI, which Microsoft is now hosting on Azure in a notable alliance (Musk co-founded OpenAI; now he’s indirectly back on Microsoft’s platform).
- Mistral – the French startup known for smaller, high-performance LLMs.
- Black Forest Labs – models from the German AI firm best known for its FLUX image generators.
This brings Azure’s catalog to 1,900+ models available for customers. Microsoft CEO Satya Nadella, who spoke via hologram with Elon Musk during the keynote, called the multi-model approach “just a game-changer in terms of how you think about models and model provisioning.” “It’s exciting for us as developers to be able to mix and match and use them all,” he added. In effect, Microsoft is positioning itself as an impartial arms dealer in the AI race – happy to rent you any model you want, so long as you run it on Azure.
Go deeper: Microsoft introduced an automatic model selection system inside Azure (part of the new Azure AI Foundry updates). Dubbed the Model Router, it routes each AI query to the “best” model available based on the task and user preferences. This behind-the-scenes dispatcher can optimize for speed, cost, or quality – for example, sending a quick question to a smaller, cheaper model, but a complex query to a more powerful (and expensive) model. It also handles fallbacks and load balancing if one model is busy. For developers, it promises easier scalability and performance without manual model wrangling.
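For intuition, here’s a toy sketch of that routing decision. This is not Azure’s actual Model Router API; the model names, prices, and heuristics are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str                  # placeholder names, not Azure SKUs
    cost_per_1k_tokens: float  # illustrative numbers, not real pricing
    quality: int               # relative capability score (higher = stronger)
    available: bool = True

CATALOG = [
    Model("small-fast", 0.1, 1),
    Model("mid-tier", 0.5, 2),
    Model("frontier", 3.0, 3),
]

def route(query: str) -> Model:
    """Send short, simple prompts to the cheapest available model and long
    or explicitly complex ones to the strongest, skipping models that are
    down (a crude version of fallback and load balancing)."""
    looks_complex = len(query) > 500 or "step by step" in query.lower()
    candidates = [m for m in CATALOG if m.available] or CATALOG
    key = (lambda m: -m.quality) if looks_complex else (lambda m: m.cost_per_1k_tokens)
    return min(candidates, key=key)

print(route("What's 2 + 2?").name)                          # -> small-fast
print(route("Walk me through this step by step ...").name)  # -> frontier
```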
Yes, but: The catch? All this convenience further ties developers into Microsoft’s ecosystem. The Model Router makes Azure the brain that decides which model handles your requests – a useful service, but one that subtly increases dependency on Microsoft’s cloud. By making multiple models available under one roof (and even one API), Microsoft reduces any incentive for customers to shop around elsewhere. Choice is abundant – as long as Azure is the one providing it.
Another standout Build announcement was NLWeb, an open-source initiative aimed at turning every website into a model-callable endpoint that can talk back in plain language. Microsoft’s CTO Kevin Scott introduced NLWeb as essentially the HTML for the AI era.
The idea: with a few lines of NLWeb code, website owners can expose their content to natural language queries. In practice, it means any site could function like a mini-ChatGPT trained on its own data – your data – rather than ceding all search and Q&A traffic to external bots.
Each NLWeb-enabled site runs as a Model Context Protocol (MCP) server, making its content discoverable to AI assistants that speak the protocol. In one demo, food site Serious Eats answered conversational questions about “spicy, crunchy appetizers for Diwali (vegetarian)” and generated tailored recipe links – all via NLWeb and a language model, without an external search engine in the middle. Microsoft is pitching this as an “agentic web” future where AI agents seamlessly interact with websites and online services on our behalf.
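Conceptually, the pattern is “answer questions from the site’s own data.” The sketch below is a toy stand-in, assuming made-up recipe data and a simple keyword matcher where an NLWeb deployment would wire in a real language model and the MCP protocol.

```python
# Made-up content; a real NLWeb site would sit on its actual catalog.
RECIPES = [
    {"title": "Crispy Chickpea Chaat", "tags": {"spicy", "crunchy", "vegetarian", "appetizer"}},
    {"title": "Paneer Tikka Skewers", "tags": {"spicy", "vegetarian", "appetizer"}},
    {"title": "Butter Chicken", "tags": {"spicy", "main"}},
]

def answer(question: str) -> list[str]:
    """Return recipe titles whose tags all appear in the question text.
    A keyword matcher stands in for the language model behind the endpoint."""
    q = question.lower()
    wanted = {t for t in ("spicy", "crunchy", "vegetarian", "appetizer") if t in q}
    return [r["title"] for r in RECIPES if wanted <= r["tags"]]

print(answer("spicy, crunchy vegetarian appetizers for Diwali"))
# -> ['Crispy Chickpea Chaat']
```

The point of the protocol is that an AI assistant can call an endpoint like this on any participating site, so the answers come from the site’s own data rather than a third-party crawl.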
In other Microsoft news, GitHub Copilot is graduating from autocomplete to autonomous agent. At Build, Microsoft previewed a new Copilot capability (a “coding agent”) that can take on full software tasks by itself. We talked about these AI powered dev tools in yesterday’s edition.
Microsoft is betting big on becoming the infrastructure layer for AI. After last week’s layoff of about 6,000 workers—the firm’s second-biggest cut ever—the company is plowing cash into GPUs, data centers and a catalog of 1,900+ models. The new Model Router lets Azure decide which model handles each query, tightening the lock-in loop.
Bing’s near-absence says it all. Search got only a footnote—mainly news that the standalone Bing Search APIs will be retired this summer, folded into Azure “grounding” services for agents. Microsoft doesn’t need to win consumer search if it can own the pipes every AI request flows through.
Agents stole the Build spotlight, but many reporters we’ve spoken to (for a role we’re hiring… click here to apply if you’re smart and like to write about AI :) call agent hype overblown. Microsoft is leaning in anyway—because agents will need a home, and Azure already has the keys.
Up next: Google I/O is happening today, and it’s a safe bet Sundar Pichai and team will have their own AI twists and turns to announce. We’ll cover how Google’s vision stacks up in our next edition. Stay tuned.
Which image is real?
Login or Subscribe to participate in polls.
🤔 Your thought process:
Selected Image 1 (Left):
- “I spent 5 minutes thinking about how donkeys/mules walk and decided that the guy in [the other Image] would have had both legs on the right hand side moving in the same direction, not oppositionally.”
- “The grass in the foreground looks duplicated and the tree line in the distance looks too uniform and obviously fake. I went with [this image] because the color cast is consistent throughout and not Ai optimized.”
Selected Image 2 (Right):
- “Oof, this was hard. It looked like it had more details, but the other one was a better picture.”
- “the donkey in [the other image]… what happened to his ear and the background is too distorted for the type of shot taken. Even though I am having trouble with the saddle sash on the first [this] one I still think the [other] one is AI.”