Nat Rubio-Licht
Senior Reporter

Nat Rubio-Licht is a senior reporter at The Deep View. Nat previously led CIO Upside, a newsletter dedicated to enterprise tech, for The Daily Upside. They've also worked for Protocol, The LA Business Journal, and Seattle Magazine. Reach out to Nat at nat@thedeepview.ai.

OpenAI’s Sora app goes after TikTok, Meta

We’ve come a long way from Will Smith eating spaghetti.

On Tuesday, OpenAI debuted its second-generation Sora video model, which the company has called “the GPT‑3.5 moment for video,” with improved world simulation capabilities, a better understanding of physics and more user controllability.

Alongside the launch of Sora 2 came the Sora app, a social media platform designed for users to create and share AI-generated videos, potentially challenging TikTok’s dominance in short-form video. In addition to creating entirely AI-generated content, the app has a cameo feature that allows users to cast themselves and their friends in their AI creations.

The launch comes just days after the debut of Meta’s Vibes, a short-form AI video app that has been widely criticized for hawking AI slop. While Sora is currently invite-only, the user response has been kinder, at least so far.

“It is easy to imagine the degenerate case of AI video generation that ends up with us all being sucked into an RL-optimized slop feed,” CEO Sam Altman said on his personal blog. “The team has put great care and thought into trying to figure out how to make a delightful product that doesn’t fall into that trap.”

However, while Altman wrote that creativity is on the verge of a “Cambrian explosion,” Sora so far has been the birthplace of quite a few dupes.

Reporter Alex Heath said in his newsletter that while he wasn’t able to render “Superman” when prompted, he created a lookalike with the prompt “flying superhero with a red cape.” A Twitter user was also able to create a Hamilton rip-off in Sora. 

Altman noted in his blog that one of the “mitigations” in the Sora app is deepfake and likeness misuse prevention. However, as several AI companies face legal battles with artists and creators over copyright infringement, these new and improved video models could add fuel to the fire.

CEOs are all in on agents

The C-suite is ready for agentic AI.

A study published on Thursday by the International Data Corporation, in partnership with Salesforce, found that CEOs are overwhelmingly bullish on bringing “digital labor” into their workforces, with 99% of the more than 150 surveyed saying they’re prepared for the transformation.

These CEOs, all from organizations ranging in size from 100 to 10,000 employees, see agents as a key part of this vision:

  • 65% said they are looking at AI agents as a means of transforming their business models entirely, and 73% said that digital labor would transform their company’s structure.
  • 72% of respondents believe that most employees will have an AI agent reporting to them in the next five years, and 57% of CEOs reported that digital labor would increase the need for workers in leadership roles.
  • Some of the departments that CEOs expect to see the most impact include security, software development and customer service.

Given the nascent nature of this technology, the fact that so many CEOs are sold on agentic AI is “striking,” said Alan Webber, research director at IDC. “They're looking at AI agents to reshape their business, to redo what it is they do, reform workflows and business processes.”

With this massive transformation, 80% of CEOs report that the future of work involves humans and agents coexisting rather than complete displacement of jobs, with a projected 4 in 5 employees either remaining in their current roles or being redeployed to new ones, according to the report.

While that theoretical figure still leaves 20% of workers out of luck, Webber noted that there are many roles where the impact of shifting to an “agentic enterprise” is still unknown. For example, Webber said, with the rise of AI-powered coding agents handling development, “we don't know exactly what that augmentation and what the human role there looks like yet.”

Meta to use AI to inform ads, content

Meta will soon use your AI chats to inform your scrolling.

The social media giant announced on Wednesday that it will start personalizing ads and content recommendations using user interactions with its generative AI features. Users will be notified starting October 7, before this change goes into effect in mid-December. This policy won’t apply to users in South Korea, the U.K. and the E.U. due to privacy laws in those regions.

To keep your ads from devolving into the Thanksgiving dinner table, Meta noted that topics like “religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership,” discussed with its AI bots, won’t be used in ad algorithms.

Meta claimed that its AI products garner more than 1 billion users monthly. Meta AI users will not have the option to opt out, the company confirmed.

A move like this is par for the course for Meta, which already relies on user interactions to hone its ad experiences and content algorithms. “Soon, interactions with AIs will be another signal we use to improve people’s experience,” the company said in its announcement.

With this move, Meta is operating by the same playbook that it always has: utilizing every tool at its disposal to target advertising to a T, and thereby rake in as much cash as possible. In the most recent quarter, the company’s ad revenue totaled nearly $46.6 billion, representing an increase of over 18% from the same quarter in the previous year.

With fears of an AI bubble creeping in, the stakes are growing higher. Companies are investing billions in developing massive AI models, with little evidence of return on investment. Meta noted in its July earnings call that its expenses are expected to range between $114 billion and $118 billion for 2025 and increase above this for 2026, primarily due to investments in AI. 

This isn’t the first time Meta has sought to incorporate AI into its digital ad playbook. The company began rolling out some generative features in its Ads Manager back in 2023, and said this past summer that it’s working on an AI tool to create entire ad campaigns from scratch. Still, it’s unclear whether these additions will be fruitful enough to make these models worth the price tag.

Legal AI frenzy grows as Eve hits $1B

Legal AI startup Eve has joined the unicorn club, raising $103 million in a Series B round led by Spark Capital and reaching a $1 billion valuation.

The investment was supported by Andreessen Horowitz, Lightspeed Venture Partners and Menlo Ventures.

Eve’s platform specializes in plaintiff-side law, managing and automating tasks across a case’s life cycle, including intake, records collection, document drafting, legal research and discovery.

With the sheer volume of documents and information law firms have to handle, the legal field is ripe for AI automation. Eve joins several startups aiming to bring AI to the practice of law, with legal tech investments reaching $2.4 billion this year, according to Crunchbase.

Jay Madheswaran, CEO and co-founder of Eve, said in the announcement that the company’s “AI-Native” law movement has attracted more than 350 firms as partners, which have used the tech to process more than 200,000 documents.

Eve’s tech has helped firms recover upward of $3.5 billion in settlements and judgments, including a $27 million settlement won by Hershey Law and a $15 million settlement by the Geiger Legal Group last year.

“AI has fundamentally changed the equation of plaintiff law,” Madheswaran said in the announcement. “For the first time, law firms have technology that can think with them.”

Despite its potential, deploying AI in legal contexts poses several risks. For one, AI still faces significant data security challenges, which can cause trouble when handling sensitive documents or confidential information. Hallucination and accuracy issues also present a hurdle – one that Anthropic’s lawyers already faced earlier this year after the company’s models hallucinated an incorrect footnote in its legal battle with music publishers.

ChatGPT gets parental controls

AI and teenagers have something in common: They can be unpredictable.

Looking to rein in both, OpenAI on Monday launched parental controls for ChatGPT, allowing parents and teens to link their accounts to limit, monitor and manage how the chatbot is used. The AI giant launched these controls in partnership with Common Sense Media and other advocacy groups, as well as the attorneys general of California and Delaware.

Parents can now control a number of settings on their teens’ accounts, including:

  • Setting quiet hours, removing voice mode and image generation capabilities, turning off ChatGPT’s ability to save memories and opting out of model training.
  • OpenAI will also automatically limit “graphic content, viral challenges, sexual, romantic or violent role play, and extreme beauty ideals” for teen accounts.

If OpenAI’s tech detects something is “seriously wrong,” such as recognizing signs of self-harm or “acute distress,” parents will be notified immediately unless they have opted out. In more serious cases, such as signs of imminent danger, OpenAI is working on a process to contact emergency services.

These guardrails come on the heels of a lawsuit alleging that OpenAI’s ChatGPT is responsible for the death of a 16-year-old boy, whose parents claim he was using the chatbot to explore suicide methods.

These safeguards highlight that an increasing number of teens turn to AI for companionship. A July Common Sense Media survey of more than 1,000 teens found that 72% reported using AI companions, with 33% relying on these companions for emotional support, friendship or romantic interactions.

Robbie Torney, senior director of AI programs at Common Sense Media, said in a statement that safeguards like these are “just one piece of the puzzle” in safe AI use.

In its announcement, OpenAI said it plans to “iterate and improve over time,” noting that it’s working on the age prediction system it announced in mid-September. “Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them.”

Lufthansa leans on AI, cuts 4,000 jobs

Lufthansa is cutting 4,000 jobs and leaning on AI as it sets higher profitability targets, the company announced on Monday.

The job cuts would primarily impact administrative roles in Germany, focusing on positions that “will no longer be necessary in the future” due to the duplication of work, the company noted.

“The profound changes brought about by digitalization and the increased use of artificial intelligence will lead to greater efficiency in many areas and processes,” the company said in its announcement.

Lufthansa is far from the first company to lean into AI to automate certain positions. Klarna and Salesforce both cut thousands of staff this year, with their CEOs confirming that AI was the reason those jobs weren’t refilled. Accenture said last week that it would “exit” staff who couldn’t be reskilled on the tech, and that 11,000 had already been cut.

The string of cuts signals that companies are looking to AI as a means of automating administrative, repetitive and routine tasks. Research from Microsoft published in July found that positions such as customer service, telephone operators and sales representatives are among those that are particularly vulnerable to AI automation. 

As companies seek to prove returns on their AI investments, they may be looking to headcount as a way to fulfill those promises.

AI use among developers surges

Developers are becoming more confident in their AI coworkers.

A recent survey of nearly 5,000 software developers found that 90% are using AI tools, according to Google’s State of AI-assisted Software Development report. Respondents reported using these tools for a median of two hours per day.

According to the report, some of the top reported use cases included:

  • Writing new code (71%), modifying existing code (66%) and writing documentation (64%)
  • Additionally, maintenance tasks, such as debugging (59%), code review (56%) and maintaining legacy code (55%), were commonly reported uses of the tech

Around 65% reported “heavily” relying on AI for software development, while 37% reported a “moderate amount” of usage, despite the fact that roughly 30% of respondents said they don’t entirely trust the technology.

While the majority of respondents reported creating better quality code at a faster rate, developers face the “speed versus reliability tradeoff,” said Bob Rogers, chief product and technology officer of Oii.ai and co-founder of BeeKeeper AI. The more these tools scale, the harder it can be to track when things go wrong.

For now, however, letting AI take the reins entirely is still not very popular: Around 61% of respondents reported never interacting with AI tools in an agentic mode, likely reflecting the fact that agents are still relatively nascent.

Still, as companies seek to cut costs, tools like this threaten to flip the employment landscape on its head, Rogers said. AI tools can enable startups to operate with leaner workforces and allow larger enterprises to avoid filling certain roles after reorganization. Rather than building tools themselves, software developers’ roles may shift to helping facilitate where these AI-built tools fit in, he said.

“There's going to be a lot of need for people with software engineering understanding to be making sure all the pieces sew together properly,” Rogers said.

UN aims to shape global AI rules

Every region wants to do AI differently.

On Thursday, the United Nations announced the Global Dialogue on AI Governance, an initiative aimed at building “safe, secure and trustworthy AI systems” grounded in human rights, oversight and international law.

The initiative is part of the Global Digital Compact, an agreement introduced by the UN last year focusing on AI governance. Some of its goals include enabling interoperability between governance regimes, encouraging “open innovation” and allowing every nation “a seat at the table of AI.”

Additionally, the UN announced the creation of the International Independent Scientific Panel on AI, a group of 40 experts to provide an “early warning system” on AI’s risks.

“The question is whether we will govern this transformation together – or let it govern us,” said António Guterres, secretary-general of the UN, in his remarks.

The problem, however, is that three of the biggest contributors to the AI transformation – the U.S., EU and China – have very, very different approaches to regulating it. 

These different approaches are representative of the “fundamental differences” in governance that already exist within these regions, said Brenda Leong, director of the AI division at law firm ZwillGen.

“AI is going to show up in each of those contexts, in alignment with that context,” said Leong. “Every country is going to use AI as a tool and as political leverage.”

Given that the UN can’t enact or enforce laws itself, the closer it gets to prodding actual regulation of AI systems, “the less and less influence they’re going to have,” said Leong.

However, the UN can still influence areas where there’s “convergence” between regions, said Leong. For example, creating technological standards, setting definitions and promoting interoperability are things that can make “everybody’s lives better.”

Additionally, the UN can represent the interests of the regions that aren’t at the forefront of the AI race, she said, to “keep that gap from growing too big.”

While these three major markets have very different ideas about how to govern their models, the impact of this divergence on the market is still playing out. It’s possible that the EU’s large marketplace could push enterprises and model developers to adhere to its particularly stringent rules for matters like privacy and ethics. As Leong noted, “it’s easier to comply with one standard than many, and they're the tightest.”

OpenAI’s $400B week: massive investments, expansion

OpenAI is keeping busy.

Last week, the AI industry darling announced billions of dollars worth of deals and partnerships, signaling a massive push toward building out its infrastructure, strengthening its cloud capabilities and shoring up its push into the enterprise market.

In case you missed it, here’s the recap.

Though some fear that all of this hype may be heading toward a bubble burst (a narrative pushed by Altman himself), “what’s really happening is a massive infrastructure buildout that signals long-term commitment, not short-term froth,” said Jason Hardy, chief technology officer of AI for Hitachi Vantara. 

The race to build out this tech “resembles an arms race,” said Hardy, one that OpenAI sits at the forefront of. Although the market will likely experience corrections, Hardy noted that we are unlikely to see another dot-com bust. “There is too much enterprise momentum, and infrastructure is committed to its success.”

Despite its current frontrunner status, OpenAI’s moves could also highlight “significant overextension risks,” said Daniel Derzic, head of AI investments at Hartmann Capital.

“If funding slows, partnerships falter or regulatory scrutiny intensifies, the massive scale could become a liability, potentially squeezing out smaller competitors,” Derzic said.
