AI brings wealth building to non-millionaires
The wealth management business is pretty simple: Help clients manage their finances in exchange for a fee, which is often based on the client’s assets under management.
The problem with this setup, according to Joe Percoco, co-founder of investment management startup Titan, is that it incentivizes wealth managers to take on only high-net-worth clients, leaving wealth management services out of reach for younger or less wealthy people (wealth management account minimums can run into the millions).
But AI can give wealth managers “super powers,” making it feasible to serve more than just wealthy customers, Percoco said. AI can handle the client life cycle by collecting client information, understanding the client’s situation, considering solutions, and making recommendations. These efficiency gains could unlock the kind of broad access that has typically been available only through brokerages, not through registered investment advisors (RIAs).
“The tools of AI actually enable that to truly democratize for the first time ever, meaning a kid coming out of college can actually have the same quality of advice, capabilities, and price point of a Goldman Sachs private wealth manager,” Percoco said.
Still, Percoco doesn’t foresee the human being taken out of the equation.
“We're not too optimistic of people who just throw AI slop at, in theory, one of the highest trust use cases in all of consumer [business],” Percoco said. “We actually don't believe that's gonna work, nor do we think the capabilities are there for consumers to truly trust it.” The question, for Percoco, isn’t whether a human will be the one managing clients’ wealth. The question is how many clients can each human take on — 100 or 5,000?
Titan’s vision of modernized wealth management earned it the backing of the venture capital firm Andreessen Horowitz. And fellow VC giant Sequoia recently bet on another AI wealth management firm, Nevis. Vertical AI startups have taken off in the legal and medical worlds, and a race is shaping up in the personal finance vertical.
Can you trust AI for financial advice?
As AI becomes increasingly integrated into consumers’ lives, more people are turning to AI tools for financial information or advice.
An Intuit Credit Karma survey found that 66% of generative AI users have used chatbots for financial advice, with the figure rising above 80% among millennial and Gen Z users. But while people are asking AI about their finances, they don’t always trust what they hear. In the same survey, 80% of respondents said they did additional research before acting on AI’s advice. And a Northwestern Mutual survey found that people trusted humans more than AI to perform a variety of financial planning and budget management tasks.
Jake Eaton, senior partnership manager at Circle, said he views AI as powerful for his personal finances, but only in a few ways. AI can formalize his thoughts and financial goals, but Eaton said he wouldn’t want the AI to act on the information. He would also use AI to implement an automated trading strategy he wouldn’t want to do himself, offering an automated Polymarket strategy as an example.
Finally, Eaton said, he would accept AI advising and automated trading if performed under the auspices of a reputable company like Robinhood. Notably, many investment platforms already perform versions of this, where robo-advisers factor in user risk appetites and financial goals to automatically rebalance portfolios.
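The rebalancing logic behind such robo-advisers can be sketched in a few lines. This is a minimal illustration, not any platform’s actual algorithm: the risk-score-to-allocation mapping, the two-asset portfolio, and the function names are all assumptions made for the example.

```python
# Illustrative sketch of robo-adviser-style threshold rebalancing.
# A 1-10 risk appetite maps to a target stock/bond split, and trades
# are computed to restore that split. Not any platform's real logic.

def target_allocation(risk_score: int) -> dict:
    """Map a 1-10 risk appetite to a target stock/bond weighting."""
    stock_weight = min(max(risk_score, 1), 10) / 10 * 0.9  # cap at 90% stocks
    return {"stocks": stock_weight, "bonds": 1 - stock_weight}

def rebalance_orders(holdings: dict, risk_score: int) -> dict:
    """Return dollar buys (+) and sells (-) that restore the target mix."""
    total = sum(holdings.values())
    target = target_allocation(risk_score)
    return {asset: round(target[asset] * total - holdings.get(asset, 0), 2)
            for asset in target}

# A drifted 70/30 portfolio for a moderate (risk 6 -> 54/46) investor:
orders = rebalance_orders({"stocks": 70_000, "bonds": 30_000}, risk_score=6)
```

Real platforms layer on tax-loss harvesting, drift thresholds, and many more asset classes, but the core loop is this simple: compare actual weights to target weights and trade the difference.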
So while AI has become a financial tool, particularly for young people, people still seem hesitant to take humans out of the driver’s seat. As Titan co-founder Joe Percoco pointed out, finance is arguably the highest-trust sector in the consumer economy, so the bar is high.
For some young people, AI is the ‘daily driver’
For young people using AI to think through their finances, retail trading is increasingly likely to come up in conversation.
Retail investing flows grew by 50% between 2023 and 2025, according to JPMorgan data, and investment platform adoption has seen a sharp uptick among people in their twenties. Increasingly, these young investors form their trading strategies with the help of chatbots.
“We're seeing people using Surf as their daily driver for investment advice for how they want to find opportunities in crypto markets,” Ryan Li, co-founder and CEO of AI crypto trading platform Surf, said. He added that Surf uses custom models in which the data input is curated, so the AI produces higher-quality outputs for users — and, in theory, hallucinates less.
If Harvey and OpenEvidence can earn multibillion-dollar valuations as domain-specific AI businesses, a similar business can be built for traders, Li said. He also argued that Surf’s AI model can surface insights and strategies that users couldn’t otherwise find.
And if AI trading companions are going to have legs, fine-tuned vertical startups may be needed, because whether it’s trading or running a vending machine, frontier models aren’t always the best with money.

Prediction markets: Google will win 2026 AI race
During 2025, we saw an explosion in hype surrounding prediction markets, which are platforms for placing wagers on future events like elections, sports games, or how much it’s going to snow in Chicago next month.
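The odds quoted throughout this section come straight from contract prices: a prediction market contract typically pays $1 if the event happens, so a $0.65 price implies roughly 65% odds. Here is a small sketch of that conversion, using hypothetical prices rather than real market data; the outcome names and numbers are illustrative assumptions.

```python
# How prediction market contract prices map to implied probabilities.
# Prices across mutually exclusive outcomes can sum to slightly more
# than $1 (the market's "overround"), so we normalize them to sum to 1.

def implied_probabilities(prices: dict) -> dict:
    """Normalize contract prices (in dollars) into probabilities."""
    total = sum(prices.values())
    return {outcome: round(price / total, 4)
            for outcome, price in prices.items()}

# Hypothetical year-end "best model" prices, not real market data:
probs = implied_probabilities({"Gemini": 0.74, "GPT": 0.18, "Grok": 0.10})
```

In this example the three prices sum to $1.02, so the raw prices overstate each outcome slightly; normalizing recovers probabilities that sum to 1.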
The shifting odds on AI-focused prediction markets, which are kind of like sports betting for people who use Cursor, illustrate how popular perceptions on the AI race changed over the course of this year. Perhaps no chart is more telling than this one:
When Kalshi’s prediction market for the best AI model (as measured by the LMArena leaderboard) opened in November 2024, there was broad agreement that ChatGPT would take the cake. However, trust in OpenAI’s model waned throughout the year, while belief in Google Gemini slowly picked up steam.
Interestingly, xAI’s Grok surged to 35% odds of being the year’s best AI model for two weeks in March, surpassing both ChatGPT and Gemini. This was around the time Grok-3 was released, outperforming rivals on multiple benchmarks. The lab continued to push improvements over the next couple of weeks, but after DeeperSearch and an image editing tool were released, Grok fell behind Gemini, though it traded more or less in lockstep with ChatGPT’s odds for the rest of the year.
As you can see, prediction markets were bullish on Gemini far before Gemini 3’s release led OpenAI boss Sam Altman to declare a code red at the company. However, Gemini’s chances of victory leaped to over 91% shortly after Gemini 3 was made public.
Other AI-themed prediction markets tell their own little stories. After crossing 40% in the wake of DeepSeek-R1’s release, the odds that a Chinese AI model claims the #1 spot are now around 2%. Bettors were caught off guard by OpenAI’s decision to restructure as a for-profit company, having priced the odds of it ditching its nonprofit structure at just 24%. The chances Congress passes an AI law have slowly declined, as have the odds that OpenAI announces it has achieved AGI by 2030.
There is one set of odds that has been increasing, though: The New York Times’ chances of winning its lawsuit against OpenAI.
Are prediction markets the future of AI news?
For many AI-adjacent folks, relying on prediction markets for truth instead of antagonistic legacy news is a compelling idea. But while prediction markets are useful tools for gauging public sentiment, AI-themed betting markets also create significant potential for corruption and trickery.
A case in point came recently: when a group of prediction market users collectively cashed out over $10,000 on a bet that OpenAI would release a new model by December 13, suspicions arose that the winners had access to privileged information about OpenAI’s product plans, according to The Information.
While it can look suspicious when bettors win big on somewhat-niche propositions, it’s hard to believe anything untoward actually happened here. If someone worked at or had close access to OpenAI, why risk being fired or cut out from one of the most valuable companies of all time for a few thousand bucks?
Either way, for some prediction market boosters, insider trading is actually a feature, rather than a bug, because it guides the market to the truth as efficiently as possible (since prediction markets are not securities, insider trading laws are a little murkier than with the stock market).
Still, these tools are undeniably useful in the right contexts. Both Google and Anthropic have built internal prediction markets where employees can bet with fake money on things like when a given team will finish a project, per The Information.
Prediction markets favor Google over OpenAI
Google had a banner 2025, and prediction markets think that momentum will carry into 2026.
Despite expectations that OpenAI’s "code red" will end in January, bettors aren’t counting on Gemini’s downfall. Polymarket gives Google a 74% chance of having the best AI model by the end of March and a 65% chance of leading the race by the end of June. Prediction market bettors’ Google bullishness extends to the public markets. Polymarket gives Alphabet a 36% chance of being the world’s largest company by market capitalization at the end of next year, tied with Nvidia.
The markets don’t see things moving very quickly over at Google’s main chatbot rival, OpenAI. Bettors give the next-generation GPT-6 model a 49% chance of launching by the end of June. They also assign only a 33% chance of OpenAI pulling off an IPO by the end of 2026. And for those jazzed up about Jony Ive’s new consumer device, patience may be required. Polymarket gives OpenAI just a 35% chance of launching a consumer hardware product by the end of 2026.
Interestingly, bettors see a non-trivial chance of AI M&A over the next couple of years. Polymarket gives Perplexity, Anthropic, and OpenAI a 41%, 36%, and 29% chance, respectively, of being acquired by the end of next year. Apparently, bettors haven’t looked too closely at the eye-watering valuations of those companies, which would make any acquisition very difficult.

AI firms line up for US govt's 'Genesis Mission'
The US Department of Energy enlisted the support of 24 organizations, including OpenAI, Anthropic, Google, and Microsoft, for its Genesis Mission, an effort to accelerate science, national security, and energy innovation through AI.
The Trump Administration unveiled the Genesis Mission in late November, likening it to a Manhattan Project for AI. The big names involved seem to signal that all hands are on deck in helping the US outpace China in the global AI arms race.
The past few weeks have been busy for Trump’s AI team:
- The president issued an executive order to limit states’ oversight of AI.
- The administration has been touting its “Tech Force,” an “elite corps of top engineering talent building the future of American government technology.”
- Pete Hegseth’s Department of War rolled out a US military chatbot.
The Genesis Mission builds on the Trump administration’s AI action plan, which called on the DOE, along with other organizations, to monitor the national security implications of frontier models. Involved organizations are expected to contribute in a variety of ways: Nvidia and Oracle are chipping in compute, Microsoft and Google are providing cloud infrastructure and AI tools, OpenAI is deploying frontier models for scientific research, and Anthropic is developing Claude-based tech for national labs.

Meta’s open-source era may be over
After spending billions and enduring significant staff turnover to revamp its AI unit, Meta is weighing a move to make its new model, nicknamed Avocado, closed source, multiple outlets reported.
Since open-source Llama 4 received mixed reviews following its April release, Meta CEO Mark Zuckerberg has been on a crusade to reverse the company’s AI fortunes. Meta acqui-hired Scale CEO Alexandr Wang and went on an AI hiring spree to build, in Zuckerberg’s telling, “the most elite and talent-dense team in the industry.” Wang’s team, one of several AI teams within Meta, is known internally as TBD Lab. While pay packages reportedly stretching into nine figures have attracted talent, the group has yet to cohere, according to reports.
Meta’s AI restructuring led some researchers to depart for other AI labs. Meta’s chief AI scientist, Yann LeCun, left in November to found his own startup. CNBC has reported that LeCun was rankled by the 600 layoffs that Meta Superintelligence Labs (MSL) underwent in October. Just this week, MSL employees Sang Michael Xie and Vitaliy Chiley decamped from the company.
Those still at Meta have reportedly clashed on direction. Senior executives said some of Meta’s AI efforts should be oriented toward improving the company’s social media and advertising businesses, while Wang argued Meta should catch up with frontier models offered by OpenAI and Google before focusing on products, per The New York Times. The Times also reported that budget cuts to Meta’s metaverse team were rerouted to Wang’s unit.
Making TBD Lab’s new frontier model closed source would represent a major strategic shift for Meta, which has long touted its open-source AI efforts. The risks of open-source development became clear when DeepSeek’s R1 model incorporated components of Llama’s architecture, which angered some at Meta, CNBC reported.
Open-source cuts both ways, though. TBD Lab is using Alibaba’s Qwen model as part of the training process for Avocado, Bloomberg reported.
Our Deeper View
Meta has fallen behind other open-source AI labs, so it’s pivoting to the even more competitive field of closed-source AI models. Keeping Llama open source could have been a wedge for Meta to differentiate itself from other hyperscalers, but as things stand, it’ll be up to Wang to lead Meta’s proprietary model past those offered by OpenAI, Google, and Anthropic. In the words of Wang’s unit within Meta, the odds of that happening are very much TBD.

Where does Apple's AI journey go from here?
Apple recently parted ways with its AI head, John Giannandrea, marking the latest setback in the iPhone maker’s quest to match other tech giants’ progress in AI.
As the firm hits something of a soft reset on its AI strategy, Google and Microsoft veteran Amar Subramanya will become Apple’s new head of AI. However, the root causes behind Apple’s AI woes likely run deeper than the actions of a single executive.
Much of the criticism of Apple’s AI efforts under Giannandrea, whom Apple poached from Google in 2018, centers on the company’s failure to ship a promised overhaul of the Siri voice assistant. Interestingly, Siri was first integrated into iPhones in 2011, giving Apple the potential for a real head start over other big tech firms in integrating voice-enabled chatbots into devices.
But Apple squandered its early lead and saw its AI efforts surpassed by rivals. This reality was crystallized when Apple cut a deal with OpenAI to integrate ChatGPT into Apple devices in 2024. More recently, Apple has reportedly been testing Google’s Gemini to power its new version of Siri. For a company known to prefer building all parts of its products in-house, the outside AI integrations may be a tacit admission that Apple’s AI efforts aren’t up to snuff. Even low-hanging fruit like AI news summaries were disabled by the company following user complaints of inaccurate news alerts.
As Apple’s AI head, Giannandrea “emphasized a research-driven culture that was relatively unusual for Apple [but] never articulated a coherent vision that would help the company catch up in AI,” the Wall Street Journal reported, citing anonymous sources. Indeed, one of Apple’s more well-received AI releases was not a product but a paper: In June, it published findings about AI’s tendency to collapse when faced with difficult puzzles.
Apple’s AI efforts have fallen short for a variety of reasons, Krazimo founder and CEO Akhil Verghese told The Deep View. He pointed to Apple’s privacy-first approach, which minimizes its access to valuable data; its limited research pedigree; its unwillingness to compensate senior engineers as highly as other tech firms; a closed ecosystem that makes research more difficult; and a general lack of clear direction. He added that Apple’s senior leaders were skeptical about AI for “way too long.”
“This is the bit I find most inexplicable,” Verghese said. “[Apple] just did not take [AI] seriously for an inexcusable amount of time.”
Apple generally didn’t use the word “AI” until 2024, preferring “machine learning” until then, and it wasn’t until its WWDC conference in June 2024 that it finally articulated its product vision for AI.
In charge of righting the ship will be Subramanya, who spent 16 years at Google, eventually rising to become the head of engineering at Gemini. He was named Microsoft’s corporate vice president of AI in July of this year, just five months before taking on this new role at Apple. Subramanya’s experience in integrating AI and ML research into products and features will be important to Apple, the company said in a statement Monday.
For now, investors seem to be more or less shrugging off Apple’s AI shortcomings. The stock is up nearly 20% over the past three months following the success of its iPhone 17 redesign, and its year-to-date performance has roughly matched Microsoft’s on a percentage basis.
