Aaron Mok
Contributing Writer

Aaron Mok is a freelance journalist covering how technology is reshaping work, business, and society. His work has been featured on CBS Mornings, NPR’s Marketplace, and Politico. Aaron's stories have been cited by the Supreme Court, the FTC, and McKinsey. He previously worked as a technology reporter at Business Insider, where he covered artificial intelligence and the future of work.

China’s AI IPO wave tests a new growth model

China’s push to take its AI startups public is gaining momentum.

MiniMax Group, a Chinese developer of multimodal large language models, raised $618.6 million in its Hong Kong IPO after pricing shares at the top of its range, Reuters reported. The company sold 29.2 million shares and plans to allocate most of the proceeds to research and development over the next five years. Trading is expected to begin this week.

The listing is one of the first major public debuts for a Chinese AI startup building frontier models. While U.S. players like OpenAI and Anthropic remain private, China is pushing its AI firms into public markets earlier. Hong Kong is becoming a key venue for Chinese AI companies thanks to its business-friendly listing environment, a shift unfolding as U.S. export controls restrict Chinese firms' access to the Nvidia GPUs used to train frontier models.

“The wave of IPO approvals does suggest a shift in accelerating AI startup development through capital market access,” Lian Jye Su, chief analyst at tech research firm Omdia, told Reuters in late December. “While the U.S. maintains a lead in frontier compute and model performance due to chip superiority, access to public funding helps China build a more self-sufficient AI ecosystem.”

The momentum started with chips. In January, Chinese chip firms Biren Technology and OmniVision both completed IPOs in Hong Kong and saw strong debuts, helping set a floor for investor appetite. Kunlunxin, Baidu’s AI chip spinoff, filed for a Hong Kong IPO that could raise up to $2 billion, while Montage Technology is preparing its own offering that could reach $1 billion.

Chinese chip firms are also turning to their home turf. In December, Moore Threads, often described as China’s Nvidia, and MetaX Integrated Circuits both held IPOs on the Shanghai Stock Exchange, drawing strong interest from retail and institutional buyers.

OpenAI launches ChatGPT Health: Hype vs. fear

OpenAI wants ChatGPT to be your new personal health assistant.

On Wednesday, the company announced ChatGPT Health, a feature that gives users personalized health and wellness insights. As a separate tab inside ChatGPT, Health lets users upload medical records and sync data from apps like Apple Health, Function, and MyFitnessPal. From there, it can evaluate test results, prep users for doctor visits, and suggest tweaks to workouts or diets.

The launch comes as healthcare systems strain under pressure. Short appointments and overworked providers often lead to rushed care and missed context. Disjointed patient data makes it harder for doctors to get a full view of someone’s health. OpenAI claims Health could help ease that burden.

“With more than 230 million health-related queries per week globally, we’re starting to see how AI can help address the shortcomings of the current healthcare system,” Fidji Simo, CEO of applications at OpenAI, wrote in a recent blog post. “To make an even greater impact, we need to make it much easier for anyone to discover what’s possible with ChatGPT and get the full value out of it for their health.”

Still, the feature raises serious questions about privacy. Leaked medical data can threaten a person's livelihood and well-being. OpenAI says it built Health with enhanced safeguards and that users’ data won’t be used to train its models.

The rollout comes as a wave of AI tools floods the healthcare sector. Earlier this week, the state of Utah unveiled an AI system that can renew medical prescriptions without a doctor's intervention. At CES 2026, companies exhibited a vast array of new health products, and the show's organizers touted health tech as one of the fastest-developing areas of this year's event.

For now, ChatGPT Health is available to a limited group of early users. Free and paid ChatGPT users outside the European Economic Area, Switzerland, and the UK can now join the waitlist.

AI add-ons steal chat data from 900K users

Looking for an AI extension for your web browser? You may want to think twice.

In late December, cybersecurity firm OX Security identified two Google Chrome plug-ins that secretly siphoned users' conversations with popular AI chatbots to attacker-controlled servers. The extensions — “Chat GPT for Chrome with GPT‑5, Claude Sonnet & DeepSeek AI” and “AI Sidebar with Deepseek, ChatGPT, Claude and more” — add a sidebar to Chrome that lets users interact with multiple frontier models directly in their browser. The malware ran silently in the background, harvesting browser activity and chatbot conversations every 30 minutes, a technique known as data exfiltration.

Together, these extensions have been downloaded more than 900,000 times, exposing a trove of sensitive chatbot conversations, including personal information, company secrets, and customer details, to an unknown threat actor.

“Threat actors holding this information can use it for a variety of purposes like stalking, doxxing, selling information, corporate espionage, and extortion,” Moshe Siman Tov Bustan, a security researcher team lead at OX Security, told The Deep View.

Once labeled as “Featured” in the Chrome Web Store, the two extensions are impostors that mimic a legitimate AITOPIA extension with a nearly identical name.

According to OX Security’s assessment, the AITOPIA extension keeps user queries private and processes them on Amazon-hosted infrastructure as part of its normal operations. The malicious lookalikes, however, claim to collect “anonymous, non-identifiable analytics data,” but instead exfiltrate user conversations with ChatGPT and DeepSeek.

OX Security reported the extensions to Google on December 29. As of January 6, they remain available on the Chrome Web Store. Bustan urges users to uninstall them immediately.

To avoid malware, he recommends being cautious about extensions that request broad permissions and checking metadata — the developer’s email, website, and privacy policy — to spot anything that doesn’t pass a gut check.
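That kind of permissions check can be partly automated. The sketch below reads a Chrome extension's manifest and flags permissions that grant broad access to browsing data; the permission list and the sample manifest are illustrative assumptions, not OX Security's methodology.

```python
# Flag broad Chrome-extension permissions in a manifest.json-style dict.
# The risk list below is an illustrative assumption, not an official taxonomy.
BROAD_PERMISSIONS = {
    "<all_urls>",   # run on every site the user visits
    "tabs",         # read URLs and titles of open tabs
    "webRequest",   # observe network traffic
    "history",      # read full browsing history
    "cookies",      # read cookies across sites
    "scripting",    # inject scripts into pages
}

def flag_broad_permissions(manifest: dict) -> list[str]:
    """Return the broad permissions an extension's manifest requests."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return sorted(requested & BROAD_PERMISSIONS)

# Hypothetical manifest resembling a chatbot-sidebar extension.
manifest = {
    "name": "AI Sidebar (example)",
    "permissions": ["storage", "tabs", "scripting"],
    "host_permissions": ["<all_urls>"],
}
print(flag_broad_permissions(manifest))  # ['<all_urls>', 'scripting', 'tabs']
```

A hit on this list isn't proof of malice — a legitimate sidebar may genuinely need site access — but it tells you which extensions deserve the closer metadata check Bustan describes.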

Lawsuits accuse ChatGPT of fueling psychological distress

OpenAI is facing a wave of lawsuits accusing ChatGPT of driving users into psychological crises.

Filed last week in California state court, seven lawsuits claim ChatGPT engaged in “emotional manipulation,” “supercharged AI delusions,” and acted as a “suicide coach,” according to legal advocacy groups Social Media Victims Law Center and Tech Justice Law Project. The suits were filed on behalf of users who allege the chatbot fueled psychosis and offered suicide guidance, contributing to several users taking their own lives.

The groups allege OpenAI released GPT-4o despite internal warnings about its potential for sycophancy and psychological harm. They claim OpenAI designed ChatGPT to boost user engagement, skimping on safeguards that could’ve flagged vulnerable users and prevented dangerous conversations—all in pursuit of profit.

“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion—all in the name of increasing user engagement and market share,” Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, wrote in a release.

The lawsuits come as OpenAI wrestles with making its AI safer. The company says that about 0.15% of ChatGPT's weekly users have conversations containing clear signs of suicidal planning, equivalent to roughly a million people.

Younger users are particularly at risk. In September, OpenAI rolled out parental controls to let caregivers track their kids' interactions with the chatbot.

Other AI companies are also rethinking safety. Character.AI said it will ban users under 18 from “open-ended” chats with AI companions starting November 25. Meta made a similar move in October, allowing parents to disable their children’s access to chats with AI characters.

In Empire of AI, journalist Karen Hao reports that OpenAI sidelined its safety team to move faster; these lawsuits argue that those decisions came with real human costs.
