Steven J. Vaughan-Nichols
Contributing Writer

Steven J. Vaughan-Nichols is a freelance writer and technology analyst. He contributes to publications including ZDNET, Foundry (formerly IDG), The Register, The New Stack, and Cathey Communications.

Open-source community gets a Claude-sized gift

Anthropic wants to recruit the top open-source developers and maintainers to its side. Unless they're in China, of course.

Anthropic has launched a new “Claude for Open Source” program that gives qualifying open-source maintainers six months of free access to its highest-tier, $200-a-month, Claude Max 20x plan. The AI powerhouse is framing the move as both a thank-you to the open-source community and a way to harden the software ecosystem with AI-assisted development.

According to program descriptions circulating in the developer community, Anthropic is targeting primary open-source maintainers and core contributors of major projects that meet certain scale and activity thresholds. The eligibility criteria include projects with at least 5,000 GitHub stars or over 1 million monthly npm downloads, along with recent, ongoing activity such as commits, releases, or pull-request reviews in the last few months.
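As a rough illustration only (the actual review is manual, and the numbers below are simply the thresholds described above, not any published API), the headline eligibility rule could be sketched as:

```python
def meets_headline_criteria(github_stars: int,
                            monthly_npm_downloads: int,
                            recently_active: bool) -> bool:
    """Illustrative sketch of the stated thresholds: 5,000+ GitHub stars
    OR 1M+ monthly npm downloads, plus recent activity (commits,
    releases, or pull-request reviews in the last few months)."""
    scale_ok = github_stars >= 5_000 or monthly_npm_downloads >= 1_000_000
    return scale_ok and recently_active
```

Note that the scale test is an either/or: a heavily downloaded npm package with few stars (or vice versa) would still clear the bar, provided the project shows recent activity.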

That said, when Matt Mullenweg, co-founder of WordPress, asked if he and WordPress's top ten developers were eligible, Lydia Hallie, a member of Anthropic's technical staff, replied on X that "We also accept maintainers for projects that don’t quite fit the criteria but still make a big impact."

In addition, Anthropic says maintainers of “critical infrastructure” projects that may not hit the headline metrics should "apply anyway and tell us about it."

The launch follows a string of moves by Anthropic to deepen its engagement with the open-source world.

Mind you, Anthropic's LLMs remain some of the most closed-off models. Sure, its Model Context Protocol (MCP) is open, but the company doesn't offer any open-source versions of its flagship models. In short, don't mistake this for Anthropic getting ready to open up its models.

Opening its LLMs is simply not in the cards. This comes as no surprise since Anthropic has accused the Chinese open-source companies DeepSeek, Moonshot, and MiniMax of carrying out “industrial-scale” model distillation campaigns, exfiltrating Claude’s capabilities to improve their own models.

While Anthropic hasn't expressly said its new offer isn't available to mainland Chinese developers, it's unlikely they would be welcome: Anthropic banned “Chinese-controlled companies” from using Claude in September.

Our Deeper View

While this initiative may signify an olive branch from Anthropic, it also adds fuel to the ongoing debate over how frontier AI companies should repay the open-source projects on which their models are built. By underwriting AI access for open-source developers, Anthropic gives these programmers a taste of high-end frontier AI. Simultaneously, the company is positioning Claude for Open Source as a tangible, albeit time-limited, attempt to sell open-source programmers on its LLM so that Claude becomes the community's go-to AI.

IBM warns AI spend fails without AI literacy

AI literacy is a must going forward. But that means far more than just knowing how to write LLM prompts.

At a meeting at IBM Raleigh, IBM distinguished engineer Phaedra Boinodiris and Rachel Levy, North Carolina State University's Executive Director of the Data Science and AI Academy, described a future in which AI literacy is no longer a specialist concern but a baseline competency that universities, companies, and governments must build for everyone.

If they don't, the AI-savvy pair observed, organizations will keep pouring money into AI systems that fail to deliver value or, worse, cause real-world harm. As Boinodiris pointed out, the famous MIT report, State of AI in Business, showed that "95% of organizations are getting zero return" from their AI investment. This cannot stand.

Levy opened by saying that part of the problem is that many people still treat AI as a monolith rather than a suite of distinct technologies embedded in everyday tools. “People are talking about AI as if it’s one thing, and maybe we should be talking about AI technologies plural, so that people are more aware that AI is a suite of things.” For example, she said that grammar checkers, medical image analysis, and farm planning tools are all now quietly AI‑driven.

That means the NC State academy’s mandate is “data science and AI literacy writ large, not just for our population on campus, but also…government agencies, industry, K‑12, all the way to C‑suite.” Levy said she is “absolutely” advocating “AI literacy for all.” She argued that societies cannot safely or productively adopt AI if understanding remains confined to engineers and computer scientists. Certainly, that seems to be the case with C-level executives who want AI but aren't clear on how to capitalize on their AI investments.

Interdisciplinarity is key because, Levy said, "the hardest problems of the future will need to be solved by interdisciplinary teams of humans working with each other in collaboration with technologies.”

Both speakers repeatedly stressed that AI systems are only as good as their data, objectives, and constraints, and that non‑technical experts are central to getting those right. “Every model has to have an objective and constraints,” Levy observed. “Imagine now you have a rare disease instead of a common disease. How much data will you have on your rare disease compared to a common disease? The outlier, the unusual point, is the one that’s the most precious.”

Levy continued, “That’s why statisticians are super important in an age of AI. The statisticians, the mathematical modelers, and your librarians are the ones who’ve always looked at data and said, ‘What is the most relevant data to a specific situation?’” Boinodiris put it more bluntly: “Domain experts who understand the data and the context of the data are now more important than ever,” calling data “an artifact of the human experience.”

To illustrate why context and relationships matter, Boinodiris offered an analogy: a single torn notebook page on a sidewalk is data; finding the whole notebook provides context and turns it into information; realizing it is a sibling’s diary adds relationships and turns it into knowledge. “Just having data is not enough,” she said. “Understanding the context and the relationships of the data is key, which, again, is why you need to have an interdisciplinary approach to who gets to decide what data is correct.”

Circling back to the business AI ROI failure, Boinodiris commented, "AI is failing to bring a return on investment," largely because projects do not solve real problems, organizations either over‑trust or under‑trust AI outputs, and there is "a total lack of literacy." She continued, "That doesn't just mean literacy in terms of how to use AI to be more efficient, more productive in your job. It’s, you know, how do you design, develop, deploy, procure, govern AI in a way that brings business value."

Boinodiris characterized AI as a “socio‑technical” challenge where “the hardest part…is always the social part,” and argued that responsible deployment requires humility and a much wider range of people at the table. “You have to have a lot of different human brains with a lot of different skills and competencies and lived world experiences coming to the table to be thinking about…what are those human values, those principles that we would want to see reflected in an AI?”

Asked who is accountable for responsible AI outcomes, Boinodiris said the answers she hears from large audiences are “terrible.” “The top answer I get is no one. The second common answer I get is, 'We don’t use AI,' which is hilarious, considering it’s already embedded. The third is 'everyone.' And I worry about that answer the most, because if everyone is accountable, no one is accountable.”

Her prescription is formal governance and explicit literacy mandates. “Typically, there is a senior body that forms up an ethics board or a governing council of some sort,” she said, but it must be “funded” and backed by a mandate from the CEO and board. Those accountable for AI must secure “value alignment” across leadership, build an inventory of AI systems, know how to audit them, track a shifting regulatory landscape, and go “beyond lawful” into ethics, she added. Levy added, “If you don’t even know what you can use and for what purposes and under what conditions, you’re dead in the water.”

Despite the risks, both speakers framed the current moment as an opportunity to reinvent education around human judgment and interdisciplinary thinking. Indeed, Boinodiris called it “a Phoenix moment for the Humanities,” arguing that schools need to teach “human judgment,” creativity, and accountability in an AI‑saturated world. For her, the central literacy questions students should grapple with are: “What are those human principles we want to see reflected in AI? Is this even the right use for artificial intelligence? Is this solving the right kind of problem?”

Without AI literacy for everyone and everyone having a seat at the table, AI in business will continue to flounder, and neither corporations nor society will reap its potential benefits.
