Steven J. Vaughan-Nichols
Contributing Writer

Steven J. Vaughan-Nichols is a freelance writer and technology analyst. He contributes to publications including ZDNET, Foundry (formerly IDG), The Register, The New Stack, and Cathey Communications.

IBM warns AI spend fails without AI literacy

AI literacy is a must going forward. But that means far more than just knowing how to write LLM prompts.

At a meeting at IBM Raleigh, IBM distinguished engineer Phaedra Boinodiris and Rachel Levy, North Carolina State University's Executive Director of the Data Science and AI Academy, described a future in which AI literacy is no longer a specialist concern but a baseline competency that universities, companies, and governments must build for everyone.

If they don't, the pair observed, organizations will keep pouring money into AI systems that fail to deliver value or, worse, cause real-world harm. As Boinodiris pointed out, the famous MIT report, State of AI in Business, showed that "95% of organizations are getting zero return" on their AI investments. This cannot stand.

Levy opened by saying that part of the problem is that many people still treat AI as a monolith rather than a suite of distinct technologies embedded in everyday tools. “People are talking about AI as if it’s one thing, and maybe we should be talking about AI technologies, plural, so that people are more aware that AI is a suite of things.” For example, she said, grammar checkers, medical image analysis, and farm-planning tools are all now quietly AI-driven.

That means the NC State academy’s mandate is “data science and AI literacy writ large, not just for our population on campus, but also…government agencies, industry, K‑12, all the way to C‑suite.” Levy said she is “absolutely” advocating “AI literacy for all.” She argued that societies cannot safely or productively adopt AI if understanding remains confined to engineers and computer scientists. Certainly, that seems to be the case with C-level executives who want AI but aren't clear on how to capitalize on their AI investments.

Interdisciplinarity is key because, Levy said, "the hardest problems of the future will need to be solved by interdisciplinary teams of humans working with each other in collaboration with technologies.”

Both speakers repeatedly stressed that AI systems are only as good as their data, objectives, and constraints, and that non‑technical experts are central to getting those right. “Every model has to have an objective and constraints,” Levy observed. “Imagine now you have a rare disease instead of a common disease. How much data will you have on your rare disease compared to a common disease? The outlier, the unusual point, is the one that’s the most precious.”

Levy continued, “That’s why statisticians are super important in an age of AI. The statisticians, the mathematical modelers, and your librarians are the ones who’ve always looked at data and said, ‘What is the most relevant data to a specific situation?’” Boinodiris put it more bluntly: “Domain experts who understand the data and the context of the data are now more important than ever,” calling data “an artifact of the human experience.”

To illustrate why context and relationships matter, Boinodiris offered a model: a single torn notebook page on a sidewalk is data; finding the whole notebook provides context and turns it into information; realizing it is a sibling’s diary adds relationships and turns it into knowledge. “Just having data is not enough,” she said. “Understanding the context and the relationships of the data is key, which, again, is why you need to have an interdisciplinary approach to who gets to decide what data is correct.”

Returning to that AI ROI failure, Boinodiris said “AI is failing to bring a return on investment” largely because projects do not solve real problems, organizations either over-trust or under-trust AI outputs, and there is “a total lack of literacy.” “That doesn’t just mean literacy in terms of how to use AI to be more efficient, more productive in your job,” she continued. “It’s, you know, how do you design, develop, deploy, procure, govern AI in a way that brings business value.”

Boinodiris characterized AI as a “socio‑technical” challenge where “the hardest part…is always the social part,” and argued that responsible deployment requires humility and a much wider range of people at the table. “You have to have a lot of different human brains with a lot of different skills and competencies and lived world experiences coming to the table to be thinking about…what are those human values, those principles that we would want to see reflected in an AI?”

Asked who is accountable for responsible AI outcomes, Boinodiris said the answers she hears from large audiences are “terrible.” “The top answer I get is no one. The second most common answer I get is, ‘We don’t use AI,’ which is hilarious, considering it’s already embedded. The third is ‘everyone.’ And I worry about that answer the most, because if everyone is accountable, no one is accountable.”

Her prescription is formal governance and explicit literacy mandates. “Typically, there is a senior body that forms up an ethics board or a governing council of some sort,” she said, but it must be “funded” and backed by a mandate from the CEO and board. Those accountable for AI must secure “value alignment” across leadership, build an inventory of AI systems, know how to audit them, track a shifting regulatory landscape, and go “beyond lawful” into ethics, she added. Levy added, “If you don’t even know what you can use and for what purposes and under what conditions, you’re dead in the water.”

Despite the risks, both speakers framed the current moment as an opportunity to reinvent education around human judgment and interdisciplinary thinking. Indeed, Boinodiris called it “a Phoenix moment for the Humanities,” arguing that schools need to teach “human judgment,” creativity, and accountability in an AI‑saturated world. For her, the central literacy questions students should grapple with are: “What are those human principles we want to see reflected in AI? Is this even the right use for artificial intelligence? Is this solving the right kind of problem?”

Without AI literacy for everyone, and without everyone having a seat at the table, AI in business will continue to flounder, and neither corporations nor society will reap its potential benefits.
