The Literacy Gap in the Boardroom
A decade ago, if you sat on a company board without understanding basic financial statements, you were a liability. Nobody said it that bluntly, but everyone knew it. You could not govern what you did not understand.
AI fluency is reaching that same threshold. Not next year. Now.
I speak with founders and directors across Asia regularly, and the gap is striking. Many board members can talk about AI in broad strokes — they have read the headlines, they might even have approved an AI budget. But ask them to explain the difference between a large language model and robotic process automation, and the conversation stalls.
Directors who cannot distinguish between AI technologies cannot evaluate the opportunities, costs, or risks those technologies carry. And increasingly, those are the decisions landing on boardroom tables.
You Do Not Need to Code. You Need to Think Clearly.
Let me be direct about what AI fluency does not mean. It does not mean directors need to build models or write Python scripts.
AI fluency for directors means understanding what each technology does, where it creates value, and what can go wrong. That baseline understanding changes the quality of every AI-related conversation in the boardroom.
Having spent 14 years building businesses that increasingly depend on AI-driven tools, I find it helpful to break AI into three categories that directors encounter most often.
Category One: Large Language Models
Large language models — the technology behind tools like ChatGPT and Claude — are essentially very sophisticated text prediction systems trained on massive datasets. They can summarise documents, draft communications, answer questions, analyse text, and generate content that reads like a human wrote it.
For directors, the value proposition is real. LLMs can compress hours of briefing material into digestible summaries. They can help legal teams draft initial contract language. They can power customer service interfaces that handle routine queries without human intervention.
But here is what directors must understand: LLMs hallucinate. That is the industry term for when the model generates information that sounds confident and plausible but is factually wrong. It is not a bug that gets fixed with the next update. It is a fundamental characteristic of how these systems work.
The governance implication is significant. Any process where an LLM output is taken as fact without human verification carries risk. A board that approves deploying LLMs into customer-facing roles without understanding hallucination risk is making an uninformed decision.
The question directors should be asking is not “Should we use LLMs?” It is “Where in our workflow is it safe for outputs to be occasionally wrong, and where is it not?”
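That question can be made concrete as policy. Here is a minimal sketch of the idea in Python; the use cases and their risk attributes are hypothetical examples, not a standard taxonomy:

```python
# Sketch: approve an LLM use case only if occasional errors are tolerable
# there, or a human review step stands between the model and the outcome.
# All use cases and flags below are illustrative assumptions.

APPROVED_LLM_USES = {
    # use case: (errors tolerable?, human review in place?)
    "internal_brief_summary":  (True,  False),  # errors caught downstream
    "draft_contract_language": (True,  True),   # legal reviews before use
    "customer_facing_answers": (False, True),   # errors would reach customers
}

def llm_use_allowed(use_case: str) -> bool:
    """Unapproved use cases default to 'no' -- visibility before deployment."""
    if use_case not in APPROVED_LLM_USES:
        return False
    errors_ok, human_review = APPROVED_LLM_USES[use_case]
    return errors_ok or human_review

print(llm_use_allowed("customer_facing_answers"))      # True: review gate exists
print(llm_use_allowed("shadow_use_nobody_approved"))   # False: not on the list
```

The useful property is the default: anything not explicitly approved is blocked, which is the opposite of how shadow AI adoption usually works.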
Category Two: Robotic Process Automation
RPA is the least glamorous AI technology and arguably the most reliable. It automates repetitive, rule-based tasks — data entry, invoice processing, report generation, system-to-system data transfers. There is no creative intelligence here. RPA follows instructions precisely, which is exactly what makes it valuable.
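To make "follows instructions precisely" tangible, here is a toy sketch of the core RPA idea: deterministic, rule-based handling of a structured record. The field names and rules are hypothetical:

```python
# Toy illustration of rule-based automation: same input, same output, every
# time. Field names and validation rules are illustrative assumptions.

def process_invoice(record: dict) -> dict:
    """Apply fixed rules to an invoice record; route it, never improvise."""
    required = ("invoice_id", "vendor", "amount")
    if any(field not in record for field in required):
        return {"status": "rejected", "reason": "missing field"}
    if record["amount"] <= 0:
        return {"status": "rejected", "reason": "non-positive amount"}
    # Deterministic behaviour is what makes RPA easy to audit.
    return {"status": "posted", "invoice_id": record["invoice_id"]}

print(process_invoice({"invoice_id": "INV-1", "vendor": "Acme", "amount": 120.0}))
# -> {'status': 'posted', 'invoice_id': 'INV-1'}
```

There is no model and no probability anywhere in that flow, which is precisely why its risk profile differs from an LLM's.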
The business case for RPA is usually straightforward and measurable. Companies that deploy it well see dramatic efficiency gains. Thermo Fisher Scientific, for example, reduced invoice processing time by roughly 70 percent through RPA implementation. That is not a moonshot experiment. It is a clear operational improvement with a calculable return.
For directors, RPA sits in a comfortable governance zone. The risks are lower. The outputs are predictable. The ROI is tangible.
Where boards go wrong with RPA is ignoring it in favour of flashier AI investments. I have seen companies pour resources into generative AI pilots while their back-office teams are still manually copying data between spreadsheets. The unsexy automation often delivers the fastest, most reliable returns — and directors should be asking whether those opportunities have been captured before funding more experimental work.
Category Three: Generative AI Beyond Text
Generative AI extends beyond language models into image creation, video production, code generation, music composition, and more. Tools like Midjourney, DALL-E, and GitHub Copilot fall into this category. This is the creative frontier — high potential, but high variability in quality and high exposure to new kinds of risk.
The governance considerations here are genuinely complex. Intellectual property is the most immediate concern. If your marketing team uses a generative AI tool to create campaign imagery, who owns that image? The legal frameworks are still evolving, and the answer varies by jurisdiction. A board that has not discussed this is exposed.
Brand risk is another factor. Generative AI can produce outputs that are off-brand, inappropriate, or culturally insensitive in ways that are difficult to predict. The same speed that makes GenAI attractive means problematic output can ship before anyone has reviewed it.
Directors do not need to become experts in every generative tool. But they need to ensure the company has clear policies on IP ownership, brand review processes for AI-generated content, and guidelines on which use cases are approved.
Matching Governance to Risk
Understanding the three categories is step one. Step two is building a governance approach that matches the risk profile of each.
For LLMs, establish verification protocols. Any LLM output that informs a business decision or reaches a customer should pass through human review. Define which use cases are approved and revisit this quarterly as the technology improves.
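A verification protocol of that kind is simple to express in code. This is a minimal sketch, not a production pattern; `generate_draft` is a placeholder standing in for any real LLM call:

```python
# Sketch of a human-verification gate: no LLM output is released without an
# explicit sign-off. 'generate_draft' is a placeholder, not a real API.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False

def generate_draft(prompt: str) -> Draft:
    # Placeholder for a real LLM call; always returns an unapproved draft.
    return Draft(text=f"[model output for: {prompt}]")

def release(draft: Draft) -> str:
    """Refuse to release anything a human has not reviewed."""
    if not draft.approved:
        raise PermissionError("LLM output requires human review before release")
    return draft.text

draft = generate_draft("summarise Q3 supplier risk")
draft.approved = True   # a named reviewer signs off
print(release(draft))
```

The point is structural: the review step is enforced by the workflow, not left to individual discipline.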
For RPA, focus on audit and documentation. Governance here is about ensuring rules are correct, documented, and updated when underlying processes change.
For Generative AI, set clear boundaries. Define what can and cannot be created using GenAI tools. Establish IP review processes and create a feedback loop where the policy evolves with real-world experience.
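"Clear boundaries" can likewise be written down as an explicit allowlist with an IP-review flag. The use cases and rules below are illustrative assumptions, not a recommended policy:

```python
# Sketch: GenAI policy as data. Unknown or disallowed use cases are blocked;
# approved ones may still require IP review. All entries are hypothetical.

GENAI_POLICY = {
    "internal_mockups":   {"allowed": True,  "ip_review": False},
    "campaign_imagery":   {"allowed": True,  "ip_review": True},
    "customer_contracts": {"allowed": False, "ip_review": False},
}

def check_genai_request(use_case: str) -> str:
    rule = GENAI_POLICY.get(use_case)
    if rule is None or not rule["allowed"]:
        return "blocked"
    return "needs_ip_review" if rule["ip_review"] else "approved"

print(check_genai_request("campaign_imagery"))    # needs_ip_review
print(check_genai_request("customer_contracts"))  # blocked
```

Expressing the policy as data rather than prose also gives the board something concrete to revisit as real-world experience accumulates.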
The organisations getting this right — from the Bank of England’s AI governance frameworks to AstraZeneca integrating AI oversight into existing clinical structures — share one trait. They built governance alongside adoption, not after problems emerged.
Start With Three Questions
If you are a director reading this and unsure where to begin, bring these three questions to your next board meeting:
First, where are we currently using AI — and where are we planning to? You would be surprised how often the board does not have a clear, comprehensive answer. Shadow AI adoption is real, and governance starts with visibility.
Second, for each AI deployment, what is the failure mode? If the LLM hallucinates, what happens? If the RPA process breaks, what is the fallback? If the GenAI tool produces something problematic, how quickly can we catch it? Understanding failure modes is the foundation of meaningful risk oversight.
Third, do we have the right talent to govern this? AI governance requires people who understand both the technology and the business context. If your board lacks that expertise, consider an advisory role or a dedicated AI committee with the right composition.
This Is Not Optional
The pace of AI adoption across every industry I work in — from financial services to e-commerce to healthcare — means that AI fluency is no longer a nice-to-have for directors. It is a fiduciary responsibility.
You do not need to become a technologist. But you need to understand enough to ask the right questions, evaluate the right risks, and ensure your organisation is capturing value without exposing itself unnecessarily.
Financial literacy took decades to become a boardroom expectation. AI fluency does not have that kind of runway. The technology is moving too fast and the stakes are too high. Start now.