Mala Kumar on AI, Agency, and African Realities
The African Innovators Series (TAIS): Tech, Data, and AI Changing the Game
Welcome to Issue #28 of TAIS, where every Friday we spotlight visionary changemakers reshaping Africa’s tech, data, and AI landscape, one breakthrough at a time.
In today’s issue, I’m making a deliberate departure from my usual focus. Mala Kumar is the first non-African changemaker featured in TAIS, not because her voice needs amplification in global AI conversations (it already has considerable reach), but because her work sits at the exact pressure points where African realities collide with global AI systems.
As Interim Executive Director of Humane Intelligence and former Director of AI Safety at MLCommons, Mala has spent 15+ years navigating a specific tension: how do you build participatory AI systems in contexts where resources are scarce, timelines are compressed, and the infrastructure itself was designed elsewhere? Her work includes readiness analysis for sub-Saharan Africa and partnerships with organizations like Zindi.

What compelled me to feature Mala is her refusal of easy answers. She doesn’t traffic in techno-optimism or total inclusion mandates. Instead, she articulates something African practitioners live with constantly: the pragmatic middle ground between ideals and constraints, between moving forward and ensuring those most affected aren’t left behind. Crucially, she states plainly that she has “learned way more from the African continent” than she’s imparted, a humility that distinguishes her approach.
This conversation surfaces questions adjacent to ours: When is linguistic coverage in AI models empowering versus extractive? How do you balance speed with ethical structures when needs are urgent? What does responsible AI actually mean when the same system can educate or be weaponized? Her frameworks around evaluation, bias bounties, and deployment ethics offer tools that could shift who gets to interrogate AI systems before they’re deployed in African communities.
TAIS remains committed to centering African voices. But centering doesn’t mean isolating. It means being strategic about which global perspectives we engage and which collaborations might strengthen rather than dilute African agency in shaping African technological futures.
Happy reading!
Defining Responsible AI and Impact
Q: You transitioned from MLCommons to become the Head of Impact and now the Interim Executive Director at Humane Intelligence. How did this move evolve your vision for AI evaluation and social good?
A: There’s a lot I could say here. One of the most interesting things about stepping into my roles at Humane Intelligence was finally finding a home for the work I want to do to responsibly deploy AI models and systems. In this video, I walk through how I define “Responsible AI” and “AI for (social) good”. Humane Intelligence has focused more on the “responsible” side, which I define as ensuring that AI models and systems don’t violate civil and political rights. It has also done important work on AI for good, which I define as AI supporting economic, social, and cultural rights. I’m an expert in AI/tech for social good, so I’m excited to further strengthen the intersection of “responsible” and “for good” in AI systems and models. Humane Intelligence is a great place to lead that work.
Q: Your podcast highlights include concerns around “participation-washing,” AI literacy, and the need to slow down for ethical structures. How do these insights inform your current approach to advocacy or AI impact measurement?
A: I wouldn’t say the ethical structures need to be slowed, rather that the technical must be better intertwined with the policy. The number of people who claim to work in AI policy has exploded in the past five years, but many, if not most, are not working on technical solutions to make those ethical and policy ideas a reality. Humane Intelligence is one organization that does.
Editorial Commentary: Mala’s distinction between “responsible AI” (protecting civil and political rights) and “AI for good” (supporting economic, social, and cultural rights) offers a useful analytical framework that maps onto how international development and human rights work have traditionally been organized. But African contexts reveal something interesting about this separation: on the ground, these categories don’t operate independently.
When a farmer in Niger encounters an AI advisory system, both dimensions activate simultaneously. A biased crop recommendation system doesn’t just raise ethical concerns about fairness; it directly threatens livelihoods and food security. The civil-political and economic-social-cultural dimensions aren’t separate tracks; they’re entangled. This isn’t a critique of Mala’s framework but an observation about what happens when frameworks developed in well-resourced institutional contexts meet environments where such institutional differentiation doesn’t exist.
In contexts where regulatory infrastructure is nascent and technological deployment moves faster than policy can keep pace, the intersection Mala wants to strengthen at Humane Intelligence isn’t just desirable but essential. You can’t protect against rights violations in AI systems if the infrastructure to detect and remedy those violations doesn’t exist yet. You can’t pursue economic empowerment through AI if the systems themselves encode biases that deepen existing inequities.
Her critique of the “technical-policy gap” resonates strongly with African realities. The explosion of AI policy expertise without corresponding technical implementation capacity creates these aspirational governance frameworks that sound compelling but lack the mechanisms to make them operational. She’s identified a pattern that repeats across technology waves: policy prescriptions detached from implementation pathways.
This gap is particularly acute in African contexts, where we don’t have the luxury of sequential development (establish policy first, then deploy technology). Systems arrive before regulations. Harms manifest before recourse mechanisms exist. Models trained elsewhere get deployed in Africa with evaluation standards designed for other contexts. Humane Intelligence’s approach of intertwining technical solutions with ethical frameworks addresses precisely this challenge: building the accountability mechanisms alongside the systems themselves, rather than assuming they already exist.
The question her work raises for African contexts is generative: if responsible and beneficial AI can’t be separated here, what does that mean for how we design both technical systems and governance frameworks? What different approaches emerge when we start from the assumption of entanglement rather than separation?
The Low-Resource Language Paradox
Q: Your prior work at MLCommons included a readiness analysis for sub-Saharan Africa as part of AI safety benchmarks. What critical lessons about regional disparities and preparedness would you want global AI communities to recognize?
A: Language is a top thing to consider. I gave a talk about language inclusion in the MLCommons AI safety/risk benchmark we built – see here. Some issues that constantly come up with written and spoken languages in large language models (LLMs) are:
Some languages and/or dialects that are highly developed in terms of grammar, bodies of literature, number of speakers, age, expressiveness, etc. are considered “low resource” languages in AI. This is because the corpus of accessible data to train the LLMs is scarce, even if written and oral histories are long and rich.
There is a real tension between strengthening LLM language completeness/coverage and the downstream effects. An LLM that can generate text like a native speaker of a language can be used to help people learn the language, or it can be weaponized.
Since the only major indigenous script in sub-Saharan Africa is the Ge’ez script used for Amharic (not counting languages that are written in an Arabic script), assessing language coverage for most languages spoken on the continent is hard. If everything in a prompt response is written in Roman letters, can the AI model or system figure out whether a given word is Swahili, Yoruba, Xhosa, etc.?
Q: Your past focus on multilingual AI, like Hindi prompt generation, echoes the linguistic diversity in Africa. How do you imagine similar approaches transforming AI accessibility across African languages?
A: Similar to the above, the challenges are different in sub-Saharan Africa and South Asia, in part because South Asian languages have their own written scripts. Hindi is written in Devanagari; Telugu is a Dravidian language written in a Brahmic script, for example.
Editorial Commentary: The term “low-resource language” can obscure the richness and complexity of African languages in AI. Languages with centuries of oral literature, sophisticated grammatical structures, and millions of speakers are classified as “low-resource” solely because they lack digitally accessible training data. Mala’s framing from her MLCommons work exposes this perfectly: languages “highly developed in terms of grammar, bodies of literature, number of speakers, age, expressiveness” get labeled low-resource because “the corpus of accessible data to train the LLMs is scarce, even if written and oral histories are long and rich.”
The classification centers AI system needs rather than linguistic community realities. The resource scarcity isn’t in the language but in the extraction infrastructure. We’ve inverted the problem: instead of asking how AI systems should adapt to linguistic diversity, we’ve classified languages by their usefulness to existing AI architectures. So it is not neutral terminology but a value judgment about which languages matter, encoded as technical description.
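One way to make this inversion concrete is tokenizer fertility: subword vocabularies trained mostly on English fragment other languages into many more tokens, raising inference costs and shrinking effective context windows for their speakers. Below is a minimal sketch, assuming the tiktoken library and its cl100k_base encoding; the sample sentences are illustrative and exact counts will vary.

```python
# Sketch: how an English-centric BPE tokenizer fragments roughly
# equivalent sentences in different languages.
# Assumes: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English": "Good morning, how are you today?",
    "Swahili": "Habari za asubuhi, hali yako ikoje leo?",
    "Yoruba":  "E kaaro, bawo ni o se wa loni?",
}

for lang, text in samples.items():
    tokens = enc.encode(text)
    # More tokens for the same meaning translates into higher cost
    # and a shorter effective context window for that language.
    print(f"{lang:8s} {len(tokens):3d} tokens -> {text}")
```

Exact numbers aside, the pattern (more tokens for comparable meaning) is one measurable way existing architectures encode whose languages they were built around.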
But Mala identifies an even sharper tension: improving LLM coverage for African languages simultaneously enables empowerment and harm. A model fluent in Yoruba could revolutionize literacy programs, or it could generate misinformation at unprecedented scale in communities with limited fact-checking infrastructure. The dual-use dilemma compounds when you consider power asymmetries: who controls the models, who profits from their deployment, and who bears the costs?
Then there’s the alphabet problem. Most sub-Saharan African languages are written in Roman script due to colonial imposition. When AI systems process text, they can’t distinguish whether a romanized word is Zulu, Lingala, or Hausa; the orthography flattens linguistic distinctiveness. Mala’s contrast with South Asian languages reveals what’s been lost. Hindi uses Devanagari; Telugu uses a Brahmic script. These writing systems encode linguistic identity in ways that Roman script for African languages cannot. This isn’t merely a technical constraint. It’s evidence of how colonial language policies in Africa succeeded in ways they didn’t elsewhere: by severing the connection between linguistic identity and written form. When European missionaries and colonial administrators imposed Roman script, they didn’t just change how languages were written; they made those languages less legible to future technologies that rely on orthographic distinctiveness. AI systems now inherit and amplify that severance, collapsing distinct cultures into undifferentiated data: exactly the flattening effect colonialism achieved through imposed orthographies.
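This flattening is easy to observe with off-the-shelf tooling. Here is a minimal sketch, assuming fastText’s publicly released lid.176.bin language-identification model; the sample words are illustrative, and for isolated romanized words the model’s top guesses are often low-confidence or simply wrong.

```python
# Sketch: language identification on short romanized text.
# Assumes: pip install fasttext, plus the public lid.176.bin model
# from fastText's language-identification release.
import fasttext

model = fasttext.load_model("lid.176.bin")

# Single words and short phrases, all in Roman script.
samples = ["safari", "ubuntu", "habari yako", "bawo ni", "molo"]

for text in samples:
    labels, probs = model.predict(text, k=3)
    # The Roman script itself carries no signal about which
    # language a short word belongs to, so confidence collapses.
    guesses = ", ".join(
        f"{label.replace('__label__', '')}: {prob:.2f}"
        for label, prob in zip(labels, probs)
    )
    print(f"{text!r} -> {guesses}")
```

A Devanagari or Ge’ez string, by contrast, narrows the candidate languages before any model weights are consulted; romanization threw that signal away.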
What gets classified as a technical problem in AI is often a political legacy. The “low-resource” designation hides a history. The alphabet problem reveals whose languages were allowed to maintain their written forms and whose were compressed into colonial scripts. And now, as LLMs become infrastructure, these historical compressions determine whose languages can be “included” and on what terms, whether as full participants in AI systems or as afterthoughts, awkwardly retrofitted into architectures designed around other linguistic assumptions entirely.
Participatory Design and Learning Flows
Q: On the Humanitarian AI Today podcast, you discussed building rigorous evaluation practices and community around AI tools like “Signpost.” How do those real-world insights influence your current strategies, especially for projects involving African stakeholders?
A: A lot of the strategies I have are similar to other technology disciplines in which I’ve worked, like UX research and design and open source software. With every new wave of technology, there will be a set of people saying that total inclusiveness is critical, without defining what that means and without understanding what software, tooling, or in this case – models – are needed. Then on the other side are the tech utopia people, who believe that most problems can be solved with digital technologies, and that inclusive design is mostly a waste of time. This always becomes especially obvious in low-income countries in sub-Saharan Africa, as resources, funding, capacity, and stakeholder engagement are relatively scarce compared to high-income countries.
For me, the key is to be realistic and find the right middle ground. It won’t always be possible to design, test, evaluate, implement and deploy a solution with input from African subsistence farmers or community health workers at every stage of a product or program development. But that’s usually not possible here in the United States with American commercial farmers, either. The key is understanding when and how to get feedback and who can best represent the stakeholders given the project constraints. If we hold out for total inclusion, nothing gets built. If we involve no one affected, nothing gets solved.
Q: You’ve emphasized participatory, Global South–centered AI systems. What African-led innovation or policy initiative in AI-for-good do you feel deserves broader recognition or emulation?
A: Masakhane is doing some really great work. We’re also partnering with Zindi.
Q: Over your career (from UN to GitHub to Humane Intelligence) have there been collaborations with African innovators or change-makers that shaped your worldview or strategies?
A: Of course! I have probably learned way more from the African continent than I’ve imparted. I’ve given a lot of talks about cross-cultural and cross-continent learnings in tech.
Editorial Commentary: Mala’s pragmatism about participatory design speaks to a tension familiar to anyone working across resource asymmetries: the impossibility of “total inclusion” versus the harm of complete exclusion, with the solution being a “realistic middle ground.” Her analogy, “African subsistence farmers can’t be involved at every stage, but neither can American commercial farmers,” usefully highlights a universal constraint on stakeholder engagement.
Yet the analogy also opens a deeper question. American commercial farmers participate in agricultural technology within systems where their knowledge is presumed to be expertise, where the infrastructures, economic assumptions, and technical standards have been shaped with them in mind. African subsistence farmers, by contrast, are often invited into AI systems designed within epistemologies that don’t automatically recognize their expertise as expertise. The issue isn’t just “how much” participation is feasible; it’s also “what kind” of participation counts, and on whose terms.
Mala’s reminder that perfectionism can paralyze innovation is important: holding out for “total inclusion” can mean nothing gets built. But we also need to name what makes African contexts “resource-scarce”: not just limited funding or capacity but histories of extraction and development frameworks that position Africa as recipient rather than architect. Participation frameworks designed within that history risk making African input legible only when it fits predetermined constraints.
Her acknowledgment that she’s “learned way more from the African continent” than she’s imparted is significant; it hints at a reversal of the assumed learning direction. The remaining challenge is whether global AI institutions are capable of absorbing those lessons deeply enough to reshape their own foundations.
This is why her references to Masakhane and Zindi matter. These aren’t just examples of promising projects but proof of concept: when African communities control the infrastructure rather than simply provide feedback on someone else’s timeline, different possibilities emerge. Masakhane isn’t merely participating in someone else’s NLP agenda but building language technology on its own terms. Zindi isn’t just consulted about AI challenges; it defines them.
What’s innovative here isn’t just the technology but the governance model. Participation without authority can make existing systems less harmful, but it can’t fundamentally reorient whose logic structures the system in the first place. Mala’s “middle ground” points to a pragmatic way forward inside current constraints. Masakhane and Zindi point to what becomes possible when the starting assumptions themselves shift.
Evaluation Infrastructure
Q: As Interim Executive Director, you’re advancing transparent AI evaluation techniques like open-source “evaluation cards” and bias bounties. How could these tools support African development, local NGOs, and researchers to assess AI systems more effectively?
A: Here’s where it gets a little more complicated. There are currently two organizations called “Humane Intelligence”. One is the nonprofit I currently run, and the other is a public benefit corporation (a startup). Both organizations were (co-)founded by Dr. Rumman Chowdhury. Rumman stepped down from leading the nonprofit in August 2025 so she could focus on the startup, though she remains a Distinguished Advisor to the nonprofit.
The evaluation card idea is something that is emerging from the new startup. It’s a great idea that, if done well, could vastly open up the field of real-world (in-context) AI evaluations at scale. The cards would be open-source software/data. That said, it’s still in development, so I won’t say more for now.
Bias bounties are contextual data science challenges that Rumman pioneered at Humane Intelligence, the nonprofit. The current team and I are deepening their impact by focusing more on lived experience and partner organizations. You can read more about bias bounties here. Please reach out if your organization wants to sponsor one!
In the coming weeks, I hope to have a big announcement about the future of the program.
Editorial Commentary: Mala’s work on evaluation frameworks takes aim at a power gap that is rarely named: the capacity to scrutinize AI systems is itself a scarce resource, concentrated in the Global North. Evaluation requires compute, technical expertise, institutional support, and time, conditions that shape who can demand accountability before deployment versus who encounters harms afterward with little recourse.
If designed with these asymmetries in mind, open-source evaluation cards could help shift that dynamic. They hold real promise for giving African researchers, NGOs, and communities the ability to interrogate AI systems before they are deployed, not just record damage after the fact.
Bias bounties reveal both the promise and the limits of current approaches. Borrowing a “bug bounty” metaphor mobilizes attention and resources, but also frames bias as a discrete flaw. In many African contexts, bias is less a bug than the predictable result of training data, annotation practices, and deployment models that privilege scale over situated accuracy. A bounty can expose problems inside a system; it’s less equipped to question the assumptions that produced them.
Mala’s emphasis on “lived experience” and partnerships suggests she is aware of this tension. By centering partner organizations, these efforts can move beyond identifying harm to helping redefine what “fair” and “fit” mean in local contexts. That is the larger opportunity for evaluation infrastructure: to evolve from detecting bias to enabling communities to surface and reshape the frameworks themselves.
Deployment Realities and the Augmentation Question
Q: Given your global leadership in AI-for-good, how do you balance global agendas with local realities?
A: Particularly in AI, this can be challenging, as an AI model or system built in one geography can be used in many other places and contexts. So what is culturally acceptable, and who is affected by the technology, varies from place to place. Since telemetry is not always precise and we don’t always know where a person is, who they are, and what their needs are, it’s hard to know how to normalize policies around AI use and access. There’s no simple solution to this. I won’t pretend otherwise.
Q: Reflecting on your journey, what values or mindset do you hope to see carried forward by future leaders shaping responsible AI ecosystems in Africa?
A: In any country in the world, and certainly on the African continent, there are many situations in which capacity and labor are inadequate to meet a need. When I worked for UNICEF, it quickly became apparent that, where I was based, there were simply not enough maternal community health workers to help women have safe pregnancies, deliveries, and post-natal care. Building the medical facilities, finding money for salaries, and training more skilled personnel is complicated and expensive, and it takes time. That leaves behind the pregnant people who don’t have access to good maternal care in the meantime.
Will AI solve this? No, of course not. But it can certainly augment existing capacity in ways we could only have dreamed of ten years ago. Imagine building a (highly factually accurate!) generative AI chatbot that can triage patients, so the most complex and urgent cases are prioritized for a highly specialized human healthcare professional to respond to first. The positive outcomes could be huge if the AI system helps address a human labor gap that is unlikely to be resolved soon.
So I hope more funding, resources and attention goes into use cases where the needs are high and market forces are unlikely to provide a purely human-powered solution.
Editorial Commentary: Mala’s candor about the difficulty of balancing global agendas with local realities is instructive in itself. By naming the messiness rather than offering neat solutions, she models the kind of humility that responsible AI work requires. Her maternal-health vignette then shifts the conversation from abstract “AI for good” rhetoric to a concrete, life-and-death use case where augmentation could genuinely matter.
Seeing AI this way (less as a silver bullet, more as a tool for triage in high-need settings) also surfaces the larger policy questions her example hints at. If AI can buy time and extend scarce human expertise, how do we ensure that “buying time” doesn’t become “accepting scarcity” as a permanent condition? How do we design augmentation projects so they are paired with commitments to build the human infrastructure they temporarily supplement? Those are questions Mala’s framing invites rather than avoids.
This is why her emphasis on funding high-need, low-market-viability use cases is so important. It signals a shift from using AI to chase commercial scale toward using AI to meet public needs where private markets won’t go. For African leaders and funders, the opportunity is to take that impulse further: treat AI pilots not as substitutes for human capacity but as catalysts tied to parallel investments in training, retention, and policy reform. In that light, augmentation becomes a bridge to structural change, not a reason to defer it.
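For readers who want the mechanics behind Mala’s maternal-health vignette, the triage idea reduces to a familiar data structure: a priority queue that surfaces the most urgent cases to scarce human specialists first. Here is a minimal sketch, with hypothetical urgency scores standing in for whatever assessment a carefully validated model would produce.

```python
# Sketch of the triage-queue idea; urgency scores are hypothetical
# placeholders for a validated model's assessment.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Case:
    neg_urgency: float  # negated so the min-heap pops the most urgent first
    patient_id: str = field(compare=False)
    summary: str = field(compare=False)

queue: list[Case] = []

def intake(patient_id: str, summary: str, urgency: float) -> None:
    """Queue a triaged case; urgency in [0, 1], higher = more urgent."""
    heapq.heappush(queue, Case(-urgency, patient_id, summary))

def next_case() -> Case:
    """Hand the most urgent waiting case to a human specialist."""
    return heapq.heappop(queue)

intake("A-103", "routine antenatal question", 0.20)
intake("A-117", "heavy bleeding, third trimester", 0.95)
intake("A-121", "persistent headache, high blood pressure", 0.70)

print(next_case().patient_id)  # A-117: the most urgent case comes out first
```

The hard part, as her answer makes clear, is not the queue but the factual accuracy of the assessment feeding it, which is exactly where evaluation work like Humane Intelligence’s comes back in.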
Closing remarks
Mala’s reflections underscore a central lesson for AI in African contexts: the technical, ethical, and social dimensions of AI are inseparable. Bias, participation, evaluation, and augmentation are not just discrete challenges; they intersect, overlap, and amplify one another. Recognizing this entanglement is the first step toward designing AI that responds to lived realities rather than abstract models.
Her work reminds us that agency matters. African languages, communities, and innovators have long histories and rich expertise that AI architectures often overlook. Projects like Masakhane and Zindi demonstrate what becomes possible when African actors shape the infrastructure itself rather than being invited only to react. Authority, not just inclusion, reshapes what AI can achieve.
At the same time, Mala’s pragmatism models how to work constructively within constraints. Augmentation, participatory design, and evaluation frameworks are tools to navigate scarcity and complexity, but they are most powerful when paired with long-term commitments to human capacity, structural investment, and policy engagement. Temporary fixes can save lives or open doors, but they must also catalyze systemic change rather than normalize the status quo.
The broader insight is generative: African AI cannot simply inherit frameworks built elsewhere. It must start from the realities on the ground, integrate ethical and technical considerations from the outset, and center those whose knowledge and experience have historically been marginalized. Doing so does more than mitigate harm; it creates new possibilities for African communities to define both the problems and the solutions.
For leaders, funders, and practitioners, the challenge is clear: treat AI as both a mirror and a lever. Reflect on whose knowledge shapes systems, whose priorities guide deployment, and whose futures are being imagined. Then design with authority, accountability, and aspiration in mind, building systems that not only function but also empower and endure.
Thank you for reading!
Join the mission
This newsletter is independently researched, rooted in community, and crafted with care. Its mission is to break down walls of complexity and exclusion in innovation (including tech, AI, and data) and instead build bridges that amplify African innovation for global audiences.
If you’d like to support this work, there are a few meaningful ways to do so:
Fuel the writing → Ko-fi me or Buy me a coffee (though I’ll always choose hot chocolate!). Every cup helps keep this work independent and community-rooted.
Invest in the next generation → Pick up a copy of my daughter’s children’s book on data literacy, created to spark curiosity and critical thinking in young minds.
Pay it forward → Sponsor a copy for a child who needs it most, or nominate a recipient. Your gift could be the spark that opens a lifelong door to learning.
Amplify African perspectives in global AI conversations → I contributed to Karen’s new book, Everyday Ethical AI: A Guide For Families & Small Businesses on AI ethics, bringing an African perspective to a global conversation about AI. Grab a copy!
Your support is appreciated.


