Be WEIRD or Get Left Out of AI
Building on Crooked Foundations
In conversations about deploying AI in African contexts, a familiar wisdom often emerges. Practitioners acknowledge that achieving total inclusiveness is unrealistic: we can’t involve every stakeholder at every stage of development. The pragmatic solution? Identify the right people who can represent the broader community. Get their feedback. Understand project constraints. Find a middle ground between idealistic demands for universal participation and reckless tech-solutionism that ignores local voices entirely.
This sounds reasonable. It acknowledges real constraints: limited time, scarce resources, and the impossibility of consulting everyone. It resists both naive idealism and Silicon Valley hubris. But the questions we rarely ask are: Who decides who “the right representatives” are? What makes representation “good”? And crucially, according to whose standards are we measuring effective stakeholder engagement?
Recent research suggests the answer is uncomfortable: WEIRD standards. WEIRD, a term from cross-cultural psychology, stands for Western, Educated, Industrialized, Rich, and Democratic societies. These are the very standards embedded in the AI systems we’re trying to adapt through stakeholder engagement.
A Harvard study titled “Which Humans?” reveals that LLMs exhibit psychological patterns closely matching WEIRD populations: the more culturally distant a society is from the United States, the less the models’ responses resemble that society’s, a correlation of r = -.70.
So these aren’t just language or translation issues; the models carry fundamental assumptions about how decisions get made, how relationships work, how knowledge gets validated, and how success is defined. These assumptions, as the research makes clear, are baked into the architecture, present before deployment even begins.
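To make that statistic concrete, here is a minimal sketch (in Python, with invented numbers, not the study’s data) of the kind of country-level correlation the “Which Humans?” paper reports: cultural distance from the United States on one axis, model-to-population response similarity on the other.

```python
# Minimal sketch with invented numbers (NOT the study's data):
# each pair is one country's cultural distance from the US (x)
# and how closely an LLM's survey answers match that country's (y).
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cultural_distance = [0.05, 0.10, 0.25, 0.40, 0.55, 0.70]
llm_similarity = [0.70, 0.90, 0.55, 0.80, 0.45, 0.50]

print(f"r = {pearson_r(cultural_distance, llm_similarity):.2f}")
# A strongly negative r (the paper reports roughly -.70) means the
# model resembles a population less the further that population is,
# culturally, from WEIRD societies.
```

Everything here is hypothetical except the shape of the claim: what the statistic asserts is a systematic decline in fit, not scattered translation errors.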

But the deeper problem here is that the “pragmatic solution” of representative stakeholder engagement operates within the same WEIRD framework as the biased AI itself. We’re using WEIRD institutions to select representatives according to WEIRD criteria, evaluating their input through WEIRD frameworks, measuring success by WEIRD benchmarks, all to address WEIRD bias in AI systems.
The pragmatism is circular. And we rarely ask the key question: In what settings have we actually seen this work? Where’s the evidence that finding “the right representatives” genuinely bridges epistemological gaps rather than creating the appearance of inclusion while reinforcing the underlying bias?
The Representation Apparatus is WEIRD-Structured
Consider how representative stakeholder engagement actually happens in AI deployment projects. An organization (usually Western or Western-funded) identifies a problem, develops or adapts an AI solution, then seeks stakeholder input. The process seems straightforward. But let’s examine each step more closely.
Who gets identified as a stakeholder? Usually people who interface with Western institutions: NGO partners, government officials, “community leaders” recognizable to external organizations, local professionals who speak English or other colonial languages, individuals with formal education credentials. These are precisely the people most likely to have absorbed WEIRD frameworks through their institutional interactions.
What format does “consultation” take? Workshops, surveys, focus groups, interviews: all methods developed within WEIRD social science traditions. These formats privilege certain communication styles: direct verbal feedback, individual opinions, written responses, linear problem-solving discussions. They’re less suited to collective deliberation, oral knowledge transmission, consensus-building through extended dialogue, or decision-making embedded in community relationships.
How is input processed? Feedback gets documented, categorized, analyzed according to frameworks like “user needs,” “pain points,” “feature requests,” “usability issues.” These categories assume individuals have needs, experience problems individually, want features that increase individual efficiency. They don’t easily capture communal needs, collective experiences, or desires for technology that strengthens community bonds rather than individual productivity.
Who evaluates whether representation was adequate? Usually funders, organizational leadership, or external evaluators, overwhelmingly based in WEIRD institutions, using WEIRD metrics: number of stakeholders consulted, diversity of demographic representation, stakeholder satisfaction scores, adoption rates. These metrics assume that counting participants measures inclusion, that demographic diversity equals epistemological diversity, that satisfaction can be surveyed using standardized instruments, and that adoption indicates appropriateness.
At every stage, WEIRD assumptions structure the process. The apparatus itself is WEIRD-designed, WEIRD-operated, WEIRD-evaluated.
The Selection Paradox: WEIRD Criteria for Anti-WEIRD Goals
The paradox becomes sharpest when we examine how “the right representatives” get selected. Organizations want people who can meaningfully represent community perspectives. But “meaningful representation” gets evaluated through WEIRD lenses.
Good representatives are expected to: Articulate clear positions, speak on behalf of others, provide actionable feedback, engage with technical concepts, participate in formal meeting structures, respond to time-bound schedules, translate local knowledge into categories legible to external organizations.
But these expectations privilege: Individuals comfortable with WEIRD professional norms, people who’ve had formal education in WEIRD systems, those experienced in interfacing with NGOs or development organizations, individuals whose authority comes from positions recognizable to external institutions rather than traditional or community-based legitimacy.
The very qualities that make someone “effective” in stakeholder engagement processes (fluency in development discourse, comfort with WEIRD meeting formats, the ability to translate local contexts into external frameworks) are qualities developed through WEIRD socialization. We select for WEIRD-compatibility while claiming to address WEIRD bias.
Meanwhile, who gets excluded? Community elders whose authority comes from traditional structures but who haven’t interfaced with development organizations. People whose knowledge is embedded in practice rather than articulable in abstract terms. Those who make decisions collectively rather than offering individual opinions. Individuals whose expertise is oral, seasonal, spiritual, or otherwise difficult to document in formats legible to external evaluators.
The pragmatist might respond: “But we need people who can actually engage with the process. We can’t redesign the entire development apparatus for each consultation.” Precisely. That’s the admission. The process requires conformity to WEIRD frameworks as a condition of participation. We’re not adapting to local epistemologies, we’re selecting for local people who’ve already adapted to ours.
What “Good Representation” Obscures
Organizations often report successful stakeholder engagement: “We consulted with 50 community members across 5 regions.” “We incorporated feedback from local partners.” “Our stakeholder satisfaction score was 4.2 out of 5.” These metrics create confidence that representation happened adequately.
But what do these measures actually capture? That you consulted 50 people, selected by WEIRD criteria, participating in WEIRD formats, providing feedback filtered through WEIRD frameworks. That you incorporated feedback: the feedback you could understand, categorize, and implement within your existing technical constraints and conceptual frameworks. That stakeholders reported satisfaction, measured through survey instruments designed in WEIRD contexts, asking questions that assume WEIRD notions of satisfaction.
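As a toy illustration (hypothetical numbers, not from any real project), here is how a reassuring headline like “4.2 out of 5” can be computed entirely over the WEIRD-legible segment of a community:

```python
# Toy illustration with hypothetical numbers: the headline average
# says nothing about who was ever eligible to be surveyed.
surveyed = {  # respondents reachable through institutional channels
    "ngo_partner": [5, 4, 4],
    "extension_worker": [4, 5],
    "english_speaking_professional": [4, 4, 4, 4],
}
never_sampled = ["elders", "traditional healers", "oral-knowledge holders"]

scores = [s for group in surveyed.values() for s in group]
print(f"Stakeholder satisfaction: {sum(scores) / len(scores):.1f} / 5")
# -> Stakeholder satisfaction: 4.2 / 5
# The groups in `never_sampled` contribute nothing to the metric,
# and nothing in the report reveals their absence.
```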
What remains invisible? Whether the 50 people genuinely represented community epistemologies or just represented the WEIRD-compatible segment of the community. Whether the feedback you couldn’t incorporate was actually more important than what you could. Whether “satisfaction” measured through surveys captures what matters in local contexts: maybe community harmony matters more than individual satisfaction, maybe oral approval given privately to trusted community members matters more than written survey responses.
The metrics make representation legible to WEIRD institutions, which is precisely the problem. Legibility requires conformity to WEIRD frameworks. What’s illegible, what can’t be easily measured, categorized, or reported to funders, disappears from view.
Here’s the question that should haunt every discussion of pragmatic stakeholder engagement: In what settings have we actually seen this approach genuinely bridge the gap between WEIRD AI and non-WEIRD contexts? Not: where have we achieved high stakeholder satisfaction scores. Not: where have we consulted many people. Not: where have we implemented feedback. But: where have we seen evidence that representative stakeholder engagement addressed fundamental epistemological incompatibility between AI systems built on WEIRD psychology and communities operating from different frameworks?
The honest answer is: we don’t know. We lack the evidence because we lack the evaluation frameworks to even ask the question properly. Our metrics measure WEIRD-legible success: adoption, satisfaction, engagement numbers. They don’t measure epistemological alignment, cultural appropriateness of underlying reasoning patterns, or whether the AI undermines or supports local knowledge systems.
What evidence would actually demonstrate success?
AI advice aligning with rather than contradicting local expertise.
Community knowledge systems strengthened rather than undermined.
Collective decision-making processes enhanced rather than bypassed.
Local epistemologies validated rather than marginalized.
Traditional authorities empowered rather than displaced.
These outcomes would require different evaluation approaches: long-term ethnographic studies, assessments by local knowledge holders using their own criteria, measures of community cohesion and knowledge transmission, and evaluation of whether AI reinforces or disrupts existing successful practices. But such evaluation is rare because it’s expensive, time-consuming, requires deep cultural expertise, produces results not easily reportable to funders, and might reveal that “successful” deployments actually caused harm along dimensions we weren’t measuring. So we continue using WEIRD metrics that show WEIRD-legible success, declare stakeholder engagement effective, and remain blind to whether we’ve actually addressed the fundamental problem.
Three Domains Where the Paradox Manifests
The circular logic becomes concrete when we examine specific deployment contexts.
Healthcare AI: Organizations consult with “community health representatives,” often people who’ve received formal training in Western biomedical frameworks, work with NGO partners, and speak the language of development. They provide valuable feedback about implementation logistics. But do they represent the community members who primarily consult traditional healers? Who understand illness through spiritual frameworks? Who make health decisions collectively with extended families? The “representatives” selected are precisely those most aligned with the WEIRD biomedical model the AI embodies.
Agricultural Advisory Systems: Stakeholder engagement involves “progressive farmers,” “lead farmers,” or agricultural extension workers, people interfacing with formal agricultural systems, comfortable with market-oriented farming, familiar with development initiatives. But do they represent farmers whose primary goal is food security over profit maximization? Who farm using traditional agro-ecological knowledge? Who make decisions through communal land tenure systems? The selection process privileges those already operating in frameworks compatible with the AI’s market-oriented, individualistic assumptions.
Educational Technology: Consultation happens with teachers, school administrators, education officials, people embedded in formal education systems designed on WEIRD models. But do they represent communities where education happens through apprenticeship and oral knowledge transmission? Where learning is assessed through community contribution rather than individual testing? Where knowledge belongs to elders and is transmitted according to traditional protocols? The “representatives” are selected from within the very system whose assumptions the AI reinforces.
In each domain, the pragmatic selection of accessible, articulate, institutionally-connected representatives systematically excludes the voices most likely to surface fundamental epistemological incompatibility.
If the pragmatic solution operates within the problem it claims to solve, what does that mean for current practice?
It means much of what’s called “stakeholder engagement” is actually a legitimation ritual. Not intentionally (most practitioners genuinely want meaningful inclusion) but structurally: the process functions to create the appearance of responsiveness while keeping WEIRD frameworks intact. We consult, document, incorporate feedback, measure satisfaction, all while the fundamental epistemological orientation of the AI, the evaluation frameworks, and the engagement process itself remain WEIRD-structured.
It means the middle ground between “total inclusion” and “no inclusion” might be a false choice. The real choice is between WEIRD-structured inclusion (which is what we have) and radically different approaches to developing AI that begin from non-WEIRD epistemologies (which we barely attempt).
It means the question “who can best represent stakeholders given project constraints?” contains a hidden premise: that representation within WEIRD-structured processes is possible and valuable. But what if the constraints themselves (the project timelines, the funding structures, the organizational forms, the evaluation metrics) are WEIRD-designed and incompatible with genuine epistemological diversity?
What Would Actually Be Different?
If we took this critique seriously, practice would change fundamentally.
Different starting questions: Not “how do we get stakeholder input on this AI?” but “should AI built on WEIRD foundations be deployed here at all?” Not “who can represent the community?” but “what would AI look like if built from the ground up on local epistemologies?”
Different power structures: African institutions defining research agendas, not consulting on Western-led projects. African researchers developing AI from Ubuntu principles, oral knowledge frameworks, communal decision-making models. African communities deciding if and how AI fits rather than adapting to AI designed elsewhere.
Different evaluation frameworks: Long-term assessment by local knowledge holders using their criteria. Measures of whether AI strengthens or weakens indigenous knowledge systems. Evaluation of compatibility with community decision-making processes. Assessment by traditional authorities using their frameworks for legitimate knowledge.
Different funding structures: Resources flowing to African-led fundamental AI research, not just deployment projects. Patient funding timelines that allow deep community engagement, not rapid deployment. Budgets for building local AI development capacity, not just stakeholder consultations.
Different honesty: Explicit acknowledgment that current AI is WEIRD-biased and may be inappropriate for many contexts. Documentation of fundamental incompatibilities, not just implementation challenges. Willingness to say “this technology isn’t suitable” rather than finding representatives who’ll validate deployment.
These approaches look nothing like current practice. That’s the point. Current practice operates within WEIRD frameworks while claiming to address WEIRD bias. Genuine change requires stepping outside those frameworks entirely.
Conclusion
The perspective that we can’t include everyone so we must find the right representatives contains a fatal flaw. It assumes representation within WEIRD-structured processes can address problems that exist at the epistemological level of the AI itself. But the entire apparatus is circular: WEIRD institutions using WEIRD criteria selecting WEIRD-compatible representatives who provide WEIRD-legible feedback evaluated by WEIRD metrics to validate that WEIRD-biased AI has been adequately adapted through stakeholder engagement.
The pragmatism isn’t wrong because it acknowledges constraints. It’s wrong because it treats those constraints as natural and inevitable rather than as products of WEIRD-structured development systems. The middle ground isn’t between total inclusion and no inclusion but between WEIRD inclusion and no inclusion, presented as if WEIRD inclusion equals meaningful inclusion.
The unpopular truth is that we lack compelling evidence that current approaches to stakeholder engagement genuinely bridge epistemological gaps. We have metrics that measure WEIRD-legible success. We lack evaluation that would reveal whether we’re actually addressing fundamental incompatibility or just creating sophisticated legitimation for imposing WEIRD AI on non-WEIRD contexts.
Until we’re willing to ask and honestly answer “who represents the ‘who,’ according to whose criteria, measured by whose standards?”, we’re just elaborating the same circular logic with better documentation. Pragmatism would acknowledge that AI built on WEIRD foundations, deployed through WEIRD processes, evaluated by WEIRD metrics, with stakeholder engagement structured by WEIRD assumptions about representation, probably can’t serve non-WEIRD contexts well, no matter how many representatives we consult.
So can existing structures ever produce what we claim to want: AI that genuinely serves epistemologically diverse contexts rather than imposing WEIRD frameworks with a consultative veneer? The answer might be no. And that answer, uncomfortable as it is, might be the starting point for something actually different.
Thank you for reading!
Note: This article reflects themes explored during my recent “Beyond the Hype” workshop. Part 1 of the write-up (covering the behind-the-scenes process) was published last week. Part 2, with participant insights and practical frameworks, is forthcoming; stay tuned!
Join the mission
This newsletter is independently researched, rooted in community, and crafted with care. Its mission is to break down walls of complexity and exclusion in innovation (including tech, AI, and data) and instead build bridges that amplify African innovation for global audiences.
If you’d like to support this work, there are a few meaningful ways to do so:
Fuel the writing → Ko-fi me or Buy me a coffee (though I’ll always choose hot chocolate!). Every cup helps keep this work independent and community-rooted.
Invest in the next generation → Pick up a copy of my daughter’s children’s book on data literacy, created to spark curiosity and critical thinking in young minds.
Pay it forward → Sponsor a copy for a child who needs it most, or nominate a recipient. Your gift could be the spark that opens a lifelong door to learning.
Amplify African perspectives in global AI conversations → I contributed to Karen’s new book on AI ethics, Everyday Ethical AI: A Guide For Families & Small Businesses, bringing an African perspective to a global conversation about AI. Grab a copy!
Your support is appreciated.



Excellent review of the issues here, Rebecca.
It seems the only way to achieve this is to build a home-grown system and train AI from the ground up. I discussed this with AI; does this summary contrast seem reasonable to you on the core dimensions of AI? I’m just testing how well the Western AI model understands context in Africa. From what I’ve learned so far, this seems like it’s in the ballpark.
AI Dimensions

Knowledge Source
Western AI: Based on written, data-driven sources such as academic texts, code, and statistics.
African AI: Grounded in oral, communal, and experiential knowledge, including stories, proverbs, and shared memory.

Logic
Western AI: Uses binary, linear reasoning: something is true or false, efficient or inefficient.
African AI: Uses contextual, relational reasoning: meaning and truth depend on relationships, setting, and consensus.

Ethics
Western AI: Emphasizes individual rights, autonomy, and objectivity.
African AI: Emphasizes communal harmony, collective responsibility, and restoration of balance.

Language
Western AI: Dominated by English and other major global languages.
African AI: Multilingual and fluid, moving naturally among local dialects and languages, respecting oral nuance.

Goal or Optimization Principle
Western AI: Aims for efficiency, accuracy, and innovation.
African AI: Aims for balance, fairness, sustainability, and human continuity.
As for who gets involved, as you’ve laid out so well in the piece, that’s a major challenge — finding the right mix of representation that includes most voices, since including all would be a stretch.
This is not a new problem, but boy is it ever still fundamentally problematic. Funny timing, because I was working today on a piece about the pitfalls of relying on existing (recently defined) laws and governance models as reference points as African nations move forward with AI governance. It's EXACTLY this issue around consultation and accurate representation where the breakdown in applicability is rooted. By the way -- my most recent note is looking for further recommendations for reading on African moral and ethical traditions. We (westerners/global north) have to get beyond the tendency to throw around the word "ubuntu." Thank you, as always, for your writing! I always learn something and get ideas on what else to learn.