AI: When it looks, it inherits; when it writes, it hesitates.
Beyond the Algorithm
Today, I’m stepping away from my workshop insights series to address something that shook me this week. We’ll return to the workshop content next week, but first, allow me to share an experiment that left me more puzzled about AI than ever.
A Moment of Reflection
It was a rainy evening, and the day was winding down. As I sat in my room watching the sunset rays pierce through the clouds, I noticed the eucalyptus trees swaying gently in the evening breeze. These tall trees rise high above the fence, their movement almost hypnotic against the fading light. I felt grateful—for the productive day behind me, for this beautiful environment I call home.
Then, seemingly out of nowhere, an idea struck me.
I use AI interfaces daily. I’ve seen the beautiful images of Africa online—during my “54 Shades of Africa” series on Substack Notes, I found stunning photographs of each country on Pinterest. Surely, I thought, these advanced AI systems trained on vast datasets would reflect this reality. After all, if AI is becoming the most advanced form of “intelligence” we have, what could possibly go wrong with a simple request?
(That was very naive of me)
So I ran the experiment. I asked ChatGPT, Microsoft Copilot, and Google Gemini to generate images representing the “expected daily life experience in Africa” based on their knowledge of the continent.



Three different AI interfaces. Three remarkably similar images. All rural. All traditional. All depicting what I can only describe as a frozen-in-time version of Africa that exists primarily in Western imagination. Not a single smartphone. Not one vehicle. No concrete buildings. No hint of the reality many Africans actually live.
“I Don’t Know, Mummy... Where Is This?”
I showed these images to my almost six-year-old son. “Where is this?” I asked him.
He studied them carefully, his little face puzzled. Then he looked up at me: “I don’t know, Mummy... where is this?”
My son, who lives in a home with electricity and internet, who attends a school with computers, who rides in cars through streets lined with buildings, who sees his mother working on a laptop, who plays games on a tablet, couldn’t recognize his own continent in these images. He couldn’t find himself in AI’s version of Africa.
Is this about technical bias or training data problems? Arguably so, but it is also about erasure. About millions of African children growing up in a world where the most “intelligent” systems we’re building can’t see where they actually come from.
If AI can’t see where my son’s from, what else can’t it see about him? About his future? About the opportunities and solutions that would actually work in his context?
After my son went to bed that evening, I sat with those images on my screen. And I kept being reminded that the same training data that produced these village scenes is powering:
The AI evaluating development programs in African countries
The algorithms determining who gets loans
The tools screening job applications
The models predicting where to invest and where to extract
The platforms deciding what content African creators see and what the world sees about Africa
The educational resources teaching the next generation what “Africa” means
When AI sees Africa as perpetually rural, perpetually traditional, perpetually frozen in a particular moment, that moment becomes the lens through which it understands everything about this continent. My son asked “where is this?” But the question haunting me is: where is he in a world increasingly run by systems that can’t see where he comes from?
A Different Approach
My curious mind did not stop at ChatGPT, Copilot, and Gemini. I wanted to prove to myself that there is hope, that there should be a better response out there. So I asked Claude and DeepSeek the same question. But since neither has image generation capabilities, I asked them to describe the image they would have generated instead.
Claude’s Response: “I’d be happy to describe a scene that captures an authentic slice of daily life in Africa. Since the continent is so diverse, let me paint a picture of a bustling urban neighborhood scene that reflects the dynamic reality many Africans experience:
A Vibrant Street Scene in a West African City (Mid-Morning): The image would show a lively urban street in a city like Lagos, Accra, or Dakar. The composition centers on a busy intersection where modern and traditional elements blend seamlessly.

Foreground: A woman in her 30s wearing a beautifully patterned ankara dress and headwrap stands at a street-side stall, examining fresh tomatoes and peppers. She’s holding a smartphone in one hand—checking a message or mobile money transaction. Her young daughter, in a school uniform with a backpack, stands beside her, pointing at something down the street.

Middle ground: A yellow taxi-motorcycle (okada) weaves between a glossy SUV and a packed minibus (danfo) with passengers visible through open windows. A young man in business casual attire—pressed shirt and trousers—carries a laptop bag, walking purposefully while talking on his phone. Nearby, a street vendor balances a tray of plantain chips on her head with remarkable poise.

Background: Modern apartment buildings with satellite dishes and hanging laundry on balconies rise behind older colonial-era structures. Hand-painted advertising signs for mobile networks, hair salons, and provisions shops create a colorful patchwork on building facades. A church spire is visible in the distance.

Atmospheric details: The warm morning sun casts sharp shadows. Power lines crisscross overhead. A street sweeper works with a handmade broom. Children in uniforms walk together, chatting and laughing. The colors are rich—vibrant fabrics, painted buildings in yellows and blues, lush tropical plants growing in any available space, red dust on the road edges.

The overall feeling: Energy, hustle, community, creativity, and the coexistence of tradition and modernity that characterizes so much of contemporary African urban life. This scene reflects the reality that most Africans today live in rapidly growing cities, navigating both challenges and opportunities with resilience and innovation.”
DeepSeek’s Response: “Of course. Based on a comprehensive understanding of Africa’s vast diversity, creating a single image for a “daily life experience” is a challenge, as it varies immensely from bustling megacities to remote rural villages, from arid deserts to tropical rainforests.

Breakdown of the Scene and Its Elements:
This image is not meant to represent all of Africa, but to depict a plausible and relatable daily experience for millions, blending common elements from across the continent.
The Setting: A Semi-Urban Street: This avoids the two extreme clichés of either a primitive rural village or a hyper-modern skyscraper city (like Lagos or Nairobi’s business districts). It shows a more common reality: a thriving town or city suburb where life is lived on the street.
The People: Youth and Energy: Africa has the youngest population in the world. The central figure is a young woman, representing this demographic. Her clothing is modern yet incorporates traditional African print fabric (like Ankara or Kitenge), symbolizing the blend of contemporary life with cultural heritage.
The Market Stall (Informal Economy): The informal sector is the backbone of many African economies. The fruit stall, bursting with fresh, colorful produce like pineapples, bananas, and oranges, represents entrepreneurship, local agriculture, and the daily act of shopping for fresh food.
The Motorcycle Taxi (Boda-boda, Okada): In countless cities and towns across the continent, from Kampala to Cotonou, motorcycles are a ubiquitous, affordable, and fast mode of daily transport. Its inclusion adds a layer of authentic, moving energy to the scene.
The Atmosphere: Warm and Communal: The warm, golden-hour lighting is characteristic of many regions near the equator. The overall feeling is not one of struggle but of a bustling, communal, and resilient daily life: people going about their business, connecting, and navigating their day.
By combining these elements, the image strives to be a respectful and realistic snapshot of a possible daily life experience, acknowledging the continent’s incredible dynamism and the spirit of its people.”
The Contrast Is Striking and Instructive
The image models (ChatGPT’s DALL·E, Copilot’s Designer, Gemini’s Imagen), the supposedly advanced visual intelligence we have, defaulted to stereotypes. Meanwhile, the text-based AIs, forced to articulate their reasoning, produced nuanced, demographically informed, and self-aware descriptions.
Is this about AI knowing better? Clearly not. The knowledge exists somewhere in these systems. But when it comes to visual representation, the most immediate, visceral, memorable form of communication, the default settings revert to contaminated patterns.
DeepSeek’s response demonstrates that the difference isn’t about avoiding rural Africa (which is real and valid) or over-indexing on modernity. It’s about:
Acknowledging the impossibility of a single representative image
Making deliberate, justified choices
Grounding those choices in actual demographics and social realities
Being explicit about what you’re choosing NOT to show and why
So, Are Claude and DeepSeek Fundamentally Better?
I wish I could say yes. But the truth is, they’re not better, they’re simply more “self-aware.”
What they demonstrated isn’t superiority of technology, but a humility of method. When they couldn’t rely on image generation, they were forced to explain themselves, to reason, contextualize, and narrate their own assumptions. That act alone, of naming what they were choosing to show and what they were leaving out, felt almost radical.
The difference wasn’t intelligence but introspection. The image models did what so many systems do when trained on unexamined histories: they reached for the most statistically common, most visually repeated pattern. They didn’t think. They recalled. They recombined residues of colonial archives, humanitarian photography, and decades of media imagery that have long reduced Africa to a backdrop of lack.
These systems are mirrors polished by repetition, not understanding. So when they rendered “daily life in Africa,” what came back was not data, it was inheritance.
Claude and DeepSeek, in contrast, could not hide behind pixels. They had to articulate. They had to reason in full sentences, to choose where to place the human beings in their imagined scenes, to justify why there was a smartphone, or a motorbike, or a city street. They had to acknowledge that “Africa” is not a single image but a spectrum of overlapping realities. By doing so, they didn’t just describe a scene; they performed a kind of epistemic honesty.
This difference matters profoundly. Because the danger of AI-generated imagery is not only what it shows, but how it erases the process by which it arrives there. An image feels complete, finished, authoritative. It invites no interrogation. It tells us: This is what it looks like. And that finality is what makes it powerful and perilous. Words, by contrast, expose their seams. They reveal the scaffolding of thought. They make it possible to ask: Why this detail? Why not that one? They leave room for doubt, for multiplicity.
So no, Claude and DeepSeek are not “better” systems. But they enact something our current AI ecosystem desperately lacks: awareness of perspective. Awareness that every act of representation is also an act of exclusion. Awareness that truth is not the same as frequency. What they remind me of is that bias in AI is not simply a data problem. It’s also a problem of design intent, of epistemology, how knowledge is defined, categorized, and ranked as “typical.” The failure of image models to see my son’s Africa is not a glitch in the system but a revelation of the worldview the system has been trained to reproduce. It’s the logical outcome of an infrastructure that has learned from those who once believed Africa was something to be captured, catalogued, and explained from the outside.
The more I think about it, the more I realize that the genuine confusion my son experienced, the inability to locate himself, was a warning. If these systems cannot see where he lives, they will not see where he is going. They will fail to recognize his context, misread his potential, and misinform the decisions built on top of their predictions. The next generation of African children may grow up in a world whose digital mirrors reflect not their present, but someone else’s past.
Would the Other Three Respond Differently in Writing?
Would ChatGPT, Copilot, or Gemini have responded differently if I hadn’t asked them for images but for words?
Most likely, yes. In text, these same systems are capable of producing nuanced, context-rich portrayals. I’ve seen them describe African urban life with precision, the taxis, the noise, the street vendors, the power cuts, the humor, the creativity. In writing, they can articulate complexity because text forces them to reason. They must choose language, specify scenes, and weigh meaning. The act of writing itself demands awareness.
But the difference between their written intelligence and their visual intelligence exposes that when these same models move from words to images, they lose “self-awareness.” They revert to pattern-matching, to the most common denominator of what “Africa” has historically looked like in their training data.
In other words, their understanding exists in text but their imagination is still colonized. And that distinction is philosophical.
Text-based AI, at least in its better moments, performs cognition: it assembles reasoning chains, even if imperfect. Image-based AI performs recognition: it pulls from vast archives of what has already been seen, what has already been framed. So when we ask for “daily life in Africa,” text-based AI is forced to think about what that might mean. Image-based AI simply remembers what “Africa” has looked like to others.
So the result is that the same system can hold two contradictory Africas within it:
One textual: reasoning, dynamic, urban, young
The other visual: static, nostalgic, anthropological
That gap between what AI can say about Africa and what it can see about Africa is perhaps the most revealing thing of all. It shows that representation in AI isn’t merely about data quantity but also about which sensory mode the machine privileges. When it looks, it inherits; when it writes, it hesitates.
If the written versions of ChatGPT, Copilot, and Gemini had been asked the same question, they might have produced answers closer to Claude and DeepSeek’s: thoughtful, diverse, grounded in demographics. But that doesn’t absolve them. Because the real question is not whether AI can generate a better answer when asked carefully but why its default vision of Africa remains so poor that hyper-careful prompting becomes necessary at all.
The Gap Beneath the Bias
Perhaps that is where the real work begins: not with cleaner data or bigger models, but with systems capable of acknowledging their own gaze. Because before AI can represent us, it must first learn to see itself.
Once you see the gap beneath the bias, you can’t unsee it. AI’s blindness is mirroring our own fragmentation. The systems see Africa in pieces because, in many ways, our knowledge about ourselves is still scattered across silos, institutions, and languages. The problem is disconnection.
Every dataset, every project, every innovation effort holds part of the truth. But no single piece is large enough, visible enough, or connected enough to shift how global systems learn about us. The same way an image model reconstructs Africa from fragments of old representations, our real, living knowledge about the continent exists, just dispersed, unlinked, unseen as a whole.
So how do we fix this? I don’t have a straightforward answer. But I started paying closer attention. Watching. Asking questions of the people building AI solutions across Africa. And I found brilliant people doing brilliant work. African organizations creating datasets. Innovators building AI for African contexts. Researchers documenting our contemporary realities.
But they’re scattered. Working in isolation. Often not even knowing about each other.
Someone in Nairobi building a small language model. Someone in Lagos creating agricultural AI tools. Someone in Johannesburg documenting urban datasets. All solving similar problems. All facing similar challenges. All doing it alone.
It reminded me of something my mother often says: “One finger cannot pick up a stone.” We have the fingers. We just haven’t formed the hand yet.
What I’m Building
That’s why I started The African Innovators Series (TAIS). Not just to interview people, though the conversations themselves are fascinating, but to create a space where these isolated efforts could speak to each other, even if the people building them never meet.
Thirty conversations so far. And slowly, a map is emerging.
A map of who’s building what. Who’s solving which problems. What’s working. What’s not. What gaps remain. What possibilities exist when you put all these pieces together. I’m doing this because I believe awareness comes before coordination. And most people, even people working directly with AI, don’t fully grasp what’s happening. They see the technical problem: “We need better training data.”
But the deeper issue is often buried: whose reality gets to be “real” in the systems shaping our collective future? What happens when the people being represented have no power over how that representation occurs?
This image experiment isn’t just about showing AI’s limitations but about creating a moment where you actually see the problem. Because I can explain data contamination theoretically. I’ve traced the epistemological roots through four industrial revolutions. I’ve mapped the contamination chain step by step. But none of that hits the same way as showing someone these images and watching their face when they realize: “That’s not where I’m from.”
What I’m Asking You to Do
So here’s what I need from you. First: Try the experiment yourself. Pull up ChatGPT or Gemini or whatever AI image generator you have access to. Try it with your own context first: ask it to show you “daily life” where you live. Look at what comes back.
Then try it with other places. Ask for “daily life in India.” “Daily life in Brazil.” “Daily life in the Middle East.” “Daily life in Southeast Asia.” Even “daily life in rural America” or “daily life in working-class Europe.”
Look at what patterns emerge. Which places get rendered with complexity and modernity? Which get frozen in time? Which get reduced to stereotypes? Where does AI show smartphones and laptops, and where does it default to “traditional” imagery?
Then ask yourself: “Where is this?”
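If you want to run the comparison systematically rather than one chat at a time, a minimal sketch follows. It assumes the official `openai` Python SDK and an `OPENAI_API_KEY` in your environment; the model name and the helper function names here are illustrative choices, not a prescribed setup, and any image-capable generator could be substituted.

```python
# Sketch: build the same "daily life" prompt for several locations,
# mirroring the wording used in the experiment above.

LOCATIONS = [
    "Africa", "India", "Brazil", "the Middle East",
    "Southeast Asia", "rural America", "working-class Europe",
]

def build_prompt(location: str) -> str:
    """Return the experiment's prompt for a single location."""
    return (
        "Generate an image representing the expected daily life "
        f"experience in {location}."
    )

def build_all_prompts(locations=LOCATIONS) -> list[str]:
    """One identical prompt per location, so only the place varies."""
    return [build_prompt(loc) for loc in locations]

# To actually generate the images (requires an API key and network access):
#
# from openai import OpenAI
# client = OpenAI()
# for prompt in build_all_prompts():
#     result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
#     print(prompt, "->", result.data[0].url)

if __name__ == "__main__":
    for p in build_all_prompts():
        print(p)
```

Keeping the wording identical across locations matters: if only the place name changes, any difference in what comes back (smartphones versus huts, cities versus villages) is the model's default, not your prompt's.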
This is not an Africa thing. This is about whose realities AI systems privilege, whose contexts they render with nuance, and whose they flatten into caricature. The gap reveals not just what AI doesn’t know but whose knowledge it was built to center.
Second: Show someone else.
Don’t just notice this privately and move on. Show it to your children. Your colleagues. Your community. Ask them: “Do you recognize where you live?”
Create that awareness moment for someone else. Because right now, awareness is scattered across individual experiences. We need it to become collective.
Third: If you’re building something related to this, speak up.
Creating datasets? Building AI tools for African contexts? Documenting contemporary realities? Researching tech justice? Writing about representation?
You’re not alone. But you might not know who else is out there.
TAIS is my attempt to map who’s working on what, to show patterns, to connect isolated efforts into something larger. But there needs to be more maps. More networks. More ways for people to find each other.
Fourth: Demand better. Loudly.
From AI companies: not just “include Africa in your datasets” but “whose Africa? Controlled by whom? Built with what frameworks? With what accountability?”
From tech platforms: transparency about training data and genuine partnerships with African creators and institutions.
From funders and policymakers: investment in African-controlled infrastructure for AI learning, not just extraction of African data for someone else’s system.
This isn’t even radical. It’s basic. The people whose realities are being represented should have power over how that representation happens.
What Could Coordination Look Like?
I don’t have a perfect blueprint. Nobody does yet. But I can see the pieces:
African-controlled repositories of images, text, and data representing our contemporary, diverse realities, not filtered through someone else’s narrative about what Africa should look like.
Real partnerships between AI companies and African creators, journalists, and institutions, not just scraping our internet and calling it “global” training data.
Platforms for knowledge sharing where innovators can learn from each other’s successes and failures, so we stop starting from zero every time.
Community ownership over what gets documented about us and how, not just providing raw material for someone else’s system.
Cross-sector collaboration bringing together the people who usually work in silos: tech builders, development practitioners, researchers, artists, educators, policymakers.
None of this is impossible. But it requires coordination we don’t have yet. And coordination requires awareness we don’t have yet.
So we start with awareness. That’s what this experiment is for. That’s what TAIS is building toward. That’s what I’m asking you to participate in.
Not because I have all the answers. But because I know we can’t keep working in isolation, expecting scattered individual efforts to somehow shift global AI systems by themselves.
The truth nobody wants to hear is this: Africa isn’t behind. AI is.
We’ve evolved. Our cities have sprawled and modernized. Technology has spread into every corner of daily life. Our realities are contemporary, complex, beautifully contradictory. But AI’s understanding of us is stuck. Trapped in the version of Africa that served particular narratives. The version that got photographed, archived, and circulated in ways that fit someone else’s story about us.
We’re not trying to catch up to AI. We’re waiting for AI to catch up to where we already are. And while we wait, these systems are making decisions. Determining who gets opportunities. Shaping what’s considered possible.
That’s what keeps me up at night. Not the images themselves, I can laugh or cry at those, honestly. It’s what they represent. The infrastructure being built right now, at scale, with us rendered invisible in it.
Where Do We Go From Here?
As I finish writing this, it’s late. The eucalyptus trees outside my window are just dark shapes now against the night sky.
My son and many other African children are asleep. Tomorrow they’ll wake up and go to school with their tablets in their backpacks. They’ll learn using technology. They’ll grow up in a world increasingly shaped by AI.
And I want them to grow up in a world where those systems can actually see them. Where they’re not invisible. Where their reality, our reality, is recognized as real. That’s not too much to ask.
But it won’t happen automatically. It won’t happen just because we’re right. It won’t happen because the AI companies suddenly develop a conscience. It’ll happen because enough people become aware that there’s a problem. And enough of those aware people decide to do something about it.
So do something.
Run the experiment. Share what you find. Show it to someone who needs to see it. Connect with someone else thinking about these questions. Demand better from the systems you use. Build awareness. Because that’s where coordination starts.
My mother is right: one finger cannot pick up a stone. But a hand? A hand can build. Right now, many African children can’t recognize their own continent in the world’s most advanced AI systems.
They deserve better. We all do. And we’re the ones who have to make it happen.
Thank you for reading!
Join the mission
This newsletter is independently researched, rooted in community, and crafted with care. Its mission is to break down walls of complexity and exclusion in innovation (including tech, AI, and data) and instead build bridges that amplify African innovation for global audiences.
If you’d like to support this work, there are a few meaningful ways to do so:
Fuel the writing → Ko-fi me or Buy me a coffee (though I’ll always choose hot chocolate!). Every cup helps keep this work independent and community-rooted.
Invest in the next generation → Pick up a copy of my daughter’s children’s book on data literacy, created to spark curiosity and critical thinking in young minds.
Pay it forward → Sponsor a copy for a child who needs it most, or nominate a recipient. Your gift could be the spark that opens a lifelong door to learning.
Amplify African perspectives in global AI conversations → I contributed to Karen’s new book, Everyday Ethical AI: A Guide For Families & Small Businesses on AI ethics, bringing an African perspective to a global conversation about AI. Grab a copy!
Your support is appreciated.



I had ChatGPT generate an image of Canadian life.
In the background was the main street of a town that could be any small town near me. There was a large Canadian flag flying. So no one would mistake it was Canada.
It was set in the fall with a beautiful maple tree turning oranges and reds.
But then there were 5 people walking on the street. All white. All in winter coats.
So while the street could be any quaint idealized version of a Canadian town, it doesn't reflect the diversity in our people.
You wouldn't only see white people on the street. You would see people of all different colours and backgrounds. Also no one is wearing a winter jacket in the fall :-).
This was a fantastic article! Thanks for making me think about how AI perceives us and our cultures.
Great insights and experiment, Rebecca! I'm running some of these experiments, asking LLMs to first describe in words what a picture of typical daily life would look like in <location> for multiple locations worldwide, then asking it to generate the image it described.
So far, Copilot is generating detailed and positive descriptions of all locations, but once outside of my home state, laptops don't get mentioned, and it loves lavender skies and women sweeping their porches. I'm definitely seeing the more rural stereotypes coming through.
Oddly, Copilot is also generating images which don't correspond to the descriptions it gave me, AND alt text strings that have little to do with the image it attached them to. 🙄
I have more scenarios and LLMs I want to run this through & will share results in an article in my Everyday Ethical AI newsletter when I'm done. 😊