I don't want to be a party pooper. Truly, I would rather write about something else today, but I feel the need to address the excitement around OpenAI's latest move: the release of its GPT-OSS models, GPT-OSS-20B and GPT-OSS-120B. These models have been widely hailed as a win for the open-source community, celebrated across tech Twitter, and positioned as a breakthrough for democratizing AI.
But are they really open-source, or only open-weight?
The Great Illusion
I feel compelled to call this phenomenon The Great Illusion. We live in an era where “open” no longer means open, and yet it still gets counted as such. Today, many models are labeled “open” when all they offer are open weights: the finished cake, ready to be sliced, served, maybe even slightly re-iced. You can run it, fine-tune it, experiment with it, but you have no idea how it was made.
True open source, by contrast, gives you the full recipe: the training code, the data sources, the methodology, and every decision along the way. It allows you to reproduce the model from scratch, to audit its ingredients, to ask not just “what can it do?” but “what was it built to do and for whom?”
What OpenAI gave us is access to their neural network's final parameters, but not the kitchen where it was baked. So why is everyone calling it "open-source"?
Because the language around "open" has been strategically diluted, and OpenAI isn't the first to do this. Meta, Google DeepMind, and others have all played this game. They release model weights (sometimes under open-source licenses) but don't disclose full training data or methods, then let everyone celebrate it as a victory for openness.
But OpenAI's release is particularly significant: they're the first major US LLM company to make this move, and they've done it with maximum fanfare. They named their models GPT-OSS (OSS = Open Source Software), released the weights under Apache 2.0, and let the media run with the "open-source" narrative. The tech community is gradually accepting this halfway point as "good enough," which has diluted what open-source used to mean. The confusion is not accidental but an industry-wide strategy.
What We Actually Got (And What We Didn't)
OpenAI released:
GPT-OSS-20B: A 21B-parameter mixture-of-experts model (roughly 3.6B parameters active per token), optimized for devices with ~16GB of memory
GPT-OSS-120B: A sparse mixture-of-experts model with 117B parameters (roughly 5.1B active per token), designed to run on a single high-end 80GB GPU
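To make that ~16GB figure concrete, here is a back-of-the-envelope sketch. This is my own arithmetic, not OpenAI's documentation: it assumes the roughly 4-bit (MXFP4) quantization the weights were released in, and it counts weights only, ignoring activations, KV cache, and runtime overhead.

```python
def approx_weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Rough weight-only memory footprint in GiB.

    Ignores activations, KV cache, and framework overhead, so the real
    requirement is somewhat higher than this number.
    """
    bytes_total = n_params * bits_per_param / 8
    return bytes_total / 1024**3

# GPT-OSS-20B: ~21B parameters at ~4 bits -> weights alone fit comfortably
# under the ~16GB device budget OpenAI cites.
print(round(approx_weight_memory_gb(21e9, 4), 1))   # -> 9.8

# The same model at 16-bit precision would need roughly four times as much,
# which is why the quantized release is what makes laptop inference plausible.
print(round(approx_weight_memory_gb(21e9, 16), 1))  # -> 39.1
```

The point of the arithmetic is simply that "runs on a 16GB machine" is a property of the quantized release, not of the model itself; it does not change the openness question either way.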
These models are powerful, useful, and licensed openly. But let's not confuse access with transparency. Here's what's missing:
The training code that shaped these models
The datasets used to train them
The methodology behind their development
The ability to reproduce them from scratch
Open-weight is a strategic halfway point that looks generous but maintains power. This has become the industry standard. Meta with Llama, Google with Gemma, and now OpenAI with GPT-OSS. They all follow the same pattern: release the weights, claim openness, but keep the recipe locked away. There are calculated reasons why they all stop at this halfway point:
Legal risks: Their training data likely includes copyrighted content. Full disclosure could open legal floodgates.
Competitive edge: Their training methods are proprietary gold. Releasing them would help rivals catch up.
Safety optics: Full openness might enable misuse or so the narrative goes.
Strategic control: Controlled openness maintains market dominance while appearing generous.
Why This Matters More Than We Think
When we start celebrating curated openness like it's full transparency, it tells us something troubling about how low the bar has become. Maybe we've gotten so used to black-box AI that any glimpse behind the curtain feels like a gift.
But what are we really applauding? Access for private gain, or openness for public good?
The people rejoicing loudest are already launching courses, monetization schemes, and AI businesses overnight. The minority asking "why not release the full pipeline?" (those concerned with ethics, equity, and the systems beneath the surface) are drowned out by the celebration. We are rewarding strategic generosity that fuels capitalism instead of transparency.
The African Perspective
This article probably won't be taken seriously by many, not because it lacks clarity but because credibility is coded in this space, and rarely in favor of those outside its center. But the questions remain: What exactly are we applauding? And who benefits when we mistake curated generosity for genuine openness?
If you've read my work before, you know I always center the African lens in these discussions and this very moment hits differently when you're not at the center of the tech ecosystem.
Africa has long been positioned as the "end user" of global AI, not a co-creator. When OpenAI releases something under the banner of "openness," the assumption is that it's now accessible to everyone. But accessibility and equity aren't the same thing.
The constraints are real:
Most African AI students don't have laptops with the GPU power needed to run these massive models, and many research labs lack the stable connectivity and compute infrastructure
Without the full pipeline, we remain dependent on prepackaged intelligence and the values embedded in it
We can't interrogate how African data was used, or audit for bias against our languages and contexts
Curated openness gives us the illusion of inclusion while quietly deepening dependency. How is that empowering?
The Real Question We Should Be Asking
Some of the "authorized voices" in tech know the difference between open-source and open-weight, yet still push the narrative. Others are genuinely oblivious, just loud and riding the hype. But us? We amplify. We repost. We celebrate. Sometimes without reading past the headline. Sometimes because it's trending. Sometimes because we just want to believe the tech world is getting more open.
If we can't control the pipeline, we must control how we position ourselves around it. We don't need permission to flip the script, so here is what we can do, among other things:
Build layered literacy: Don't stop at "Wow, it works." Ask what it was trained on, what it doesn't understand, what contexts it misrepresents. Tech literacy becomes cultural power.
Design ethical business models: Turn usage into ownership through community-led AI co-ops that prioritize public good over profit.
Redefine innovation: In African contexts, innovation isn't disruption; our foundations were already shaped by too much of that. Silicon Valley loves "disruption": breaking things to rebuild them. Africa has experienced centuries of disruption (colonialism, exploitation, extraction). More disruption isn't what we need; we need healing, building on what exists, and respectful adaptation.
What works best here is translation (making global tools work locally), mediation (bridging different systems and ways of knowing), and customization (adapting to specific cultural and practical needs): making tools legible to grandmothers, creating interfaces that don't assume literacy, embedding indigenous ethics into model design.
This moment is a test of whether we'll just consume or whether we'll reshape. Let's applaud technical advances, sure. But let's also ask:
Who has the power to reproduce intelligence?
Who decides what counts as "open"?
Who benefits from curated access, and who's left outside looking in?
Because if we keep mistaking a doorway for a seat at the table, we'll never realize we're still outside. Half-open isn't half-good. It's fully strategic. And maybe it's time we stopped applauding and started building meaningfully.
Thank you for reading!
This newsletter is independently researched, community-rooted, and crafted with care. Its mission is to break down walls of complexity and exclusion in tech, AI, and data to build bridges that amplify African innovation for global audiences.
It highlights how these solutions serve the communities where they're developed, while offering insights for innovators around the world.
If this mission resonates with you, here are ways to help sustain this work:
📩 Become a partner or sponsor of future issues → reambaya@outlook.com
→ 🎁 Every child deserves to be data literate. Grab a copy of my daughter's data literacy children's book, created with care to spark curiosity and critical thinking in young minds. (Click the image below to get your copy!)
You can also sponsor a copy for a child who needs it most or nominate a recipient to receive their copy. Click here to nominate or sponsor.
→ 🧃 Fuel the next story with a one-time contribution. Click the image below to buy me a coffee (though I'd prefer a cup of hot chocolate!)
These stories won't tell themselves, and mainstream tech media isn't rushing to cover them. Help ensure this reaches the audience it deserves.
Let’s signal what matters together.
Thank you for being part of this journey!
Everyone needs to read this.
Rebecca Mbaya just dismantled the illusion behind so-called “open” AI. While the cartel of major labs releases open-weight models wrapped in open-source branding, Rebecca shows exactly how this curated generosity hides structural control, and deepens global dependency.
She doesn’t just critique the tech; she reveals the power dynamics, the strategic ambiguity, and the linguistic sleight of hand that props up this illusion. And she does it through a lens the industry too often ignores.
For contrast, Google's Gemma 3 (my personal favorite open-weight model) ships not only open weights but also source code, inference tools, and technical reports under Google's custom Gemma license. You can audit the architecture, fine-tune it, and build on it, though Google retains some usage conditions and enforcement rights. That's access with transparency, not just brand polish.
But as Rebecca powerfully points out, even this level of openness remains inaccessible to most. Where she lives in Congo, no one has a 16GB GPU. The models may be “open,” but the door only opens for those with the resources to walk through. In that light, so-called democratization becomes another layer of exclusion, a luxury openness, gated by poverty.
This is the kind of writing that makes you see differently.
Read it. Share it. Then ask yourself:
Who gets to build intelligence, and who just gets to borrow it?
Rebecca, the thesis is brilliantly laid out. It sounds like open-source snake oil. OpenAI is already being monetized with subscriptions, and I can't see ads being far off.
How Africa gets in on this AI revolution remains an unanswered question in my mind. As you point out, true open source isn't likely to happen, at least not anytime soon.
Excellent, well researched, insightful read. Thank you.