Uncomfortable "AI" Ethics
Beyond the Algorithm
Before we talk about AI ethics, we need to ask a question most people skip: Whose definition of ethics are we using?
This isn’t academic hair-splitting. It matters because different ethical frameworks lead to radically different conclusions about what’s “good” and what’s “harmful.”
Western philosophical ethics (Kant, Mill, Rawls) emphasizes individual rights, rational principles, and universal rules. Good action is about respecting autonomy, maximizing happiness, or ensuring fairness through consistent application of principles.
Ubuntu ethics from Southern African philosophy would say, “I am because we are.” Good action is what strengthens community, maintains relationships, and ensures collective wellbeing. The individual exists through and for the collective.
Islamic ethics grounds goodness in submission to divine will. Ethical action aligns with what God commands, filtered through scholarly interpretation and community practice.
Feminist ethics of care emphasizes relationships, context, and responsibility. Good action responds to particular needs in particular situations, prioritizing care over abstract principles.
Each of these:
Defines “good” differently
Makes different things matter most
Excludes someone or something from moral consideration
Creates different obligations and permissions
When a Western AI ethicist talks about “fairness,” they often mean equal treatment, applying the same rules to everyone. When someone operating from Ubuntu talks about fairness, they might mean ensuring everyone’s needs are met, even if that requires unequal treatment at some point. When a feminist ethicist talks about fairness, they might reject the whole framework of rules and focus on responding to specific vulnerabilities. These aren’t just different paths to the same destination. They’re different destinations, and there’s no neutral starting point.
Choosing which ethical framework to use is itself an ethical decision with real consequences. When we say “AI ethics,” we’re usually smuggling in Western philosophical assumptions without naming them. So when we talk about AI ethics, we should be working across frameworks, aware that each has blind spots, trying to surface tensions rather than resolve them neatly. And the tensions get really uncomfortable when you look honestly at what ethics actually requires.
The Ethics of Complicity
The first thing most AI ethics conversations won’t say directly: “You are complicit in harm. So am I. We all are.”
I know I am about to sound like a broken record here, but stay with me. You’re reading this piece on a device that most likely contains cobalt, probably mined by children under exploitative conditions in the DRC. The device contains rare earth minerals extracted at environmental and human costs. It was manufactured in facilities where labor practices range from questionable to horrifying.
If you’re using AI tools (and you probably are, whether you realize it or not), you’re participating in systems built on data worker exploitation, energy consumption that accelerates climate change, and training data extracted without consent or fair compensation. You can try to minimize harm, you can choose less exploitative options where they exist, but you cannot achieve purity. There is no ethical consumption under these conditions. There is no way to completely opt out while remaining functional in modern society.
So what do ethics mean when purity is impossible?
The politically correct answer is: “Do the best you can. Choose the least harmful options. Advocate for change while participating in flawed systems.”
The human answer is: Maybe ethical action isn’t about being clean. Maybe it’s about which harms you’re willing to be accountable for.
There’s a difference between unknowing participation and eyes-open complicity. Between “I didn’t know” and “I know, and I’ve decided this harm is one I’ll carry.” Between pretending your hands are clean and acknowledging they’re not and acting anyway. That’s harder and means sitting with the discomfort of knowing you’re causing harm. It means not getting to feel virtuous. It means asking “which harms am I willing to be responsible for?” and not having a satisfying answer.
But complicity is not evenly weighted. Responsibility is not distributed equally.
A person using a device built on extraction is not morally equivalent to a company structuring supply chains that rely on it. A data worker under contract pressure is not responsible in the same way as an executive who designs incentive systems, sets wages, or decides whose labor is disposable. Complicity may be shared, but culpability scales with power, decision-making authority, and exit options.
When I use AI tools despite knowing about data worker exploitation, I’m making a choice. I can’t pretend I’m innocent. I can only decide whether I’m willing to be accountable for that choice and whether I am using whatever leverage I have, however limited, to change the conditions that make it the choice I’m facing. In this sense, ethics under complicity don’t offer innocence but responsibility proportional to capacity.
And most people don’t want that, because responsibility without innocence feels terrible. Especially when it comes without absolution. Doesn’t it?
Ethics Without Optimism
What if we do everything “right” and still lose?
What if we build awareness, foster coordination, organize collective action, recognize leverage, demand structural change, and it doesn’t work?
What if the companies are stronger, the systems more entrenched, the resistance too fragmented?
What if the children keep mining cobalt in the DRC, the data workers in Kenya stay exploited, and the extraction continues?
What if ethical action doesn’t lead to justice?
What if you sacrifice and coordinate and refuse participation and nothing changes?
Is ethics still meaningful without hope of winning?
The optimistic answer is: “Yes, because even small changes matter. Because we plant seeds we won’t see bloom. Because the arc of history bends toward justice.”
The uncomfortable answer is: maybe, but only if you’re willing to detach ethical obligation from outcome. You have to decide whether acting ethically matters to you even when you can’t know whether the arc bends anywhere, or whether it bends at all.
Some ethical traditions answer this directly. Religious ethics says you act rightly because God commands it, regardless of outcome; your job is submission, not results. Kantian ethics says you act according to principle because that’s what rational beings do, not because you’re guaranteed success.
But most contemporary AI ethics assumes progress. It assumes that if we identify problems and take action, things will improve. That’s a very modern assumption. It might not be true. And when ethics is justified only by expected success, it collapses the moment success is no longer plausible.
So what remains when optimism fails? Not resignation, but authorship.
Ethics without optimism is not about believing your actions will win. It’s about refusing to let the shape of the world be decided entirely without our consent, about choosing how we participate or refuse to, knowing the outcome may be unchanged but the responsibility is still ours.
What if the ethical choice is to refuse participation, recognize leverage, organize collectively, not because it will work, but because refusing to try hands moral authority entirely to those with “power”? That’s much harder than “be ethical and create change.” That’s “be ethical even when change isn’t coming.” It replaces the question “will this succeed?” with “who gets to decide what harm is acceptable if I do nothing?”
I don’t have an answer to whether that’s comforting. I do know it’s not nihilism. It’s ethics stripped of guarantees, acting without the promise of redemption, progress, or moral cleanliness. And I know most ethics frameworks don’t prepare us for that kind of commitment.
Whose Ethics Count as Resistance?
The same act of refusal is interpreted very differently depending on who performs it and under what conditions. When workers in Silicon Valley organize for better conditions, we call it labor rights. When they push back on unethical projects, we call it principled refusal. When they withhold labor to demand change, we call it a strike.
When African communities refuse to allow mining under certain conditions, we call it resistance or barriers to development. When they demand control over their own resources, we call it resource nationalism or political instability. When they withhold cooperation from extractive industries, we call it obstruction. These are not identical situations. They carry different risks, different stakes, and different exposures to violence, but they are related acts of refusal, judged through radically different power lenses.
The difference in framing is not accidental. “Ethics” tends to be coded as something people with institutional power do to restrain themselves: an internal virtue, safely exercised within recognized systems. “Resistance,” by contrast, is how refusal by people without institutional legitimacy gets named: as disruption, threat, or instability. We moralize one and securitize the other.
When a Google engineer refuses to work on Project Maven (military AI), that refusal is legible as ethical because it occurs within a context of legal protection, professional mobility, and public legitimacy. When a Kenyan data worker refuses to label certain content, the same refusal is more likely to be read as insubordination, breach of contract, or failure to perform, not as ethics, but as a problem to be managed.
The ethics you’re performing might be someone else’s obstruction to progress. The resistance you’re dismissing might be someone else’s ethical practice, and which framing gets applied often has less to do with the substance of the refusal than with who has the power to name it, absorb its costs, and survive its consequences.
If you’re from the Global North, refusing to participate in extractive AI is often framed as ethical leadership. If you’re from the Global South, refusing to allow extraction is framed as a barrier. The refusal is not the same, but the moral logic is comparable. What differs is the distribution of risk and legitimacy.
This matters because if we only recognize refusal as ethical when it comes from people already protected by institutions, we are not talking about ethics in any meaningful sense. We are only cataloguing which refusals the powerful are willing to tolerate.
True ethics may require recognizing that the Kenyan data worker refusing to participate and the African community resisting resource extraction are practicing ethics under constraint, even when their refusal is inconvenient, destabilizing, or punished by the systems that depend on their cooperation.
And we don’t usually frame it that way, do we?
Ethics as Privilege
Can you afford to be ethical?
Let’s say you’re a data worker in Kenya. You know the AI training you’re doing will be used for systems that won’t benefit your community. The ethical choice seems clear: refuse the work.
But refusing the work means:
No income for your family
No school fees for your children
Possible homelessness
Other workers will do it anyway, so your refusal changes nothing structurally
Or let’s say you’re a developer at a company building AI with serious ethical problems. The ethical choice seems clear: quit, blow the whistle, refuse to participate.
But refusing participation means:
Losing your visa status if you’re an immigrant
Losing healthcare in a country without a public health system
Industry blacklisting that makes future employment difficult
Your replacement will continue the work anyway
What does ethics demand when taking the ethical stance means you and your family suffer?
The comfortable answer is: “Ethics is about what you do despite cost.” Or: “Find collective ways to reduce the personal cost.” Or: “Sometimes sacrifice is required.”
The human answer is: Demanding ethical purity from people under constraint is itself unethical.
If you have savings, alternative job prospects, social safety nets, citizenship in a wealthy country, your ethical choices cost less. You can afford to refuse participation. You can afford to quit. You can afford to take stands.
If you’re living paycheck to paycheck, supporting family, lacking citizenship protections, without safety nets, the same ethical choices cost exponentially more.
So is ethics a luxury good available primarily to those who can afford it?
Most ethical frameworks don’t address this directly. They assume you have enough agency to make choices. They assume your survival isn’t constantly threatened. They assume you can afford consequences.
But for many people, “ethical AI practice” as it’s usually framed is functionally inaccessible. Not because they don’t care, but because they can’t afford to care in the ways that frameworks demand. When we celebrate ethical refusal, we might be celebrating privilege as much as principle. When we condemn participation in harmful systems, we might be condemning people whose choices are constrained by survival needs.
This doesn’t mean ethics are meaningless under constraint. It means we need ethics that acknowledge constraint, that don’t demand martyrdom, that recognize some people are being asked to pay infinitely higher prices for ethical stances than others. And it means recognizing that someone participating in harmful AI work while feeding their family is not less ethical than someone refusing participation from a position of financial security. Ethics under privilege looks different from ethics under constraint. We can pretend it’s the same, but it’s not.
When Ethics Requires Harming Someone
Sometimes there is no win-win. Sometimes ethical action requires choosing who to harm.
You’re a data labeler. You know the content you’re moderating is training AI that will be used to suppress speech in authoritarian contexts. The ethical choice seems clear: refuse. But if you refuse:
You lose income your family needs
Your coworkers lose work too as the company cuts the contract
The work goes to another country with less labor protection
The AI gets trained anyway, just with lower-quality data that might cause more harm
These are not purely individual constraints. They are structural ones but they are often felt first, and most sharply, at the individual level.
Or you’re a researcher. You’ve developed an AI technique that could help diagnose disease in under-resourced areas. But you also know it could be used for surveillance and control. The ethical choice seems...what? Publish and enable both uses? Don’t publish and prevent the beneficial use? Who decides?
There is no clean answer. In both cases, someone will get harmed regardless of what you choose. If you label the content, you’re participating in systems of control. If you refuse, you’re harming your coworkers and family. If you publish, you’re enabling surveillance. If you don’t publish, you’re preventing beneficial applications.
Yes, collective action can sometimes reopen these choices. Workers can organize. Researchers can coordinate norms, slow release, or conditional publication. Communities can resist together rather than alone. But collective action itself has costs, risks, and uneven accessibility, and it is often unavailable precisely to those under the greatest constraint. Most ethics frameworks pretend you can find the solution that minimizes harm to all parties. But sometimes you can’t. Sometimes the choice is: harm the company or harm the community. Harm your family or harm distant strangers. Enable misuse or prevent beneficial use. So what does ethics mean when every option includes harm you’re responsible for?
Is the answer “Choose the option that causes the least total harm”? Or “Apply principles consistently”? Or “Do a cost-benefit analysis”?
Is it possible that ethics might mean owning that you’re choosing who to harm, being honest about that choice, and being accountable for it? Not “I minimized harm” but “I chose this harm over that harm, knowing both are real.” Not “I followed principles” but “I picked principles that justify the harm I was willing to cause.”
Not “I did a rational analysis” but “I made a judgment call about which suffering matters more to me, knowing that judgment reflects my position and my biases.”
That’s harder. It means you don’t get to feel like the good guy. It means acknowledging that your ethical choice is also a choice to harm—just harm directed one way rather than another.
When we choose to use AI tools despite data worker exploitation, we’re not minimizing harm; we’re deciding that harm is one we’ll cause for our own benefit. When we choose not to use them, we may be harming our organizations, our careers, and people who depend on our work. Neither choice is clean. Both involve responsibility. There’s no position outside harm. There are only choices about which harm to cause and whether you’ll be honest about making that choice.
Most people don’t want this framing. They want ethics to provide clean answers, justify their choices, and make them feel good. But that’s not what ethics offers when we look honestly at situations shaped by power, constraint, and irreversibility.
So What?
I realize this piece is relentlessly uncomfortable. I haven’t offered solutions or provided frameworks that make things easier. I haven’t told you how to be ethical and feel good about it.
That’s intentional. Because most AI ethics discourse offers false comfort and pretends:
You can achieve purity (you can’t)
Ethical action leads to good outcomes (it might not)
Your ethics are universal principles (they’re positioned choices)
Ethics is available equally to everyone (it’s not)
There are win-win solutions (sometimes there aren’t)
That false comfort is dangerous because it makes people think they’re being ethical when they’re just being comfortable. It turns ethics into performance rather than practice and obscures the real costs and real choices.
So what do we do with uncomfortable ethics? Not nothing and not everything.
Uncomfortable ethics is not an endpoint but the price of refusing lies about purity, progress, and innocence. If ethics doesn’t guarantee success, it still demands practice. Not a checklist or a universal framework but a set of commitments that remain binding even when outcomes are uncertain.
Commitment 1: Sit with the discomfort, but don’t stop there.
Discomfort is not the work; it’s the cost of seeing clearly. Don’t rush to resolve it with frameworks that make you feel better. Stay with the tension of complicity, the possibility of meaningful action without hope, the recognition that your ethics might be someone else’s oppression, the reality that some can’t afford what you can, the truth that sometimes you’re choosing who to harm. Then act anyway, knowing you are acting without guarantees.
Commitment 2: Make harm visible.
When you talk about AI ethics, don’t abstract the damage. Name who bears the cost of the systems you benefit from. Trace where labor, extraction, energy, and risk are pushed downward. If your ethical stance depends on someone else absorbing harm quietly, say that out loud. Ethics that cannot survive being named is avoidance.
Commitment 3: Make your positioning explicit.
Say whose ethics you are invoking. Say what you’re assuming. Say who gets excluded by your framework. Say what privilege makes your ethical stance affordable. Don’t pretend to universality you don’t have. Positioned ethics is not a weakness; it’s more honest, and that honesty is a strength.
Commitment 4: Refuse moral outsourcing.
Don’t hide behind policy, inevitability, market logic, or “that’s just how the system works.” Say: I am choosing this under these constraints. Say what leverage you do or don’t have, and what you’re doing with it. Ethics begins where you stop pretending the system chose for you.
Commitment 5: Build capacity for collective action anyway.
Even knowing you might lose. Even knowing it might not lead anywhere. Even knowing your ethics might be framed as resistance, obstruction, or instability. Even knowing some people can’t afford to join you. Collective action is not justified by guaranteed success. It’s justified by refusing to leave moral decisions entirely in the hands of those with power. The alternative, treating ethics as whatever the powerful say it is, participation as inevitable, resistance as futile, and comfort as the goal, is worse.
Uncomfortable ethics doesn’t offer clean answers, but it does offer honesty about impossible choices, and responsibility without innocence. It asks you to remain answerable even when the outcome disappoints you. And maybe that (refusing to lie to yourself about how hard this is) is more ethical than pretending the choices aren’t impossible.
What uncomfortable ethical truths are you avoiding?
Who can’t afford the ethical stances you’re taking?
Where does your ethics become someone else’s oppression?
These are ongoing obligations. They don’t resolve ethics; they keep it alive.
Thank you for reading!
Note: I also contributed to Everyday Ethical AI: A Guide for Families and Small Businesses, a recent book on ethical AI by Karen Smiley, which touches on many of the tensions discussed here. Readers interested in exploring this topic further may find it a valuable resource.
Support this work
Your support keeps it independent and community-rooted.
Thank you for standing with this work.


