
“AI was built by men, for men”

Asma Derja on reclaiming the future of technology

By Ahmetcan Uzlaşık | Curated by Zeynep Bozkır

How ethical is the intelligence we’re building into our machines, and who gets to decide? Asma Derja, a former Amazon senior manager turned AI ethics advocate and the founder of the Ethical AI Alliance, believes the future of technology must be reimagined from the ground up. Drawing on her roots in North Africa and her years of corporate experience, she is leading a growing movement that challenges the power dynamics embedded in artificial intelligence. Through the Ethical AI Alliance, Asma is building a diverse, transdisciplinary community to demand accountability, amplify underrepresented voices, and shape a world where AI serves the public good, not just corporate profit.

In this conversation with journalist Ahmetcan Uzlaşık from Scrolli, she discusses the hidden labor behind AI systems, the gender and racial biases baked into algorithms, the importance of Global South innovation, and why imagination itself is a form of resistance.

Asma, let’s start from the beginning. Who is Asma Derja and what’s your story?

I like to say I’m a former corporate leader who spent 15 years in business transformation, but really, I’m a human with a deep consciousness of human rights, equity, and inclusion. I wanted to merge those worlds: the structured corporate one, and the one fighting for justice. I’m originally from North Africa, with a Moroccan father and a Tunisian mother. I grew up in Marrakesh, studied in Tunisia and France, and I’ve always been what you’d call the immigrant overachiever. But with that privilege comes responsibility, and I wanted to use mine to push for ethics in AI.

And that led to founding the Ethical AI Alliance?

Exactly. After I quit Amazon, I knew the direction I wanted to take. From the inside, I saw the lack of ethical checkpoints in how AI was designed and deployed. But inside corporations, there’s no time for that, it’s not in the KPIs. So I decided to build a community outside that could push back in.

I started reading human rights reports, tracking how tech was enabling war crimes and surveillance. I thought, “Why don’t I rally people around this?” I searched for a domain name, and everything was either taken or cost $20,000! The only one I liked that was available was Ethical AI Alliance. So I built a website and went to my first conference, the UN Summit in Geneva. They almost printed my badge as AWS. I said, “No, I’m with the Ethical AI Alliance.” The name stuck, and the mission began.

What is the Alliance working on now?

We defined the core problem clearly: accountability. There’s no real mechanism to hold AI developers accountable, especially before harm happens. The EU AI Act is a great first step, but it’s after-the-fact regulation. What we’re doing is creating a public watchdog. That starts with a community, diverse by design. Different geographies, genders, disciplines: academia, art, students, policy, tech. We bring them together to co-create solutions.

One of our first tools is an Ethical AI Label, aimed at startups and SMEs. They want to do better but can’t afford expensive ISO or IEEE certifications. We’re helping them apply community-built standards in a way they can afford and understand.

In your writings you have said many times that AI was “built by men, for men.” What do you mean by that?

That phrase gets attention, but sadly, there is a dark reality behind it. About 80% of the global AI workforce is male. And what is AI solving for? If it were built for women, we’d see more focus on gender-based violence, healthcare disparities, or economic inequality, the issues the UN identifies as most urgent. But AI investments don’t go there. When they do, as with a tool tracking femicides in South America, the project is underfunded, barely scaled, and forgotten.

So yes, ChatGPT helps us summarize emails, but the real power of AI lies in the invisible systems: mortgage decisions, healthcare diagnostics, job applications. And they don’t serve everyone equally. Intersectional feminism asks: who does it actually serve?

You also emphasize the role of the Global South. Why is that important?

That’s a very good question. And yes, you’re right in saying that we really aim to amplify Global South voices and understand what’s happening there, things we often don’t hear about. Not just from an aid or humanitarian point of view, like, “Hey, we want to help Africa”, but beyond that kind of narrative.

We want to see what's emerging from the ground: what kind of use cases are being developed in Africa, Latin America, or Asia? What can they teach us about AI? The moment you enter this conversation and frame the question that way, you start to see some really beautiful examples.

I'm especially interested in small language models, community-based and independent, unlike the large, Western, Silicon Valley-driven models that usually focus on scalability and the English language. We're seeing smaller models being developed using local data, solving real problems.

For example, an Indigenous community might need a tool to protect their agriculture or predict climate shifts due to climate change. These are small-scale, local solutions. We're hoping to build a repository of these use cases and create a platform for South-to-South cooperation, so communities can build on each other's work.

And hopefully, there will be space for North–South collaboration too, but ideally with a learning mindset, not one of imposing solutions. That’s exactly what this roundtable aims to foster: South–South dialogue. We have representation from Taiwan, Latin America, and Africa.

One of your articles that stood out to me was “The Weakest Link of AI”, about data labeling. Why is that so critical?

Everything begins with data labeling, and we barely talk about it. It’s outsourced to places like Kenya or the Philippines, under poor conditions and vague, top-down guidelines. A phrase like Allahu Akbar, which is part of daily life for millions, could be labeled as “terrorist” because someone in a Dublin office said so. And these workers are exposed to trauma, paid unfairly, and left unsupported. Yet their labels determine what AI “learns.” It’s the invisible backbone, and we need to shed light on it.

What about data privacy? With tech giants owning AI, people feel helpless.

You’re right. If you’re not concerned about your data today, you’re either misinformed or in denial. This isn’t paranoia, it’s reality. We're already sliding into techno-fascism. Data is being used for political suppression. Students voicing support for Palestine in the U.S. have been detained without charges. It’s terrifying.

We need AI literacy, public pressure, and regulations, basic protections. I admire people like Timnit Gebru who’ve paid the price for speaking out. She told me this won’t be one battle, it’ll be many. Regulation, grassroots pressure, unions, corporate engagement, all at once.

Let’s end on a hopeful note. What’s your vision of a better AI future?

I love this question, because they say imagination is already a form of resistance, right? Even our imagination is colonized, we’re often only shown the future they want us to see. But what if we step back and begin to envision, create, and simply imagine different futures?

To answer your question: we’ve been thinking about this a lot. Maybe we haven’t projected all the way to the year 2050, but we can take it as a rough timeframe. By then, AGI will likely exist. But we don’t imagine it as the monolithic, superhuman, Western-driven version that’s often portrayed, like white humanoid robots dominating the world.

In our collective imagination, AGI isn’t just co-owned, it’s co-governed. Built with and for communities, not just corporations. It’s a steward of collective intelligence, developed ethically, transparently, and with grounded accountability.

We’re imagining something that’s co-designed and co-decided by those historically excluded from power. The ethical frameworks shouldn’t be top-down, they should be living, bottom-up, and transdisciplinary. Every voice should be respected, not overridden by a select few.

We want AGI to serve liberation, not domination. To center justice, environmental sustainability and cultural diversity. A future where languages, communities, and cultures are not erased. Ultimately, it’s about moving from extractive technologies to distributed, inclusive ones.

That’s a great final message. Maybe you should even think about becoming a politician one day, you could start a political party.

Maybe we will. The Community AI Party. Why not?