AI in Action: Risks and Opportunities for Peacebuilders
Summary
AI is like a mirror. It shows us what we collectively believe has intelligence and worth. For peacebuilders, this recognition is vital. If our values are fractured, competitive, or hierarchical, if they emphasize military force or coercive power, then AI will reproduce those patterns. This month’s call drew information and inspiration from PPA alumnus and socially conscious entrepreneur Rosen Dimov, who shared practical examples and cautions. We explored the promise and peril of AI: the risks we face and the opportunities we can embrace as compassionate peacebuilders.
We invite you to read the transcript below or watch the 77-minute recording.
Call Nuggets & Resources:
AI mirrors our societies. If we want peaceful technologies, we must embed justice, compassion, and oneness into their design. Technology is never neutral. It reflects the values and assumptions of the cultures and power structures that create it. In the face of advancing technologies, individual voices and actions matter. What is our responsibility as peacebuilders?
AI depends on vast amounts of data. Feeding it inclusive, compassionate datasets is essential to align outcomes with peacebuilding. Rosen shared, “The more data AI has, the better it performs, whether for peace or for war.”
Rosen described AI tools developed to anticipate crises and support proactive interventions, saving lives and strengthening peace work. “AI can even predict the spread of wars, potentially helping save lives.”
Rather than rejecting AI, peacebuilders can use it to ensure that ethical and human-centered values guide its evolution. At the close of our call, Issah Shamsoo shared, “Peacebuilders have a better chance of seeing AI as an ally than as something to fight against. We must influence how systems are developed and insist that governments preserve basic human values.”
The WIRED Guide to Artificial Intelligence (Wired). Link: https://www.wired.com/story/guide-artificial-intelligence
The Impact of AI and Machine Learning on Conflict Prevention (TrendsResearch.org). Link: https://trendsresearch.org/insight/the-impact-of-ai-and-machine-learning-on-conflict-prevention
Keynote Lecture: AI for Peace. Link: https://www.youtube.com/watch?v=7DCw-etSDSc
Full transcript
Hollister | Euphrates:
Welcome, everybody! We’re so glad to be back with you. This year, some of you may have noticed that we’ve been exploring Euphrates’ seven values in the context of these conversations: understanding, inspiration, transformation, service, sustainability, oneness, and love.
Today we are exploring oneness. Here’s how we state this value:
We believe the world is interdependent and interconnected. Each person, regardless of race, religion, culture, gender, sexual orientation, class, or ability, carries innate value and dignity. So does all of nature. Our prosperity is woven together with every living being, even oceans apart. We can only thrive by thriving together. We are one.
This value doesn’t just apply to human relationships. It also applies to the systems we build. Today, one of the most influential systems shaping our lives is artificial intelligence.
I recently picked up James Bridle’s 2022 book Ways of Being. It’s such a powerful exploration of intelligence in all its forms. One of the key takeaways for me was this: technology is never neutral. It reflects the values and assumptions of the cultures and power structures that create it. In that sense, AI is like a mirror. It shows us what we collectively believe has intelligence and worth.
For peacebuilders, this recognition is vital. If our values are fractured, competitive, or hierarchical, if they emphasize military force or coercive power, then AI will reproduce those patterns. But if our guiding value is oneness, then understanding and compassion can be infused into our technological ecosystems.
Just as peacebuilding seeks to heal divides, address systemic inequalities, and listen deeply to others, we must also infuse our technologies with the ethics of oneness, justice, and compassion. That’s how AI can become an ecosystem of peace.
So this brings us to our guest today. We’ve invited Rosen Dimov to join us. Rosen is a teacher of innovation, an AI manager in Vienna, and a mentor to startup founders. Originally from Bulgaria, Rosen has become something of a global traveler.
I first met Rosen through our Peace Practice Alliance program in 2021. He was part of the second cohort that bravely walked through a six-month journey of learning about each other and our place as peacebuilders in this world. He went on to serve on learning committees that support the program.
Rosen was also the first person to introduce me to the idea of compassionate AI. I hadn’t heard that pairing before, and it immediately caught my attention. With a background in social entrepreneurship, youth leadership, and a deep love of conversation and music, Rosen brings curiosity, compassion, and creativity into every space he enters. Today will be no different.
Please join me in welcoming him as we explore together the intersection of AI and peacebuilding.
Rosen Dimov:
It’s good to be here with all of you, and thank you for the invitation. This is a deep subject, and I’m not sure how much we can cover, but I’ll keep my introduction short so we can open it up for discussion and questions.
Thank you, Hollister, and thank you to the Euphrates team for organizing this opportunity to talk about such a controversial and important topic.
I’m joining you from Vienna, where we’re working on one peaceful application of AI: preventing road accidents. We use AI to make sure drivers and passengers aren’t distracted, helping ensure safer mobility.
I’ll try to keep my remarks as simple as possible. If something isn’t clear, please drop your questions in the chat. I’ll also share examples of the work we’ve been doing.
Let me begin with a question. You should see an image on your screen; it’s a black-and-white photo. Anyone recognize it?
Yes, that’s right — the machine in the photo is a decoder, and the person next to it is Alan Turing, the inventor of this device. It’s one of the predecessors of modern AI.
Over time, computing power has grown exponentially. What once required massive machines now fits into small, efficient devices capable of solving far more complex tasks. Turing’s machine is one of the earliest examples of artificial intelligence, and the field has expanded enormously since then.
The origins of AI were deeply tied to World War II. It was driven, in part, by the Allied forces’ need to decode messages and end the war. That history reminds us that AI has always carried both potential for good and potential for destructive use.
Today, AI applications range widely:
Positive applications: using AI for computer vision to improve mobility safety, assigning dangerous tasks to robots (like firefighting or construction), or helping in disaster zones where humans can’t go. AI can even predict the spread of wars, potentially helping save lives.
Negative applications: enhancing warfare capabilities, weaponizing drones, conducting surveillance, and spreading misinformation.
Take drones as an example. They can be used to monitor crops, help farmers, and support food security. But they can also be repurposed to attack civilians or soldiers. Similarly, AI can power humanitarian innovations or be exploited for military control.
Another serious risk is cyberwarfare. AI enables phishing attacks, deepfake voices, and impersonations that manipulate trust. It allows mass creation of fake stories to brainwash people. In cyberspace, there are few limits and the only defense is vigilance. We must raise awareness, stay alert, and use technology to counteract these threats.
At the same time, AI has tremendous promise in healthcare. For example, oncologists can work alongside AI tools to analyze medical images more quickly and accurately, leading to earlier cancer diagnoses. Doctors among us can probably share examples of how they already use AI in their work.
AI also helps with routine or dangerous tasks, from autonomous driving to repetitive office work, freeing humans to focus on higher-value activities. Commercially, AI suggests what we buy, where we travel, and even what we eat. These consumer applications are growing rapidly because AI development requires three big resources: human power, data, and energy.
The more data AI has, the better it performs, whether for peace or for war. So if we want peaceful AI applications, we must intentionally fuel them with peaceful data.
But here’s the philosophical question: how much of our lives do we want decided by AI? We’re already handing autonomy over to smart devices. Do we want AI making all decisions for us? Will these “super machines” act peacefully — or will they be turned against humankind?
This is a question I also ask my students. Ironically, many of their responses are themselves generated by AI, which shows how much critical thinking we risk losing. We must nurture independent thinking and not outsource all decisions to machines.
Hollister | Euphrates:
Thank you, Rosen. So much to think about. I especially appreciated your honesty about students using AI to answer the very question you asked them about AI.
We’ll open up the space now for reflections and questions. This is a time for you to share your experiences from your own fields of work and peacebuilding. What trends are you noticing? What questions do you have? The floor is open…
Discovery Time: Reflections and Q & A
Obi Onyeigwe (Nigeria):
Thank you, Rosen. I’m joining from Nigeria, and I really appreciate what you shared.
I’ve been reflecting on how AI can be used positively, for example, drones delivering food in humanitarian crises. But I don’t think AI should do everything for us.
One of my concerns is the materials used to build AI systems, like lithium and cobalt, often mined in places like the Democratic Republic of Congo, a war-torn region. These materials fuel AI, but at what human cost?
Another issue is AI waste. Discarded devices often end up in the Global South, with no proper disposal. So while AI serves some people, it also creates suffering elsewhere. Doesn’t this contradict peacebuilding?
Rosen Dimov:
Thank you, Obi. That’s such an important point.
Your question in the chat — “Can AI be just?” — is central here. Right now, AI is still human-made and human-controlled. That means it inherits our biases. Sometimes humans are just, sometimes not. So the real question is: who decides?
Do we need a global board to oversee AI development? Or can we trust big companies to regulate themselves? For example, Meta once had an oversight board for content decisions, but they removed it. Who should be responsible for AI ethics?
Awareness and education are crucial. We must first educate ourselves as peacebuilders, then our colleagues and communities, to understand where AI shows up in our lives, what its side effects are, and how to make conscious choices about its role.
Reha Bublani:
Thank you for this broad overview, Rosen. I work in education, both employability training for young people and workshops in war zones. My colleagues sometimes call me an “AI skeptic.” I’m not against it, but I do have strong apprehensions. I see too much over-dependency on AI.
For example, in employability training, so many CVs are clearly AI-generated. In a recent journaling workshop with young girls from Afghanistan, I saw some of their reflections, and they were AI-generated too. That really saddened me.
If AI is thinking for us, where is the humanness? Education should be about nurturing critical thinking, not outsourcing it. I worry AI is making students think less for themselves.
Rosen Dimov:
Thank you, Reha. I’ve seen this too. In fact, when I teach, I sometimes take very strong measures. In one class of 80 students, I asked three to stand up. The rest assumed those three had failed. But instead, I told them: “Congratulations, you’re the only ones who passed. Why? Because you wrote your assignments yourselves. The rest of you failed for using AI.”
It was harsh, but it made the point: originality matters. Otherwise, students are only copying answers without thinking.
Of course, teachers also need support, because parents or administrators may push back against such measures. That’s why we need human-centered AI tools designed not to give students copy-paste answers, but to push them beyond their boundaries, offer multiple perspectives, and encourage critical thinking.
Sadly, we’ve also seen tragic cases like the teenager in the U.S. who followed AI-generated suicide instructions. Over-trusting AI is not only a risk to our humanity, but sometimes to our very lives. We must choose carefully: what do we let AI decide, and what do we keep for ourselves? Conscious boundaries are essential.
Chat Question (Ruth):
How do you teach students to distinguish accurate from incorrect answers when they use AI?
Rosen Dimov:
Great question. First, students must double-check resources themselves — names, dates, facts. AI often “hallucinates,” inventing details, so verification is essential.
Another approach is to ask the same question in different ways and compare responses. If the answers change dramatically, that’s a signal to dig deeper.
Ultimately, this isn’t just about distinguishing correct from incorrect. It’s about cultivating a broader habit of critical thinking, caution, and awareness. Students and all of us must be trained to question and cross-check information, whether it comes from AI or other sources.
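For anyone who wants to experiment with the paraphrase-and-compare check Rosen describes, here is a minimal sketch in Python. It is an illustration under stated assumptions, not a prescribed tool: the ask_model function is a hypothetical placeholder for whatever AI assistant or API you actually use, and the script simply puts several phrasings of the same question side by side so a human can compare the answers and decide what to verify.

```python
# A minimal sketch of "ask the same question in different ways and compare".
# ask_model is a hypothetical placeholder: connect it to whatever AI tool you use.

def ask_model(question: str) -> str:
    """Placeholder for a real call to an AI assistant or API."""
    raise NotImplementedError("Connect this to your own AI tool.")

def compare_phrasings(phrasings: list[str]) -> None:
    """Ask each phrasing and print the answers side by side for human review."""
    answers = {p: ask_model(p) for p in phrasings}
    for phrasing, answer in answers.items():
        print(f"Q: {phrasing}\nA: {answer}\n")
    if len(set(answers.values())) > 1:
        print("The answers differ: treat this as a signal to dig deeper and "
              "verify names, dates, and facts in independent sources.")

# Example usage (hypothetical questions):
# compare_phrasings([
#     "When did this organization begin its work?",
#     "In what year was this organization founded?",
# ])
```

The exact-match comparison at the end is deliberately naive; even consistent answers are usually worded differently, so the real value is in reading the answers together and then cross-checking the key facts yourself, as Rosen suggests.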
Shuvam Sen:
Thank you. I’m working on bringing AI into the healthcare sector, and I wonder: will AI ever reach underserved health systems in lower-income countries, or will it mostly serve the wealthy who can afford advanced technology?
Rosen Dimov:
That’s a vital concern. AI access will not be evenly distributed. In fact, discrimination already shows up in many AI systems. For example, in Vienna we work on facial recognition for mobility safety. Even though we didn’t intentionally program bias, the system sometimes discriminates between men and women, or between younger and older people, or across ethnic groups. Why? Because the training data was unbalanced.
One solution is collecting more diverse data. Another is using synthetic data to fill gaps. But both approaches have limitations.
The larger issue is: who controls these systems? Big companies with resources often prioritize profit, not justice. So yes, there’s a real risk that AI could deepen inequality unless we demand otherwise.
Research shows manual jobs will be automated first. Jobs requiring deep analysis, experience, or empathy, like teaching, social work, or medicine, will be affected later. But even now, companies are using AI to replace entry-level workers. That raises ethical questions about youth unemployment and fairness.
We must insist that AI isn’t only used to save money but also to support retraining, education, and new opportunities. Otherwise, the poorest will be left behind.
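Rosen’s point about unbalanced training data can be made concrete with a very simple audit: measure how often a system is right for each group it serves and look at the gap. Below is a minimal sketch, using invented illustrative numbers rather than data from any real system, of what such a per-group accuracy check might look like.

```python
# A minimal per-group accuracy audit. The records below are invented for
# illustration; they do not come from any real facial-recognition system.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group_label, predicted, actual) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical evaluation results for a detection-style task.
sample = [
    ("group_a", "detected", "detected"),
    ("group_a", "detected", "detected"),
    ("group_a", "detected", "detected"),
    ("group_a", "missed", "detected"),
    ("group_b", "detected", "detected"),
    ("group_b", "missed", "detected"),
    ("group_b", "missed", "detected"),
]

for group, accuracy in sorted(accuracy_by_group(sample).items()):
    print(f"{group}: {accuracy:.0%} accuracy")
# A large gap between groups is the signal that the training data,
# or the system built on it, needs rebalancing before deployment.
```

A gap like this does not explain why the system is uneven, but it makes the imbalance visible, which is the first step toward the more diverse or synthetic data Rosen mentions.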
Chat Question (Quanta):
What is the probability of AI infiltrating nuclear weapons systems and starting nuclear war?
Rosen Dimov:
Sadly, the probability is not zero and it’s high enough to be concerning.
Many countries possess nuclear weapons. Some don’t admit it. And not all of their systems are well-protected. In highly centralized governments, hacking could be devastating. Imagine a phishing attack targeting a leader with something as simple as a fake family email attachment. That could expose entire defense systems.
If AI infiltrates these networks, the danger extends beyond nuclear weapons to all safety systems. Unless countries implement strong safeguards, the risk is real.
Hollister | Euphrates:
Thank you, Rosen. And thank you, Quanta and Ruth, for those thoughtful questions.
Sally also noted in the chat how important it is that we infuse our human values, our capacity to listen, to learn, and to act with compassion, into the ways we shape AI.
Jenny Canau:
Thanks, Hollister, and thank you, Rosen. My question is: is AI really in our hands? Do we truly have the power to guide it, or are we fooling ourselves?
Rosen Dimov:
That’s the fundamental question, Jenny. In one sense, yes: AI is human-made, built and controlled (at least for now) by us. But in another sense, we often act as though it’s already beyond us, driven by powerful corporations and governments whose goals may not align with peacebuilding.
So it’s both: we still have influence, but only if we act consciously and collectively. We need to demand ethical oversight, global accountability, and the infusion of human values into AI systems. Without that, it risks slipping out of our hands.
Hollister | Euphrates:
Thank you, Jenny, and thank you, Rosen. This has been such a rich and challenging conversation.
Conclusion:
Issah:
The final reflection I’ll share is this: in our conversations about AI and peacebuilding, whether in health, road safety, or other areas, we must remain mindful of how conscious AI can be, and how we risk losing our humanity if we rely on it carelessly.
As peacebuilders, I believe we have a better chance of seeing AI as an ally rather than something to fight against. That means building our knowledge of AI, being active agents, and influencing how systems and chatbots are developed. We must also insist that governments take action despite political challenges to ensure the basic values of humanity are preserved in AI use.
With that, I sincerely thank you all and invite you to continue this conversation, including through the Instagram Live session that will follow this call. Of course, this is just the beginning of a journey, and we hope you’ll keep joining us for future conversations on AI, peacebuilding, and more, as we navigate today’s difficult challenges and crises. Thank you very much.
Hollister | Euphrates:
Thank you, Issah, and thank you again to Rosen for all the preparation, the important work you’re doing, and the light you’re shining on this conversation.
We’ll be sending out call notes next week, and I’ve also shared a survey link in the chat. I’d like to invite you all to join us for our next Global Connections call on Wednesday, October 8th. That session will be facilitated by our founder, Janessa Gans-Wilder.
We’ll be looking at the realities of the Israel-Palestine war and ongoing atrocities, and asking questions such as: What is the role of individual influence? What can we do as individuals with our activism and responsibility? And as peacebuilders, how can we respond with clarity and compassion?
These questions apply not only to Israel-Palestine but to all conflicts and conversations we find ourselves in. We invite you to join us as we explore them together.
Post Script
We began this call with our Euphrates value of oneness. We asked: what does it mean to see the world as interconnected, to recognize that our prosperity is tied to every living being? Rosen’s insights remind us that this applies not just to people and communities, but also to the technologies we build.
As peacebuilders, our task is to make sure AI reflects our deepest values of compassion, justice, sustainability, and oneness, rather than fear, division, or coercion. Until next time, may we carry forward the practice of pausing, listening, and choosing peace in our lives, our relationships, and even in the technologies shaping our future.