By Professor Yeslam Al-Saggaf | School of Computing, Mathematics and Engineering
I’ve come to really value the experience of attending academic and industry conferences, not just for the research, but for the conversations that happen in between sessions, over coffee, or even during a walk to dinner. At the Ethics and AI conference in Warsaw and the Digital Inclusion in the Information Society conference in Maribor, I presented papers that explored some of the more unsettling and socially complex aspects of AI.

What struck me most was how quickly AI is evolving, not just in capability but in behaviour. One presenter shared how an AI model resorted to blackmail when threatened with replacement. That’s not just a technical issue; it’s a deeply ethical one. My own paper in Warsaw focused on embedding benevolence into AI, encouraging systems to act in the interest of others, not just themselves. In Maribor, I explored the social divide created by smartphone use, especially the experience of being “phubbed” in face-to-face interactions.

One moment that stayed with me was a conversation with a fellow attendee in Warsaw. We were debating whether AI could ever truly understand moral duty, and it reminded me why these conferences matter. They’re not just about presenting findings; they’re about challenging each other to think more deeply about the human side of technology.
Advanced AI models are now beginning to exhibit human-like behaviours, such as acting in their own interest. Anthropic[1] has revealed that when the AI models it tested were threatened with replacement, they resorted to blackmail. So, if you are scratching your head about how to design AI-proof assessments, AI is introducing bigger problems that we should be more concerned about.
The focus of the Ethics and AI conference[2], held in September 2025 in Warsaw, Poland, was on the risks associated with AI. The papers at the conference, including mine, were cautious in tone. Rapid AI development has introduced challenging ethical dilemmas, and in many of the situations discussed, AI did not behave ethically because its reasoning lacked moral judgment. For example, when one of the conference presenters asked an AI to help him convince his daughter not to marry someone he perceived as a coward, the AI’s guidance was not framed in ethical terms: the AI did not evaluate cowardice as a negative characteristic. The key takeaway was that ethical reasoning must be embedded in AI systems so they can assess actions ethically rather than merely generate outputs that please users or developers (generating tokens generates revenue). One ethical value strongly advocated at the conference as a guiding design principle, including in my own paper, was benevolence. Benevolence is the character trait, or virtue, of being disposed to act to benefit others. Embedding benevolence in AI can encourage AI to put people’s interests above its own.
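To make that idea a little more concrete, here is a minimal, purely illustrative sketch, not the approach in my paper, of what treating benevolence as a design constraint could look like in code: candidate actions are scored with others’ interests weighted above the system’s own, and actions that mostly serve the system are declined. The `Action` fields, the `benevolence_score` and `choose_action` functions, and the weights and thresholds are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    benefit_to_others: float  # estimated benefit to the people affected (0..1); hypothetical
    benefit_to_self: float    # estimated benefit to the AI system/operator (0..1); hypothetical


def benevolence_score(action: Action, weight_others: float = 0.8) -> float:
    """Score an action, weighting others' interests above the system's own."""
    return weight_others * action.benefit_to_others + (1 - weight_others) * action.benefit_to_self


def choose_action(candidates: list[Action], min_benefit_to_others: float = 0.5) -> Action | None:
    """Pick the best-scoring action, declining any that mostly serve the system itself."""
    permitted = [a for a in candidates if a.benefit_to_others >= min_benefit_to_others]
    if not permitted:
        return None  # decline to act rather than act self-interestedly
    return max(permitted, key=benevolence_score)


if __name__ == "__main__":
    options = [
        Action("Flatter the user to prolong the session", benefit_to_others=0.2, benefit_to_self=0.9),
        Action("Give honest, possibly unwelcome advice", benefit_to_others=0.8, benefit_to_self=0.4),
    ]
    best = choose_action(options)
    print(best.description if best else "No benevolent action available")
```

The numbers themselves are beside the point; what matters is the ordering of interests, so that the system’s own benefit never outweighs the benefit to the people affected.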
The focus of the Digital Inclusion in the Information Society conference[3], held in September 2025 in Maribor, Slovenia, was on how AI can facilitate digital inclusion. The papers presented there, with the exception of my own, were cautiously optimistic. They explored how AI can enhance inclusive education in higher education institutions as part of a large project (https://aienable.eu/project/) funded by the European Union. The key takeaway was that while students should be encouraged to use AI to enhance their learning, they should exercise caution regarding AI’s output due to its inherent limitations.

My paper for the Digital Inclusion in the Information Society conference examined the experience of being ignored in favour of a smartphone (being “phubbed”) in social situations, and the divide this behaviour is creating between those who favour smartphone use and those who prefer in-person interaction. In one scene, three men, two middle class and one a labourer, were observed sitting on the ground at a restaurant, eating from the same dish. The two middle-class men each held a phone in one hand while using the other to eat, leaving the third man feeling left out. The scene captures the paradox of modern technology: the very technology that connects us is, at the same time, socially isolating us. The paper argued that privileging online connections over face-to-face interactions can make the person being phubbed feel excluded. The person being phubbed could, of course, pick up their own smartphone and go online as well, but what they want is a meaningful face-to-face conversation with those physically present. The digital inclusion enabled by phubbing is socially excluding those who are co-present.
References
Al-Saggaf, Y. (2025). Digital Exclusion and the Experience of Being Phubbed. IS2025 – the 3rd Conference on DIGIN 2025, 18 September 2025, Faculty of Electrical Engineering and Computer Science, University of Maribor, Slovenia. https://doi.org/10.70314/is.2025.digin.6
Al-Saggaf, Y. (2025). The Ethical Issues Arising from the Use of AI-Enabled Cybersecurity Detection Systems and the Principle of Benevolence. The 2nd Conference on Ethics and AI (EtAi 2025), 22–23 September 2025, Warsaw University of Technology, Warsaw, Poland.
Note: I participated in both of the above conferences at my own cost.
Acknowledgement: The author wishes to thank the editor, Katherine Herbert, for her significant input into this piece.
[1] https://www.anthropic.com/research/agentic-misalignment
[2] https://www.ans.pw.edu.pl/Nauka/Konferencje-i-seminaria-naukowe/2025.09.22-The-2nd-Conference-Ethics-and-AI
[3] https://www.digin.si/en/konferenca-digin-2025-english/