Week 1: The Algorithm of Empathy (Supports SDG 16: Peace, Justice & Strong Institutions)
The Story Begins with...
It started with a whisper in the code. People on Earth thought everything was fine until something went terribly wrong. Technology was everywhere, yet it did not begin with a cyberattack; it began with silence. Algorithms started shaping what we saw online, and they still do today.
No system crashed.
Social media learned our likes, dislikes, and emotions, and its recommendations fed us more of whatever kept us engaged. People became more emotional and more demanding than ever, addicted to social media and glued to their smartphones in this digital era. Any post could go viral in seconds, anywhere, and anger spread faster than truth. Misuse of AI, fake news, cybercrime, and cyberbullying spread faster than the truth; lies travelled farther than facts. A single post could trigger thousands of hostile, toxic comments and harm both the victim and their reputation. Many people now misuse AI to generate fake faces without consent, breaking trust and clouding judgement. By the time communities realized the harm, the line between harmful online behaviour and real-world conflict had already been crossed.
So in that moment...
we realized that humanity is what must bring us together to face something unsettling. Cybercrime was no longer just targeting the digital world and its victims; it was hacking people's emotions and mindsets.
Ever since that moment, the world has been building The Algorithm of Empathy.
________________________________________________________________
ECHO is a digital bot of the future, built for humanity to strengthen connection and relationships within society. It detects bias and emotional escalation before harmful content spreads and encourages respectful dialogue between users with different perspectives.
The ECHO bot includes a bias-detection alert system that identifies emotionally charged, vulgar, or hateful comments and alerts the user to stop before posting. It also includes an AI-driven empathy prompt: an AI chat that talks the problem through with the user and offers advice with a counselling effect.
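The flow below is a minimal, illustrative sketch of such a bias-detection alert in Python. The word list, threshold, and hostility_score helper are invented for this example; a real ECHO system would rely on a trained NLP classifier rather than keyword matching.

```python
# Minimal sketch of ECHO's bias-detection alert (illustrative only).
# The word list, threshold, and scoring are invented stand-ins; a real
# system would use a trained NLP classifier instead of keyword matching.

HOSTILE_TERMS = {"hate", "stupid", "idiot", "disgusting", "liar"}
ALERT_THRESHOLD = 0.5  # hypothetical cut-off for showing the alert


def hostility_score(message: str) -> float:
    """Return a rough 0-1 score for emotionally charged language."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in HOSTILE_TERMS)
    return min(1.0, 5 * hits / len(words))


def echo_alert(message: str) -> str | None:
    """If the message looks hostile, return a counselling-style prompt."""
    if hostility_score(message) >= ALERT_THRESHOLD:
        return ("This message contains strong emotional language. "
                "Would you like to talk it through or rephrase before posting?")
    return None


print(echo_alert("You are a liar and I hate this"))  # triggers the prompt
print(echo_alert("I disagree with this decision"))   # returns None
```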
____________________________________________________________________________________________
The Solution: ECHO — Empathy-Centered Human Online Network
Imagine a platform designed not for engagement, but for empathy.
ECHO uses Natural Language Processing (NLP) and machine learning to detect:
Emotionally aggressive language
Content that risks damaging reputations or inflaming public disputes
Escalating hostility and vulgar wording
Instead of censoring users, ECHO gently reminds the user:
“This message contains strong emotional language. Would you like to rephrase it in a more respectful way?”
The system may suggest alternative wording that keeps the user’s opinion but reduces aggression. It acts as a guide, steering users away from impulsive reactions rather than blocking them.
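As a rough illustration of that rewording step, the sketch below swaps a few hard-coded aggressive phrases for calmer ones. The phrase table and suggest_rewording function are hypothetical stand-ins; an actual system would generate alternatives with a language model.

```python
# Illustrative sketch of the rewording suggestion: keep the opinion, soften
# the aggression. The phrase table is invented for this example; a real
# system would generate alternatives with a language model.

SOFTER_PHRASES = {
    "this is garbage": "I don't find this convincing",
    "you are wrong": "I see it differently",
    "shut up": "I'd like to share another view",
}


def suggest_rewording(message: str) -> str:
    """Replace known aggressive phrases with calmer alternatives."""
    softened = message
    for harsh, calm in SOFTER_PHRASES.items():
        softened = softened.replace(harsh, calm)
        softened = softened.replace(harsh.capitalize(), calm)
    return softened


print(suggest_rewording("You are wrong and this is garbage."))
# -> I see it differently and I don't find this convincing.
```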
Perspective Switch
Before users share controversial news, ECHO displays an alert showing:
Multiple viewpoints
Verified information
Fact-based explanations
This directly addresses the filter bubble effect. Instead of publishing a post straight away, the bot filters out toxic sources and can reject a request to upload such content, while the algorithm intentionally diversifies what users are exposed to.
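One plausible way to implement that diversification is to re-rank the feed so no single viewpoint dominates. The sketch below assumes each post carries a viewpoint label and an engagement score, both invented for this example, and simply round-robins across viewpoints instead of sorting by engagement alone.

```python
# Sketch of the perspective-switch idea as a feed re-ranker: interleave posts
# from different viewpoints instead of ranking purely by engagement. The
# viewpoint labels and engagement numbers are invented for this example.

from collections import defaultdict
from itertools import zip_longest


def diversify_feed(items: list[dict]) -> list[dict]:
    """Round-robin across viewpoints, highest engagement first within each."""
    by_view = defaultdict(list)
    for item in sorted(items, key=lambda i: i["engagement"], reverse=True):
        by_view[item["viewpoint"]].append(item)
    mixed = []
    for batch in zip_longest(*by_view.values()):
        mixed.extend(post for post in batch if post is not None)
    return mixed


feed = [
    {"title": "Policy X is a disaster", "viewpoint": "against", "engagement": 950},
    {"title": "Policy X explained",     "viewpoint": "neutral", "engagement": 120},
    {"title": "Why Policy X may work",  "viewpoint": "for",     "engagement": 300},
    {"title": "Outrage over Policy X",  "viewpoint": "against", "engagement": 800},
]

for post in diversify_feed(feed):
    print(post["viewpoint"], "-", post["title"])
```

In this toy feed, the two high-engagement "against" posts no longer occupy the top two slots; a "for" and a "neutral" post are interleaved between them.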
Over time, this could lead to:
Less hostile comment sections
More balanced news consumption
Healthier community discussions
Why This Matters
During the global COVID-19 health crisis, misleading information spread rapidly because algorithms rewarded engagement. Sensational content often travelled faster than verified facts.
With ECHO:
Manipulative posts, such as false accusations of taking others' credit for personal benefit, could be flagged for review
Misleading claims could include contextual information
Users would be encouraged to review sources before sharing
The goal is not to remove disagreement but to reduce escalation, for the sake of a more peaceful world.
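A pre-share check of this kind could look like the following sketch, where the flagged-claim table is a placeholder for whatever fact-checking or context service the platform uses; the claim string and note are invented for illustration.

```python
# Hedged sketch of a pre-share check: before a contested claim is published,
# attach context and ask the user to confirm. The flagged-claim table is a
# placeholder for a real fact-checking or context service.

FLAGGED_CLAIMS = {
    "miracle cure": "Health authorities have not verified this treatment.",
}


def pre_share_check(post: str) -> tuple[bool, str | None]:
    """Return (needs_review, context_note) for a post about to be shared."""
    for claim, context in FLAGGED_CLAIMS.items():
        if claim in post.lower():
            return True, context
    return False, None


needs_review, note = pre_share_check("This miracle cure ends the pandemic!")
if needs_review:
    print("Before sharing, please review:", note)
```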
Ethical Challenges
Designing empathy into algorithms raises important questions:
Who defines “empathetic” language?
Bias can be embedded in the system – AI may inherit developer bias and unfairly classify certain groups or communication styles.
Nudging and censorship – Encouraging respectful dialogue must not turn into suppression of free expression.
Risk of emotional surveillance – Analyzing tone and emotion could be misused for behavioral tracking or profiling.
Algorithmic bias is already a known issue in AI systems.
Conclusion: Designing for Digital Justice
Technology does not have to exploit emotion; it can nurture responsibility instead.
Justice in the digital age happens in comment sections, news feeds, and online communities. If we can design algorithms to predict consumer behavior, we can also design systems that promote understanding.
ECHO represents a shift from engagement-driven platforms to empathy-driven platforms, a step towards a more peaceful and inclusive digital society.
The Algorithm of Empathy is not about controlling speech or eliminating disagreement. It is about redesigning digital infrastructure so that technology connects rather than divides. In supporting SDG 16’s vision of peaceful and inclusive societies, empathy becomes not just a human virtue, but a design principle embedded into code. In a world shaped by algorithms, perhaps the most powerful innovation is not artificial intelligence alone, but artificial intelligence guided by conscience.
References
- Rodilosso, E. (2024). Filter Bubbles and the Unfeeling: How AI for Social Media Can Foster Extremism and Polarization. Philosophy & Technology, 37(2). https://doi.org/10.1007/s13347-024-00758-4
- Wikipedia Contributors. (2019, September 6). Algorithmic bias. Wikipedia; Wikimedia Foundation. https://en.wikipedia.org/wiki/Algorithmic_bias





