What is the major concern of HateAid?
We are a non-profit organization that advocates for human rights in the digital space. Our goal is to make the digital space safer so that everyone has the opportunity to participate in public discourse, especially on social networks, but also on other platforms. We do this in many different ways. We support those affected directly by advising them and also covering legal expenses. We also raise awareness and train police and judicial authorities in dealing with digital violence and those affected by it, because this is not about individual fates but an issue that concerns all of us. That is why we want to work towards better legislation and also initiate court proceedings to clarify legal issues that the judiciary has not yet addressed.
How do you see the organization’s function within democracy?
HateAid was founded in 2018 because of the threat to democracy posed by digital violence. It was based on the realization that digital violence is changing the entire public discourse, which has largely shifted to the internet. A 2017 study showed for the first time that people now censor themselves even if they have not yet been affected by hate speech, threats, or other criminal offenses. Half of those surveyed said they were participating less often in debates. One of the great promises of the internet was that it would make the world more democratic. In Germany and many other European countries, the opposite has happened. Some extremist groups exploit social network algorithms and employ them for their own purposes. They hijack public discourse, harass people, and systematically intimidate them in order to ultimately silence them. They strategically give the impression that their position is a majority opinion, but usually, these groups represent a minority position.
While the focus in 2019 was on the topics of refugees and migration, almost anything can now become controversial. We are currently seeing polarization in almost every argument, along with a censorship narrative that distorts our understanding of freedom of expression. The motto seems to be: “We are still allowed to say that, aren’t we?” In this way, the unspeakable becomes sayable again because any restriction of supposedly free speech is declared censorship. However, we can only guarantee freedom of expression if people can speak without fear of intimidation, reprisals, or violence. And that is precisely not what we find on the internet.
What kind of cases do you and your team deal with?
Digital violence has many faces and changes with technological developments. As was the case 20 years ago, insults in comment sections are still common today. Death and enemy lists are also widespread, and lawmakers have now responded by creating a new criminal offense. We are also increasingly dealing with deepfakes: for example, perpetrators use AI to create fake images of people from a profile photo on LinkedIn. This is free and can be done in seconds. Pornographic deepfakes mainly affect people perceived as female. A fake nude photo can ruin someone’s reputation immediately and permanently. We are also dealing with the phenomenon of “dick pics,” which are mainly sent to young women and girls on Instagram, something I find very worrying. We are also seeing smear campaigns designed to destroy the reputation of individuals. The worst-case scenario is so-called doxing, i.e., the publication of private addresses with hostile intent. That is why we recommend that all users who could potentially be affected have a disclosure block placed on their entry in the population register.
Is violence on the net increasing?
We can’t “measure” the internet – but we can say something about how digital violence is perceived by the general public. According to the study “Lauter Hass, leiser Rückzug” (Loud Hate, Quiet Retreat), which we co-published in 2024, 80% of respondents have the impression that digital violence has increased in recent years. In addition, more and more people report being affected or are turning to us for help. Of course, this also has to do with growing awareness of the problem.
The internet is – in theory – not a legal vacuum. But although much has improved since HateAid was founded, it remains a space where the law is not enforced. It is not only those who spread digital violence who believe that nothing will happen to them. Some of those affected are not even aware that threats, insults, or sending explicit images are criminal offenses. Who wants to believe that they are regularly victims of violence in a state under the rule of law, without anyone doing anything about it?
It seems that people lose inhibitions on the internet that they would certainly have in personal contacts. What do we know about the perpetrators?
Between 50 and 70 percent of perpetrators are never identified, so the number of unrecorded cases is very large. Those who act professionally know how to conceal their identity. However, the spectrum is very broad. It ranges from the owner of a dog grooming salon to pensioners who get carried away, so to speak, and later do not know how it could have happened. But of course, digital violence also has a strategic political component. We have analyzed major shitstorms and observed orchestrated strategic attacks, with usually only 20 people behind 100 accounts. This results in several thousand comments for which only 100 user accounts are responsible. They deliberately give the impression that many are turning against a single person. The accounts then interact with each other, with social network algorithms giving them a lot of visibility.
So I don’t assume that all of Germany is hateful and venting its anger at politics from behind their laptops. In most cases, it is a matter of political calculation. Figures published by the Federal Criminal Police Office and the Office for the Protection of the Constitution show that digital violence is mainly perpetrated by right-wing extremists. Left-wing extremist violence has also increased since the beginning of the current Middle East conflict. However, the numbers are still far below those of crimes from the right-wing extremist milieu. But cases of digital violence that cannot be attributed to a particular political side are also increasing. These can concern a variety of topics, from vaccination to gas supply.
What problems do those affected come to you with, and what are the consequences of digital violence?
Digital violence affects everyone, even people who deliberately choose to be in the public eye for professional reasons. In politics in particular, people are systematically dehumanized and serve only as projection screens. The psychological consequences can include anxiety, restlessness, depression, or sleep disorders. Many whose private addresses have been made public face harsh new realities: they may have to change their name, switch jobs, or move. The experience also leads to behavioral changes. Victims of image-based violence, for example, often refrain from creating a LinkedIn profile. And the fear of being googled by a new partner or employer – even though the latter is not permitted – still remains.
From your point of view, which legal steps are necessary to counteract hate on the internet?
We want to ensure that those who post criminal hate speech on the internet are punished. But, as I said, identification is difficult. And we are dealing with social networks that claim to play a purely passive role. In my view, the operators of social media platforms must be held much more accountable. If the platforms were to take an active role, this could mean passing on data to the police or requiring registration on social platforms to be possible only upon presentation of identification. However, this would require a European solution. Although the Digital Services Act now exists at the European level, its implementation is not yet where we would like it to be. For over a decade, platforms have benefited from very extensive liability privileges, which make their business model possible in the first place. They would have to allow significantly more transparency.
At the same time, we still see legal loopholes: the non-consensual creation of deepfakes, for example, is not a criminal offense, but their distribution is. This is a huge problem because deepfakes are a very effective weapon, especially against women.
Social cohesion seems to be under threat, and disputes, on the other hand, are becoming increasingly fierce. Are social networks partially responsible for this?
Whether this development can be directly attributed to online discourse cannot be conclusively determined. However, we can observe a temporal correlation between polarization and the spread of social media. To some extent, the division in our society is certainly due to the way social networks function. The operators of these platforms want us to interact: not just watch videos for hours on end, but also like, comment, and view advertisements. And unfortunately, it is often not the cat videos that get clicks, but extreme content. The press also contributes to this emotionalization of debates. In my view, we can only counter this by regulating platforms. They are not the saviors of democracy. They are very susceptible to manipulation and do nothing about it because they are profit-driven. It is precisely the emotionally heated debates that bring them the most money. So it’s about profit at the expense of our social discourse and our democracy. This could be changed by creating a level playing field for everyone.
What challenges are you currently facing as a non-profit organization? For example, the CDU’s minor interpellation in February 2025, which challenged the financing of many NGOs.
HateAid is not among the affected organizations. However, because we receive public funding for counseling victims, we are regularly accused of lacking political neutrality. HateAid is politically neutral, though, and does not call for demonstrations; that is not our social purpose. In my opinion, the CDU’s minor interpellation is an expression of skepticism toward non-profit organizations that has reached the general public. This reminds me of the worrying narrative of the “shadow state,” which comes from the US but is also taking shape in Germany. Yet NGOs take on important tasks on behalf of the state: the term “non-profit” comes from tax law. The tax office decides on this status and grants tax relief because otherwise the state would have to set up its own structures, which would cost money. The idea of the “shadow state” is so dangerous because it attacks civil society engagement as a whole.
Josephine Ballon studied law at the University of Potsdam and was admitted to the German Bar in 2018. From 2019, she was Head of Legal at HateAid and, since 2023, has been leading the organization together with Anna-Lena von Hodenberg as CEO. Ballon has been invited several times as an expert, e.g., to the Legal Affairs Committee and the Committee on the Digital Agenda of the German Federal Parliament and to the European Parliament, to testify about law enforcement against online hate crimes, gender-specific digital violence, and platform regulation.
Link to the non-profit organization HateAid: https://hateaid.org/en/
This text was published in German in the university magazine Portal – Zwei 2025 „Demokratie“ (“Democracy”).
Here you can find all articles in English at a glance: https://www.uni-potsdam.de/en/explore-the-up/up-to-date/university-magazine/portal-two-2025-democracy