>> MODERATOR: Just a brief introduction. It's collaboration between Dublin Center, Dangerous Speech Project, Jerry and George, the assistants.
We felt it would be very good to get ‑‑ the last collaborator is critical to the South Asia focus. It's called the Digital Asia Hub. It's launching at the end of this month, and it is basically a hub that's going to focus on Asia projects. So from that perspective also, we feel it's valuable for addressing hate speech within the region. After thanking you for getting here on time, I will hand it over to the executive director of the Dublin Center.
>> Thank you so much. Good morning, again. Really delighted to be here and moderate this conversation. I'm particularly excited that this is, indeed, also the first event of the Digital Asia Hub. How many of you attended the hate speech session which happened two days ago here? A few of you. I wasn't there, but I read the transcript, and, obviously, this panel is closely related to that discussion and hopes to build on that session and expand it. Specifically, what we hope to do today is first to clarify what we mean by hate speech. Those who attended the previous session know that there are many different uses of the term, and we hope we can be a bit more specific in how we use this term hate speech; in particular, we would like to introduce the concept of dangerous speech. We will talk about that a little bit more. We would also like to look at the hate speech phenomenon in South Asia, with reports from various countries, actually. So it's very exciting in that sense, with our commonalities and our differences.
The third focus, and perhaps a difference from the previous session, however, is that we want to specifically focus on the solution space, so the bulk of the discussion will focus on the question: what are the available approaches, instruments, and strategies to deal with the hate speech problem, and what are our initial experiences in different countries using these strategies? Some of them may be legal. Some of them may be counter‑speech strategies, awareness raising, or platform measures. What can we do about the problem? That will be the center of the discussion. In that segment, we want not only to look at solutions or approaches that are out there, but also to think creatively about the future to address this serious challenge.
So that's how we try to build but also expand on the previous session.
We have a fantastic lineup of speakers. The idea is to make this very interactive, so I will be very strict in allocating time slots. That's, perhaps, why I am moderating the panel, being time aware. I'll use the timer here when your time is up. We do that to save as much time as possible for open discussion.
Of course, also our friends joining remotely are very much invited to weigh in. We'll keep an eye on the participants.
So before we start with a few brief updates from various countries to get a sense of what the hate speech challenges are in South Asia, I thought it would be helpful, Susan, if you could tell us a bit more about the distinctions among different terminology and definitions of hate speech, and the dangerous speech concept that you developed. Thank you.
>> SUSAN BENESCH: I'm delighted to do that. I will offer a few ideas on the condition that you will critique them vigorously, either in the Q&A, or afterward.
So dangerous speech is a term that I've coined to define or to describe a narrower category, first of all, to protect freedom of expression. Hate speech, as the colleagues on our panel Tuesday very elegantly said, is difficult to define in practice, both in law and in common parlance. Hate speech is defined variously. Hate speech is used by many governments in order to crack down on what I suggest are the wrong people: the political opposition, the journalists, the bloggers. So it may be that, particularly in certain contexts, law especially and maybe also policy focusing on hate speech does at least as much harm as it does good. This is also not limited to the obvious countries. The second reason why I suggest a focus on a different, narrower category of speech is to direct our attention to the harms with which certain kinds of speech are associated.
When I say "speech," by the way, I'm thinking about any kind of online content, not necessarily words. Speech might be a photograph or a video clip. Of course many examples probably spring to mind. If we can focus on the harm, it seems to me we can define the categories and draw the lines more effectively. The second benefit is to do more constructive, targeted research on the harms. Responding need not mean suppressing or diminishing the speech itself; it could also, maybe just as effectively, mean diminishing the impact of the speech on the relevant audience.
I've described it as speech that has a special capacity to catalyze intergroup violence. By that I do mean physical violence. Frank La Rue pointed out on Tuesday's panel that physical violence is not the only harm we might need to think about. I agree with that. However, there is a spectrum of dangerousness. Dangerousness is not a toggle. You can't say speech simply is or is not dangerous. You have to say that in a context it is more or less dangerous, which leads me to how one must identify it. I've been engaged in some empirical research, which has shown that you can't simply rely on a list of slurs, because there are too many false positives and false negatives. The process must be contextual. I've suggested five particular questions that one could ask to make a systematic, or at least quasi-systematic, categorization. The first is the speaker: in some cases, a particular speaker says or disseminates the speech. The second prong is the audience: some audiences are more susceptible than others. The third is the speech itself. The fourth is the social and historical context. Of course that's a very big kitchen sink, which can be analyzed in a more granular way. The fifth one is the means of dissemination. Sometimes, if a community is used to relying on a particular medium for most of its news or information, then the fact that a message came via those airwaves may itself confer what philosophers call illocutionary force on that speech.
Such speech also has striking similarities across contexts. That's because of the transformative work that such speech does. Finally, we are already engaged in some interesting empirical work on this, including identifying such speech with the help of some software, although one can't do it with software alone, I think. We have someone here doing work in that regard.
It's also important to involve humans, and discussion among humans within a given context of what it is that makes speech dangerous in their context may itself be a very useful process for identifying responses within that context.
>> MODERATOR: Thank you so much. That's a very helpful taxonomy that I think will guide our discussion today, especially the five elements, as we go through the examples. So what I'd like to do next is basically to ask a few colleagues coming from different countries, Pakistan, India, Sri Lanka, to give us a flavor of how the particular hate speech problem is defined in their specific country. Context matters so much. And perhaps the way we could start this conversation is if you would be willing to share one story, one example, that somehow is characteristic.
Adnan, I'll ask you to start. Introduce yourself in one sentence. Thank you.
>> ADNAN CHAUDHRI: Sorry. Hello. My name is Adnan Chaudhri. I'm from the Digital Rights Foundation in Pakistan. In Pakistan, you've had minorities under attack for several years, from the '70s onwards. With the presence of the Internet in Pakistan, you've had women and other minorities go online in search of safer spaces, which are not readily available offline. However, the downside to that is that the harassment and hate speech which exists offline has followed them online.
One example I wanted to give was of a report last year about a prominent woman activist and journalist in Pakistan. She would talk at length about harassment. For about a year she was the victim of a hate campaign online. She received hundreds of death threats personally, her postal address was put up online, as were images of her, her husband, and her daughter, and she did report this to the authorities and to the different social media platforms. It eventually led to people firing shots outside her house and threats over the phone saying, "You will be next. Your family will be next."
She then spoke to one of the platforms, which was Facebook. They took down her page very quickly. But then shortly afterwards, another page came up and another page came up.
The other social platform which had her material was Twitter. That took over a month to remove her material.
She's had to severely curtail her social media use as a result.
>> MODERATOR: I suggest we go across the regions or countries and see whether there are similar stories or to what extent they are different. Again, please, as we listen to these stories, keep in mind, as we heard from Susan, that there are already several elements that we hear.
Ritu from India, if you could go next.
>> RITU SRIVASTAVA: I'm Ritu Srivastava. I'm from India, working with the Digital Empowerment Foundation. Hate speech in India has been defined in various forms. It starts from an inter-religious context and hatred against inter-religious communities. Sometimes it is communal; sometimes it becomes personal, targeting those actively following a certain religion. It has been seen more often in a political context during election times. One example which I would like to note is that two girls from Mumbai, when a right-wing leader died, posted some comments on Facebook, and they were arrested because they had stated anti-religious sentiments. These days, it's seen more often in the context of riots. Online hate speech also transforms into offline mobs, and vice versa: offline mobs also transform into online hate speech.
So this is something which we have been seeing in India. And it has now also gone beyond hate speech to online mobs, and those things are being seen in this scenario.
>> MODERATOR: So what happens offline translates online and vice versa, shaped by the factors we heard about in the previous statements. If Sanjana could report from Sri Lanka, that would be great.
>>SANJANA HATTOTUWA: My name is Sanjana. I work for the Centre for Policy Alternatives in Sri Lanka. The kind of hate speech that you find online in Sri Lanka targets ethnic minorities, women, the GLBTI community, political opponents, and individuals with dissenting views. However, what I would like to focus on today is an emerging trend that we noticed, which started this year with the political changes that have been happening in my country, with elections taking place and a new government coming into being.
This is where existing Facebook receptacles which contain hate speech, and which also have an existing following, have been appropriated for political campaigning. We have seen this happen across many kinds of hate speech sites. One that I like to use as an example is a Facebook page set up to save an army soldier who was convicted of killing eight civilians during the war. Although it was set up for that purpose, it very quickly turned into a political narrative: it was used for political campaigning, to malign the opposition candidates and so on, and for fear mongering that it would result in soldiers being tried and sent to the gallows as murderers. I can share the link if you would like to have a look at it. There is a lot of evolution happening when it comes to democracy: new laws coming into practice to make the country more human-rights friendly, more democracy friendly, to reestablish the rule of law, etc. In this light, there are also concerns with regard to hate speech, which has in the past incited mob violence in Sri Lanka. With the change of regime, the Prime Minister made a statement that he will not tolerate any kind of hateful speech or incitement of violence. What this has done at ground level is to shrink the space in which the groups that had incited mob violence can operate. The shrinking of that space in real time and in real space actually limits the possibility for online hate speech to be translated into physical violence in the real world.
So what we are noticing is the continued emergence of this kind of hate speech on various websites. That will continue to happen, but we feel the lack of a space for real-world impunity will actually stop the hate speech from being translated into physical violence.
>> MODERATOR: Thank you so much. The last country report in this segment comes from Myanmar. We have a written statement.
>> SUSAN BENESCH: As you all know, Myanmar just had an election, and one of our colleagues there has very kindly sent us a little bit of a report. She works for the Myanmar ICT for Development Organization, where she's research and development manager. So she says, "Over the last two years, Mido has conducted two research projects on hate speech that can mobilize violence, both online and offline. Two important findings from the research are: one, there is a set of discourses, common online and offline, that are creating a feeling that the Myanmar people and Buddhism are under threat and must defend themselves. This sense is promoted on an everyday basis and used in specific moments to incite violence, such as riots. Such speech is a product both of individuals acting out their own opinions and of organized groups."
The second conclusion: in addition to promoting violence, hate speech and other incendiary content is making it intimidating for people to speak out for peace or harmony. The worst examples of this are online smear campaigns targeting individuals who are working to prevent violence, as well as other forms of daily harassment using online tools. Those same colleagues have themselves been subject to this particular phenomenon.
We have recognized that the rise of dangerous public speech is often a precursor of violence. In some situations, public figures can turn their supporters against another group using speech that has the ability to inspire violence. There is growing recognition of the potential for violence in online spaces such as Facebook, where the number of users is growing so fast, and this situation increases violence and leads to misuse of social media.
Mido has been studying over 30 Facebook accounts, including famous personal accounts, the Ma Ba Tha organization, Buddhist extremist groups, and top Facebook hate-threat spreaders. The content analysis framework used for the monitoring was developed by Mido, together with the research team, based on project findings and unstructured online monitoring of youths from Mandalay. For that purpose, Mido has used the framework designed by Jonathan Leader Maynard, and the dangerous speech framework by Susan Benesch.
Messages promoting violence flow in two directions. They can be about the other group, for example showing that they are dangerous or guilty or need to be punished. And they can be about our own group, for example showing that the brave or virtuous thing to do is to defend race and religion, even if that includes committing horrible violence against other people.
Then she mentions the dangerous speech framework, which I'll skip, since I already talked about it. She says, "We have identified four categories of Facebook accounts. Number one, accounts of high-profile people. These people are important to study because they are influential. These accounts include people known to be extreme speakers. They can also include accounts of famous and influential people who do not regularly contribute to conflict, but sometimes do, with deliberately or inadvertently subtle messages. The second kind of Facebook account identified is pages dedicated to some specific issue, such as race or religion.
"Third count, accounts of media groups, also print journals, also post news in online formats. Fourth kind, accounts of other active users who are often posting dangerous content online. Those accounts include both real people and fake accounts."
Then I'm afraid she goes on to present the data with lots of illustrations, and I think we don't have time to go into that, but perhaps we can distribute that for those who are interested.
>> MODERATOR: Wonderful. Thank you so much. We have heard now four or five stories, anecdotes. And you have written a book focusing on hate speech in the region. I was wondering if you would be able to distill from these four or five narratives some of the common elements that you see shared across the region, and what is different across the region. It would be wonderful if you could spend four or five minutes to report on your findings. Thank you.
>> CHERIAN GEORGE: My pleasure. Thank you. I think what these societies certainly have in common is a history of communal violence, and it's not surprising in such societies that there would be a public impulse to try to nip hatred in the bud. There is already an internationally recognized right to be protected from incitement to harm and violence. The question that arises in these societies, with their strong sense that they are on the brink of violence, is whether it is enough to wait for undeniably harmful effects to be manifested downstream, so to speak. Why not instead tackle the problem upstream, supposedly closer to the source of hate, through various laws, such as blasphemy laws? This has a lot of appeal. It is not unique to South Asia.
My own country, Singapore, is sold on this idea: prohibiting racial and religious insult is a national policy, which many of these countries inherited from the British as a common heritage with South Asia, and this policy of actually banning the wounding of religious feelings seems to have broad public support. But I think what I've discovered in my research is that in most societies this approach backfires badly. The problem is that insult laws essentially enshrine a legal right to take offense. They then oblige the state, with all its coercive might, to step in when a group claims righteous indignation. Although such eruptions of offendedness are usually portrayed as spontaneous, visceral, impossible to control, in practically every major case that I've seen they are in fact manufactured by political figures who gain from that offense-taking. This free-wheeling combination of traditional hate speech with politically manufactured indignation is what my book calls hate spin.
The societies that have insult laws seem to have a low threshold for these interventions; there are seemingly strict laws, yet hate speech continues to be rampant. And this is not because the Internet is so hard to regulate, although that is possibly one factor. There is a much deeper issue, which is that the power to take offense is not democratically distributed. Some people, particularly those from the dominant majority, get to decide which expression is intolerable; marginalized groups do not. Thus, when you have hate speech laws superimposed on a society that does not protect equality as a fundamental pillar of democracy, the result is a double whammy for vulnerable minorities. They are on the one hand victims of incitement that occurs regularly with impunity, while their own expression, for example for peace building, is itself silenced as being offensive to the majority culture. So I think hate speech regulation should not be seen in isolation.
It depends a great deal on these broader frameworks. This is happening right now in South Asia and Southeast Asia, in Indonesia for example: this combination of blasphemy laws that seem to promote respect for religion occurring side by side with impunity for dangerous speech. That's the dynamic that I see. Thank you.
>> MODERATOR: Thank you so much. Extremely helpful, also as a segue. Before we move on, however, we have a remote participant who would like to offer a comment.
>> INJU PENNU: I am Inju Pennu from the United States. I will be playing the video feed on the presentation.
>> MODERATOR: My hope is that Frank La Rue can basically comment on some of these normative issues.
>> FRANK LA RUE: Thank you for inviting me to participate.
>> In the case of the journalists, women who have had to flee their country or flee the profession because people in the audience begin a consistent form of harassment, these are forms of psychological violence. There is some defined moment when this psychological violence becomes actual physical violence. Oftentimes it does. We are dealing with violence; even when we talk about torture and the questions of torture, we also talk about psychological torture, not only direct physical torture.
And finally, I would say that Susan has offered a solution with her five categories, but the point is that dangerous speech is not a collateral definition of hate speech. It has to be something different. Because I think even the term hate speech we use freely, but hate speech, going under Article 20, refers to incitement to discrimination, hostility, and violence, and we all agree the groups should be expanded to add gender, disability, etc.; you could add different groups. The idea is that this is incitement to a very specific danger. I worked on documents on this. One element is to look at the intent: there has to be the intentionality to harm and to provoke harm. Second, the content, which Susan has in her categories as well: the content has to be very serious content, but it also has to be able to travel far, to be disseminated, to have an impact. It's not a conversation of two people or one basic statement.
Thirdly, there has to be the call for a serious harm to be done, inciting serious harm, provoking fear in the person. Fourth, there has to be immediacy of the harm. It has to be something provoked for the immediate future.
Fifth is the context of all this. If we go with some of these categories, obviously we do see the harm that some of this speech can do today. Now, I will note two things, and then I finish. There is a trend among many groups now, in the Internet era, to ask the big platforms to eliminate and take down some of this content. I think this is a very dangerous track. I think this has to be a state decision, hopefully by a judge, and hopefully on individual cases; even if it's too much work for the judiciary, then by some degree of independent state authority. And the reason is that the state is accountable to its citizens, while private corporations are not. There should be an element of corporate responsibility, but we can't privatize the responsibility of the state. The big platforms have to respond to the guidelines and the indications they receive from a court or authority. They cannot become censors; otherwise you'll have a harsher degree of censorship. Those would be the qualifications of how we must look at this. In this I liked very much the issue presented by the colleague from Singapore, that oftentimes violence is used as an excuse to censor. Charlie Hebdo is an example. The ones who decide to commit violence are responsible; we can never try to transfer the responsibility onto the victim. This is the same as in cases of sexual harassment, where the person is seen as the provocative one, and that cannot be. Prevention should never be turned into prevention of speech. I think prevention is another area, but there were principles established in the Plan of Action. And I think prevention and counter-speech are among the most important elements that we should look at.
>> MODERATOR: Thank you so much for outlining the international legal framework and some of the key issues in it. There was also a perfect segue actually built into your presentation to Judith and the role and responsibility of private actors. Would you be willing to share your observations from working in a multi-stakeholder setting, where an initiative has tried to come up with principles that guide private actors dealing with some of the issues that Frank mentioned?
>> JUDITH LICHTENBERG: Thank you for your introduction. For those of you who are not familiar with where I'm coming from, the Global Network Initiative is a global multi-stakeholder organization. We bring different groups of stakeholders together to form a common approach on privacy and freedom of expression in relation to government demands that can impact these rights. Our members are drawn from investors, businesses, academia, and a variety of Civil Society organizations. Our member companies include Microsoft, Google, Yahoo!, LinkedIn, and Facebook. I cannot speak for member companies individually or the private sector generally, but I am in a position to say something about what a human rights-based approach can look like.
So the Guiding Principles on Business and Human Rights actually define the responsibilities of the public sector and the private sector as the state's duty to protect human rights and the company's duty to respect human rights. GNI has developed a set of principles and guidelines that are based on international human rights norms to guide responsible decision-making. And that also applies in this area. But what we notice is that ICT companies that host or transmit content globally face various challenges. That is, I think, demonstrated as well by the stories told. And I think that the main point is actually that in many countries hate speech is prohibited, but it's very difficult to determine what speech is hate speech. It's very dependent on the context. And I fully support Frank La Rue when he says that it is a state responsibility to make clear what hate speech is, and not to leave it up to private companies only. What we see is a trend in which state responsibilities are transferred to the private sector. That is a concern.
On the other hand, as I said, the private sector has its own responsibility, and what you see is that our member companies have worked on the issue through their Terms of Service, but probably the next speaker can talk more about that. GNI itself does not address the way Terms of Service are drafted, but our members, and also the Centre for Internet and Society, are helping us to get a better sense of what the issues are.
>> MODERATOR: Thank you very much. Again, this is lining up wonderfully, I think. So we are very pleased to have a Facebook representative on the panel. I was wondering whether you could share your experiences, bringing us back to the region, to South Asia, and some of the challenges that you see against this backdrop that we heard from Judith, but also: is there hope? Are there approaches that you see that actually work? We have heard how prominent the role of Facebook is in many of these stories. Where do you see things going? This is also a segue into the solution space discussion. Thank you.
>> Thank you. Excuse me. I think it's great that we go back to South Asia. There are a couple of things I want to start off with before drilling deep into some of the real-life examples and experiences that we've had.
Fundamentally, a lot of this speech is political speech in South Asia. We have to recognize that. When politicians get involved and use speech which can fall into the category of hate speech as a mobilizational tool, we just have to recognize that. This is beyond the spectrum of technology. These are social problems, which is why the social norms that Frank La Rue talked about are critical. Technology is not a silver bullet for solving those problems. There are conditions which further exacerbate them. Sri Lanka came out of one of the longest civil wars. To think that politicians are not going to use that at election time is just not realistic. Both sides used it. We saw that in the presidential elections. We saw that during the recent elections. The kinds of things that you talked about did play out over there. The same thing happens in India. We see that routinely.
But I think one thing which got missed in all this conversation is how the platform is being used as an organizing tool for counter-speech. And I think that is a very important tool which we have to recognize and understand, whether it is the mobilization of rationalists to take on fundamentalist attacks on the Internet in India, which we have been reading about, at substantial risk; that is the nature of the Internet we have to recognize. I think counter-speech has to become a very important aspect of even the social normative scheme and the framework for addressing some of these concerns which have been raised in terms of hate speech. The second thing, which Frank La Rue talked about and you talked about, Judith, is that there is a tendency to outsource a lot of this rule making to platforms. That is not right. So what platforms are trying to do is set community standards which work at a global level in a scalable way. They are looking at international standards to inform their community standards and Terms of Service and implement them globally.
But this entire notion that you outsource this, mounting a campaign to put the private sector and platforms in a position where they are going to be the arbiter of those choices, on something as fundamental as speech, is just not the right thing to do. Therefore, we have to look at some involvement of state practices, which, again, various people have talked about. That needs to be a process; the behavior of the state cannot be arbitrary. Bad laws do happen in Asia. There is an entire spectrum from one country to the other. People have actually lost their lives testing that. In Pakistan, a lot of us knew Semu. Because of blasphemy laws, and she was just organizing a meeting: she was coming back home after addressing a meeting at the university, and she was basically shot by the Taliban. These are very real-life situations playing out in the field as we sit here and talk about this.
So there are all these factors in play. The platforms have a role to play in making sure there are appropriate community standards, reflecting appropriate safety practices, so that whenever there is a nexus to real harm, a nexus to violence, that can be a very important test which various platforms, including us, look at very carefully. When we look at either TDR practices or when we are engaging with state agencies on specific legitimate requests, there are these mechanisms in play. Some of these problems are intrinsically social. These are societies which have had long histories of ethnic and identity politics, ethnic and identity conflict, and they're trying to work through that within a democratic structure. A lot of these values are imposed values, and it is sort of a coming of age for a lot of these ideologies. It's not easy. It's a highly complex environment we're talking about over there.
In responding specifically to the concerns that ‑‑
>> MODERATOR: Briefly, please.
>> I don't want this to become a Facebook discussion; we're talking about broader principles. But we made some changes to our policies recently and we announced them. We still think that authentic identity creates the base safety net: you are responsible for your actions, and we have seen it help keep the platform safe and keep speech responsible. At the same time, we are sensitive to the kind of feedback which we have heard from various Civil Society actors. Therefore, we announced this work in progress, and hopefully it will get implemented by early next year. There is a wider category of IDs which Facebook will now allow, in addition to government ID, to establish who you are. Also, when profiles are being flagged, there is this notion that people can just unleash a mob on the Internet by reporting a profile. What we are doing is making it a very specific requirement that when you're reporting a profile, you have to give very detailed context as to why you are reporting it, because we hope people will be more responsible when they are reporting a profile and will really cite the reason why they do so. There are various mechanisms which can be looked at to make sure that all the safety nets are created and appropriate practices are followed.
But I think the enforcement piece cannot be outsourced to the private sector alone.
>> MODERATOR: Thank you very much. If I may put you on the spot, we now have roughly two sets of approaches in the solution space. One is using the law ‑‑ legal systems, legal frameworks, legal approaches ‑‑ and the other is to use nonlegal approaches, including technological and algorithmic reporting schemes and the like, but also, as you pointed out, counter‑speech measures. That extends to grassroots movements and the like.
I would like to hear from Susan and Chinmayi ‑‑ because they have done a lot of research ‑‑ how much emphasis we can put on law and what is in the legal toolbox to deal with the problem, and then from Susan, who has studied counter‑speech movements, how a counter‑speech strategy can address the challenge. So far I'm hearing a lot of distrust: that law can only go so far, that enforcement is an issue, that it's difficult to draw boundaries. Are you equally pessimistic about the role of law, Chinmayi?
>> CHINMAYI ARUN: In some ways I echo what Frank said about the law and legal institutions, in the sense that it matters who uses them and who has the power to use them effectively. I think it's important in the context of the law to understand a few things. One is that the law sometimes makes a statement of what is acceptable and what is not, and that in some ways is power. The other is that state actors have more instruments than just the law to react to hate speech. In India and, in fact, in all of the countries that Anki described, which have then modified this law in various ways, our habit is to criminalize hate speech and see that as the only way in which the law and the state can respond. There are other powerful ways in which the state can respond. One can be political statements by the head of state. What Rashimi said about the Prime Minister declaring that no hate speech will be tolerated ‑‑ that can be empowering. There was a counter‑example in India, during the round of significant violence against the Sikh community after a Prime Minister was killed by her bodyguards and her son took over as Prime Minister. In an autobiography, a close friend of his mother's, the killed Prime Minister, recounts pleading with him: you are a grieving son but also the Prime Minister, and you have to make a statement that this is not the time for violence. He chose not to, and afterwards made a statement to the effect that when a big tree falls, the earth shakes.
Similarly, our current Prime Minister has been criticized at multiple points for not making statements against the violence that Ritu described. There have been killings in India in response to religious and cultural practices, and a lot of people said it was the Prime Minister's job to come out and comment on them. This is not the law exactly, but it is an obligation of the state to take action.
Just one more thought before I move on, on counter‑speech strategies: Susan will tell you about what happens in other countries. It's interesting that we haven't seen many counter‑speech strategies from other actors in response to religious and ethnic violence, but we have seen them in response to misogyny. We've had online campaigns that used humor against actors who were trying to incite violence against couples exercising their right to choice in the context of sexuality. But I haven't seen as much in the context of religious violence. I'll leave you to Susan from here.
>> MODERATOR: Susan, can you tell us? I think the government's role here goes beyond rule making.
>> SUSAN BENESCH: She made a tremendous point about counter‑speech, if we think about the role of influentials, who may be political figures or representatives of the state, but may also be, for example, religious figures, or even cultural figures. In some social contexts a rock star can have as much influence on an audience as a political figure. A couple of basic points about discourse norms: we can think of those as informal law. Even in a country like the United States, which, as you all know, has the most speech‑protective law in the world, there are unspoken, uncodified rules about what one can and cannot say in public.
You will immediately tell me they don't apply on the Internet, and that's true ‑‑ or rather, they don't apply in many spaces on the Internet. However, those rules can nonetheless be extremely powerful in, for example, ending the career of a political candidate if he or she says something that violates those norms. So the first point worth remembering is that discourse norms can be immensely powerful. The second is that they can change quite quickly. I sometimes give an example from the United States. We now have a phrase, common in American political discourse, which is marriage equality. This phrase didn't exist 20 years ago, and if you had suggested to me then that it would, I would have laughed, frankly. So there is now quite a bit of talk about counter‑speech in circles of people worried about this content online. It's clearly useful and effective in some circumstances.
It can also, of course, endanger the people who engage in it in other circumstances. I would like to suggest that we need to know a great deal more about it: when and in what circumstances is it effective? We mentioned Myanmar earlier. As was mentioned in the short report that I read, for several years now, leading up to this tremendously important election, Myanmar has seen offline and online an increase in what I would call dangerous speech, hateful speech ‑‑ maybe we could call it fear speech, pointing to another very important and operative emotion in this kind of language ‑‑ designed to persuade one group of people that another group of people posed a serious threat to them, a mortal threat.
About a year and a half ago, a brilliant and courageous group of activists in Myanmar suddenly started a movement called Panzagar, meaning flower speech or flower language. They even had a meme: a person with a flower in his or her mouth. At first it appeared in drawings, and then very quickly in a large number of selfies. Of course, they instantly put up a Facebook page, which within days had 10,000 likes. We don't know what impact such efforts have had.
They are, however, proliferating in an enormous variety of contexts. I won't take the time to list them, though it's something I love to do. I suggest that all of us who are interested in this should focus narrowly on studying these efforts to see what effects they have in which circumstances. I do a little bit of that myself, but that's just a drop in this large and perhaps quite interesting, maybe even promising, bucket.
>> MODERATOR: Thank you so much, also for pointing out the need for more research ‑‑ as a researcher, I like to hear that, of course. Kashi, from Pakistan, I know you also have ideas, and this is the moment where I would really like to open it up. What are your observations and reflections, coming from Pakistan, when you hear about these legal and nonlegal strategies for dealing with the hate speech problem online?
>> Thank you so much for providing me with this opportunity. I would like to carry the discussion forward from the point that Chinmayi was making, and propose a mix of legal and nonlegal strategies to address the problem of online hate speech. I will take forward your point about counter‑speech: we need to engage the political leaders who can actually make such statements.
We also need to engage social and political institutions in a counter‑speech narrative. If you analyze this problem, the things happening online are basically, in my opinion, the result of something cooking in the kitchen offline: the social environment we live in, the kind of peer‑group learning we develop, the socialization we have in our societies. For example, if a child is brought up in a society that is really discriminatory against religious or ethnic minorities, those things will definitely be expressed in the same way online as well. We need to look at which institutions matter: the family can be a very important institution in developing counter‑speech. Talking about the situation in Pakistan, most of the hate speech offline has come from the clerics, so engaging religious leaders in developing a counter‑speech narrative can be very helpful. Talking particularly about the case of the murder of the governor: the murder was glamorized so much that we actually need to develop a counter strategy for it. We need to glamorize counter‑speech and say, look, these are the people who are talking about inclusiveness, tolerance, peace and harmony. So I think these strategies could be worth experimenting with, number one being glamorizing counter‑speech role models.
Plus, legal avenues are definitely as important as the nonlegal ones. We can criminalize online hate speech: in Pakistan, there is this proposed cybercrime bill, under which online hate speech could be criminalized. That would be helpful in dealing with the problem.
>> MODERATOR: The audience, get ready for your questions. But I know you want to weigh into the questions.
>> AUDIENCE: I think there is a problem with legislation. We know that states want to make new legislation to deal with hate speech in online spaces. As a result, we are witnessing new cybersecurity laws and cybercrime legislation, and in the case of Pakistan, the new law proposed by the government is very draconian. I'm sorry, but I don't agree that we should criminalize this, because there is already existing legislation ‑‑ not only in Pakistan, but in India and other South Asian countries ‑‑ which governments can implement actively. That legislation is still problematic, but they can use it instead of making new laws, which are really, really problematic, and whose objective is more about regulating the Internet and controlling technologies.
The other thing I would like to talk about is the platforms. We understand, as digital rights activists, that they have their own challenges in dealing with different social contexts. But at the same time, I feel they can improve their reporting mechanisms, keeping in mind the different social contexts of different countries. They should also be more accountable and transparent when they release their transparency reports: under which laws are they acting on takedown requests, taking down pages, or giving out user information? These are a few things the platforms can do. I know they are working on it ‑‑ we have been in touch with Facebook, and they are trying to deal with the situation in particular countries, keeping in mind the social context and the conservative nature of the society. But they could do this within a more global framework, which I think is very much needed.
>> MODERATOR: I trust the record will take note of that comment. Any questions? Yes, please, and introduce yourself. Please keep it to a one‑minute question. Thank you.
>> AUDIENCE: Okay. Good morning. My name is Alan. I'm from the youth program, which has created the youth observatory on the Internet. We know that our laws are too lenient on hate speech, and we know that hate speech sometimes becomes physical violence. My question is: how could we call the government's attention to making the laws more severe, so that they work better and provide punishment that deters this behavior? Thank you.
>> MODERATOR: Chinmayi, would you like to respond?
>> CHINMAYI ARUN: Actually, governments are not so resistant to making laws more severe. It's the easiest ‑‑ and, as I keep telling them, sometimes the laziest ‑‑ response to hate speech, because they look like they're doing something without actually doing anything.
The challenge is to get governments to implement existing laws effectively. That can be hard because, as Anki says, it's usually the most powerful people who are violating hate speech norms. There is a really compelling incident in India that illustrates this: really terrible hate speech calling for violence was delivered during an election campaign by a prominent politician. It's the digital age, so people took videos of it ‑‑ multiple videos ‑‑ and it happened in a crowd, so there were over 80 witnesses. He was arrested, but by the time the case reached the trial court, what was really at issue was that our election laws don't permit somebody engaging in hate speech to campaign; that was the more important thing to them.
He managed to make sure that all of the videos were declared inauthentic, and the 80 witnesses recanted their statements. So although the laws existed, it was impossible to implement them against the person doing the most damage. Perhaps what's critical is to see whether we can frame our laws in different ways, to change the incentives for the powerful players, for the people we need to fear. I hope that answers your question.
>> MODERATOR: Thank you. I know you wanted to.
>> SUSAN BENESCH: Just to clarify, I think sometimes we use the term hate speech too broadly when we're actually referring to hate crimes. While hate speech can be difficult to define and involves trade‑offs that are problematic, I think hate crimes are something we should be far clearer about. Where there is actual violence ‑‑ against minorities, against sexual minorities, whatever ‑‑ there is no excuse for the state to abrogate its basic responsibility to provide security to its citizens. And I would disagree that states don't need persuading on this, because it is quite a rampant problem that states are in fact prepared to grant impunity when the victims are seen as politically unimportant.
At least conceptually, this is much less of a problem than hate speech generally is.
>> MODERATOR: Thank you. Next question, please, or comment? Yes, at the end of the table.
>> Good morning. My name is Ninjura. We work on hate speech in Kenya and South Sudan. What I haven't heard is what role citizens play in countering these narratives in South Asia. We often talk about the role of the state and of platforms, but not the role of citizens themselves, who are also engaging in these spaces. So I'd be curious to hear from any, if not all, of the panelists how citizens are pushing back on these hate narratives in the different country contexts.
>> MODERATOR: Who would like to comment on that? Susan?
>> SUSAN BENESCH: What I can say is that one can collect numerous examples. There are courageous individuals ‑‑ for instance, the anonymous person or people behind a Twitter account started in response to misogyny in Pakistan, which produced a small geyser of Tweets using a tool we often neglect to mention, humor, as well as great argument. There are also many, many cases, particularly in South Asia, of collective online efforts. One, six or seven years ago, responded to a politician who was supporting violence against women for going out to bars and also, perhaps, for committing the sin of celebrating Valentine's Day. In response, a couple of online and offline activists suggested that people mail ‑‑ offline, of course ‑‑ pink women's underwear to this politician. He was more or less buried in women's underwear, and eventually this problem of his ‑‑ what I would call hateful speech ‑‑ died down.
There are many other anecdotal examples like that.
>> MODERATOR: Sorry, just to be mindful of the time. Yes, there are many examples, but if I understand your question right, there is a structural issue behind it: what are approaches that could systematically empower young people, or other actors involved in online speech, to take initiative and exercise responsibility? We see this in cyberbullying, where the bystander plays a key part and there is a movement to change mindsets and empower people ‑‑ a systematic approach that goes beyond individual examples.
But we have to wrap up, unfortunately; we are running out of time, sorry. I would like to give the last word to Frank. Where shall we go from here? We have heard a kaleidoscopic array of stories and problems, some of them really depressing and scary. We've touched upon different instruments in the toolbox, from legal to nonlegal. Where do you see things going? What are the most promising steps forward?
>> FRANK LA RUE: Thank you. Number one, on counter‑speech coming from civil society: I think the question is fantastic, because in freedom of expression we have emphasized so much that we don't want censorship by the state, that we don't want any intervention by the state defining content, that I think we neglected the other part of the story. The other part of the story is what happens from the bottom up. And from the bottom up, what we really want is to engage society in the healthiest, broadest, strongest debate possible on all content.
This is what freedom of expression is all about. So in the same way we defend against censorship, we should also promote ‑‑ and this is a challenge to all of us who work on freedom of expression ‑‑ a healthy debate. These issues should be debated by the broad public. Obviously it depends on the context, and in some cases it would be difficult. Whether it is misogyny, a tragedy in many countries, with people from civil society defending the equal rights and dignity of women, or opposing discrimination against minorities, I think this would be important.
The step to take is precisely that: how do we get people, with the support of the platforms and ICTs, to get this dialogue going online as well, in a healthy way? I think this can be a powerful counter‑message to the crisis we are seeing in the world today.
>> MODERATOR: Thank you so much. My takeaway, along the same lines, is that we're talking about an ecosystem problem. It's a very hard problem because it's both an offline and an online problem; it's multifaceted, multidimensional, and highly contextual. That description, of course, makes it very hard to find solutions. Rather, what is very likely, from what we heard today, is that we need blended approaches: where we mix law with nonlegal approaches, where we think about the role of governments in a broader sense than just law making, where we work with the platform providers ‑‑ and the platform providers have to take responsibility as well, as they increasingly do ‑‑ but also where we think about the users themselves and how we can empower them for self‑help and collective action, dealing with the problem from the bottom up. These are some of my thoughts ‑‑ clearly a field that will keep many of us in the research world busy for years to come. The conversation will continue this afternoon, I am told in Room 3 from 4:30 to 6:30, for those who would like to join. Again, BL Room 3 at 4:30 to 6:30, for a continuation of the conversation. I know there are many questions and comments still in the room, so please rejoin us later today.
Thank you so much to all the panelists. Thank you for your questions and for your attention. Thank you.