Dean Russell MP: As a broad question—I will go into a few details in a moment—is there a bit of chicken and egg with social media? From the descriptions we have had today from previous panellists, it sounds like social media is effectively creating more hate, more racism, more prejudice, more homophobia—all of these things. Is that because society is moving in that direction anyway, or has social media caused that and therefore this Bill will help to reduce it?
Danny Stone: It facilitates the spread. We know that there is an increase generally in hate. The Community Security Trust, which monitors, collects and reports on incidents of antisemitism, has seen a steady upward trajectory in antisemitism. There are particular events that spur that antisemitism on. Another piece of research, done by the Institute for Jewish Policy Research, found that while maybe 2.4% to 4% of the population might be considered antisemites, up to 30% of the population consider at least one antisemitic statement to be true. So up to 30% of people might believe something antisemitic. If you then layer that on to social media, look at how that spreads. I could put my antisemitic idea on to social media; it will stay there, other people will see it and then it links. The report that we did on Instagram talks about chaotic trolling: people use various hashtags as gateways into conspiracy and into antisemitism. So it is a bit of both. We know that things are worse, but social media facilitates the spread of that hatred.
Nancy Kelley: It is probably quite important, as regards LGBTQ people, to think differently about general public attitudes and the direction they are taking, which is overwhelmingly positive. If we think about the general population, attitudes to lesbian and gay people are some of the fastest-changing positive social attitudes we have seen over the last 50 years. We should not let what happens in the online space convince us that public opinion on LGBTQ people is negative and moving in the wrong direction. We actually do not have any evidence of that; in fact, quite the contrary. That points, though, to this farming of abuse: you have a minority of the community who are enabled, through the way these platforms function, to engage in this kind of abuse. Platforms reward proliferating abuse. The click-through is king.
Also, to pick up points that Danny and the people in the previous session alluded to, it is important to understand that in this online abuse space, there are close interconnections between the abuse of one group and the abuse of other groups. We should not think of this as necessarily somebody motivated to go online because they are racist. There are many people online engaged in wholesale trolling of multiple marginalised communities. Indeed, having prejudiced views about one marginalised community and expressing them online can lead, through the kinds of processes Danny has been pointing to, to people becoming essentially radicalised into having prejudiced views against a number of our communities. We should feel confident about where the general public are at and worried about the systematic online abuse.
Q43 Dean Russell MP: I have been involved with digital for many years, although probably nowhere near the experience around the table, but I remember when social media started. It was trying to encourage people to go online. Now in our society it is one of the main methods of communication with friends and so on. Was the reflection of racism, homophobia, prejudice and all these things on social media 10 years ago different to what it is now? Back then, a few people would be saying things that did reflect what was going on in society, whereas it is now being used as a recruitment driver to encourage more people to be homophobic, racist, prejudiced and hate filled. Is there a risk that if we do not stem this with this Bill now, in five or 10 years’ time we will see this much more in the reality of society and not just online?
Nancy Kelley: Maybe yes, in two ways. We should worry, as Danny has pointed to, about online hate spilling over into real-world hate crimes. I pointed to the fact that prevalence of online hate towards LGBTQ people is heading up from a high base. We know that reported hate crime in the UK is also heading up rapidly. Much of that is likely to be improved reporting and improved recording, but when we are suffering a wave of serious violent attacks, primarily against gay men, in our cities, and indeed the homophobic murder of a gay man a few weeks ago in Tower Hamlets near where I live, we should not be sanguine at the extreme ends of this—the normalisation of hateful attitudes or hateful attitudes that spill over into real life. There is also an important connection to be made here with radicalisation generally.
A good report from King's College London's International Centre for the Study of Radicalisation that came out this year pointed to the far right, during the first 100 days of the Biden Administration, using transphobia, which is a prevalent form of abuse against our community, as a recruiting tool for a range of beliefs including entrenched antisemitism and racism.

Dean Russell MP: On that point, are certain types of content being used at the moment as a gateway drug to hate: hashtags on things that people might not generally think of as prejudiced, but that are among the comments that then take people down the Alice in Wonderland rabbit hole, deeper into more conspiracy theories?
Nancy Kelley: Yes, for a minority of social media users. There is good research in this area because of the body of data from radicalisation and far-right studies. For a minority of users, there is a rabbit hole effect. Prejudice against one group—it could be antisemitism, it could be homophobia, it could be anti-Black racism, as we were hearing about earlier—brings people into contact with a wide range of deliberately radicalising content against other groups. The previous speaker called it the Four Horsemen of the Apocalypse, and that is how it is. These prevalent forms of abuse online then become gateways to other beliefs.
Danny Stone: If it is a recruitment tool, it is working. Five years ago, online incidents accounted for about 18% of the antisemitic incidents that are reported. They are now up to about 40%. We did some work with Media Matters for America, a US NGO, which investigated the alternative platform 4chan and looked at the rising nature of antisemitism there. It rose from hundreds of thousands of posts in 2015 to something like 1.7 million in 2017. Intersecting abuse, where misogyny and antisemitism combine, increased by about 180%. When it comes to the Bill, it is important that intersectionality is considered for people with multiple, distinct and overlapping identities. That shows you that it is being used as a gateway. We know from the QAnon movement that people start in gaming and can be drawn from those comment boards into other online spaces. There was a trailer for a video, which the Antisemitism Policy Trust found on YouTube, that tried to get people off YouTube on to BitChute and, presumably, from BitChute on to other platforms too. There are those gateways.
Q44 Dean Russell MP: Briefly, building on that, we heard evidence today, especially in the first session, about this interconnection and this cross-pollinating of hatred in different groups, across different channels and so on, and often from a small group of the same people with multiple anonymous identities. Where do you see anonymity as part of this Bill? Is there enough to tackle that? Also, do you see it as one of the challenges here that you might have one person with 20 identities spreading hate to millions of people?
Danny Stone: This issue is not addressed properly in the Bill. The White Paper certainly indicated that anonymity would be addressed, but there is nothing at the moment. People in this House—Siobhan Baillie, Margaret Hodge—have called for action, including verification systems so that one can engage only with verified accounts. We would go further and make it the platforms’ problem. They should be liable when they cannot provide the details of an individual. Where there is a burden of proof, sufficient evidence and a limited revelation of that data, you should be able to find out who is responsible, as you can in financial services with the know-your-customer principle. That said, I know there must be important protections for victims of domestic abuse, whistleblowers, those who would like to come out but are frightened to give their details, or those who have parents who may be concerned about that for whatever reason. Anonymity is precious for various reasons. Perhaps middleware, or trusted partners who can act as middlepeople with that data, might be a solution.
Nancy Kelley: I want to say a couple of things, but mostly I will talk about anonymity, because I suspect it is one of the areas where Stonewall’s perspective differs, particularly from an international protection perspective. It is possible to regulate a lot of the behaviours you are describing without knowing the individual identity of the account. Accounts that are behaving in a particular way can be identified without knowing who the account owner is. It is important to separate the question of whether we can regulate, identify and remove accounts that behave in abusive ways, whether through networking or through the use of bots, from the question of weakening anonymity. The answer is that we can.
Those platforms can identify those accounts without knowing who owns them, and they can delete them. They do not, but they can. On anonymity, while I acknowledge and understand why many groups have a deep desire to see some lifting of that veil, I would like to emphasise to the committee our deep concerns, particularly for LGBTQ people around the world, about personally identifiable information. I know people have suggested things like names being visible. Even in progressive, accepting countries, we know that will expose LGBTQ people to harm. Our community in the UK is already harmed by outing and doxing. Making it easier to identify an LGBTQ end user, even in liberal, accepting environments, increases danger to our community.
If we look at that in the global context, we know from research that Article 19 has done that almost 90% of LGBTQ users in Egypt, Lebanon and Iran said they are incredibly frightened of mentioning even their name in any kind of private messaging online. We know that over 50% of the men charged in Egypt in recent years with homosexual “offences”— because it is indeed illegal to be gay there, as it still is in 71 countries around the world—were the subject of online stings. It is incredibly important to understand the potential impact of these global companies introducing identity verification that is visible in any way to end users.
Middleware options and know-your-customer approaches do not expose personal data to end users, but we think they would still have a considerable chilling effect on people’s participation, particularly in regressive countries. In those countries there will be concern about data security and about who is working at those platforms. In countries where the Government have the right to request personal data from companies about LGBTQ people (there are not many, but there are some), LGBTQ people will be extremely reluctant to participate online. There is a risk of a chilling effect. I would entreat the committee to be thoughtful about those impacts. I recognise that I am not coming with a solution but am presenting you with a difficult problem. We would be happy to work these issues through, including with our international partners.