Q129 Dean Russell: Thank you, Ms Zhang, for your testimony today. One of the parts that is core to this Bill, and that we need to get right, is the legislation to make sure that organisations—specifically Facebook in this instance—do the right thing. My question to you is about the culture. You mentioned that you went pretty much to the top to raise concerns about democracy. Would you say that Facebook has a culture that would rather protect itself than protect democracy and society? If so, how robust do we need to be in this Bill to make sure that it follows the rules rather than potentially creating loopholes that it will work around?
Sophie Zhang: Absolutely. I would like to take a step back and remind people that we are asking whether a company whose official goal is to make money is more focused on protecting itself and its ability to make money or on protecting democracy. We do not expect Philip Morris tobacco to have a division that reimburses the NHS every time someone gets lung cancer and needs to be treated. We do not expect Barings Bank to keep the world economy from crashing. That is why Britain has its own bank. It is important to remember that Facebook is ultimately a company. Its goal is to make money. To the extent that it cares about protecting democracy, it is because people at Facebook are human and need to sleep at night, and also because, if democracy is negatively impacted, it can generate news coverage that damages Facebook’s ability to make money.
That said, I have several suggestions about changing the culture at Facebook, or at least imposing measures on the company, with regard to Ofcom regulating it. The first is requiring the company to apply its policies consistently, which is, I believe, in Clauses 9 to 14 of the Bill. The idea that fake accounts should be taken down was written into Facebook’s policies. I saw a perverse effect, in that if I found fake accounts that were not directly tied to any political leader or figure, they were often easier to take down than if I found fake accounts that were. That creates an incentive for major political figures, essentially, to commit a crime openly. If someone robs a bank, the police would, hopefully, arrest them very quickly, but suppose the robber is a Member of Parliament who is not wearing a mask and openly shows his face, and the police decide to take a year to arrest him because they are not sure about arresting a Member of Parliament. That is essentially the analogy with Facebook.
Others have proposed requiring companies over a certain size to separate product policy from outreach and governmental affairs, because, at Facebook, the people charged with making important decisions about what the rules are and how they are enforced are the same people charged with keeping good relationships with local politicians and government members, which creates a natural conflict of interest. Facebook is a private company, but so are the Telegraph, the Guardian and so on. Those organisations keep their editorial departments very separate from their business departments—at least I hope they do. The idea of the Telegraph killing a story because it made a politician look bad is unthinkable, at least to me, and I hope it would be to other members of the committee, although, of course, you know better than I do.

Dean Russell: Would it focus the minds of the senior leadership at Facebook if they were liable for the harm that they do, both to individuals and to society, from what happens within Facebook? For example, would the situation you shared earlier about the elections have been dealt with not in 10 months but perhaps overnight if they were liable for its impact?
Sophie Zhang: Potentially, but it depends on precisely how they are made liable and how the rules are enforced. What I mean is that the Online Safety Bill, as I understand it, is focused on liability for harm in the United Kingdom, which is an approach that can make sense for the United Kingdom, as it has robust institutions and cultures, but, of course, Honduras is not the United Kingdom and Azerbaijan is not the United Kingdom. They are authoritarian countries. I see it as highly unlikely that Honduras or Azerbaijan would take an approach that required Facebook to take down the inauthentic networks of their own Governments. The other point is how it is enforced. I have read the text of the Bill. It took quite a while. My understanding is that the first means of enforcement is self-assessment by the company in regular reports, under Clauses 7, 8 and 19. This may not be reliable, and it may actually create an incentive for companies to avoid acknowledging problems internally. If you bury your head in the sand and pretend that the problem does not exist, you do not have to report as much to Ofcom. If you look for crime, you are more likely to find it, so companies will have an incentive to look less.
With regard to enforcement, I have two separate proposals that may be difficult to apply, but I will make them nevertheless. The first is to try to independently verify each platform’s ability to catch bad activity by having Ofcom conduct red team-style penetration test operations on certain types of illegal activity. What I mean by that is this. If you want to find out how good each platform is at stopping terrorist content, you have Ofcom send experts on social media to post terrorist content in a controlled and secure manner and see what percentage of it is caught and taken down. You can then say, “Facebook took down 15%, Twitter took down 5%, and Reddit took down 13%”. I am making up those numbers, of course. In that case, you could say, “All of those are terrible, but Facebook is the best. We need to focus on the companies that are less good at this”. You could take the same approach with, for instance, child pornography. The reverse could also be used. For instance, if you are worried about harassment, you could have people report benign content to see what is done with it and whether the content is incorrectly taken down. Ultimately, the goal is to take down the most violating content while doing the least harm to real people. You could stop everything bad overnight by banning social media in Britain, but that is obviously not what we want to do. The second proposal I would make is to require companies to provide data access to trusted researchers, and to provide funding for such researchers, so that there is more independent verification. However, this creates some privacy risks. Aleksandr Kogan, after all, was also a university researcher.
Dean Russell: Indeed, he was. Thank you.