Gov Confirm Measures to Tackle Anonymous UK Internet Trolls

The Government has today confirmed that it will add two new duties to the Online Safety Bill (OSB), which are intended to crack down on the anonymous online abuse that occurs on the largest social networks. The wider bill also tasks Ofcom with tackling “harmful” internet content through website bans, fines and other sanctions.

The vast majority of social networks used in the UK do not strictly require people to share any personal details about themselves. Users are often able to identify their accounts by a nickname, alias or other term not linked to a legal identity. This is broadly a good thing because most of us balk at the idea of sharing too much of our personal information with such networks, but we frequently still want to engage with other users on them.

However, some people also exploit this cloak of anonymity to spread abuse, typically with little to no fear of recrimination from either the platforms or law enforcement. Mercifully, the Government has recognised that banning anonymity online entirely, quite apart from being unrealistic, would harm those who have positive online experiences or use anonymity for their personal safety (e.g. domestic abuse victims, activists living in authoritarian countries or young people exploring their sexuality).

As such, the new duties focus on giving end-users greater control over what they choose to see. The two additional duties will also only apply to so-called Category One sites, which are said to reflect “companies with the largest number of users and highest reach” (e.g. Facebook, Twitter, Reddit). In other words, smaller sites won’t have to meet a costly and potentially unworkable requirement.

The first duty will force big social media sites to give adults the ability to block people who have not verified their identity on a platform, while the second will require platforms to give users options to opt out of seeing harmful content.

The Two New OSB Duties

First duty – user verification and tackling anonymous abuse

Category one companies with the largest number of users and highest reach – and thus posing the greatest risk – must offer ways for their users to verify their identities and control who can interact with them.

This could include giving users the option to tick a box in their settings to receive direct messages and replies only from verified accounts. The onus will be on the platforms to decide which methods to use to fulfil this identity verification duty, but they must give users the option to opt in or out.
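To make the mechanics concrete, here is a minimal sketch of how such a setting might be enforced when a platform decides whether to deliver a reply or direct message. Everything in it (the TypeScript shapes, field names and the canInteract function) is an illustrative assumption on our part, not anything specified by the bill.

```typescript
// Hypothetical sketch: enforcing a "verified accounts only" preference
// when deciding whether to deliver a reply or direct message.
// All names and shapes here are assumptions, not from the OSB.

interface Account {
  id: string;
  identityVerified: boolean; // set by whichever verification method the platform adopts
}

interface UserSettings {
  verifiedOnlyInteractions: boolean; // the opt-in "tick box" described above
}

function canInteract(sender: Account, recipientSettings: UserSettings): boolean {
  // If the recipient has opted in, interactions from unverified senders are suppressed.
  if (recipientSettings.verifiedOnlyInteractions && !sender.identityVerified) {
    return false;
  }
  return true;
}

// Example: a reply from an unverified account is filtered out for an opted-in user.
const unverifiedSender: Account = { id: "anon123", identityVerified: false };
const settings: UserSettings = { verifiedOnlyInteractions: true };
console.log(canInteract(unverifiedSender, settings)); // false
```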

When it comes to verifying identities, some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication, where a platform sends a prompt to a user’s mobile number for them to verify. Alternatively, verification could include people using a government-issued ID, such as a passport, to create or update an account.
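As one illustration of the mobile-number option, a platform might issue a one-time code by SMS and mark the account verified once the code is echoed back. The following sketch assumes a Node.js environment; sendSms is a hypothetical stand-in for whatever SMS gateway a real platform would use.

```typescript
// Hypothetical sketch of SMS-based verification: send a one-time code,
// then confirm it. The sendSms function is a stand-in assumption.

import { randomInt } from "crypto";

const pendingCodes = new Map<string, string>(); // accountId -> one-time code

function sendSms(phoneNumber: string, message: string): void {
  // Stand-in for a real SMS gateway call.
  console.log(`SMS to ${phoneNumber}: ${message}`);
}

function startVerification(accountId: string, phoneNumber: string): void {
  const code = String(randomInt(100000, 1000000)); // six-digit code
  pendingCodes.set(accountId, code);
  sendSms(phoneNumber, `Your verification code is ${code}`);
}

function confirmVerification(accountId: string, submittedCode: string): boolean {
  const expected = pendingCodes.get(accountId);
  if (expected !== undefined && expected === submittedCode) {
    pendingCodes.delete(accountId); // codes are single-use
    return true; // the caller would now mark the account as verified
  }
  return false;
}
```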

The new duty aims to strike a better balance between empowering and protecting adults – particularly the vulnerable – and safeguarding freedom of expression online, because it will not require any legal free speech to be removed. While this will not prevent anonymous trolls posting abusive content in the first place – provided it is legal and does not contravene the platform’s terms and conditions – it will stop victims being exposed to it and give them more control over their online experience.

Users who see abuse will be able to report it, and the bill will significantly strengthen the reporting mechanisms companies have in place for inappropriate, bullying and harmful content, ensuring they have clear policies and performance metrics for tackling it.
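Tracking every report through to a resolution is one plausible way for a platform to support such performance metrics. In the hedged sketch below, the AbuseReport shape and the median resolution-time calculation are our own illustrative assumptions, not anything prescribed by the bill.

```typescript
// Hypothetical sketch: track abuse reports to resolution so that
// performance metrics (here, median time-to-resolution in hours)
// can be computed. All fields are illustrative assumptions.

interface AbuseReport {
  id: string;
  reportedPostId: string;
  reason: "bullying" | "harmful" | "inappropriate";
  createdAt: Date;
  resolvedAt?: Date; // set once a moderator actions the report
}

function medianResolutionHours(reports: AbuseReport[]): number | undefined {
  const hours = reports
    .filter((r) => r.resolvedAt !== undefined)
    .map((r) => (r.resolvedAt!.getTime() - r.createdAt.getTime()) / 3_600_000)
    .sort((a, b) => a - b);
  if (hours.length === 0) return undefined; // no resolved reports yet
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}
```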

Second duty – giving people greater choice over what they see on social media

There is said to be a growing list of toxic content and behaviour on social media which falls below the threshold of a criminal offence but which still causes significant harm. This includes racist abuse, the promotion of self-harm and eating disorders, and dangerous anti-vaccine disinformation. Much of this is already expressly forbidden in social networks’ terms and conditions but too often it is allowed to stay up and is actively promoted to people via algorithms.

Category one companies will now have to make tools available for their adult users to choose whether they want to be exposed to any legal but harmful content where it is tolerated on a platform. These tools could include new settings and functions which prevent users receiving recommendations about certain topics or place sensitivity screens over that content.
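In practice, such a tool could amount to a per-user topic filter applied when the feed is assembled. The sketch below assumes the platform already labels posts by topic; the Post shape, the topic labels and the buildFeed function are all illustrative assumptions rather than anything the bill mandates.

```typescript
// Hypothetical sketch: filter a feed against topics the user has opted
// out of, either dropping flagged posts from recommendations entirely
// or wrapping them in a sensitivity screen. Shapes are assumptions.

interface Post {
  id: string;
  text: string;
  topics: string[]; // e.g. labels applied by the platform's own classifiers
}

interface FeedItem {
  post: Post;
  sensitivityScreen: boolean; // true = hide behind a "show anyway?" overlay
}

function buildFeed(
  posts: Post[],
  optedOutTopics: Set<string>,
  hideEntirely: boolean
): FeedItem[] {
  const items: FeedItem[] = [];
  for (const post of posts) {
    const flagged = post.topics.some((t) => optedOutTopics.has(t));
    if (flagged && hideEntirely) continue; // drop from recommendations
    items.push({ post, sensitivityScreen: flagged });
  }
  return items;
}
```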

We should point out that the new duties are in addition to the OSB’s existing measures, which are designed to more effectively tackle both “illegal content” (e.g. child abuse, hate crimes, terrorism etc.) and “harmful content posted anonymously” (e.g. banning repeat offenders associated with abusive behaviour, preventing them from creating new accounts or limiting their functionality).

Nadine Dorries MP, UK Digital Secretary, said:

“Tech firms have a responsibility to stop anonymous trolls polluting their platforms.

We have listened to calls for us to strengthen our new online safety laws and are announcing new measures to put greater power in the hands of social media users themselves.

People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms.”

We suspect that some people will still find ways to circumvent the first duty, allowing them to continue posting abuse from supposedly “verified” accounts. However, much will no doubt depend upon the systems and approaches that the social networks choose to adopt.

Ofcom will be expected to set out in guidance how companies can fulfil the new user verification duty, including the proposed verification options. In developing this guidance, the regulator will be expected to ensure that the possible verification measures are accessible to vulnerable users, and to consult the Information Commissioner’s Office (ICO), as well as vulnerable adult users and technical experts.
