
Moderating Behaviour On Virtual Platforms Can Be Impossible: Incoming Meta CTO

In an internal memo to employees, the incoming chief technology officer of Meta said that while he would want "almost Disney levels of safety" on the platform, moderating people's behaviour at any meaningful scale is practically impossible.

Andrew Bosworth, head of Meta's Reality Labs, has told employees that moderating how users speak and behave "at any meaningful scale is practically impossible". The internal memo, seen by the Financial Times, confirms the concern despite Bosworth's ambition of having "almost Disney levels of safety" on the virtual platform.

He warned that virtual reality can be a toxic space for women and minorities, the memo states. The Reality Labs chief said this could be an "existential threat" to Facebook's ambitious plans if it turned off "mainstream customers from the medium entirely", the publication reported.

Bosworth currently heads Meta's Reality Labs and will take over as Chief Technology Officer next year.

Bosworth's prime concern is that bullying and toxic behaviour can be exacerbated by the immersive nature of virtual reality, and that monitoring billions of interactions in real time would require significant effort and may often be impossible. This is especially concerning given Facebook's uneven record in moderating text, images and videos on its existing platforms.

Bosworth wrote about these concerns in a blog post. According to him, it is impossible to record everything that happens in the virtual world indefinitely: doing so would violate people's privacy, and at some point the headset would run out of memory and power.

To address these concerns, the Reality Labs chief stated, Meta-owned Oculus developed a solution for its Horizon Worlds experience based on a rolling buffer: the device continuously captures recent activity, and every time a report is filed on the platform, the buffered information is automatically sent as evidence of the alleged incident, the blog explains.
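The rolling-buffer idea described above can be illustrated with a short sketch. This is purely a hypothetical reconstruction of the general technique, not Meta's actual implementation: the class name, the window size, and the event format are all illustrative assumptions.

```python
from collections import deque

# Hypothetical sketch of a rolling evidence buffer: keep only the most
# recent events on-device, and export a snapshot only when a user
# files a report. The window size is an assumption, not a known value.
BUFFER_SECONDS = 120


class RollingCapture:
    def __init__(self, window=BUFFER_SECONDS):
        self.window = window
        self.events = deque()  # (timestamp, event) pairs, oldest first

    def record(self, timestamp, event):
        self.events.append((timestamp, event))
        # Evict anything older than the window, so device memory
        # stays bounded and nothing is retained indefinitely.
        while self.events and self.events[0][0] < timestamp - self.window:
            self.events.popleft()

    def report(self):
        # On a user report, snapshot the current buffer as evidence.
        return list(self.events)
```

The key privacy property is the eviction loop in `record`: events outside the window are discarded as new ones arrive, so the device never accumulates an indefinite recording, which matches Bosworth's stated constraints on memory and privacy.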

Bosworth suggests that the company could potentially lean on its existing community rules, which "have a stronger bias towards enforcement along some sort of spectrum of warning, successively longer suspensions, and ultimately expulsion from multi-user spaces." 

With a single Meta account in place, the offender could be removed from all metaverse applications even if they had multiple avatars.
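The escalation spectrum Bosworth describes, combined with a single account behind all avatars, can be sketched as a simple enforcement ladder. The sanction names, thresholds, and function names below are illustrative assumptions, not Meta's actual policy engine.

```python
# Hypothetical sketch of an escalating enforcement ladder tied to one
# account. The steps mirror the spectrum described in the article:
# warning, successively longer suspensions, then expulsion.
SANCTIONS = ["warning", "24h suspension", "7d suspension", "expulsion"]


def sanction_for(violation_count):
    """Map a running violation count to a rung on the ladder.

    Offenders past the end of the ladder stay at expulsion.
    """
    index = min(violation_count - 1, len(SANCTIONS) - 1)
    return SANCTIONS[index]


def enforce(avatars, violation_count):
    # Because the sanction attaches to the account rather than any one
    # avatar, every avatar linked to that account is hit at once.
    action = sanction_for(violation_count)
    return {avatar: action for avatar in avatars}
```

The point of `enforce` returning the same action for every avatar is the account-level design the article mentions: multiple avatars offer no escape hatch, since removal applies across all metaverse applications tied to the one Meta account.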

Further, in a separate blog post he suggested that the company would not be able to build a platform with such safeguards on its own. "It’s table stakes: To build the next computing platform that puts people at the center, there needs to be a collaboration between companies, experts, and policymakers to develop new tools that help keep those people safe," he wrote in an Oculus blog. 


He added that virtual reality is a nascent medium and norms about its usage and behaviour are still evolving. 

Facebook rebranded itself as 'Meta' on October 28. The company described the move as a step towards building what it calls the 'metaverse', an extension of the online world. Bosworth described the metaverse as a set of interconnected digital spaces combining augmented reality, virtual reality and what the internet already has on offer.
