Towards the end of October 2021 Facebook announced big news: the corporate business was changing its name to Meta, while Facebook the social media platform would remain. 

Mark Zuckerberg has solid reasons for the rebrand. The company needed a broader name, now that it also includes Instagram, WhatsApp and Oculus VR, as well as mobile web analytics company Onavo and Messenger precursor Beluga. Meta reflects a new focus on the metaverse and signals the ambition to lead the way in this future digital realm. As an aside, it’s also worth noting that Facebook the social media site is more popular than ever, but it’s no longer attracting young people as it used to. A shrewd operator like Zuckerberg knows that it’s better to shift focus while a successful product is at its peak rather than on the decline.

All of this makes sense. First imagined in a 1990s sci-fi novel and conjured up in movies from Total Recall to Wreck-It Ralph, the future metaverse is an exhilarating concept, a place of boundless possibilities and experiences. Zuckerberg wants his company to be its guiding light. Yet many people are sceptical. Was this really the deciding factor for the name change, or was it to distance the business from negative press? 

Trust in Facebook was already low after testimony from whistle-blower Frances Haugen hit the press, telling of polarizing algorithms, understaffing in key areas concerned with safety and a culture that ignored known problems. The rebrand hasn’t helped its cause. A survey by SightX reported that 37.5% of respondents did not believe the name change would bring any real changes to the organization. Many believe it was because of poor public perception, rather than to better fit the company’s future goals and vision. Still, 2022 is a new year and as people start to see the metaverse taking shape they may be more accepting of the reasons behind the rebrand. 

The good, the bad, and the need for regulation

Like Coca-Cola, Facebook the platform is nigh-on universal; open to anyone with internet access. Most of us have been Facebook users at one time or another and have had largely positive experiences. We’ve enjoyed its window into the lives of friends, family and colleagues, the way it has re-connected us with those we had lost touch with and enabled groups of people from all over the world to create communities around niche interests. But there’s no ignoring the bad stuff.

That bad stuff has been coming from all angles. Privacy and a lack of transparency over user data are one issue; the company’s low tax contributions are another. Cloning competitor apps such as TikTok (Instagram Reels) and Snapchat (Facebook Stories) has also attracted criticism. Content moderators brought a lawsuit after reporting poor working conditions and post-traumatic stress disorder; some have now been compensated for their experiences.

But the biggest concerns are to do with the disconnect between Facebook’s mission statement of bringing the world closer together, and the real-world damage caused to individuals, minority groups and sometimes entire nations because the business hasn’t done enough to take down and prevent the spread of fake news and harmful content.

A Wall Street Journal investigation found that changes to Facebook’s content algorithm stoked division and that the company did not do enough to reduce Covid-19 vaccine hesitancy. The investigation also reported that Instagram was harming the mental health of teenage girls. UK natural beauty company Lush recently took the radical step of quitting Facebook and Instagram, alongside Snapchat and TikTok, citing the negative impact the social media sites have on young people’s mental health.

Comments made by ex-members of Facebook staff together with the company’s own leaked research and that of many other organizations also suggest that not enough is being done to deal with misinformation and malign content. Former Facebook executive Chamath Palihapitiya didn’t pull any punches about the seriousness of the issue, saying ‘We have created tools that are ripping apart the social fabric of how society works’. 

Meta/Facebook stresses that it makes robust efforts to deal with negative content. The company has just announced a new AI system that can ‘learn’ to spot harmful content quickly, rather than requiring months of training. 

However, the company has been criticised for placing too much emphasis on reacting to problems and not enough on preventing them. So far, AI does not seem to have been able to spot harmful content before the damage is done. Is it possible to do enough? And how can they be confident about policing behaviour in the future metaverse, with its billions of tiny interactions in every moment? We just don’t know the answers yet. 

Meta’s Horizon Worlds platform may provide a clue as to how moderation of the metaverse might work. Users in this colorful virtual space can report harmful behavior and send recorded data from their device as evidence. They can also activate a ‘safe zone’, a personal space where they can take time out and mute, block, or report users if necessary. Users can be suspended or permanently excluded if they are found to be breaking the rules. Community Guides with their own avatars inhabit the space and keep an eye on things. It’s a mostly reactive rather than preventative approach, but then it’s hard to see how prevention could work. Though some warning signs can be noted, we can’t – yet – predict crime in the way shown in Minority Report.

People might just have to accept that a future virtual world, like social media, reflects society and so will never be perfect. Techdirt editor Mike Masnick put it like this: content moderation is impossible to do well at scale, because in a situation where there are billions of interactions, even if 99.9% of content decisions are ‘right’, the remaining 0.1% of ‘wrong’ decisions could still represent millions of negative experiences. It will be up to individuals to decide how much time they want to spend in the metaverse and, to a degree, how to keep themselves safe.  
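The scale effect behind Masnick’s point is easy to check with back-of-envelope arithmetic. The figures below are hypothetical, chosen only to illustrate how a tiny error rate still produces an enormous absolute number of mistakes:

```python
# Hypothetical illustration of content moderation at scale.
# Neither figure comes from Meta; both are assumptions for the sketch.
daily_decisions = 3_000_000_000  # assumed: billions of moderation decisions per day
accuracy = 0.999                 # 99.9% of decisions are 'right'

wrong_decisions = daily_decisions * (1 - accuracy)
print(f"{wrong_decisions:,.0f} wrong decisions per day")
```

Even at 99.9% accuracy, three billion daily decisions leave around three million wrong ones every day, each a potential negative experience for someone.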

But more regulation will be needed. Businesses exist to make money; it’s governments who must take charge of putting in measures for the sake of the public good. Future metaverse users will be under constant surveillance. VR headsets will be tracking what users see, hear, feel and how they react, both physically and mentally. This puts current concerns about how much Google and Facebook/Meta know about us in the shade. In the metaverse, users could be subject to a constant deluge of exceptionally nuanced marketing that taps directly into the emotions felt during virtual experiences. Regulation is needed to ensure that users can control who their data is shared with and always know when they are being marketed to, whether they’re watching a video or talking to an avatar. Somehow, limits for manipulation, whether political or commercial, need to be set, so that people are free to enjoy the metaverse without fear of exploitation. 

The metaverse must be built

The consensus is that the metaverse should be built by communities, rather than by one corporate entity acting as a guiding hand at best, or wielding ultimate power at worst. Even Zuckerberg seems to agree, stating in his Meta Founder’s Letter that ‘The metaverse will not be created by one company. It will be built by creators and developers making new experiences and digital items that are interoperable and unlock a massively larger creative economy than the one constrained by today’s platforms and their policies’. Still, it’s hard to see Facebook’s name change to Meta as anything other than an attempt to ‘own’ the space. 

In 2021 alone, Meta spent $10 billion developing metaverse technologies. The company is creating 10,000 jobs in the EU as part of its growth program. It recently invested more than $50 million in non-profit groups to help ‘build the metaverse responsibly’. Other major players turning their attention to the metaverse are Epic Games, creator of Fortnite, Pokémon Go developer Niantic, graphics technology company Nvidia, blockchain-based virtual world Decentraland, Microsoft, and Apple. 

Meanwhile Elon Musk believes that his own Neuralink brain interface products will eventually offer a better way to experience virtual reality than spending much of the day trying to move around in a VR headset.

So the issues of the future metaverse, the problems around trust, privacy, transparency, manipulation, and possible harassment, are not just Meta’s to solve. All the more reason why it’s important that government regulations keep up with the technology. 

The metaverse will transform our lives. It could enrich our day-to-day experiences, and even reduce our environmental impact by allowing us to be ‘present’ in the office, ‘attend’ concerts hundreds of miles away, and ‘travel’ to see the world’s sights without ever leaving our homes. 

Like the internet in general, and social media in particular, the metaverse will hold a mirror up to our world. There’s extraordinary potential for good, and equally for bad. Meta and others cannot just go through the motions. To create trust, companies need to demonstrate that they are truly doing all they can to keep users safe. 

Above all, metaverse businesses and governments must work together to build the metaverse we want – a creative, inspiring space worthy of exploration, a place where we feel safe and protected, but have the freedom to make up our own minds. 

  1. Founder’s Letter, 2021, Meta
  2. Facebook Wants To Attract Young People, But Gen Z Teens Say It’s A ‘Boomer Social Network’ Made For ‘Old People’, Insider
  3. This 29-Year-Old Book Predicted The ‘Metaverse’ — And Some Of Facebook’s Plans Are Eerily Similar, CNBC
  4. Facebook Whistleblower Hearing: Frances Haugen Calls For More Regulation Of Tech Giant – As It Happened, The Guardian
  5. Facebook’s Name Change Receives Poor Marks In New Poll, Forbes
  6. Facebook Will Pay $52 Million In Settlement With Moderators Who Developed PTSD On The Job, The Verge
  7. The Facebook Files: A Wall Street Journal Investigation
  8. ‘I’m Happy To Lose £10m By Quitting Facebook,’ Says Lush Boss, The Guardian
  9. Ex-Facebook Executive Chamath Palihapitiya: Social Media Is ‘Ripping Apart’ Society, CNBC (via YouTube)
  10. Our New AI System to Help Tackle Harmful Content, Facebook/Meta
  11. Horizon Community, Oculus
  12. Masnick’s Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well, Techdirt
  13. Founder’s Letter, 2021, Meta
  14. Facebook Says It Expects Its Investment In The Metaverse To Reduce Its Profits By ‘Approximately $10 billion’ This Year, Insider
  15. Investing in European Talent to Help Build the Metaverse, Facebook/Meta
  16. Building the Metaverse Responsibly, Facebook/Meta
  17. Breakthrough Technology For The Brain, Neuralink
  18. Elon Musk Sits Down With The Babylon Bee, The Babylon Bee (via YouTube)