By Josh Entzminger, Mark Esposito, and Terence Tse | China Daily | Updated: 2022-08-27 08:47
The “metaverse” doesn’t exist yet, and when it arrives it won’t be a single domain controlled by a single company. Facebook wanted to create that impression when it changed its name to Meta, but its rebranding coincided with significant investments by Microsoft and Roblox. They are all shaping how we use virtual reality and digital identities to organize more of our daily lives – from work and healthcare to shopping, gaming and other forms of entertainment.
The metaverse is not a new concept. The term was coined by science fiction novelist Neal Stephenson in his 1992 book Snow Crash, which depicts a hyper-capitalist dystopia in which humanity has collectively opted for life in virtual environments. So far, the experience here in the real world has been no less dismal. Most experiments with immersive digital environments have immediately been marred by bullying, harassment, digital sexual assault, and all the other abuses we’ve come to associate with platforms that “move fast and break things.”
None of this should come as a surprise. The ethics of new technologies have always lagged behind the innovations themselves. That’s why independent parties must introduce governance models sooner rather than later — before self-interested companies do so with their profit margins in mind.
The evolution of ethics in artificial intelligence is instructive here. After a major breakthrough in AI image recognition in 2012, corporate and government interest in the field surged, attracting important contributions from ethicists and activists who published (and republished) research on the dangers of training AI on biased data sets. A new language emerged for embedding the values we want to uphold in the design of new AI applications.
Thanks to this work, we now know that AI is effectively “automating inequality,” as Virginia Eubanks of the University at Albany, State University of New York, puts it, as well as perpetuating racial bias in law enforcement. To draw attention to this problem, computer scientist Joy Buolamwini of the MIT Media Lab founded the Algorithmic Justice League in 2016.
This first-wave response aimed to highlight the ethical issues associated with AI. But it was soon overshadowed by a renewed push within the industry for self-regulation. AI developers provided technical toolkits for conducting internal and third-party assessments, hoping this would allay public concerns. It did not, because most companies seeking to develop AI have business models in open conflict with the ethical standards that the public wants to uphold.
To take the most common example, Twitter and Facebook will not effectively deploy AI against the whole host of abuses on their platforms, because doing so would undermine “engagement” (outrage) and thus profits. Likewise, these and other technology firms have exploited value extraction and economies of scale to achieve quasi-monopolies in their respective markets. They will not now willingly give up the power they have gained.
More recently, corporate and software consultants have professionalized AI ethics to address the reputational and practical risks of ethical failures. Those working on AI within Big Tech are pressed to consider questions such as whether an application should default to opt-in or opt-out; whether it is appropriate to delegate a given task to AI at all; and whether the data used to train AI applications can be trusted. To this end, many tech companies have set up supposedly independent ethics boards.
However, the reliability of this form of governance has come into question following the dismissal of high-profile in-house researchers who raised concerns about the ethical and social implications of certain AI models.
Establishing a sound ethical foundation for the metaverse requires that we get ahead of industry self-regulation before it becomes the norm. We must also bear in mind how the metaverse is already diverging from AI. Whereas AI has been largely focused on the internal operations of companies, the metaverse is emphatically consumer-facing, which means it will come with all kinds of behavioral risks that most people wouldn’t even think of.
Just as communications regulation (specifically Section 230 of the US Communications Decency Act of 1996) furnished the governance model for social media, so social media regulation will become the default governance model for the metaverse. That should concern us all. While we can easily anticipate many of the abuses that will occur in immersive digital environments, our experience with social media suggests that we may underestimate their sheer scale and spillover effects.
It would be better to overestimate the risks than to repeat the mistakes of the past fifteen years. The fully digital environment creates the possibility of more comprehensive data collection, including personal biometric data. And since no one knows exactly how people will respond to these environments, there is a strong case for using regulatory sandboxes before a wider deployment is allowed.
The metaverse’s ethical challenges can still be anticipated, but the clock is ticking. Without independent and effective oversight, this new digital realm is almost certain to go rogue, recreating all the abuses and injustices of both AI and social media, and adding still more that we did not foresee. A Metaverse Justice League may be our best hope.
Josh Entzminger is a PhD candidate in innovation and public policy at the University College London Institute for Innovation and Public Purpose. Mark Esposito, co-founder of Nexus FrontierTech, is a policy associate at the same institute and a professor at Hult International Business School. Terence Tse, co-founder and CEO of Nexus FrontierTech, is a professor at the same school.
Opinions do not necessarily reflect those of China Daily.
If you have a specific experience, or would like to share your thoughts on our stories, send us your writing at chinadaily.com.cn, and email@example.com.