The metaverse will be filled with 'elves'

Some say the metaverse is nothing but marketing hype, while others insist it will transform society. I fall into the latter camp, but I’m not talking about the cartoon worlds filled with avatars that many are pitching.

Instead, I believe the true metaverse – the one that will change society – will be an augmentation layer on the real world, and within 10 years it will be the foundation of our lives, impacting everything from shopping and socializing to business and education.

I also believe that a corporate-controlled metaverse is dangerous to society and requires aggressive regulation. That’s because the platform providers will be able to manipulate consumers in ways that will make social media seem quaint. Concerns about data collection and privacy resonate with most people, but they overlook what will be the most dangerous technology in the metaverse – artificial intelligence.

In fact, if you ask people to name the core technologies of the metaverse, they’ll usually focus on the eyewear and maybe mention the graphics engines, 5G or even blockchain. But those are just the nuts and bolts of our immersive future – the technology that will pull the strings in the metaverse, creating (and manipulating) our experience, is AI.

Artificial intelligence will be just as important to our virtual future as the headsets that get all the attention. And the most dangerous part of the metaverse will be agenda-driven artificial agents that look and act like other users but are actually simulated personas controlled by AI. They will engage us in “conversational manipulation,” targeting us on behalf of paying advertisers without us realizing they aren’t real.

This is especially dangerous when the AI algorithms have access to data about our personal interests and beliefs, habits and temperament, all while monitoring our emotional state by reading our facial expressions and vocal inflections.

If you think targeted ads on social media are manipulative, they’re nothing compared to the conversational agents that will engage us in the metaverse. They will pitch us more skillfully than any human salesperson, and it won’t just be to sell us gadgets – they will push political propaganda and targeted misinformation on behalf of the highest bidder.

And because these AI agents will look and sound like anyone else in the metaverse, our natural skepticism toward advertising will not protect us. For these reasons, we need to regulate AI-driven conversational agents, especially when they have access to our facial and vocal affect, enabling our emotions to be used against us in real time.

If we don’t regulate this, ads in the form of AI-driven avatars will sense when you’re skeptical and change tactics mid-sentence, quickly zeroing in on the words and images that impact you personally. As I wrote in 2016, if an AI can learn to beat the world’s best chess and Go players, learning to sway consumers to buy things (and believe things) that aren’t in their interest is child’s play.

But of all the technologies headed our way, it’s what I call the “elf” that will be the most powerful and subtle form of coercion in the metaverse. These “electronic life facilitators” are the natural evolution of digital assistants like Siri and Alexa, but they won’t be disembodied voices in the metaverse. They’ll be anthropomorphized personas customized for each consumer.

The platform providers will market these AI agents as virtual life coaches, and they will be a persistent presence throughout your day as you navigate the metaverse. And because the metaverse will ultimately be an augmentation layer on the real world, these digital elves will be with you everywhere, whether you are shopping or working or just hanging out.

And just like the marketing agents described above, these elves will have access to your facial expressions and vocal inflections along with a detailed data history of your life, nudging you toward actions and activities, products and services, even political views.

And no, they won’t be like the crude chatbots of today, but embodied characters you’ll come to think of as trusted figures in your life – a mix of familiar friend, helpful adviser and caring therapist. And yet, your elf will know you in ways no friend ever could, for it will be monitoring all aspects of your life down to your blood pressure and respiration rate (via your trusty smartwatch).

Yes, this sounds creepy, which is why platform providers will make them cute and non-threatening, with innocent features and mannerisms that make them seem more like a magical character in your own “life adventure” than a human-sized assistant following you around. This is why I use the word “elf” to describe them: they might appear to you as a fairy hovering over your shoulder, or perhaps a gremlin or alien – a small anthropomorphic character that can whisper in your ear or fly out in front of you to point out the things in your augmented world it wants you to focus on.

This is where it gets especially dangerous – without regulation, these life facilitators will be hijacked by paying advertisers, targeting you with greater skill and precision than anything on today’s social media. And unlike the ads of today, these intelligent agents will follow you around, guiding you through your day, and doing it all with a cute smile or a giggly laugh.

To help convey what this will be like, both positive and negative, I’ve written a short narrative, Metaverse 2030, that portrays how AI will drive our immersive lives by 2030 and beyond.

Ultimately, the technologies of VR, AR and AI have the potential to enrich and improve our lives. But when combined, these innovations become especially dangerous, as they all have one powerful trait in common – they can make us believe that computer-generated content is authentic, even if it’s an agenda-driven fabrication. It’s this capacity for digital deception that should make us fear an AI-enabled metaverse, especially one controlled by powerful corporations that sell third-party access to its users for promotional purposes.

I raise these concerns in the hope that consumers and industry leaders will push for meaningful regulation before the problems become so ingrained in the technology of the metaverse that they’re impossible to undo.