
Spectrum Labs raises $10M for its AI-based platform to combat online toxicity

With the U.S. presidential election now 40 days away, all eyes are on how online conversations, in conjunction with other hallmarks of online life like viral videos, news clips and misleading ads, will be used, and often abused, to influence people's decisions.

But political discourse, of course, is just one of the ways that user-generated content on the internet is misused for toxic ends. Today, a startup that's using AI to try to tackle all of those abuses is announcing some funding.

Spectrum Labs -- which has built algorithms and a set of APIs that can be used to moderate, track, flag and ultimately stop harassment, hate speech, radicalization and some 40 other profiles of toxic behavior, in English as well as multiple other languages -- has raised $10 million in a Series A round of funding, capital that the company plans to use to continue expanding its platform.

The funding is being led by Greycroft, with Wing Venture Capital, Ridge Ventures, Global Founders Capital and Super{set} also participating. The company has raised about $14 million to date.

Spectrum Labs' connection to combatting toxic political discourse is not incidental.

CEO Justin Davis said the startup was founded in the wake of the previous U.S. election in 2016, when he and his co-founder and CTO Josh Newman found themselves driven to build something that could help combat toxicity online. Both came from the world of marketing tech: they and about nine other employees at Spectrum worked together at Krux, and then at Salesforce after it acquired Krux. They felt that online toxicity had played a huge role not just in how that election unfolded but in the major rifts that get established, and play out every day, on the internet and beyond.

"We were all looking for some way to get involved," he said. "We wanted to use our big data experience" -- Krux's specialty was online content categorization for marketers to better measure their campaigns -- "to do some good in the world."

Spectrum Labs today works with a wide range of companies -- from gaming giants (Riot Games is one customer) to social networks (Pinterest is another), online dating sites (The Meet Group is one more), marketplaces (Mercari is a fourth), DTC brands and organizations that want to track their own internal conversations.

The company's primary platform is called "guardian" (not to be confused with the newspaper of the same name, whose logo it resembles). It comes in the form of a dashboard if you need one, or as a set of services that you can integrate into your own tools.

Customers can use the tech to check and vet their existing policies, get guidance on how to improve them and use a framework to create new samples and labels to train models to track content better in the future.
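Spectrum Labs doesn't spell out its API in detail here, but integrating a moderation service of this kind conceptually looks like the short Python sketch below. The endpoint, field names and response shape are all hypothetical, included only to illustrate the "set of services" model rather than to document the actual product:

import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint and credentials -- not Spectrum Labs' real API.
MODERATION_URL = "https://moderation.example.com/v1/analyze"
API_KEY = "YOUR_API_KEY"

def moderate_message(text, user_id):
    """Send one chat message for scoring and return the service's verdict."""
    response = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"content": text, "user_id": user_id, "language": "en"},
        timeout=5,
    )
    response.raise_for_status()
    # Imagined response shape: per-behavior scores plus a suggested action,
    # e.g. {"behaviors": {"hate_speech": 0.91, "harassment": 0.12},
    #       "recommended_action": "remove"}
    return response.json()

verdict = moderate_message("example chat message", user_id="user-123")
if verdict.get("recommended_action") == "remove":
    print("Withholding message pending human review")

The appeal of the services model is that scoring and policy stay decoupled: the vendor returns per-behavior signals, and the customer decides what gets removed, escalated to a human or left alone.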

Tools for content moderation have been around for years, but they have largely been very simplistic complements to human teams, flagging keywords and the like (which, as we now know, can throw up many false positives).
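To see why keyword matching alone struggles, consider a deliberately naive filter of the sort those older tools relied on (the word list and messages here are invented for illustration). Substring matching has no sense of context, so benign messages trip it as easily as abusive ones:

# A naive keyword filter: matches substrings with no sense of context.
BLOCKLIST = ["ass", "kill"]

def is_flagged(message):
    text = message.lower()
    return any(word in text for word in BLOCKLIST)

print(is_flagged("kill yourself"))                  # True -- a genuine hit
print(is_flagged("killer workout in class today"))  # True -- a false positive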

But more recently, advances in artificial intelligence have supercharged that work -- and none too soon, considering how online conversations have grown exponentially with the surge in popularity of social media and online chatting in general.

Spectrum Labs' AI-based platform is currently set up to scan for more than 40 profiles of toxic behavior, such as harassment, hate speech, scams, grooming, illegal solicitation and doxxing. It built that set of profiles initially in consultation with researchers and academics around the world, and continues to hone it as it ingests more data from across the web.

The startup is not the only one tapping AI to target and fix toxic behavior. Just this year, for example, we've seen the AI startup Sentropy -- also focused on social media conversations -- raise money and come out of stealth, and L1ght announce funding for its own take on tackling online toxicity.

Indeed, what has been notable is not just the emergence of other startups building businesses around fighting the good fight, but investors' interest in backing them: ventures that might not be the most lucrative, but efforts that will benefit society in the longer term.

"Justin and Josh have grit and resilience and it takes a unique set of leaders and team," said Alison Engel, a venture partner at Greycroft. "But as investors we know to solve the most systemic problems requires capital, too. You have to invest behind them. To pull it off, you will need coalitions, platforms coming together. A lot of this is a problem rooted in data and making it more robust, second is people behind it and third is the capital."

She said that it feels like there is a changing tide right now among VCs and where they choose to put their money.

"When you look at the investment community supporting and thriving on community growth you have to think, what is our value system here? We need to invest in the platforms that are part of this greater good, and you are starting to see investors responding to that."