What are the implications of removing creators from advertising inventory for speaking up about social movements or their identity?
Until recent years, automated solutions for audio advertising didn’t exist. The first to reach the market were largely keyword-driven, and related industries such as content moderation offer plenty of examples of the negative impact keyword tools have on creators and advertisers alike.
In this article, The Drum documents the systemic impact of keyword blocklists as a brand safety measure: the demonetization of conversations around race, gender, sexuality, and other social topics, an issue that is still ongoing today.
One specific challenge highlighted is that creators can be demonetized for months if they address social issues and topics related to marginalized identities in their content.
Lauren Douglass, senior VP of Global Marketing at Channel Factory, provided the following example: “Imagine someone is discussing BLM and the protests or posting content from a protest. That creator could be negatively impacted by content moderation for months until they’ve created more content that pushes that topic down. It can be a tedious and incredibly frustrating situation.”
Similarly, publishing groups like Attitude and PinkNews have “faced as much as 73% of their stories flagged as ‘brand unsafe,’ with terms such as ‘lesbian,’ ‘bisexual’ and ‘drag queens’ on the lists.”
This situation exemplifies advertising technology and investment gone wrong.
In America, freedom of speech is the constitutional right to express opinions without censorship or restraint. The demonetization of safe news content and minority creators addressing social topics that impact their communities acts as a violation of this right. It’s the economic restraint of free speech at scale through poor technology and decision-making.
It’s also against the greater interest of advertisers and brands. Keyword-driven solutions remove large swathes of safe inventory, increase cost, and go against the wishes of the very consumers with whom businesses seek to align.
In podcasting, solutions driven by keyword targeting and anti-targeting present a similar issue.
At Sounder, we know that keywords alone are an insufficient signal for brand safety, suitability, and contextual targeting tools. Keyword-driven solutions lack the nuance to understand the context in which a keyword appears, so they cannot accurately assess safety and suitability scores.
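As a simple illustration of that limitation, consider the kind of check a keyword blocklist performs. The terms and transcript below are hypothetical, not any vendor’s actual list; the point is only that a term match carries no information about intent or tone.

```python
# Minimal sketch of why naive keyword blocklists over-block safe content.
# The blocklist terms and transcript are illustrative, not a real vendor list.

BLOCKLIST = {"protest", "lesbian", "drag queen", "blm"}

def keyword_flag(transcript: str) -> bool:
    """Flag a segment if any blocklisted term appears, ignoring context."""
    text = transcript.lower()
    return any(term in text for term in BLOCKLIST)

# An educational, clearly brand-safe segment...
segment = ("Today we interview the founder of a lesbian book club about "
           "how reading groups build community.")

# ...is still marked unsafe, because the check has no notion of intent or tone.
print(keyword_flag(segment))  # True -> demonetized despite being safe
```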
To learn more about the limitations of keywords and what accurate/relevant contextual analysis in audio requires, check out our blog post “Keywords Aren’t the Key to Podcast Advertising.”
The over-blocking of diverse creators and news publishers from monetization is unacceptable. It disincentivizes authentic conversations and insights in the media.
Our team believes in the power and value of diverse voices. We also believe that well-designed, well-trained AI can support diversity in media by evaluating content accurately and at scale.
Here’s a quick breakdown of how the proprietary technology behind our automated brand safety and contextual targeting solution helps avoid the issues above altogether:
- It never punishes a creator long-term for speaking up on challenging subjects. The Audio Data Cloud tracks millions of data points over time when assessing risk, so it knows if a creator has mostly released safe content in their show but recently addressed a risky topic. By analyzing content for safety at the segment, episode, and show level (a simplified sketch of this multi-level scoring follows the list), it maximizes the amount of safe content preserved in ad inventory for investment. From there, brands can decide what’s most suitable for them.
- Its contextual analysis takes intention, tone, sentiment, and more into account when evaluating risk. Keywords simply aren’t enough. For example, a creator discussing a social movement with educational intent is lower-risk, brand-safe content, while content addressing the same topic with inflammatory intent would be higher risk. Keyword solutions cannot reliably distinguish safe uses of blocklisted words; our audio intelligence solution can.
- It learns from new audio conversations and episodes in real time, staying up to date as new terms rise and fall in their risk levels and use cases. For example, the word “pot” is much more brand-safe today than it was a few years ago.
- It fills the gaps beyond Global Alliance for Responsible Media (GARM) definitions for advertisers who seek more nuance. Humans are complex. Certain topics and words are discussed more safely by some groups than others. Context and intention matter. Oftentimes, investing in content is less a decision about its safety level than about a brand’s comfort level. Our solutions can be trained on any dataset to optimize for the specific market intelligence of advertisers while adhering to industry standards.
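To make the multi-level idea above concrete, here is a minimal, hypothetical sketch of how segment, episode, and show level scores could be combined with an intent-aware adjustment. The names, weights, and thresholds are illustrative assumptions for this post, not Sounder’s production model.

```python
# Illustrative sketch of segment/episode/show level risk scoring with a
# context-aware adjustment. All names, weights, and thresholds are
# hypothetical; this is not Sounder's production system.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Segment:
    text: str
    intent: str   # e.g. "educational" or "inflammatory", from an upstream model
    risk: float   # 0.0 (safe) .. 1.0 (high risk)

def score_segment(text: str, intent: str) -> Segment:
    """Toy scoring: the same topic gets a lower risk score when intent is educational."""
    base = 0.8 if "protest" in text.lower() else 0.1
    adjusted = base * (0.3 if intent == "educational" else 1.0)
    return Segment(text, intent, adjusted)

def episode_risk(segments: list[Segment]) -> float:
    """Aggregate per-segment scores so one risky segment doesn't taint the whole episode."""
    return mean(s.risk for s in segments)

def show_risk(episode_scores: list[float]) -> float:
    """A long history of safe episodes keeps the show-level score low."""
    return mean(episode_scores)

episode = [
    score_segment("Recap of last week's recipes.", "educational"),
    score_segment("We discuss the protest downtown and what it means.", "educational"),
]
history = [0.05, 0.08, episode_risk(episode)]

# The episode stays low-risk despite one sensitive segment, and the show's
# history of safe content keeps the overall score low.
print(round(episode_risk(episode), 2), round(show_risk(history), 2))
```

Brands could then apply their own suitability thresholds on top of these scores rather than excluding the creator’s inventory outright.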
Of course, even AI can inherit the bias of the team that built it and the data that was used for training and testing. That’s why having a diverse team of internal and third-party annotators to review the solution’s output is essential to its performance.
The issue of diversity in advertising technology can’t be solved overnight, but at Sounder we’re deeply cognizant of the problem and have prioritized the capability to handle such complexity in our model design since day one.
We also work closely with our partners and industry standards groups like GARM to ensure the issue is addressed at all of its source points (e.g., training models on diverse publishing datasets, collaborating with the makers of safety classifications to ensure they’re correctly applied to audio content).
Long term, we hope to expand our brand safety technology beyond advertising to address the need for AI/ML safety tools in content moderation and help relieve the serious mental health impact currently faced by hundreds of thousands of moderators.
To learn more about how our audio data solutions can support your publishing, advertising, or research work—just reach out!
About Mercan Topkara, Chief AI Officer at Sounder
Mercan Topkara brings 20+ years of experience building machine learning and artificial intelligence consumer products for Podnods, Luminary, Teachers Pay Teachers, JW Player, and IBM Watson. She holds a Ph.D. in natural language processing from Purdue University. She’s highly passionate about ethical AI and dedicated to the creation of technology that solves for large-scale challenges impacting societies and industries.