Intel’s AI-powered Bleep tech lets you choose exactly how much online hate speech you hear
Last month, Intel demoed a beta version of Bleep, its in-development AI technology that will “detect and redact audio based on user preferences”.
Bleep is specifically being designed to combat harmful language spewed online during gaming sessions and has been in development since at least 2019. It has now come to the wider attention of the internet thanks to screenshots from Intel’s recent GDC 2021 demo, which reveal its interface.
With multiple sliders and toggles, Bleep’s interface is being designed to let people pick which categories of hate speech it will filter out, including “Racism and Xenophobia”, “White nationalism”, “LGBTQ+ Hate” and “Misogyny”. Each category has its own slider, which you can individually set to block “none”, “some”, “most” or “all” of that kind of speech.
A simple on/off switch is shown in the UI to redact the N-word “including all its variations”.
Other categories include “Ableism and Body Shaming”, “Aggression”, “Name-calling”, “Sexually Explicit Language” and “Swearing”.
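Intel hasn’t published how these settings are represented internally, but the per-category sliders and the standalone toggle could be sketched as a simple settings structure. This is purely illustrative: the category names come from the demo screenshots, while the tier semantics, the `should_redact` helper and everything else here are assumptions.

```python
# Hypothetical sketch of Bleep-style filter settings; not Intel's actual code.
from enum import IntEnum

class FilterLevel(IntEnum):
    """Slider positions from the demo UI: block none/some/most/all."""
    NONE = 0
    SOME = 1
    MOST = 2
    ALL = 3

# One slider per category shown in the demo, all starting at "none".
settings = {
    "Racism and Xenophobia": FilterLevel.NONE,
    "White nationalism": FilterLevel.NONE,
    "LGBTQ+ Hate": FilterLevel.NONE,
    "Misogyny": FilterLevel.NONE,
    "Ableism and Body Shaming": FilterLevel.NONE,
    "Aggression": FilterLevel.NONE,
    "Name-calling": FilterLevel.NONE,
    "Sexually Explicit Language": FilterLevel.NONE,
    "Swearing": FilterLevel.NONE,
}

# The N-word filter is a separate on/off switch rather than a slider.
n_word_filter_enabled = False

def should_redact(category: str, severity: FilterLevel) -> bool:
    """Assumed semantics: redact a detected utterance when the user's
    slider for that category is set at or above the utterance's tier."""
    return settings.get(category, FilterLevel.NONE) >= severity
```

Under this (assumed) model, setting the “Swearing” slider to “all” would make `should_redact("Swearing", FilterLevel.MOST)` return `True`, while an untouched category passes everything through.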
To be clear, this is a real product in development at Intel for release later this year. It is not a concept cut from Black Mirror.
“With Bleep, we’re enabling gamers to take control of their conversations, one key step to eliminating toxicity in gaming today,” Intel marketing engineer Craig Raymond said. “The app interfaces our AI models into the Windows architecture to integrate the feature transparently into your voice applications.”
Intel quoted research from the Anti-Defamation League which found that, of 1,000 US video game players surveyed, around a quarter had been forced to quit playing at some point because of harassment.
“We realise technology isn’t the complete answer,” Intel exec Roger Chandler said, “but we believe it can help mitigate the problem while deeper solutions are explored.”