“When a user tries to upload a terrorist’s photo or video, our systems look for whether the image matches a known terrorism photo or video.”
“This means that if we previously removed a propaganda video from IS, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform,” Facebook explained.
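Facebook has not published the details of its media-matching technology; the general approach of checking new uploads against fingerprints of previously removed content can be sketched as follows. This minimal sketch uses an exact SHA-256 digest as a stand-in for whatever proprietary (likely perceptual) hashing Facebook actually uses, and the database contents are invented.

```python
import hashlib

# Hypothetical store of hashes of previously removed propaganda media.
# An exact cryptographic hash is a simplification: real systems use
# perceptual hashes that survive re-encoding and minor edits.
known_bad_hashes = {
    hashlib.sha256(b"previously-removed propaganda video bytes").hexdigest(),
}

def blocked_at_upload(file_bytes: bytes) -> bool:
    """Return True if the upload matches previously removed content."""
    return hashlib.sha256(file_bytes).hexdigest() in known_bad_hashes

print(blocked_at_upload(b"previously-removed propaganda video bytes"))  # True
print(blocked_at_upload(b"an unrelated family photo"))                  # False
```

Because matching happens at upload time, a flagged file can be rejected before it is ever visible on the platform, which is what Facebook means by content that "simply never reaches the platform."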
Facebook also applies AI to language understanding, using it to interpret text that may be advocating terrorism. Its algorithms likewise identify pages, groups, posts or profiles that support terrorism.
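Facebook's text classifiers and training data are proprietary; the idea of scoring text for terrorism advocacy and routing high-scoring items to review can be illustrated with a toy weighted-keyword model. The term list, weights, and threshold below are all invented for illustration; a production system would use trained machine-learning models rather than a fixed dictionary.

```python
# Invented term weights standing in for a trained text classifier.
FLAGGED_TERMS = {"martyrdom": 2.0, "caliphate": 1.5, "jihadist": 1.5}

def risk_score(text: str) -> float:
    """Sum the weights of flagged terms appearing in the text."""
    words = text.lower().split()
    return sum(FLAGGED_TERMS.get(w, 0.0) for w in words)

def needs_review(text: str, threshold: float = 2.0) -> bool:
    """Route text above the (invented) threshold to human review."""
    return risk_score(text) >= threshold
```

Scoring rather than hard-blocking matters here: unlike exact media matches, language is ambiguous, so borderline text is typically escalated to human reviewers instead of being removed automatically.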
“We’ve also gotten much faster at detecting new fake accounts created by repeat offenders. Through this work, we’ve been able to dramatically reduce the time period that terrorist recidivist accounts are on Facebook,” the company said.
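Facebook does not say how it links new fake accounts to banned repeat offenders; one plausible shape of the problem is comparing a new account's signup signals against fingerprints of previously banned accounts. The signal names, fingerprint data, and overlap rule below are hypothetical.

```python
# Hypothetical fingerprints of previously banned accounts.
banned_fingerprints = [
    {"device_id": "dev-123", "ip": "203.0.113.7", "display_name": "abu_x"},
]

def likely_recidivist(new_account: dict, min_overlap: int = 2) -> bool:
    """Flag a new account sharing enough signals with a banned one.

    The two-signal threshold is an invented heuristic; a real system
    would weight signals and tolerate noise (shared IPs, common names).
    """
    for fingerprint in banned_fingerprints:
        overlap = sum(
            1 for key, value in fingerprint.items()
            if new_account.get(key) == value
        )
        if overlap >= min_overlap:
            return True
    return False
```

Catching such accounts at creation, before they post, is what shortens the window in which "terrorist recidivist accounts" are live on the platform.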