Combatting online extremism: Tech Against Terrorism
The spread of extremist content in cyberspace can be difficult to contain. Harry Lye finds out from Tech Against Terrorism’s research manager Jacob Berntsson how the organisation is working to find and remove sources of terrorist content online.
In a connected world, the opportunities for terrorists to communicate and spread propaganda have never been greater. From videos shared on Facebook and Twitter to more niche instant-messaging services such as Telegram, the range of communications channels available to terrorists continues to grow.
One venture, Tech Against Terrorism, is trying to turn the tables on extremist use of the internet. What started as part of the United Nations Counterterrorism Executive Directorate (UNCTED) morphed into the public-private partnership now known as Tech Against Terrorism, whose mission, mandated under UN Security Council resolutions, is to help companies stamp out the extremist content riddling their sites.
The endeavour tackles a host of issues, from tracking and finding offensive content to helping small tech companies that do not have the same resources as the likes of Facebook or Google to nip such content in the bud.
How do terrorists use the internet?
Terrorists – whether Islamic extremists or, increasingly, the far right – have used the internet to their advantage in getting their messages out into the world. As communication networks have joined people across the globe, they have also opened doors to a world of extremist content.
From mainstream social networks and established media, to encrypted messaging apps such as Telegram or WhatsApp, to web archivers, there are a multitude of ways terrorists use the internet to advance their goals. Add cryptocurrency to the mix, and networks can be used not only to share extremist content with receptive audiences, but also to raise funds.
As Tech Against Terrorism’s research manager Jacob Berntsson explains, the success of mainstream social media companies and their ability to invest in counter-terror content measures has led to a diversification of delivery platforms for extremist content.
“I think we very much see this as an ecosystem problem; this is a problem that affects all kinds of tech platforms, and all kinds of technologies,” Berntsson says. “A lot of media focus is often on Facebook, Twitter and YouTube. And you know, for good reason; we work with them and we do think that they often get an unfairly bad reputation, given how much resources they have invested and how much improvement we have seen on those platforms. That being said, there's still a lot more to do.
“But, you know, one of the reasons why we're seeing smaller platforms being exploited by terrorist groups is because some of the larger tech companies have done a pretty good job in terms of getting rid of at least the worst type of ISIS content, for example.”
Terrorists use different media platforms for different things. Just as an ordinary person may use Facebook Messenger to chat, a terrorist may use an encrypted messaging service or a forum such as 8chan. As we may use Google Drive to share a file, terrorists may use web archivers to save and share extremist content.
“Terrorists don't only use one app or one platform, they use different apps for different purposes,” Berntsson explains. “So in terms of strategic purposes, you have propaganda, you have the use of social media, content storage sites, and so on.
“There's a tactical purpose we often see in the use of encrypted messaging, but also financial technologies and crowdfunding. It's more cryptocurrencies, even though they haven't really been exploited that much to date.”
However, as Berntsson explains, the best-laid plans of Tech Against Terrorism and the social networks can sometimes be unravelled by established media. This sadly became all too apparent after the Christchurch shooting: while the killer’s livestream and manifesto were being pulled from Facebook, they were simultaneously being widely circulated across tabloids and other news websites.
“Sadly, we've all seen the mainstream media play this role,” Berntsson says. “So that was evident in both the Christchurch attack and also the Halle attack last summer where tech companies are working around the clock to take content down, but you see it recirculated on The Daily Mail and so on.
“We are not in the business of media regulation, obviously, but we're concerned that a lot of the outlets that are very quick to criticise some of the tech companies are actually contributing to the problem in some instances. For us it is important that the discussion includes them as well and also showcases the sort of ecosystem angle.”
How do you find extremism online?
The next logical question when you learn about extremist use of the internet is, where do you find this content? After all, if you want to remove terrorist videos or manifestos from the internet you first have to find them.
Tech Against Terrorism uses a number of means to identify and locate the sources of extremist content in order to take them down, including its own in-house open source intelligence team.
“A lot of content is found by effectively using beacon platforms,” Berntsson explains. “So Telegram has been a very good source in terms of identifying URLs that link to the platforms of concern.
“There are a lot of smaller file sharing sites that we are concerned about, and that we work closely with; we have identified that simply through sort of monitoring relevant Telegram channels and groups. And that's true for both the Islamist terrorist groups and the far right.”
Berntsson notes that finding the sources of content is sometimes only half the battle, citing a Europol operation to rid Telegram of terrorists that resulted in them spreading to new platforms, where they had to be discovered all over again. While the operation succeeded in stopping the spread of content on one platform, it was not without its drawbacks.
“I'm not criticising this individual operation, but the fact that we're now seeing a bifurcation of terrorists' online ecosystems and operations means that now there's a lot of people with no idea what's going on,” he explains.
“A lot of intelligence agencies have built entire monitoring operations based on Telegram. For that reason, a lot of researchers have actually argued that it's better to marginalise terrorist propaganda to platforms like Telegram where they can be more contained, than to sort of open a Pandora's Box with a removal campaign.”
Looping back to Berntsson’s original point, a lot of content that shouldn’t be on the web can be found through the concerted, efficient monitoring of beacons. We know how to find the content, we know how to take it down, the mission now is to build an ecosystem that helps tech companies big and small do just that.