
Terrorist networks, not surprisingly, are focusing their efforts on weaponizing artificial intelligence (AI). A recent study by MEMRI documents the use of artificial intelligence by Hamas, Al-Qaeda, the Islamic State (ISIS), the Houthis and Hezbollah, among others:
"ChatGPT Advises Users On How To Attack A Sports Venue, Buy Nuclear Material On Dark Web, Weaponize Anthrax, Build Spyware, Bombs...."
The Associated Press reported on December 15:
"Such groups spread fake images two years ago of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarization while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere."
MEMRI noted:
"[T]errorists have begun to use generative AI chatbots to more easily, broadly, anonymously, and persuasively convey their message to those vulnerable to radicalization – even children – with attractive video and images that claim attacks, glorify terrorist fighters and leaders, and depict past and imagined future victories....
"As supporters of terrorist organizations like ISIS and Al-Qaeda follow the development of AI, they are increasingly discussing and brainstorming how they might leverage that technology in the future, and the full consequences of terrorist organizations' adoption of this sophisticated technology are difficult to foresee. Its biggest benefit to jihadi groups may come not in supercharging their propaganda, outreach, and recruiting efforts – though that may be significant – but in AI's potential ability to expose and find ways to take advantage of as-yet-unknown vulnerabilities in the complex security, infrastructure, and other systems essential to modern life – thus maximizing future attacks' destruction and carnage."
Israeli Professor of Criminology Shai Farber recently wrote in the Journal of Strategic Security:
"AI enables terrorist groups to analyze vast quantities of data effectively, identify tactical weaknesses, and refine their targeting strategies with increasing precision....
"Technical sophistication in terrorist use of AI continues to advance rapidly. Generative adversarial networks now enable terrorist groups to simulate and evaluate potential attack scenarios in virtual environments, allowing for comprehensive outcome assessment before execution. The emergence of advanced AI language models has further transformed terrorist capabilities, enabling the automated production of convincing, personalized propaganda material for radicalization and recruitment."
AI helps terrorists both in their mass propaganda and psychological campaigns and in recruiting individuals, according to Farber:
"[T]errorist groups deploy AI-powered chatbots and social media bots to engage with potential recruits at scale, adapting their messaging based on target audiences... machine learning algorithms enable the micro-targeting of individuals with personalized propaganda, analyzing vast datasets to identify patterns and predict which messages will resonate with specific demographic groups... this technological capability allows terrorist organizations to automate the production and distribution of misinformation on a massive scale....
"AI is shifting the nature of terrorist influence operations from traditional propaganda to highly personalized psychological manipulation. Recent incidents, such as the use of AI-generated videos in the aftermath of terrorist attacks to sow panic and misinformation (NCTC, 2024), exemplify this trend... terrorist groups are increasingly deploying AI-powered chatbots and generative models to create false narratives, simulate credible sources, and erode trust in state institutions. This aligns with the hypothesis that future conflicts will extend beyond the kinetic realm into cognitive and informational domains, where AI will play a key role in shaping public perception and decision-making processes."
"For any adversary, AI really makes it much easier to do things," cautioned John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact."
Meanwhile, Google is helping Qatar's terror-promoting Al Jazeera television network become even more effective at spreading terrorist propaganda: On December 21, Al Jazeera announced that it was expanding its collaboration with Google Cloud on the network's new initiative, "The Core," which will integrate AI into its news operations.
"This transformational program leverages our advanced AI tools to reshape how journalists report and create news, and how audiences consume it. Together, Google Cloud and Al Jazeera are setting a new future direction for digital journalism," Alex Rutter, AI managing director for Europe, the Middle East and Africa at Google Cloud said, praising Al Jazeera's decision to build "The Core" platform as a "pivotal step in developing the next generation of intelligent media".
Already in 2017, Google and Al Jazeera announced "a global partnership... to help cement Al Jazeera's position as a digital-first broadcaster and accelerate its growth through use of Google technology." Google appears to be playing a leading role in making Al Jazeera's terrorist propaganda mainstream.
There is a need, MEMRI has warned, "to consider and plan now for AI's possible centrality in the next mass terror attack – just as the 9/11 attackers took advantage of the inadequate aviation security of that time."
Google's former CEO Eric Schmidt says he is concerned about just such a scenario:
"The fears I have are not ones that most people talk about AI – I talk about extreme risk... This technology [for instance, biological weapons] is fast enough for them to adopt so that they could misuse it and do real harm. I'm always worried about the 'Osama Bin Laden' scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people."
Perhaps a start would be for the US government to "look into" what world-leading AI companies, such as Google, are doing to aid supporters and promoters of terrorism such as Al Jazeera.
Robert Williams is based in the United States.

