Tuesday, December 16

AI Emerging as a New Weapon for Terrorists, Raising Concerns in the US

Artificial Intelligence (AI) is increasingly becoming a tool for terrorist organizations, prompting concern even in powerful nations like the United States. Recent reports indicate that extremists are leveraging AI to recruit new members, create fake photos and videos, and enhance cyberattack capabilities.

How Terrorist Groups Are Using AI
Experts in national security and intelligence warn that AI could serve as a force multiplier for extremist groups. Even small or underfunded organizations can exploit AI tools to generate propaganda, fabricate realistic visuals, and execute more sophisticated cyber operations.

According to The Hindu, in November 2025 a pro-Islamic State (IS) website published instructions urging followers to integrate AI into their activities. The post highlighted AI's ease of use, urged extremists to set aside fears of intelligence agencies, and noted that the technology makes recruitment easier. Once dominant in Iraq and Syria, IS has since formed alliances with various militant groups that share its violent ideology. These groups had already mastered social media recruitment and disinformation campaigns, making AI adoption a natural progression.

AI Amplifies Threats
AI programs, including tools similar to ChatGPT, allow extremists to create lifelike photos, videos, and audio recordings at scale with minimal resources. John Laliberte, a cybersecurity firm CEO, emphasized that AI significantly lowers the barrier to entry for adversaries, enabling small groups to achieve disproportionate impact.

Research from the Site Intelligence Group shows IS has used AI to generate audio recordings of leaders delivering extremist messages and to quickly translate communications into multiple languages. While countries like China, Russia, and Iran are far ahead in AI deployment, the widespread availability of cost-effective AI tools increases risks globally. Hackers are already exploiting deepfakes and AI-generated media for misinformation. Experts also warn that AI could potentially assist in designing weapon systems, a concern highlighted in a recent US Department of Homeland Security report.

US Lawmakers Take Notice
Terrorist groups previously exploited platforms like Twitter for recruitment; now they are moving to adopt emerging technologies like AI. In response, US lawmakers have proposed measures requiring AI developers to report misuse to authorities. Senator Mark Warner suggested that companies should make it easier to report when AI tools are misused by bad actors. Recent hearings also revealed that IS and Al-Qaeda have trained supporters in the use of AI, underscoring the urgent need for global monitoring and regulation.

