Government’s Stance and Challenges:
Foyez Ahmad Taiyyab, Special Assistant to the Chief Adviser of the interim government on Information and Technology, stated that the Cyber Security Act provides guidelines for addressing AI misuse. “Police and other law enforcement agencies can take action under this law if they choose,” he said. However, he acknowledged that bringing everyone under legal scrutiny is impractical. Reflecting on past efforts, Taiyyab noted that the previous government had asked Meta and YouTube to remove misleading content, but compliance was inconsistent. “We’ve urged Meta to strictly adhere to their community guidelines and remove content that violates them, but they are reluctant to invest in these areas,” he added. He attributed the public’s susceptibility to AI-generated fakes to low digital literacy in Bangladesh.
Milestone Tragedy: A Case Study in Disinformation: On July 21, a Bangladesh Air Force F-7 BGI fighter jet crashed into Milestone School and College in Dhaka’s Uttara due to mechanical failure, and the national tragedy quickly became a breeding ground for disinformation. Social media was flooded with AI-generated videos purporting to depict the crash, which went viral on the strength of their dramatic, realistic appearance. The fact-checking organization Rumor Scanner confirmed these videos had been fabricated with Google’s Veo AI video-generation tool, pointing to telltale errors such as misspelled names and inconsistent building structures. The incident underscored how AI can distort sensitive events and heightened concerns about its potential impact on the upcoming election.
Election-Related Concerns: The spread of AI-driven disinformation ahead of the 13th National Election has reached alarming levels. According to Dismislab, a fact-checking organization, more than 70 AI-generated political videos were published in June and July, amassing over 23 million views. Some of these videos presented fictional women, rickshaw pullers, and fruit vendors as supporters of parties such as Jamaat, the BNP, or the Awami League, while others spread baseless claims about government officials. Particularly troubling is the rise of deepfake videos targeting female candidates, an early sign of digital violence that could intensify as the election approaches.
A report titled Cyber and Gender-Based Violence in Bangladesh found that 76% of the victims of AI-driven digital violence in the first half of this year were women, many of them politically active. A survey by ActionAid Bangladesh found that 64% of women had faced online harassment involving AI-generated content, 70% of which was sexually explicit. This trend not only threatens personal safety but also deters women’s political participation and inflicts psychological trauma.
Expert Opinions: Professor Md. Abdur Razzaq, Chairman of the Computer Science and Engineering Department at Dhaka University, emphasized the need for proactive measures. “While AI has immense positive potential, its misuse must be curbed through preventive systems. We need vigilance teams, technology experts, active policymakers, and public awareness campaigns,” he said.
Shameem Sarkar, Head of the Technology Division at a London-based multinational company, warned, “AI-driven disinformation could surge hundreds of times before the election, especially targeting female candidates. Legal frameworks, media literacy, fact-checking platforms, women’s safety measures, and coordination with social media platforms are critical to counter this threat.”
Causes and Impact of AI Misuse: Affordable, accessible AI tools such as HeyGen, DeepFaceLab, and Synthesia let virtually anyone create hyper-realistic fake content, turning political narratives into fictional spectacles. Experts note that negative content earns more organic reach, amplifying confusion. In the Milestone tragedy alone, more than 50 AI-generated videos misrepresented the event, fueling rumors that the crash was a planned attack. Such disinformation, often built by using AI to manipulate screenshots or fabricate social media posts, distorts reality and erodes public trust.
Proposed Solutions: Experts advocate a multi-pronged approach to tackling AI misuse. First, creating and circulating AI-generated fake content should be criminalized. Second, technical measures such as blockchain-based provenance systems could certify authentic media and make deepfakes easier to flag. Third, social media platforms must collaborate to identify and remove AI-generated content. Professor Razzaq stressed, “A robust monitoring team would deter perpetrators. The Election Commission must ensure candidate safety by deploying adequate resources.”
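To make the provenance idea concrete, the following is a minimal sketch in Python, not any platform’s actual system: a publisher registers the SHA-256 fingerprint of a video in a small hash-chained ledger at publication time, and a fact-checker can later test whether a circulating file matches a registered original. Every name here (ProvenanceLedger, register, is_registered) is hypothetical.

```python
import hashlib
import json
import time


class ProvenanceLedger:
    """Toy append-only ledger: each block commits to the previous block's
    hash, so altering an old entry breaks every later link."""

    def __init__(self) -> None:
        genesis = {"index": 0, "prev": "0" * 64, "content_hash": None,
                   "ts": time.time()}
        genesis["hash"] = self._digest(genesis)
        self.blocks = [genesis]

    @staticmethod
    def _digest(block: dict) -> str:
        payload = json.dumps(
            {k: block[k] for k in ("index", "prev", "content_hash", "ts")},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def register(self, content_hash: str) -> None:
        """Record a video's fingerprint at publication time."""
        block = {"index": len(self.blocks), "prev": self.blocks[-1]["hash"],
                 "content_hash": content_hash, "ts": time.time()}
        block["hash"] = self._digest(block)
        self.blocks.append(block)

    def is_registered(self, content_hash: str) -> bool:
        return any(b["content_hash"] == content_hash for b in self.blocks)

    def chain_intact(self) -> bool:
        """Detect after-the-fact tampering with any entry."""
        for prev, cur in zip(self.blocks, self.blocks[1:]):
            if cur["hash"] != self._digest(cur) or cur["prev"] != prev["hash"]:
                return False
        return True


# Hypothetical usage: a newsroom registers footage, a fact-checker verifies.
ledger = ProvenanceLedger()
original = b"raw bytes of the published video"      # stand-in for a real file
ledger.register(hashlib.sha256(original).hexdigest())

suspect = b"raw bytes of a circulating fake"
print(ledger.is_registered(hashlib.sha256(suspect).hexdigest()))  # False
print(ledger.chain_intact())                                      # True
```

One caveat: a cryptographic hash changes completely under any re-encoding or cropping, so exact-match provenance must be paired with the perceptual matching that platforms use at scale, discussed next.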
Sarkar urged the government to negotiate with platforms like Facebook, X, and YouTube to enforce stricter AI content controls. “If Hong Kong can restrict ad promotions, why can’t Bangladesh demand AI content removal? Without immediate action, disinformation could destabilize the election process,” he warned. He noted that AI-generated content could be flagged and removed automatically, much as copyrighted or violent material already is, if platforms adopted stricter policies.
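As an illustration of how such automatic flagging can work, the sketch below mimics the fingerprint-matching idea behind copyright systems such as YouTube’s Content ID, though it is a simplified assumption rather than any platform’s real pipeline. It computes an 8x8 average hash of a video frame with Pillow and flags frames whose fingerprint lies close to a blocklist of known fakes; the 10-bit threshold is an illustrative choice, and register_fake and looks_like_known_fake are hypothetical names.

```python
from PIL import Image  # pip install Pillow


def average_hash(img: Image.Image, size: int = 8) -> int:
    """Perceptual fingerprint: shrink to size x size grayscale, then set one
    bit per pixel according to whether it is brighter than the mean."""
    small = img.convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Hypothetical blocklist built from frames of videos already debunked.
KNOWN_FAKE_HASHES: set[int] = set()


def register_fake(frame: Image.Image) -> None:
    KNOWN_FAKE_HASHES.add(average_hash(frame))


def looks_like_known_fake(frame: Image.Image, threshold: int = 10) -> bool:
    """Flag a frame whose fingerprint is within `threshold` bits of any
    registered fake; small distances survive re-encoding and mild edits."""
    h = average_hash(frame)
    return any(hamming(h, known) <= threshold for known in KNOWN_FAKE_HASHES)


# Demo with a synthetic frame (real use would decode frames from video).
fake_frame = Image.linear_gradient("L")       # stand-in for a debunked frame
register_fake(fake_frame)
reupload = fake_frame.resize((320, 180))      # simulates a resized re-upload
print(looks_like_known_fake(reupload))        # True: fingerprints stay close
```

Unlike the exact hash used for provenance above, this fingerprint changes only slightly when a video is compressed, resized, or lightly edited, which is what makes blocklist matching feasible at platform scale.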
The misuse of AI in Bangladesh threatens information security and democratic processes. With the national election looming, coordinated legal, technological, and awareness-driven efforts are urgently needed to curb AI-driven disinformation. Failure to act could lead to catastrophic consequences for the country’s political and social stability.