How Cybercriminals Abuse AI and ML to Launch Sophisticated Cyberattacks

Threat actors can misuse advanced technologies such as Artificial Intelligence (AI) and Machine Learning (ML) to launch sophisticated cyberattacks and devise new kinds of malicious operations. A joint report from the United Nations Interregional Crime and Justice Research Institute (UNICRI), Europol, and cybersecurity firm Trend Micro highlights current and predicted cyberthreats that leverage AI technology. The report predicts that AI systems will increasingly be developed to enhance the effectiveness of malware and to disrupt anti-malware and facial recognition systems.

The report revealed that hackers could use AI to support:

  • Convincing social engineering attacks at scale
  • Document-scraping malware to make attacks more efficient
  • Evasion of image recognition and voice biometrics (a rough illustration follows this list)
  • Ransomware attacks, through intelligent targeting and evasion
  • Data pollution, by identifying blind spots in detection rules
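
The "evasion of image recognition" item above corresponds to what researchers call adversarial examples. As a rough, hypothetical illustration of the idea (the report itself does not include code), the sketch below uses PyTorch and the Fast Gradient Sign Method to nudge an input image just enough that a toy classifier may mislabel it; the model, input image, and epsilon value are illustrative stand-ins chosen for this example.

```python
# Hypothetical sketch of an adversarial "evasion" perturbation (FGSM).
# The classifier, input image, and epsilon are illustrative stand-ins only.
import torch
import torch.nn.functional as F

# Toy image classifier: flattens a 3x32x32 image and maps it to 10 classes.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
label = torch.tensor([0])                             # stand-in true class

# Gradient of the classification loss with respect to the input pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Move every pixel a small step in the direction that increases the loss,
# keeping the result a valid image. The change is barely visible to a human,
# but it can be enough to flip the model's prediction.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("largest per-pixel change:", (adversarial - image).abs().max().item())
```

The same gradient-based principle underlies more capable evasion attacks, which is why the report flags image recognition and voice biometrics as areas of concern.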

Deepfakes: A Popular AI-Based Attack Vector

According to the report, deepfakes are currently the most popular malicious use of AI by threat actors. Deepfakes are images, videos, or audio clips manipulated or fabricated with AI and ML techniques so that they appear authentic. Hackers often use deepfakes to sow confusion and drive disinformation campaigns, most of them political.

“One of the more popular abuses of AI are Deepfakes, which involve the use of AI techniques to craft or manipulate audio and visual content for these to appear authentic. Because of the wide use of the internet and social media, Deepfakes can reach millions of individuals in different parts of the world at unprecedented speeds,” the report stated. 

The three organizations also laid out several recommendations:

  • Harness the potential of AI technology as a crime-fighting tool to future-proof the cybersecurity industry and policing.
  • Continue research to stimulate the development of defensive technology.
  • Promote and develop secure AI design frameworks.
  • De-escalate politically loaded rhetoric on the use of AI for cybersecurity purposes.
  • Leverage public-private partnerships and establish multidisciplinary expert groups.

“AI promises the world greater efficiency, automation and autonomy. At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology. This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems,” said Edvardas Šileris, Head of Europol’s Cybercrime Centre.

“As AI applications start to make a major real-world impact, it’s becoming clear that this will be a fundamental technology for our future. However, just as the benefits to society of AI are very real, so is the threat of malicious use,” said Irakli Beridze, Head of the Centre for AI and Robotics at UNICRI.