
Artificial Intelligence and Cybersecurity: A Double-Edged Sword

AI holds great potential for our corporations and companies, but it also entails risks that can be deeply detrimental to mankind. A distinct set of threats arises when a person or an organization intentionally exploits AI systems as a weapon. But it is not all bad news: AI can also be very effective in network monitoring and analysis, for instance.

Artificial Intelligence

As artificial intelligence (AI) becomes a hot topic, there is also a growing amount of misinformation and confusion about what it can do and the risks it presents. Decades of literature and film have left a cultural legacy of dystopian visions: human downfall at the hands of omniscient machines. On the other hand, many people understand AI's beneficial potential to accelerate the evolution of our society. Although computer systems can learn, reason, and act, these capabilities are still in their early stages. Machine Learning (often abbreviated as ML) needs huge amounts of data simply to learn, data that serves as training (or "coaching", depending on the function assigned to the AI). Allowing AI access to information and giving it full autonomy therefore carries serious risks that must be considered.

By Pierguido Iezzi, Cyber Security Director and co-founder of Swascan

The first risk of artificial intelligence is intrinsically linked to its creation. The human mind bears the ancestral burden of error. Accidental bias, or more simply an error by the programmer or in the dataset, can generate countless errors on the part of the AI. Incorrect design can also lead to over- or under-sizing of the computational system, causing the machine to make unnecessarily complex decisions or bringing it to a halt. Providing control systems, such as human supervision, and rigorously testing AI systems during the design stage can reduce these risks. The decision-making capabilities of computing systems must be measured and evaluated so that any biases or questionable decisions are quickly identified and, if necessary, corrected. While the risks described so far stem from unintentional errors and flaws in design and implementation, a different set of threats arises if a person or an organization intentionally exploits AI systems as a weapon.

Cybercrime perpetrators can "trick" AI more easily than one might think, and "train" it to their advantage

Deceiving an AI system can be surprisingly easy. Cybercriminals can manipulate the datasets fed to the computer (data poisoning), "training" the AI by making even minor changes to the control parameters in order to steer it in the desired direction. If attackers cannot access the datasets, they can instead use tampering techniques to force computational errors or to make it difficult for the AI to correctly interpret its inputs. Even though verifying the accuracy of every data point may not be feasible, if only for financial reasons, professionals should make every effort to collect data from reliable and verified sources. Possible defensive countermeasures against such hacking attempts include the ability to isolate individual segments, or entire AI systems, with automatic prevention mechanisms.
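To make the poisoning idea concrete, here is a minimal sketch, using entirely hypothetical data and a deliberately simple nearest-centroid classifier standing in for a real ML model: an attacker injects a handful of mislabeled samples near the malicious cluster, and traffic that the clean model flagged as malicious is now classified as benign.

```python
# Toy data-poisoning sketch (hypothetical data, illustrative classifier only).
# The attacker cannot change the model, only slip extra samples into training.

def centroid(points):
    """Mean point of a list of equal-length coordinate tuples."""
    n, dims = len(points), len(points[0])
    return tuple(sum(p[i] for p in points) / n for i in range(dims))

def train(samples):
    """samples: list of (features, label) -> dict of per-label centroids."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the label whose centroid is closest (squared distance)."""
    return min(model, key=lambda y: sum((a - b) ** 2 for a, b in zip(x, model[y])))

# "Benign" traffic clusters near (0, 0); "malicious" traffic near (5, 5).
clean = [((0.0, 0.1), "benign"), ((0.2, 0.0), "benign"), ((0.1, 0.2), "benign"),
         ((5.0, 5.1), "malicious"), ((5.2, 4.9), "malicious"), ((4.9, 5.0), "malicious")]

# Poisoning: a few attacker-controlled samples near the malicious cluster,
# deliberately mislabeled as benign, drag the benign centroid toward it.
poisoned = clean + [((4.5, 4.5), "benign")] * 4

probe = (3.0, 3.0)  # borderline traffic the attacker wants waved through
verdict_clean = predict(train(clean), probe)      # "malicious"
verdict_poisoned = predict(train(poisoned), probe)  # "benign"
```

Four mislabeled points are enough here because the toy model is so simple; the broader point is that the attack targets the data, not the code.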

The power of Deepfakes

Cybercriminals can also use AI to make their attack and social engineering strategies more effective and efficient. Artificial intelligence can be fed datasets on hacking activity in order to determine which techniques have proved most effective. All the strategies that cybercriminals currently employ could be dramatically improved with AI. The other potential field of application of AI by criminal organizations is identifying new flaws in the code of software, apps, or sites. In this case, AI would provide criminal hackers with a list of potential points of attack, like a well-trained bloodhound.

And we should not forget the rising shadow of deepfakes, perhaps the next frontier of social engineering attacks. Consider what happened in 2019 in the U.K., where the CEO of a company fell victim to a purpose-built scam: a phone call based on an audio deepfake. Believing he was talking to his boss, the victim sent nearly $250,000 to the criminal hackers' address without batting an eyelid. The phone-call scam is, without a doubt, one of the most bizarre applications of deepfake technology.

However, as we have seen, it is one that can clearly be applied successfully and convincingly. So much so that the CEO who fell victim to the attack stated that he recognized his boss's voice by its "slight German accent" and "melodic cadence." As if that weren't enough, sophisticated technology aside, the process behind building the fake audio is surprisingly simple. Criminal hackers need only use machine learning technology to clone an individual's voice, typically relying on spyware and devices that allow them to collect several hours of recordings of the victim speaking.

The more data they are able to collect, and the better the quality of the recordings, the more accurate and potentially damaging the voice cloning will be in practice. Once a voice pattern has been created, the AI goes to work "learning" to imitate the target. It typically uses so-called generative adversarial networks (GANs): systems in which two models continuously compete, one creating a fake while the other tries to identify its flaws. With each new attempt, the algorithm improves.
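The adversarial loop can be sketched in a heavily simplified, purely illustrative form. Here the "voice" is reduced to a single number, and both players are one-line rules rather than neural networks; this is an assumed toy model of the GAN feedback cycle, not a real GAN.

```python
# Toy sketch of the adversarial feedback loop behind GAN training.
# A "generator" (one number g) tries to mimic real data (mean 5.0), while a
# "discriminator" (a decision threshold) tries to separate real from fake.

REAL_MEAN = 5.0  # stands in for the statistics of genuine voice recordings

def train_adversarial(iterations=30, margin=0.01):
    g = 0.0  # generator's initial fake "voice statistic", far from real
    for _ in range(iterations):
        # Discriminator step: place its boundary halfway between the real
        # data and the current fakes (its best separating threshold).
        threshold = (REAL_MEAN + g) / 2.0
        # Generator step: adjust output just enough to land on the "real"
        # side of the discriminator's boundary.
        g = threshold + margin
    return g

fake = train_adversarial()
# After repeated rounds, the fake statistic closely tracks the real one.
```

Each round, the discriminator's improved boundary forces the generator to produce a more realistic fake, which in turn forces the boundary to move again; that mutual pressure is the mechanism the paragraph describes.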

For high-profile targets (think CEOs of large companies), this is not good news: their speeches are recorded online and shared via social media, while phone calls, interviews, and everyday conversations are relatively easy to obtain. With enough data, the level of accuracy achieved by deepfake audio files is as impressive as it is frightening, and criminals are able to make the AI say whatever they want.

The future of phishing?

Certainly, phishing attacks today remain very popular, and successful, with as many as 85% of organizations finding themselves targeted. One of the main reasons deepfake audio could find fertile ground, however, is its ability to evade most classical security measures: these AI-generated calls rely solely on human error and trust, and that is what makes them potentially so dangerous. Add the fact that even the smartphones we keep on hand at all times are not as secure as we think, and it is not hard to see a multitude of ways in which cyber attackers could bypass our defenses.

In my opinion, there are two main obstacles to the mass adoption of this technology at the moment. The first is that the cost-benefit ratio does not yet favor it: a botnet is capable of sending millions of phishing emails in a short period of time, whereas deepfake audio requires time to study the target and time to process, so in the same time frame classic criminal hacking methods yield more. The second is that, although AI-driven cybersecurity measures capable of recognizing the patterns of a deepfake are still at an experimental stage, there is already a very analog way of defeating the threat: if you are not sure, just hang up.

Most deepfake scams are carried out using a VoIP account created specifically to contact targets on behalf of the criminal hackers. By calling back, victims should be able to tell immediately whether they are talking to a real person.

How to benefit from AI for corporate and personal security?

But it's not all bad news: AI can be very effective in network monitoring and analysis. Such computational systems have proven surprisingly efficient at modeling "standard" behavior and, consequently, at identifying possible anomalies. This capability could be employed, for example, in the analysis of server access logs or data traffic. By detecting intrusions early, there is a greater chance of minimizing damage. Initially, it may be useful to have AI systems report anomalies and alert corporate IT departments for further investigation. AI continues on its path of constant improvement, and in the near future it may be capable of neutralizing threats and preventing intrusions in real time. Given the undeniable cybersecurity shortcomings of the public and private sectors, AI can take over some of these oversight tasks, allowing qualified professionals (available in limited numbers) to focus on complex problems.
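As a minimal sketch of that idea, with hypothetical log counts and a simple z-score rule standing in for a trained model, anomaly detection over access logs can look like this: learn what "standard" volume looks like from history, then flag hours that deviate sharply from it.

```python
# Toy anomaly detection over server access logs (hypothetical data).
# A z-score rule stands in for a learned model of "standard" behavior.

from statistics import mean, stdev

def find_anomalies(history, recent, threshold=3.0):
    """Flag (hour, count) entries in `recent` whose z-score vs. `history`
    exceeds `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return [(hour, count) for hour, count in recent
            if abs(count - mu) / sigma > threshold]

# Hourly login counts observed during normal operation.
baseline = [102, 98, 110, 95, 105, 99, 101, 104, 97, 100]

# The last four hours; 03:00 shows a burst that could indicate brute-forcing.
latest = [("00:00", 103), ("01:00", 96), ("02:00", 108), ("03:00", 480)]

alerts = find_anomalies(baseline, latest)  # -> [("03:00", 480)]
```

A real deployment would use richer features and a learned model rather than a single count, but the workflow is the same: report the flagged hours to the IT department for investigation rather than acting autonomously.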

With companies constantly striving to cut costs, Artificial Intelligence is becoming more and more appealing, raising the prospect that it will, before long, replace "physical" cybersecurity staff. This transformation will bring undeniable benefits to businesses in terms of results and cost-efficiency. But the most ambitious and knowledgeable operators in the industry need to plan a strategy now to reduce the potential risk of cyber-attacks that use AI.

About the Author

Pierguido Iezzi is the Cyber Security Director and co-founder of Swascan, with over 30 years of experience in the world of cybersecurity. He holds a degree in Information Sciences and has worked nationally and internationally, in large corporate contexts and in major multinationals, as a cybersecurity representative. The author of several publications, he regularly contributes to a number of newspapers and magazines, and is a keynote speaker at universities and at national and international events.


Views expressed in this article are personal. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.