
AI-Powered Cybersecurity: From Automated Threat Detection to Adaptive Defense

Artificial Intelligence, AI and ML

Cybersecurity, the protection of IT infrastructures and communication networks in cyberspace and cyber-physical systems, is becoming increasingly important, covering threat detection and security countermeasures for interconnected digital devices, from computers to Internet of Things (IoT) devices. While components of traditional cybersecurity infrastructures, such as firewalls, malware signature databases, and strong password policies are still crucial, in today’s ever-changing online landscape, they no longer provide sufficient security measures against evolving, previously unknown cyberthreats, particularly when aiming for proactive rather than reactive countermeasures.

By Dr. Leslie F. Sikos, Edith Cowan University, Australia

Considering that a large share of network intrusions relies on stolen credentials used to gain administrative access, and others on malware infections, analyzing and making sense of network data is more crucial than ever before. Compared to the late 1990s, when there were very few threats online (such as malicious executables on software download sites), today not only the volume but also the variety of cyberthreats is beyond comprehension, ranging from ransomware to sophisticated cyberattacks. Cybersecurity personnel can no longer cope alone with advanced persistent threats that can knock out a nuclear power station, ongoing cyber-espionage that leaks classified government data, or cyberattacks that jeopardize business continuity.
An emerging direction in cybersecurity is to employ artificial intelligence (AI), i.e., intelligence demonstrated by machines (not to be confused with cybernetics, which deals with communications and automatic control systems). By using AI, not only can known threats be identified, but unknown threats can also be flagged based on suspicious and nefarious online actions. Some AI fields utilized in cybersecurity applications include machine learning, formal knowledge representation and automated reasoning, and automated planning and scheduling.

Machine learning (ML) is the utilization of methods and technologies that “give computers the ability to learn without being explicitly programmed.” If an ML algorithm builds a mathematical model from a dataset that contains both the inputs and the desired outputs, it is called supervised learning. If only some of the training data is labeled, we talk about semi-supervised learning. If there are no desired output labels at all, the ML is called unsupervised learning.

ML can be utilized in cybersecurity for a variety of tasks, such as predicting cyberattacks based on behavioral patterns on social media, recognizing network attack patterns, identifying malicious webpages, and defending against adversarial ML (which attempts to mislead training with malicious input). State-of-the-art antivirus software uses supervised learning by considering a set of object features (file content or behavior) in the training phase, together with associated object labels indicating which samples are malicious and which are not (for more fine-grained classification, labels for various malware types, such as viruses and Trojans, are used).

Based on this input, a predictive model is created that will produce labels for previously unseen objects. In the protection phase, unknown executables are processed by this model, which yields a model decision about the executable most probably being malicious or benign. For finding groups of similar objects or highly correlated object features, unsupervised learning can be used.
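The training and protection phases described above can be sketched with a deliberately minimal example: a single learned threshold on one file feature (content entropy), trained on made-up labeled samples. Real antivirus models combine many features and far more sophisticated learners; everything here, including the data, is illustrative.

```python
# Toy supervised learning sketch (not a production detector): learn one
# entropy threshold from labeled samples, then use it as the predictive
# model in the "protection phase" on unseen files.

def train_threshold(samples):
    """Pick the entropy threshold with the fewest training errors."""
    best_t, best_err = None, float("inf")
    for t in sorted(e for e, _ in samples):
        # Predict "malicious" when entropy > t; count training mistakes.
        err = sum((e > t) != label for e, label in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def predict(threshold, entropy):
    return entropy > threshold  # True = flagged as probably malicious

# Hypothetical labeled data: packed/encrypted malware tends to have high
# content entropy; plain benign executables tend to have lower entropy.
training = [(7.9, True), (7.6, True), (7.2, True),
            (4.1, False), (5.0, False), (5.8, False)]

model = train_threshold(training)
print(predict(model, 7.8))  # an unseen high-entropy file -> True
print(predict(model, 4.5))  # an unseen low-entropy file -> False
```

High entropy alone is of course no proof of maliciousness; it is merely one feature that a real model would weigh alongside many others.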

Combining several learners’ models into an ensemble (ensemble learning) often performs better than any of the original learners alone. If the predictive model is a set of decision trees, which use a tree-like model of decisions and their possible consequences, every non-leaf node contains a question about a file feature (e.g., whether the file size or the file content entropy exceeds a certain threshold), and every leaf node holds the tree’s final decision on the object. During the test phase, this type of model traverses each tree by answering the questions in the nodes with the corresponding features of the object, and the final decision on the object is calculated by averaging the decisions of multiple trees. One of the most common ML methods in this category is the random forest, which uses random feature samples rather than the entire feature set for training, thereby reducing the correlation between estimators.
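The forest idea can be sketched under toy assumptions: each “tree” below is a one-question stump trained on a randomly chosen feature, and the ensemble averages the stumps’ votes. The feature names (content entropy, count of suspicious API imports) and all values are invented for illustration.

```python
import random

# Toy random-forest-style ensemble: each stump asks one threshold
# question about one feature; the forest averages the stumps' decisions.

def train_stump(samples, feature):
    """Find the threshold on one feature with the fewest training errors."""
    best = (float("inf"), 0.0)  # (error count, threshold)
    for t in sorted(s[feature] for s, _ in samples):
        err = sum((s[feature] > t) != y for s, y in samples)
        best = min(best, (err, t))
    return feature, best[1]

def train_forest(samples, features, n_trees, seed=0):
    rng = random.Random(seed)
    # Each tree trains on a random feature sample, which reduces the
    # correlation between the estimators.
    return [train_stump(samples, rng.choice(features)) for _ in range(n_trees)]

def forest_predict(forest, obj):
    votes = [obj[f] > t for f, t in forest]
    return sum(votes) / len(votes) > 0.5  # average decision of the trees

# Hypothetical labeled objects: content entropy and suspicious API imports.
data = [({"entropy": 7.8, "susp_apis": 9}, True),
        ({"entropy": 7.5, "susp_apis": 7}, True),
        ({"entropy": 5.1, "susp_apis": 1}, False),
        ({"entropy": 4.6, "susp_apis": 2}, False)]

forest = train_forest(data, ["entropy", "susp_apis"], n_trees=5)
print(forest_predict(forest, {"entropy": 7.7, "susp_apis": 8}))
```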

Inspired by biological neural networks, artificial neural networks (ANNs) consider examples such as manually labeled malicious and benign code samples, but without task-specific rules (i.e., without explicitly collecting the characteristics of malicious code, for example, in the case of zero-day exploits). The sequence of system events during software execution and the parameters of the commands executed can be used to create a behavioral log, which is suitable for training deep neural networks (DNNs). DNNs have multiple layers between the input and output layers (to model complex, non-linear relationships) and can thereby identify previously unknown malicious activities while minimizing false alarms. To process the ever-increasing volume of unknown files, ML-based clustering algorithms can be used so that malicious and benign groups of files can be efficiently differentiated based on their properties.
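The label-free grouping mentioned above can be sketched as a hypothetical two-cluster pass over a single feature (content entropy); real clustering operates on high-dimensional property vectors, and all numbers here are illustrative.

```python
# Minimal unsupervised sketch: a 2-means-style pass over one feature.
# Files are grouped by content entropy so that a low-entropy group and
# a high-entropy group emerge without any labels being given.

def two_means(values, iters=10):
    lo, hi = min(values), max(values)  # initial centroids
    for _ in range(iters):
        # Assign each value to its nearest centroid, then recompute means.
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi

entropies = [7.9, 7.4, 7.7, 4.2, 5.0, 4.6]  # hypothetical unknown files
lo, hi = two_means(entropies)
print(round(lo, 2), round(hi, 2))  # two cluster centroids emerge
```

An analyst (or a downstream supervised model) would still have to decide which of the discovered groups, if any, corresponds to malicious files.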

Utilizing ML can help organizations perform a dynamic risk analysis, discover signs of malicious network traffic, detect anomalies, protect their assets from cyberattacks, and minimize or mitigate malware spread. Contemporary implementations include cloud-managed firewalls, real-time threat intelligence, and adaptive antivirus software that perform complex behavior analysis, thereby complementing traditional signature-based malware detection with advanced heuristics. Software examples include, but are not limited to, Bitdefender Advanced Threat Intelligence, Chronicle, Cyber Reconnaissance (CYR3CON), Cylance, the Darktrace Enterprise Immune System, IBM QRadar SIEM, Senseon, and Vectra.

Knowledge Representation in Cybersecurity

Knowledge representation is a field of AI focusing on formalisms and data models that enable computer systems to solve complex tasks, such as fusing, and finding correlations in, complex datasets like cyberthreat intelligence datasets. Formal knowledge representation can uniformly capture the semantics (meaning) of expert knowledge derived from diverse sources in the form of structured data, upon which automated software agents can categorize vulnerabilities, threats, and attacks; perform entity resolution; detect anomalies; and match attack patterns, thereby revealing correlations even experienced analysts might miss. To this end, cybersecurity concepts, their properties, and the relationships between them are defined formally in knowledge organization systems, such as thesauri and ontologies, which can be used in the automation of network data processing via querying and automated reasoning. These systems are typically grounded in description logics with computationally favorable properties and often implemented using Semantic Web languages such as RDF, RDFS, and OWL.
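A minimal sketch of the idea, assuming an invented mini ontology (the class names and relations below are illustrative, not taken from any published cybersecurity vocabulary): facts are stored as subject-predicate-object triples in the spirit of RDF, and a simple reasoner derives class memberships that were never stated explicitly.

```python
# Hypothetical mini knowledge base: RDF-style (subject, predicate, object)
# triples describing a small, made-up class hierarchy of threats.
triples = {
    ("Ransomware", "subClassOf", "Malware"),
    ("Trojan", "subClassOf", "Malware"),
    ("Malware", "subClassOf", "Threat"),
    ("WannaCry", "type", "Ransomware"),
}

def is_a(entity, cls):
    """Entailment via asserted types plus transitive subClassOf closure."""
    known = {o for s, p, o in triples if s == entity and p == "type"}
    frontier = set(known)
    while frontier:
        # Walk up the class hierarchy one level at a time.
        nxt = {o for s, p, o in triples
               if p == "subClassOf" and s in frontier} - known
        known |= nxt
        frontier = nxt
    return cls in known

print(is_a("WannaCry", "Threat"))  # entailed, though never stated directly
```

In practice this inference would be delegated to an RDFS/OWL reasoner or a SPARQL engine rather than hand-rolled, but the principle (deriving implicit facts from explicit triples) is the same.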

Cybersecurity ontologies facilitate data sharing and reuse across information security infrastructures, as well as automated knowledge discovery that reveals new insights for cybersituational awareness, cyberthreat intelligence, and digital forensic investigations. They can assist intelligence gathering and data analytics, and provide aggregated data for SOC monitoring dashboards automatically, around the clock.

AI Planning in Cybersecurity

AI planning is the study of strategies and action sequences. By planning from the perspective of a hypothetical attacker, AI planning can be utilized for estimating the vulnerability of communication networks to cyberattacks. If the attacker has complete knowledge of the network and IT infrastructure (as in ethical hacking), classical planning can be used for penetration testing. If the knowledge is incomplete, partially observable Markov decision processes (POMDPs) can be used, which model the attacker’s prior knowledge about the network configuration as a probability distribution over possible states (a belief). However, a qualitative model, which employs partially observable contingent planning instead of a POMDP, is more realistic and scalable. This type of model attempts to find a plan tree (or graph) of actions in which the leaves correspond to goal states and the edges are labeled by observations. Contingent planning can effectively model attackers who start with qualitative knowledge about the network configuration and improve it during the attack based on the outcomes of attempted exploits and explicit sensing actions. This way, contingent planning combines exploits and sensing actions much like real-world attackers do.
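The belief mentioned above can be sketched as a Bayesian update over hypothetical host states: the attacker starts with a prior over whether a host is patched, attempts an exploit that (by assumption) only works on unpatched hosts, and revises the belief from the observed outcome. All probabilities here are illustrative.

```python
# POMDP-style belief sketch: the attacker's knowledge of one host's
# configuration is a probability distribution over states, updated after
# observing whether a hypothetical exploit attempt succeeded.

belief = {"unpatched": 0.5, "patched": 0.5}  # prior over host states
# Assumed observation model, P(exploit succeeds | state): the exploit
# often succeeds on unpatched hosts and never on patched ones.
p_success = {"unpatched": 0.8, "patched": 0.0}

def update(belief, succeeded):
    """Bayes rule: weight each state by the likelihood of the observation."""
    likelihood = {s: p_success[s] if succeeded else 1 - p_success[s]
                  for s in belief}
    unnorm = {s: belief[s] * likelihood[s] for s in belief}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Observing a failed exploit attempt shifts the belief toward "patched".
belief = update(belief, succeeded=False)
print(round(belief["patched"], 2))
```

A contingent planner would use such observation-conditioned knowledge to choose the next exploit or sensing action in its plan tree, branching on the possible outcomes.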


Artificial intelligence is widely utilized in a variety of cybersecurity applications, from antivirus software to automated threat detection and mitigation. Being a dynamic and promising field, it attracts increasing research interest and development effort. However, while utilizing ML in cybersecurity certainly has its benefits, there are also challenges, including the need for large representative datasets, interpretable trained models, and mechanisms to achieve very low false-positive rates in (near) real time. On top of these, developers have to consider a growing number of expectations and legal requirements, such as explainability, interpretability, and freedom from bias, as outlined in, for example, the Algorithmic Accountability Act of 2019 in the U.S. Moreover, AI can be used not only for defense but also for attacks, as seen with adversarial ML.

About the Author

Leslie F. Sikos, Ph.D., is a computer scientist specializing in network forensics and cybersecurity applications powered by artificial intelligence and data science. He has industry experience in data center and cloud infrastructures, cyberthreat prevention and mitigation, and firewall management. He regularly contributes to cybersecurity research projects and collaborates with the Defence Science and Technology Group of the Australian Government, CSIRO’s Data61, and the Cybersecurity Collaborative Research Centre. He reviews for academic journals such as Computers & Security and IEEE Transactions on Dependable and Secure Computing, chairs sessions at international conferences, and regularly edits books on AI in cybersecurity. Dr. Sikos holds professional certificates, is a member of the IEEE Computer Society Technical Committee on Security and Privacy, and is a founding member of the IEEE Special Interest Group on Big Data for Cybersecurity and Privacy.


Views expressed in this article are personal. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.