Offensive AI: The Sine Qua Non of Cybersecurity

"Peace is the virtue of civilization. War is its crime. Yet it is often in the furnace of conflict that the sharpest tools of peace are made." - Victor Hugo.

In 1971, a disturbing message began appearing on computers connected to ARPANET, the precursor to what we now know as the Internet. The message, which read "I'm the Creeper: catch me if you can." was the output of a program called Creeper, written by the famed programmer Bob Thomas while he worked at BBN Technologies. Although Thomas's intentions were not malicious, Creeper marked the advent of what we now call a computer virus.

Creeper's spread across ARPANET laid the foundation for the first antivirus software. Although the account is unverified, Ray Tomlinson, best known as the inventor of email, is credited with developing Reaper, a tool designed to remove Creeper from infected machines. This program, built to defensively chase down and delete a malicious application, is often regarded as the inception of the cybersecurity industry. It illustrates an early recognition of a cyberattack's potential potency and of the need for protective measures.

That cybersecurity proved necessary should come as no surprise, since the cyber realm is ultimately an abstraction of the natural world. Just as we moved from fighting with sticks and stones to swords and spears, and then to bombs and aircraft, conflict in the cyber domain has advanced as well. It began with the simple Creeper virus, a playful harbinger of digital doom. The emergence of weaponized code forced the creation of antivirus solutions such as Reaper, and as attacks grew more sophisticated, so did the defenses. Fast forward to the era of network-based attacks, and digital battlefields began to take shape: firewalls arose to replace massive city walls, load balancers act as generals distributing resources so that no single position is overwhelmed, and intrusion detection and prevention systems stand in for sentries in watchtowers. This isn't to claim that these systems are faultless; there is always the existential dread that a globally trusted benevolent rootkit we call an EDR solution might contain a null pointer dereference that acts as a Trojan horse, capable of bricking tens of millions of Windows devices.

Setting aside such catastrophic, albeit inadvertent, scenarios still leaves the question of what comes next. Enter offensive AI, the most dangerous cyber weapon to date. In 2023, Foster Nethercott published research through the SANS Technology Institute showing how threat actors with minimal technical skill could abuse ChatGPT to create novel malware capable of evading traditional security controls. Numerous other papers have likewise explored the use of generative AI to develop sophisticated worms such as Morris II and polymorphic malware such as BlackMamba.

The seemingly counterintuitive answer to these mounting concerns is continued research and development of more advanced offensive AI. Plato's aphorism, "Necessity is the mother of invention," is an apt portrayal of cybersecurity today, where new AI-driven threats drive the innovation of more sophisticated security solutions. While building more advanced offensive AI tools and techniques is far from morally praiseworthy, it continues to emerge as an essential requirement. To effectively protect against these threats, we must understand them, and that understanding requires their further development and study.

The rationale for this strategy is anchored in one simple truth: you cannot defend against a threat you do not understand, and without research into these emerging risks, we cannot hope to understand them. The unpleasant reality is that bad actors are already employing offensive AI to design and deploy new threats. To argue otherwise would be naive. For that reason, the future of cybersecurity lies in the continued development of offensive AI.

If you want to learn more about offensive AI and gain hands-on experience applying it to penetration testing, I invite you to attend my upcoming workshop at SANS Network Security 2024, Offensive AI for Social Engineering and Deep Fake Development, on September 7th in Las Vegas. The workshop is also an excellent introduction to my new course, SEC535: Offensive AI - Attack Tools and Techniques, launching in early 2025. The event as a whole will be a wonderful opportunity to meet leading experts in AI and discover how it is shaping the future of cybersecurity. You can find event details and the full list of additional activities here.

Note: This article was written by Foster Nethercott, a United States Marine Corps veteran of Afghanistan with nearly a decade of experience in cybersecurity. Foster owns the security consulting firm Fortisec and is an author for the SANS Technology Institute, currently developing the new course SEC535: Offensive AI - Attack Tools and Techniques.
