The Sine Qua Non of Cybersecurity

Jul 26, 2024 · The Hacker News · Digital Warfare / Cybersecurity Training

“Peace is the virtue of civilization. War is its crime. Yet it is often in the furnace of war that the sharpest tools of peace are forged.” – Victor Hugo.

In 1971, an unsettling message started appearing on several of the computers that made up ARPANET, the precursor to what we now know as the Internet. The message, which read “I’m the Creeper: catch me if you can,” was the output of a program named Creeper, developed by the programmer Bob Thomas while he worked at BBN Technologies. While Thomas’s intentions were not malicious, the Creeper program represents the advent of what we now call a computer virus.

The appearance of Creeper on ARPANET set the stage for the emergence of the first antivirus software. While unconfirmed, it is believed that Ray Tomlinson, famously known for inventing email, developed Reaper, a program designed to remove Creeper from infected machines. The creation of this tool, built to chase down and remove a malicious program from a computer, is often referred to as the inception of the cybersecurity field. It highlights an early recognition of a cyberattack’s potential power and the need for defensive measures.

The revelation of the need for cybersecurity shouldn’t come as much of a surprise, as the cyber realm is nothing more than an abstraction of the natural world. In the same way that we progressed from fighting with sticks and stones to swords and spears and now to bombs and aircraft, so too has the war over the cyber realm escalated. It all started with the rudimentary Creeper virus, a cheeky representation of what could be a harbinger of digital doom. The discovery of weaponized electronic systems necessitated the invention of antivirus solutions such as Reaper, and as attacks grew more complex, so did the defensive solutions. Fast forward to the era of network-based attacks, and digital battlefields began to take shape: firewalls emerged to replace vast city walls, load balancers act as generals directing resources so that no single point is overwhelmed, and intrusion detection and prevention systems replace sentries in watchtowers. This isn’t to say that these systems are perfect; there is always the existential dread that the globally favored benevolent rootkit we call an EDR solution could contain a null pointer dereference acting as a Trojan horse capable of bricking tens of millions of Windows devices.
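To make that failure mode concrete: a driver running in kernel mode that reads through a pointer it never validated will fault, and in kernel space that fault takes down the entire machine rather than a single process. The sketch below is a deliberately simplified, hypothetical illustration in C; the struct and function names are invented for this example and do not reflect the internals of any real EDR product.

/* Hypothetical sketch: why an unchecked pointer in kernel-mode code is fatal. */
#include <stddef.h>

struct content_update {
    const char *rule_data;   /* pointer into a delivered definition file */
    size_t      rule_len;
};

/* Apply a freshly delivered content update. A malformed file may leave
 * rule_data unset, so it must be validated before it is dereferenced. */
int apply_update(const struct content_update *update)
{
    if (update == NULL || update->rule_data == NULL) {
        /* Defensive check: without it, the read below dereferences a null
         * pointer. In user space that crashes one process; in kernel mode
         * it halts the whole machine (a "blue screen" on Windows). */
        return -1;
    }

    /* Inspect the first byte of the rule data as a stand-in for real parsing. */
    return (update->rule_len > 0 && update->rule_data[0] != '\0') ? 0 : -1;
}

Omitting a check of this kind is exactly the sort of small defect that, shipped to millions of endpoints at once, turns a protective agent into an outage.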

Putting aside such catastrophic, albeit accidental, situations still leaves the question of what comes next. Enter Offensive AI, the most dangerous cyber weapon to date. In 2023, Foster Nethercott published a whitepaper with the SANS Technology Institute detailing how threat actors could abuse ChatGPT with minimal technical capability to create novel malware capable of evading traditional security controls. Numerous other articles have also examined the use of generative AI to create advanced worms such as Morris II and polymorphic malware such as Black Mamba.

The seemingly paradoxical solution to these growing threats is further development and research into more sophisticated offensive AI. Plato’s adage, “Necessity is the mother of invention,” is an apt characterization of cybersecurity today, where new AI-driven threats drive the innovation of more advanced security controls. While developing more sophisticated offensive AI tools and techniques is far from morally commendable, it continues to emerge as an inescapable necessity. To effectively defend against these threats, we must understand them, which necessitates their further development and study.

The rationale for this approach is rooted in one simple truth: you cannot defend against a threat you do not understand, and without developing and researching these new threats, we cannot hope to understand them. The unfortunate reality is that bad actors are already leveraging offensive AI to innovate and deploy new threats, and to deny this would be misguided and naive. Because of this, the future of cybersecurity lies in the further development of offensive AI.

If you want to learn more about Offensive AI and gain hands-on experience incorporating it into penetration testing, I invite you to attend my upcoming workshop at SANS Network Security 2024: Offensive AI for Social Engineering and Deep Fake Development on September 7th in Las Vegas. This workshop is a great introduction to my new course, SEC535: Offensive AI – Attack Tools and Techniques, to be released at the beginning of 2025. The event as a whole will also be an excellent opportunity to meet several leading experts in AI and learn how it is shaping the future of cybersecurity. You can get event details and the complete list of bonus activities here.

Note: This article is expertly written by Foster Nethercott, a United States Marine Corps and Afghanistan veteran with nearly a decade of experience in cybersecurity. Foster owns the security consulting firm Fortisec and is an author for the SANS Technology Institute, currently developing the new course SEC535: Offensive AI – Attack Tools and Techniques.
