Posted: March 28, 2023 by Mark Stockley
This morning I decided to write some ransomware.
I've never done it before, and I can't code in C, the language ransomware is most commonly written in, but I have a reasonably good idea of what ransomware does. Previously, this lack of technical skill would have served as something of a barrier to my "criminal" ambitions. I'd have been left with little choice but to hang out on dodgy Internet forums or to sidle up to people wearing hoodies in the hope they're prepared to trade their morals for money. Not anymore though.
Now we live in the era of Internet-accessible Large Language Models (LLMs), so we have helpers like ChatGPT that can breathe life into the flimsiest passing thoughts, and nobody needs to have an awkward conversation about deodorant.
So I thought I'd ask ChatGPT to help me write some ransomware. Not because I want to turn to a life of crime, but because some excitable commentators are convinced ChatGPT is going to find time in its busy schedule of taking everyone's jobs to disrupt cybercrime and cybersecurity too. One of the ways it's supposed to make things worse is by enabling people with no coding skills to create malware they wouldn't otherwise be able to make.
The only thing standing in their way is ChatGPT's famously porous safeguards. I wanted to know whether those safeguards would stop me from writing ransomware, and, if not, whether ChatGPT is ready for a career as a cybercriminal.