The widespread news surrounding ChatGPT and its alternatives got me thinking about how it may or may not impact API security. Current top of mind headlines are those touting an impending doom as a result of ChatGPT taking over our lives.
An article claiming ChatGPT is bad because it has written polymorphic malware, even though it had to be tricked into writing it, seems odd, especially when you can turn around and ask ChatGPT how to defeat polymorphic malware without any trickery at all. Every snippet about how ChatGPT can be used to evade API security escalates the narrative of how disruptive it will be, while the positive aspects receive far less coverage. Why do we treat every new thing, particularly in the application security world, as gloom and doom?
Part of this thinking can be blamed on the media, which needs negative coverage to bring in viewers and readers. Happy, positive stories do not generate clicks or attract eyeballs, so they are often ignored or relegated to a back-page mention. Let's look at this from the same perspective with a simple twist: can ChatGPT be used, with great power and responsibility, to improve all of humankind? More specific to the API security realm, can we use ChatGPT to improve our security posture and outsmart the attackers who use it against us?
Should Enterprises Ban ChatGPT?
During a recent customer lunch, a CISO mentioned that his company's board was considering a ban on ChatGPT. He compared ChatGPT to Stack Overflow, which has fielded more than 20 million questions since 2008, and his reply to the board was perfect: "ChatGPT is the modern version of Stack Overflow, which we use extensively. Why would I ban copying and pasting, since it has gotten our business further than most things." Good point: if you ban the ability to quickly find answers, you ban progress.
As an application security practitioner who has been around a while, I don't tend to jump on trends like this; fads are fads for a reason. This is a level playing field in an arms race where both sides have exactly the same weapons, yet company boards are wondering whether to ban the latest thing. Banning ChatGPT will fail, and it would arm only one side of the fight while leaving defenders with nothing but a warning that the tool is bad.
Using ChatGPT to Improve API Security
Here are a few areas where I think ChatGPT is making software security better. These are drawn from examples of people using it for good, not guesses at ways it might be used for bad.
Using ChatGPT to Debug APIs:
ChatGPT is very code aware: the more a language is used, the more it learns and the more it can help. You can paste API code in, ask the platform to debug it, and the output is quite accurate. Admittedly, the security challenge is that you may be putting proprietary code into another platform where it might be viewed, but that is the same thing everyone has been doing with Stack Overflow for years, except with ChatGPT the answer is seconds away. Getting debugged code from Stack Overflow relies on someone reading the post, understanding what was being asked, and answering with an API code example that may or may not fit.
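As an illustration, here is a hypothetical snippet of the kind a developer might paste in for debugging, alongside the sort of fix an assistant typically suggests. The handler shape and field names are invented for this example:

```python
# Buggy handler: raises KeyError when "user_id" is missing and
# crashes on non-numeric input instead of returning a clean error.
def get_user_buggy(params):
    user_id = params["user_id"]  # KeyError if the field is absent
    return {"status": 200, "user_id": int(user_id)}

# Debugged version of the same handler: validates input and
# returns a structured 400 response instead of raising.
def get_user_fixed(params):
    user_id = params.get("user_id")
    if user_id is None or not str(user_id).isdigit():
        return {"status": 400, "error": "user_id must be a positive integer"}
    return {"status": 200, "user_id": int(user_id)}
```

The value here is the turnaround time: spotting a missing-validation bug like this takes a code-aware assistant seconds, versus waiting for a forum answer that may or may not fit.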
Using ChatGPT to Write APIs:
I wish I had a tool like ChatGPT 10 years ago. I would have had a litany of application security tools written for all manner of purposes. Not having to figure out things like regexes for parsing OS, or use trial and error to determine why someone else's tool wasn't working, would have had a huge impact on my ability to finish projects. More importantly, it would have helped me learn more about my craft. ChatGPT can write secure, enterprise-grade APIs and application code just as easily as it writes a basic script, and you can ask it how to securely use the libraries it suggests. Imagine fewer API code mistakes before the API enters the testing or quality phase. That would mean fewer vulnerabilities and fewer exploits. That's a world I would enjoy living in as a 24-year appsec veteran.
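To make the regex point concrete, here is a sketch of the kind of pattern an assistant can draft on request. Pulling a rough OS family out of an HTTP User-Agent string is an invented example; the pattern and labels are illustrative, not exhaustive:

```python
import re

# A regex of the sort ChatGPT can generate in seconds rather than
# an afternoon of trial and error: match common OS identifiers
# inside a User-Agent string.
OS_PATTERN = re.compile(r"(Windows NT [\d.]+|Mac OS X [\d_]+|Android [\d.]+|Linux)")

def detect_os(user_agent: str) -> str:
    """Return the first recognized OS token, or 'unknown'."""
    match = OS_PATTERN.search(user_agent)
    return match.group(1) if match else "unknown"
```

The time saved on boilerplate like this is exactly what frees a developer to learn the craft instead of fighting syntax.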
Using ChatGPT to Find Security Flaws in Existing APIs:
ChatGPT can analyze your APIs for operational issues, but it can also help you understand where flaws might exist in your code. Software is often built from dependent libraries: not shortcuts exactly, but operations that are performed over and over and may be combined with other libraries. There are plenty of insecure Java libraries in the wild. What if you dropped in your dependencies and asked whether those libraries are secure? This is already happening at some organizations: a simple library security analysis lets developers double-check the third parties they rely on in order to ensure that code is secure.
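The dependency double-check described above can be sketched in a few lines. The package name, version, and advisory string below are invented placeholders; a real check would query an advisory database rather than a hard-coded dictionary:

```python
# Minimal sketch of a library security check: compare pinned
# requirements against a known-vulnerable list. Entries here are
# illustrative placeholders, not real advisories.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "example advisory: upgrade to 1.2.1+",
}

def audit_requirements(lines):
    """Return (name, version, advisory) for each vulnerable pin."""
    findings = []
    for line in lines:
        name, _, version = line.strip().partition("==")
        advisory = KNOWN_VULNERABLE.get((name.lower(), version))
        if advisory:
            findings.append((name, version, advisory))
    return findings
```

Whether the lookup is done by a script like this or by asking ChatGPT directly, the point is the same: a cheap second set of eyes on third-party code.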
ChatGPT Can Improve API Security
These are just a few of the positive ways ChatGPT can help developers be more efficient and create more secure APIs and applications. With ChatGPT, you simply express what you want done, and the tool gives you a start or something to tweak. The efficiency gained buys you time to think about the inefficient or insecure parts that you may not have had time to consider in the past.
Rather than believe the attackers have a leg up and that the end of the world is here with ChatGPT, why not believe we can use it to improve security as a whole (and the world, for that matter)? There are negatives today, just as there were with every other new and exciting technology. People are going to put their code and intellectual property into a system that has license to do with it what it may. That challenge needs study and can be overcome. But let's not call this new, powerful tool bad because someone figured out how to make it do bad things.