The GenAI Gold Rush – Moving Fast Without Breaking Things

September 12, 2024 | 4 MIN READ

by Shreyans Mehta

Every major technological disruption of the past three decades has shared a common trait: security took a backseat to innovation. Generative AI (GenAI) is no exception. While enterprises and consumers rush to embrace this disruptive technology, security is simply not top of mind for most of them. As a result, few inherent security controls are built into today's GenAI systems.

APIs are the primary method of communication with and between GenAI systems, so any conversation about GenAI and security must focus on API security. That conversation is less straightforward than it might seem at first blush, as there are multiple perspectives:

GenAI systems abusing other systems:

  • Enterprises need to prevent resource overuse and IP theft by GenAI scraping bots
  • GenAI systems can be used to launch more complex attacks faster than ever

GenAI systems abused by bad actors:

  • GenAI systems need to be protected from abuse and monitored for sensitive data leakage
  • Enterprises need to control GenAI use in their organization for cost or other reasons

The OWASP community has defined the top 10 threats for deploying and managing Large Language Models (LLM) and GenAI with the OWASP LLM Top 10. As with all OWASP Top 10 lists, it is an excellent starting point, but most organizations will need to go further in their security efforts.

Cequence makes extensive use of machine learning and GenAI in its products: to detect and track sophisticated attacks even as they evolve and re-tool, to autonomously create API security tests, and to provide security for the GenAI use cases described above.

Protection from GenAI:

  1. Unapproved data scraping/IP theft – GenAI companies scrape data from public websites at a pace never before seen. There have been numerous reports that many GenAI scraping bots do not honor robots.txt files or “allow to scrape” lists. This can lead to increased infrastructure costs as well as potential customer friction and leaks of sensitive or competitive information.
    • Cequence can prevent excessive scraping of data by GenAI systems and regulate their behavior based on enterprise policies. Cequence identifies AI scraping bots and can natively take various mitigation actions as determined by the customer including logging, rate limiting, or blocking.
  2. GenAI systems can be manipulated to launch attacks on other networks. In such cases, they act as jump hosts for the attackers. Exacerbating this issue is the fact that LLMs are generally considered benign and are subject to less scrutiny and fewer security controls. Read our blog about how hackers can use ChatGPT to launch automated attacks.
    • Cequence can stop fraud and abuse from GenAI systems based on their pattern of behavior with third-party systems. Through its Threat and Entity Behavior Analytics, Cequence detects anomalous behavior and determines whether it is malicious, at which point action (such as blocking) is enacted.
    • Cequence can monitor user interactions with GenAI systems to flag or stop attempts to manipulate them into abusing third-party systems.
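The bot-mitigation policies described above can be sketched in a few lines. This is an illustrative example, not Cequence's implementation: it matches request user agents against a few publicly documented AI crawler names, and the policy table is a hypothetical enterprise configuration.

```python
# Minimal sketch: classify requests from known AI scraping bots and
# apply a configurable mitigation policy (log, rate-limit, or block).
# The user-agent substrings are publicly documented crawler names;
# the actions assigned to them here are hypothetical.

AI_BOT_POLICIES = {
    "GPTBot": "block",       # OpenAI's web crawler
    "CCBot": "rate_limit",   # Common Crawl
    "ClaudeBot": "log",      # Anthropic's crawler
}

def classify_request(user_agent: str) -> str:
    """Return the mitigation action for a request, or 'allow'."""
    for signature, action in AI_BOT_POLICIES.items():
        if signature.lower() in user_agent.lower():
            return action
    return "allow"

print(classify_request("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # -> block
print(classify_request("Mozilla/5.0 (Windows NT 10.0)"))         # -> allow
```

Real scraping bots frequently forge or rotate user agents, which is why behavioral analysis (request rates, traversal patterns) matters more than signature matching alone.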

Protection for GenAI:

  1. Sensitive data leakage – When training or using either external or self-hosted GenAI systems, sensitive data can leak outside an organization.
    • Cequence flags all forms of sensitive data being sent to or received from GenAI systems. This includes model training, model output, user queries, etc.
  2. Excessive usage and metering – Excessive usage of GenAI systems can lead to excessive spend and a “denial of wallet” situation. Excessive usage could be triggered as a result of a misconfiguration, human error, application error, or malicious intent.
    • Cequence can monitor usage of GenAI systems and meter the usage through rules and policies.
  3. OWASP LLM Top 10 – The OWASP Foundation has created a list of the top 10 risks for Large Language Model applications, including GenAI systems. The list includes risks such as prompt injection, insecure output handling, and training data poisoning.
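To make the sensitive-data-leakage point above concrete, here is a minimal sketch of flagging sensitive data in text bound for a GenAI system. The two regex patterns are illustrative only; a production detector (and Cequence's actual logic) would be far richer.

```python
import re

# Illustrative detectors for two common categories of sensitive data.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number
}

def flag_sensitive(text: str) -> list[str]:
    """Return the categories of sensitive data found in a prompt or response."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

print(flag_sensitive("Contact jane.doe@example.com, SSN 123-45-6789"))
# -> ['email', 'ssn']
```

The same check applies symmetrically to training data, user queries, and model output, since leakage can occur in any of those paths.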
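The "denial of wallet" scenario above can likewise be sketched as a simple per-user token budget. The budget size and token counts are hypothetical; the point is that metering at the API layer caps spend regardless of whether overuse stems from misconfiguration, error, or malice.

```python
from collections import defaultdict

class UsageMeter:
    """Track per-user token usage against a daily budget (hypothetical policy)."""

    def __init__(self, daily_token_budget: int):
        self.budget = daily_token_budget
        self.used = defaultdict(int)  # user -> tokens consumed today

    def record(self, user: str, tokens: int) -> bool:
        """Record usage; return False if the request would exceed the budget."""
        if self.used[user] + tokens > self.budget:
            return False  # deny: prevents runaway spend
        self.used[user] += tokens
        return True

meter = UsageMeter(daily_token_budget=10_000)
assert meter.record("alice", 8_000) is True   # within budget
assert meter.record("alice", 5_000) is False  # would exceed budget
```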

Generative AI has great potential to change how we work and conduct business, and that potential has both accelerated adoption and dramatically expanded the attack surface. As organizations find new ways to make use of GenAI, it's critical that security isn't an afterthought. A holistic approach to application and API security, combined with a security solution capable of addressing the use cases described above, can ensure that GenAI is a net positive for your business. For more information, see our Securing GenAI page, or if you're ready to talk, book a call with us.

Shreyans Mehta

Author

CTO & Co-Founder at Cequence Security

Shreyans Mehta, founder and CTO at Cequence, is an innovative, patent-holding leader in network security. Formerly at Symantec, he developed advanced technologies for real-time packet inspection and cloud analytics. Shreyans holds a Master's in Computer Science from the University of Southern California.
