Most organizations do not have an accurate estimate of their API footprint – and why would they? With the rate at which APIs are churned out or updated, it’s a significant challenge for an InfoSec organization or a SOC team to be aware of, let alone up to date on, all the APIs they are responsible for protecting.
While every organization would love a process-driven approach to API lifecycle management, where each API is meticulously tested in dev, testing, and staging environments before being deployed to production, the reality is far different. Not only are new APIs being pushed into externally accessible locations without the requisite due diligence, but older APIs that *should* be deprecated remain active far longer than they need to be.
While building a comprehensive runtime inventory of your APIs is not as hard as it used to be, it does require a systematic approach: discovering your APIs, cataloging them, and then determining which are authorized and which are rogue or shadow APIs.
Independent of how APIs are discovered – agent-based, third-party integrations, eBPF – there is cost and complexity associated with deploying API observability. Network and application teams are loath to deploy “yet another tool”: there are legitimate concerns about applications needing to be instrumented (most vendors require this before they can start to discover APIs), new deployments being spun up, and perceived risk around latency, uptime, and the like. Just as importantly, some organizations believe they are taking adequate measures to safeguard their APIs with their existing toolsets – why fix something that isn’t broken? (Until, that is, they start to see attacks on their APIs, the most popular attack vector today.)
It is in this context that zero-knowledge API discovery becomes an important first step in the API Protection lifecycle. Without having to install or configure anything on-premises, security teams gain visibility into their publicly accessible attack surface. And while this might be the tip of the iceberg compared to the organization’s total attack surface, it reveals enough data to build the case for a more comprehensive API security program. After all, if an attacker can discover (and potentially exploit) APIs without any specific knowledge of your environment, maybe the business-as-usual approach isn’t good enough. At Cequence, we see this all the time with new prospects – if you are seeing API attacks despite fronting them with Edge infrastructure like CDNs, WAFs, or API Gateways, clearly attackers have figured out a way past your initial line of defense.
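To make the idea concrete, the essence of zero-knowledge discovery, probing well-known API locations from the outside with no agents or instrumentation, can be sketched in a few lines. This is a minimal illustration only, assuming a hypothetical target host and a short list of common base paths; it is not a description of how Cequence API Spyder works internally.

```python
# Minimal sketch of outside-in ("zero-knowledge") API discovery:
# probe well-known API base paths on a public host and flag
# responses that look API-shaped. Host and path list are
# illustrative assumptions.
import json
import urllib.request
from urllib.error import HTTPError, URLError

COMMON_API_PATHS = ["/api", "/api/v1", "/v1", "/graphql", "/openapi.json"]

def looks_like_api(status: int, content_type: str, body: bytes) -> bool:
    """Heuristic: JSON responses, or auth challenges, suggest an API endpoint."""
    if "json" in content_type.lower():
        return True
    if status in (401, 403):  # auth-gated endpoints are still discoveries
        return True
    try:
        json.loads(body)
        return True
    except (ValueError, UnicodeDecodeError):
        return False

def probe(host: str, timeout: float = 5.0) -> list[str]:
    """Return the common paths on `host` whose responses look like APIs."""
    found = []
    for path in COMMON_API_PATHS:
        url = f"https://{host}{path}"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ctype = resp.headers.get("Content-Type", "")
                if looks_like_api(resp.status, ctype, resp.read(2048)):
                    found.append(path)
        except HTTPError as e:  # 4xx/5xx can still reveal an API
            if looks_like_api(e.code, e.headers.get("Content-Type", ""), b""):
                found.append(path)
        except URLError:
            pass  # unreachable or TLS failure: not a finding
    return found
```

Note how little the prober needs to know: no credentials, no network access, no knowledge of the target’s design conventions. That is precisely why this class of discovery is cheap to run, and why, as the next section discusses, its raw results need organization-specific context to be accurate.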
So why doesn’t every organization sign up for API attack surface discovery? After all, attack surface discovery is a well-understood and commonly deployed construct. It’s because API discovery, much like testing, requires context – what IS an API? What constitutes an API issue? Which findings are truly relevant? The answer to each of these questions is neither as simple nor as straightforward as it might seem, and it does not stay consistent from one organization to another. There is no common deployment pattern within an organization, let alone one that is shared across multiple organizations. How APIs are designed and deployed is inherently dependent on the development teams that build and deploy them.
As such, zero-knowledge API discovery is inherently at risk of triggering false positives – how can you discover my APIs if you don’t know what design principles I used to develop and deploy them? And InfoSec or AppDev teams that are already overburdened with other responsibilities are wary of adding to their own workloads, especially if they must weed out reams of incorrect information before they find something of value.
It is against this backdrop that we are now offering Cequence API Spyder customers the ability to both create and customize the multitude of currently available discovery and qualification algorithms. Whether it’s API hosts, endpoints, findings, or associated network components, Spyder now allows users to configure the algorithms that determine these detection and classification criteria so that they trigger with maximum efficacy and provide the most accurate picture of their APIs and where and how they are deployed. Armed with this information, security teams can not only make the case for a more holistic security posture, but also build and catalog their runtime inventory more easily. Whether it’s load balancers, hosting providers, or edge deployments, Spyder will help you identify your API hotspots. And with the plethora of integrations available for these network components, it will be easier for you to go from wondering what public attack surface you must protect to building comprehensive visibility as quickly as possible.
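To illustrate what “customizing a qualification algorithm” can mean in practice, here is a small sketch of a tunable classification rule. The rule schema, field names, and patterns below are invented for this example; they are not Spyder’s actual configuration format.

```python
# Hypothetical, tunable rule for qualifying a discovered (host, path)
# pair as an API worth reporting. Teams can add positive patterns that
# match their own naming conventions and exclusions that suppress
# known false positives.
import re

RULE = {
    # hostname conventions that suggest an API (illustrative)
    "host_patterns": [r"^api\.", r"\.api\.", r"^gw\."],
    # path conventions that suggest an API (illustrative)
    "path_patterns": [r"/v\d+/", r"/graphql"],
    # naming conventions to exclude, e.g. non-production hosts
    "exclude_hosts": [r"\.staging\.", r"\.internal\."],
    # how many pattern hits are needed to qualify
    "min_matches": 1,
}

def qualifies(host: str, path: str, rule: dict = RULE) -> bool:
    """Classify a discovered (host, path) pair using the configured rule."""
    if any(re.search(p, host) for p in rule["exclude_hosts"]):
        return False
    hits = sum(bool(re.search(p, host)) for p in rule["host_patterns"])
    hits += sum(bool(re.search(p, path)) for p in rule["path_patterns"])
    return hits >= rule["min_matches"]
```

The point of making such rules user-editable is exactly the false-positive problem described earlier: only the teams that built and deployed the APIs know which conventions are signal and which are noise in their environment.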
If you are a Spyder customer, try out the new detection and classification algorithms, customize them for your environment, and let us know how well they work for you!
If you haven’t tried Spyder yet, give it a try – sign up for a free trial at https://apispyder.cequence.ai/signup.