API security: There is nothing new under the sun

Written by: Szilárd Pfeiffer, Security Engineer & Evangelist, Balasys

Created: 2022-08-04

There is really nothing new under the sun: APIs are secured by exactly the same precautions as anything else you publish on the internet.

This presentation was given on 9 June 2022 at the Balasys API Meetup. You can re-watch Szilárd’s presentation (in Hungarian) here.

The importance of APIs

Gartner says that managing APIs will be one of the most important areas of the near future, if it is not already today. There are 115 million attempts a day to compromise APIs. But what is the solution? There is plenty of advice on the internet about how APIs should be handled and which security considerations are worth taking into account. According to Szilárd, the advice given for protecting APIs is not far from the Zero Trust security principle; the recommended methods can readily be incorporated into it.

Principles of the Zero Trust security concept - Is Zero Trust Architecture the answer to API security challenges?

Applying the principles of Zero Trust to the world of APIs, we get the following rules:

  • All data sources and computing services are considered resources. We need to be aware of all the APIs in the system. In addition to production APIs, all functions of test and development APIs must be recorded as well.
  • All communication is secured regardless of network location. Under Zero Trust, this applies to both external and internal networks.
  • All resource access is preceded by authentication. In a sense, even public APIs have authentication, insofar as the client's IP address can be considered a form of identity.
  • Each resource access must be assessed individually.
  • All resource access must be granted on a least-privilege basis. Everyone has access only to what they are entitled to, and only in the volume and form to which they are entitled.
  • Enterprises must monitor and measure the integrity and security posture of all owned and associated assets. This means not only monitoring behavior, but also tracking the lifecycles of systems and APIs together.

Protection against OWASP Top 10 threats via Zero Trust principles

Everything should be treated as a resource.

The API inventory needs to be approached from two sides.

1. What API access do we offer to our customers and users?

We need to take stock of development, test, staging, and production systems one by one when creating an inventory: where they are and what access we provide to them.

A security by obscurity approach in this regard says that "what we don't publish, what is not supposed to be known, can be treated as if it didn't exist". This path is fruitful only for attackers, not for defenders, as discovering APIs is not as difficult as people might think. Monitoring certificate transparency logs and observing requests, or simply guessing API endpoints, can yield results even for unpublished APIs, as the sketch below illustrates.
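
A minimal sketch of how easily unpublished hosts surface: it queries the crt.sh Certificate Transparency aggregator (via its unofficial JSON output) for every certificate issued under a domain. The domain and the field handling are illustrative assumptions.

```python
# Sketch: list hostnames for a domain from Certificate Transparency logs via
# crt.sh (the JSON output used here is an unofficial, best-effort interface).
import requests

def ct_hostnames(domain: str) -> set[str]:
    """Return hostnames that appear in CT log entries for the given domain."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names = set()
    for entry in resp.json():
        # name_value may contain several newline-separated hostnames
        for name in entry.get("name_value", "").splitlines():
            names.add(name.strip().lower())
    return names

if __name__ == "__main__":
    # Hypothetical domain; any "api.", "staging." or "dev." hosts that show up
    # here are part of the attack surface, whether or not they were published.
    for host in sorted(ct_hostnames("example.com")):
        print(host)
```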

Another security concern is the lifecycle of APIs. In most environments there are APIs that have been retired in principle, but in practice some incompletely decommissioned or unpatched servers are still running them. These are easily exploited by attackers.

2. What APIs do we use? - The problem of shadow APIs.

If we do not know exactly what APIs we are using, we cannot handle the situation. The corresponding security by obscurity saying is that "this is practically someone else's problem". Not surprisingly, this is not a useful mindset, as any vulnerability present in an API we use can affect our own system as well.

All resource access is performed on an encrypted channel.

The next step, which usually happens during an attack, is reconnaissance, in which the attacker tries to gather information about the given system. The antidote to passive eavesdropping is encryption. The old approach that the internal network is inherently trusted, so there is no need for encryption there, no longer holds, as a significant proportion of attacks come from within. The goal of attackers is to turn an external attack into an internal one, because if the internal network is not secured properly, a lot of information falls into their lap.
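
A minimal sketch of treating internal traffic like external traffic: a service-to-service call that verifies the server against a private CA and presents a client certificate (mutual TLS) using the Python requests library. The file paths and the internal hostname are illustrative assumptions.

```python
import requests

INTERNAL_CA = "/etc/pki/internal-ca.pem"        # CA that signed the internal certs
CLIENT_CERT = ("/etc/pki/orders-client.pem",    # this service's certificate...
               "/etc/pki/orders-client.key")    # ...and its private key

def fetch_invoice(invoice_id: str) -> dict:
    resp = requests.get(
        f"https://billing.internal.example/invoices/{invoice_id}",
        verify=INTERNAL_CA,   # verify the peer even on the "trusted" LAN
        cert=CLIENT_CERT,     # and let the server authenticate us in return
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```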

All resource access is preceded by authentication.

Attackers usually try to impersonate others and make requests to the system on their behalf. Fortunately, completely missing authentication is rare these days, but it still happens from time to time that authentication is considered unnecessary on an internal network. Weak authentication typically appears when a company develops its own method, assuming that since the self-developed scheme is not known to anyone, it cannot be cracked. In keeping with the three rules of cryptography (don't roll your own, don't roll your own, and by no means roll your own), it is no surprise that security professionals do not recommend this approach either. Compromised authentication occurs when authentication is of good quality in principle but does not work in practice, so it is important to test its operation continuously.
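
A minimal sketch of relying on a standard, well-reviewed mechanism instead of a home-grown scheme: validating a signed JWT with the PyJWT library before any resource access. The secret, the required claims, and the header handling are illustrative assumptions.

```python
import jwt  # pip install PyJWT

SECRET = "load-me-from-a-secret-store"  # never hard-code this in real deployments

def verify_token(token: str) -> dict:
    """Return the token's claims, or raise if the signature or expiry is invalid."""
    return jwt.decode(
        token,
        SECRET,
        algorithms=["HS256"],                  # pin the algorithm explicitly
        options={"require": ["exp", "sub"]},   # reject tokens missing these claims
    )

# Usage: every request presents a token that is verified before touching a resource.
# claims = verify_token(request_headers["Authorization"].removeprefix("Bearer "))
```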

Each resource access is evaluated individually.

Attackers try to access resources other than those to which they are entitled. In addition to the much-mentioned authentication, authorization also plays a key role: we must follow the principle of least-privileged access, so that a given user cannot reach more or different information than their authorization level allows. Swapping an object ID is a simple but effective attack: even if we take care that IDs are not sequential, they can often be guessed by experimentation, especially if there is a predictable methodology for allocating them.
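
A minimal sketch of object-level authorization: whether or not the order ID can be guessed, the ownership check decides who may read it. The data model, the in-memory store, and the helper names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Order:
    id: int
    owner_id: int
    total: float

ORDERS = {1: Order(1, owner_id=42, total=99.0),
          2: Order(2, owner_id=7, total=10.0)}

class Forbidden(Exception):
    pass

def get_order(order_id: int, current_user_id: int) -> Order:
    order = ORDERS.get(order_id)
    if order is None or order.owner_id != current_user_id:
        # Deny missing and foreign objects identically, so the response
        # does not even leak which IDs exist.
        raise Forbidden(f"user {current_user_id} may not read order {order_id}")
    return order
```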

Another typical method is changing the user ID. When attackers have logged in as a regular user, their goal is to identify themselves as a privileged user. The security by obscurity "solution" is to give the privileged user a non-obvious ID. Unix systems show a better approach to this problem: everyone knows that the root user's ID is zero, yet every time someone wants to perform an action as root, explicit authorization is required.
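
A minimal sketch of that sudo-like idea: the privileged account is not hidden, but every privileged action demands a fresh, explicit authorization at the moment of use. The session structure and the decorator name are illustrative assumptions.

```python
import functools

class StepUpRequired(Exception):
    """Raised when a privileged call lacks a fresh, explicit authorization."""

def privileged(func):
    @functools.wraps(func)
    def wrapper(session: dict, *args, **kwargs):
        # Require a recent re-authentication (password, MFA, ...) for each
        # privileged call, no matter whose session this is.
        if not session.get("recently_reauthenticated"):
            raise StepUpRequired(func.__name__)
        return func(session, *args, **kwargs)
    return wrapper

@privileged
def delete_user(session: dict, user_id: int) -> None:
    print(f"user {user_id} deleted by {session['sub']}")
```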

It is common for attackers to access APIs in ways that we would not necessarily expect. A typical pattern is that, for a query, the system returns an unnecessarily large set of data on the assumption that the client side filters the information anyway, so users only see what their authorization allows. An attacker, however, typically performs queries with an automated tool and analyzes the raw response, which includes information that the system never intended to expose.
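
A minimal sketch of filtering on the server instead of trusting the client to hide fields: each role gets an explicit allowlist of response fields. The roles, field names, and record layout are illustrative assumptions.

```python
# Fields each role is allowed to see in a response
VISIBLE_FIELDS = {
    "user":  {"id", "name", "status"},
    "admin": {"id", "name", "status", "email", "internal_notes"},
}

def serialize(record: dict, role: str) -> dict:
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"id": 1, "name": "Alice", "status": "active",
          "email": "alice@example.com", "internal_notes": "VIP customer"}
print(serialize(record, "user"))   # {'id': 1, 'name': 'Alice', 'status': 'active'}
```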

The reverse case is when unsolicited data is pushed into the system. If authorization covers not only the individual objects but also the fields within them, this problem can be eliminated.
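
A minimal sketch of the write-side counterpart: only explicitly writable fields are accepted from the client, so a payload cannot smuggle in fields such as a role or a balance. The field names are illustrative assumptions.

```python
WRITABLE_FIELDS = {"name", "email"}   # everything else is rejected on write

def apply_update(stored: dict, payload: dict) -> dict:
    rejected = set(payload) - WRITABLE_FIELDS
    if rejected:
        raise ValueError(f"fields not writable: {sorted(rejected)}")
    stored.update({k: payload[k] for k in payload})
    return stored
```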

It is also to be expected that attackers will attempt to invoke API endpoints that we believed were not public. Because naming conventions in IT systems are largely the same, it is relatively easy to guess the paths and structure of an API.

Even though we guarantee authentication and authorization in the system, we are still not secure. Even if we log that an attack has taken place, we remain vulnerable, especially when an attacker puts a load on the system that it is not prepared for. To address this, monitoring must be supplemented so that the system can react to certain patterns, e.g. with rate limiting.
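
A minimal sketch of such a reaction: a per-client token bucket that lets the API reject request floods it was not sized for. In production this state normally lives in a shared store such as Redis; the in-memory dictionary and the limits here are illustrative assumptions.

```python
import time
from collections import defaultdict

RATE = 5     # tokens refilled per second
BURST = 10   # maximum bucket size

_buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

def allow_request(client_id: str) -> bool:
    bucket = _buckets[client_id]
    now = time.monotonic()
    # Refill proportionally to the time elapsed since the last request
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["ts"]) * RATE)
    bucket["ts"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False   # caller should answer with HTTP 429 Too Many Requests
```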

Monitoring covers not only the operation of systems but also their lifecycles. We need to be aware of which versions of API endpoints have reached the end of their lifecycle; access to these should be kept to a minimum.

In summary, the consistent application of the Zero Trust motto - never trust, always verify - is not paranoia, but preparation for the most typical attack patterns.