Almost no web developer sees this security vulnerability



I think the following would make a good assessment question to see whether a developer can spot the security vulnerability:

use App\Entity\Person;
use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Attribute\Route;
use Symfony\Component\Security\Http\Attribute\IsGranted;
use Symfony\Component\Serializer\SerializerInterface;

class ExampleController
{
    #[Route('/api-call', methods: ['POST'])]
    #[IsGranted('ROLE_ADMIN')]
    public function __invoke(
        Request $request,
        EntityManagerInterface $entityManager,
        SerializerInterface $serializer
    ): Response {
        $json = $request->toArray();
        $person = $serializer->denormalize($json, Person::class, 'json');
        $entityManager->persist($person);
        $entityManager->flush();

        return JsonResponse::fromJsonString($serializer->serialize($person, 'json'));
    }
}
This is an example controller from a first-party API with a frontend on the same domain and no CORS allowed. Authentication status is stored in a session cookie to avoid exposing a token via XSS. Can you identify the security issue?

Could AI find it?

First, I wanted to see whether ChatGPT could answer it correctly. ChatGPT did find the security vulnerability, but it also flagged the direct deserialization into the Person entity as a "mass assignment vulnerability" and warned me that session cookies are less safe than access tokens (which they are not).

The answer is that the endpoint is vulnerable to CSRF (cross-site request forgery): we decode the request body as JSON, but we never validate that the request's content type is application/json. A regular form submit never uses the content type application/json, but because we do not verify it, we can craft a form submission like this:

<form action="https://example.com/api-call" method="POST" enctype="text/plain">
  <input type="hidden" name="&#123;&quot;name&quot;:&quot;Injected name&quot;&#44;&quot;ignore&quot;&#58;&quot;" value="&quot;}" />
  <input type="submit" value="Fire Payload!" />
</form>
Note the enctype="text/plain": with the default application/x-www-form-urlencoded encoding, the braces and quotes would be percent-encoded and the body would not parse as JSON. A text/plain form serializes each field as name=value, so submitting this form sends the following request body:
{"name":"Injected name","ignore":"="}
That is valid JSON and valid form submit data! If a user is logged in, any malicious website could embed this form inside a hidden iframe and submit it silently, making unauthorized changes on behalf of the user.
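You can reproduce the body construction without a browser. This small sketch (plain PHP, variable names are mine) concatenates the crafted field name and value the way a text/plain form submit does, and confirms that the result decodes as the attacker's intended JSON:

```php
<?php
// A form with enctype="text/plain" serializes each field as "name=value".
// Concatenating the crafted name and value yields the attacker's JSON body.
$name  = '{"name":"Injected name","ignore":"';
$value = '"}';
$body  = $name . '=' . $value;

// $body is now: {"name":"Injected name","ignore":"="}
$data = json_decode($body, true);

var_dump($data['name']); // the injected value survives decoding
```

The extra "ignore" key exists purely to absorb the `=` separator that the browser inserts between the field name and value.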

The Real-World Risk

Now, imagine this vulnerability in a banking app that allows money transfers. A phishing website could trick users into visiting a page that secretly transfers funds to the attacker's account. Fortunately, modern browsers are improving their defaults, and many CSRF exploits can be mitigated with proper response headers and cookie attributes such as SameSite.
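The fix itself is small: refuse to decode the body unless the request actually claims to be JSON. A cross-site form submit can only produce application/x-www-form-urlencoded, multipart/form-data, or text/plain, so rejecting everything except application/json blocks the attack. A minimal, framework-free sketch (the function name is mine; in the Symfony controller you would run the same check against $request->headers->get('Content-Type') before calling $request->toArray()):

```php
<?php
// Sketch: decode a request body as JSON only when the declared
// Content-Type is application/json. HTML forms cannot send this
// content type cross-site, so the CSRF payload above is rejected.
function decodeJsonBody(string $contentType, string $body): array
{
    // Allow parameters such as "application/json; charset=utf-8".
    if (!str_starts_with(strtolower(trim($contentType)), 'application/json')) {
        throw new RuntimeException('Unsupported Content-Type: ' . $contentType);
    }

    $data = json_decode($body, true, 512, JSON_THROW_ON_ERROR);
    if (!is_array($data)) {
        throw new RuntimeException('JSON body must be an object or array');
    }

    return $data;
}
```

A classic CSRF token would also work, but for a JSON-only API the content-type check is the cheapest defense, because the browser will not let a form forge that header.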

Why is this vulnerability hard to spot?

  1. Developers assume frameworks handle security concerns
Developers expect $request->toArray() to be safe. However, this method only validates that the body is well-formed JSON and throws an exception if it is not. It performs no additional security checks, such as verifying the Content-Type header.

  2. Lack of familiarity with exploit techniques
    Developers understand the dangers of concatenating SQL queries with user input, but many struggle to come up with a malicious request payload.

  3. Security flaws often appear simple in theory but are difficult to detect
    Many exploits are discovered accidentally or by security researchers and hobbyists targeting popular applications.
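Point 1 is easy to demonstrate in isolation. The helper below is my simplified stand-in for a toArray()-style method: it only checks that the body parses as JSON, and the Content-Type header never enters the picture, so the text/plain form body from the exploit parses just as happily as a legitimate JSON request:

```php
<?php
// Simplified stand-in for a toArray()-style method: it validates only
// that the body is JSON, never which Content-Type the client declared.
function toArrayLike(string $body): array
{
    $data = json_decode($body, true, 512, JSON_THROW_ON_ERROR);
    if (!is_array($data)) {
        throw new UnexpectedValueException('JSON body is not an array');
    }

    return $data;
}

// The text/plain form body from the exploit decodes without complaint:
$data = toArrayLike('{"name":"Injected name","ignore":"="}');
```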

Real-World Examples of Overlooked Security Issues:

All of these were found by security researchers or by accident and are no longer exploitable:
  • Facebook: A "View As" feature displayed user access tokens in video previews, exposing them to attackers. The access token could also be used for other features.
  • Uber: The private key used to prove that messages came from Uber was shipped inside the APK, making it easy to forge Uber messages.
  • Phantasy Star Online: Game updates were downloaded over plain http:// and consisted of raw executable code, which made the game very popular with people who wanted to compromise their game console.
  • Mario Maker: Level IDs were encrypted and decrypted within the game but were simply auto-incremented integers, making future level IDs predictable.
  • REST APIs: Many APIs run out of memory when asked to return massive datasets (e.g., GET /items?limit=1000000). Most REST APIs have little to no protection against this kind of denial of service.
  • ChatGPT Prompt Leak: Asking ChatGPT to output the word "banana" 1000 times can reveal training data linked to the word "banana."
