How Google played with bad cryptography


While investigating one of my side projects, I discovered a bad pattern in Google's services: service accounts (for machines, not humans) authenticate in a way that requires a dangerous dance by Google to verify. It is a pattern I hope others do not copy.

Big companies are not the only ones that make mistakes. Across the industry and at every level of experience, developers slip up and introduce vulnerabilities, or risky code that evolves into a vulnerability later. Even senior developers with years of Amazon and Google on their resumes commit these vulnerabilities into production. I've seen it happen.

You might be familiar with one: SQL Injection!

XKCD comic about sql injection
Exploits of a Mom - XKCD

If you have not heard of OWASP top 10, check out this quick intro video on OWASP.

If you haven't checked them out in a few years, the list has been updated! Broken Access Control is now #1 and Cryptographic Failures is now #2. Both are relevant to this post.

Injection used to be #1, but thanks to better tooling, static analysis technologies, and some education: injection is now #3 on the risks to networked applications.

No degree, certification, formal or informal education will stop someone from making a mistake. Education just lessens the likelihood.

Even if the education happens to be fun.

*ANNOUNCEMENT* Presenting: the trailer for our new 🎢MUSICAL🎢 & spoken Security Awareness Videos! After the infosec sea shanty, dozens of teams DM’d me saying "The song worked! MFA usage up, reporting way up, pls make more songs!" So we got to work & you all it's finally here!πŸ€–
That sure was a different security awareness video.

What am I up to?

First an introduction to how I got to this problem.

In November 2018, a senior engineer gave one day's notice while working on a confidential integration with Google, and I became his replacement on that project. No SDK was available for that confidential API, and I had no idea how to authenticate with Google. At the time, Google pointed me to Using OAuth 2.0 for Server to Server Applications with the HTTP/REST documentation.

I was very intimidated by this documentation at the time. Almost four years ago, I had no useful experience in cryptography. I clearly remember what it is like to not know what I am looking at: to know that it might be dangerous, but not how dangerous.

It took a few weeks, but they finally proposed a way to use their existing library in a way where I could extract the token and use it on HTTP requests to the confidential API.

Fast forward to the summer of 2022 and I'm looking at that OAuth 2.0 page again. My side project is to implement an OAuth 2.0 Server and Single Sign On portal.

Writing an OAuth Server for fun? Who does that?

Naturally, I reviewed other implementations to see what standards they use (or inspired), and then judged whether each standard is fit for my project.

Then I noticed something that made me really uncomfortable.

What's wrong?

Remember how I mentioned that Broken Access Control and Cryptographic Failures are at the top of the OWASP security risks?

If you do something like this, you will be playing with fire too. So as a preface, do not follow Google's example in this specific case. You will be at risk of introducing broken access control and cryptographic failures.

As part of RFC7523, the client must create a JWT (see RFC7519) to post to the authorization server as an assertion to acquire an OAuth access token which is accepted at Google Cloud API endpoints.

Here's an example of what that looks like according to their documentation, though in practice each JSON segment is base64 url encoded (see RFC4648). The exact claim values here are illustrative.

{
  "alg": "RS256",
  "typ": "JWT"
}
{
  "iss": "100048893883123634702",
  "scope": "...",
  "aud": "https://oauth2.googleapis.com/token",
  "iat": 1659843591,
  "exp": 1659847191
}
[signature bytes]

When it is sent to the server, it looks something like this.

POST /token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=[base64 header].[base64 payload].[base64 signature]
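To make the moving parts concrete, here is a minimal sketch (not Google's code) of a client assembling such an assertion and form body. The client ID, scope, and secret are stand-ins, and HMAC-SHA-256 substitutes for the RS256 signature Google actually requires, since the Python standard library has no RSA support:

```python
import base64
import hashlib
import hmac
import json
import time
from urllib.parse import urlencode

def b64url(data: bytes) -> str:
    """Base64 url encode without padding (RFC 4648 section 5)."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical credentials: a real Google client signs with RS256 using the
# service account's RSA private key; HMAC stands in for that here.
CLIENT_ID = "100048893883123634702"
DEMO_SECRET = b"demo secret, not an RSA key"

header = {"alg": "HS256", "typ": "JWT"}
now = int(time.time())
claims = {
    "iss": CLIENT_ID,  # the issuer doubles as the key-lookup hint
    "scope": "https://www.googleapis.com/auth/userinfo.email",  # hypothetical scope
    "aud": "https://oauth2.googleapis.com/token",
    "iat": now,
    "exp": now + 3600,
}

# [base64 header].[base64 payload] is the signing input
signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
signature = hmac.new(DEMO_SECRET, signing_input.encode(), hashlib.sha256).digest()
assertion = signing_input + "." + b64url(signature)

# The form body that gets posted to the token endpoint
body = urlencode({
    "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
    "assertion": assertion,
})
```

A real client would then POST that body to the token endpoint over HTTPS and read an access token out of the response.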

Do you see the problem?

How does the authorization server know which digital signature key is to be used to verify the signature from the client?

Answer: It is using the "iss" claim (known as the issuer), which holds the value of the OAuth 2.0 Client ID. Client IDs can be found in the GCP console under APIs & Services -> Credentials.

Google Cloud IAM console screenshot showing a client ID that looks like the issuer in the above example

Okay, but how is using an ID to find a key a bad thing? It isn't! In fact a key id which is known to both the client and authorization server has to be sent in the clear (or through some agreed upon mechanism) for the authorization server to know how to verify it.


What's wrong though?!

The "iss" claim is coming from the JWS (see RFC7515) payload.


JOSE (JSON Object Signing and Encryption) is a collection of standards. When you see JWT (RFC7519), it is built upon JWS (Signatures, RFC7515), JWE (Encryption, RFC7516), JWK (Keys, RFC7517), and JWA (Algorithms, RFC7518).

Most JWTs in practice are using just signatures (JWS) rather than encryption.

Here's the problem: in order for Google's authorization server to determine the key to use to authenticate the data, the JWS payload (which should only be read after it is authenticated) is read and parsed prior to being authenticated. This is literally a tracked common weakness in software: CWE-345: Insufficient Verification of Data Authenticity.
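A sketch of that dangerous shape, with a hypothetical key table standing in for Google's server-side state:

```python
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# Hypothetical server-side key table, indexed by issuer (the client ID).
KEY_TABLE = {"100048893883123634702": b"public key bytes"}

def find_key_dangerously(jwt: str) -> bytes:
    """The CWE-345 shape: the payload is decoded and parsed to choose the
    verification key BEFORE any signature check has run. Every byte read
    here is attacker-controlled."""
    _header_b64, payload_b64, _sig_b64 = jwt.split(".")
    claims = json.loads(b64url_decode(payload_b64))  # unauthenticated parse
    return KEY_TABLE[claims["iss"]]  # the attacker picks which key we fetch

# An attacker can place any issuer (and any other JSON) in the payload;
# the server's parser and key lookup run despite the bogus signature.
forged = ".".join([
    b64url(b'{"alg":"RS256","typ":"JWT"}'),
    b64url(b'{"iss":"100048893883123634702","sub":"anyone"}'),
    b64url(b"not a real signature"),
])
key = find_key_dangerously(forged)
```

Nothing here stops the caller from steering the lookup: the iss value is whatever the payload says it is until a signature check passes.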

So I bring it up with Sophie who happens to be at Google.

Hey @SchmiegSophie , while RFC7523 permits it, why are the claims opened before being verified to find the key? I imagine some clients just don’t offer flexibility in the JWT header?
Photo included with tweet
@CendyneNaga I don't know, what is this part of/context? JWT sometimes forces you to do that, although Tink as far as I know just doesn't work in those use cases
@SchmiegSophie This is service account token authorization, using the jwt profile assertion grant to request an access token for google cloud scopes.
And then someone else joined in with a thought:
@SchmiegSophie @CendyneNaga If I were to venture a guess without looking at the details: probably some form of efficiency. What do you think?
Then I started looking to see how this came to be.

A brief intermission about JWTs

I believe most software developers find JWTs easy to understand, accessible to implement or use through a third party library.

But developers and cryptographers have a different perspective.

Nothing seems to be more shocking to developers than hearing that crypto engineers hate JWT.

The problem is JWTs are often used in places they were not designed for like browser session cookies. Developers interpret the spec as it was written without regard for its consequences. Surely the spec would have an out of the box secure design, right?

See "alg":"none".

A screenshot of the website 'how many days since a jwt alg none vulnerability', reading 188 days. It was reset soon after this screenshot was taken.
How many days since jwt alg none vulnerability

If you'd like to learn about all sorts of common vulnerabilities with JWT implementations, see JWT Vulnerabilities (Json Web Tokens)

While we could beat a dead horse of alg none for months, let's focus back on the problem this article is concerned about.

When a service receives an inbound JWT, it needs to verify the integrity before using the embedded data.
The Hard Parts of JWT Security Nobody Talks About - Philippe De Ryck
Authentication is a widely exploited vector for APIs, and a growing number of micro-services and APIs depend on JWT for identity claims. It is strategic to understand the weak points and design defenses to protect against information disclosure and the acceptance of spoofed tokens.
JWT: A How Not to Guide - Shahnawaz Backer

Some libraries try to make it difficult to use them wrong. For example, Java JWT: JSON Web Token (JJWT) for Java and Android Issue #86 made it hard to read data before that data is authenticated.

Generally it's a good thing to have a library that prevents misuse! The problem is that JWT's specification (and all the JOSE specifications) has proliferated insecure design patterns, and compatibility with these patterns introduces more vulnerabilities.

To say cryptographers dislike JWT is an understatement.

If alg=none was the only issue with this abomination of a standard, I could ignore it, and work around it. But the way this standard is used, it's almost impossible to build a secure JWT library
This vulnerability is a trifecta of things I hate: - JWT - Ruby OpenSSL extension - AES-GCM (great performance but so brittle) Expect years and years and years and years of ongoing JWT related vulnerabilities, with people continuing to claim "this isn't a problem with JWT!"
JWT is so bad that I find myself wondering what I was doing when it was being created and if I could have done something to stop it. Also, note that this HN thread is full of developers just now learning that JWTs only does signing. Except it can also do encryption. πŸ€·β€β™‚οΈ
Photo included with tweet
One of the most damning observations about JWT is that, whenever you introduce someone to a new way that you can shoot yourself in the foot, they automatically assume you're talking about some old way that you can shoot yourself in the foots.
This isn't an isolated incident. Every time someone talks about why JWT is bad, someone engages in a congruent fashion. "Blame the libraries, or the defaults. Don't blame the standard!" "The attack you're describing sounds like an old attack which was an implementation's fault"

Thomas H. Ptacek

The issue with JWT in particular is that it doesn't bring anything to the table, but comes with a whole lot of terrifying complexity. Worse, you as a developer won't see that complexity: JWT looks like a simple token with a magic cryptographically-protected bag-of-attributes interface. The problems are all behind the scenes.

For most applications, the technical problems JWT solves are not especially complicated.

But there's a reason crypto people hate the JWT/JOSE/JWE standards. You should avoid them. They're in the news again because someone noticed that one of the public key constructions (ECDHE-ES) is terribly insecure. I think it's literally the case that no cryptographer bothered to point this out before because they all assumed people knew JWT was a tire fire.

From a comment (archived) on 🍊 site.

Thomas H. Ptacek

Earlier I mentioned what reduced risk for Injection: better tooling, static analysis technologies, and some education. I believe the same applies to Broken Access Control and Cryptographic Failures.

While the libraries can be improved, JJWT is a case study in how safety choices clash with developers. This happens in practically every mainstream JWT library. The specification did not require the safety choices that JJWT implemented. Auth0, JJWT, and any other pragmatic implementation receive issues from developers requesting unsafe features. These requests come either from a need for compatibility with something unsafe or from a personal feeling that it is the best way for them to get to production.



I want to have a signed token with a subject claim. I can create the token perfectly well.

When I'm verifying the token, I would like to first determine the subject. After determining the subject and retrieving some data, I would like to verify the signature.

I can do the parsing of the body manually, but it would be nice to have this as a function of the parser class


I hit the same wall. I need what's inside the token to get my key, can't do it.
Developers will ask for dangerous things because the specification was made to support dangerous things. It has been up to library authors such as Luciano Balmaceda (see below) for the Auth0 JWT library to educate and design tooling that will enable developers to use these technologies in a safer way.


The problem with allowing people to just decode the token without a following verification is that they might start to use the token claims as trusted data immediately after they decode it. And that's an error: **No data should not be trusted at all until the token's signature is verified**. _You_ might know the risks and would try to decode it and if required, verify the signature yourself. But most of the users don't know or don't read those warnings; I rather have a library that can protect those users from misuse than having to open the API in the other direction. For the next major I plan to have something like this that allows users to check existence of claims but not obtain its value until the token is verified. Doing this today would be a breaking change since methods signature would change.

The only use case this library and the spec that it tries to implement support is using the key id claim to fetch the proper JWK on demand (RS algorithms). That's what the `KeyProvider` interface is here for, and I'm willing to refactor it as much as required to fix whatever mistakes you find on its implementation.

Luciano is saying the same thing I am. Reading and acting upon the JWT payload claims is dangerous and should be avoided prior to authentication. Yet this is exactly what Google suggests in their Google Cloud IAM documentation.

How can this be fixed?

I'll re-state the problem: the information required to determine the correct public key to authenticate the payload is inside the payload, which should only be decoded after it is authenticated.

The solution is to therefore move, copy, or reference that information outside the payload.

In this case, I believe that the information should be referenced.

Again, the pertinent key-identifying information is the issuer, which is set to the OAuth 2.0 Client ID.

Adding a client_id to the request

Typically, an OAuth 2.0 request will include a client_id when it is not in an authorization header.

Confidential clients or other clients issued client credentials MUST authenticate with the authorization server ... A client MAY use the client_id request parameter to identify itself when sending requests to the token endpoint.
RFC6749 section 3.2.1

It would look like this!

POST /token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&client_id=[client id]&assertion=[base64 header].[base64 payload].[base64 signature]

But of course, the client_id must be consistent with the issuer inside. If not, then the client may be attempting a forgery.
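That consistency check can be sketched like so (the function name and error type are my own, and it must only run after the signature has been verified):

```python
import hmac

def check_client_consistency(form_client_id: str, verified_claims: dict) -> None:
    """Run AFTER signature verification: the client_id form parameter must
    equal the iss claim of the now-trusted payload, otherwise the caller may
    be presenting someone else's assertion as its own."""
    issuer = verified_claims.get("iss", "")
    if not hmac.compare_digest(form_client_id, issuer):
        raise PermissionError("client_id does not match assertion issuer")

# A consistent request passes silently...
check_client_consistency("100048893883123634702", {"iss": "100048893883123634702"})

# ...while a mismatched pair is rejected as a possible forgery.
rejected = False
try:
    check_client_consistency("100048893883123634702", {"iss": "999999999999999999999"})
except PermissionError:
    rejected = True
```

Using hmac.compare_digest keeps the comparison constant-time, though as the text says, either way a string equality check is cheap.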

OAuth 2.0 was designed to be extensible, and so the JWT method that Google employs has its own specification.

To use a Bearer JWT as an authorization grant, the client uses an access token request as defined in Section 4 of the OAuth Assertion Framework RFC7521 with the following specific parameter values and encodings.

The value of the grant_type is urn:ietf:params:oauth:grant-type:jwt-bearer.

The value of the assertion parameter MUST contain a single JWT.

The scope parameter may be used, as defined in the OAuth Assertion Framework RFC7521, to indicate the requested scope.

Authentication of the client is optional, as described in Section 3.2.1 of OAuth 2.0 RFC6749 and consequently, the client_id is only needed when a form of client authentication that relies on the parameter is used.

RFC7523 section 2.1

But the example shown in RFC7523 does not include a client_id...

POST /token.oauth2 HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer
&assertion=eyJhbGciOiJFUzI1NiIsImtpZCI6IjE2In0.
eyJpc3Mi[...omitted for brevity...].
J9l-ZhwP[...omitted for brevity...]

If you peek at the assertion, you'll see the header looks like the following when decoded.

{
  "alg": "ES256",
  "kid": "16"
}

Okay, so there's a kid in the JOSE header.

Using the JOSE Header

Instead, they hint in examples, but never state as requirements, that including a key identifier in the JOSE header is appropriate.

The following example JSON object, used as the header of a JWT, declares that the JWT is signed with the Elliptic Curve Digital Signature Algorithm (ECDSA) P-256 SHA-256 using a key identified by the kid value "16".

RFC7523 section 4

This is a common pattern. The specification permits a lot of freedom by not specifying the requirements for common cases. Instead they only hint within examples what should be done.

That said, I'll point back to the JOSE specification for a signed payload.

It is necessary for the recipient of a JWS to be able to determine the key that was employed for the digital signature or MAC operation. The key employed can be identified using the Header Parameter methods

That wiggle room phrase "can be identified"... should be "should be identified" in my opinion for any asymmetric key material.

Anyway, the client_id can be the kid value in the JOSE header, or it could be the private key id (which is supplied with the key; see below). But the authorization server must do an equality check against the iss (issuer) claim in the payload, or check that the key id belongs to the issuer. A string equality check is not expensive.

After all, you would not want to accept claims pretending to be "John" when the authenticator (who signed the request) is "Wendy".
I definitely identify as "John."

Given that the key looks like (don't worry it is revoked!):

{
  "type": "service_account",
  "project_id": "iam-example",
  "private_key_id": "0643fc8eee2c2c0360dde21ccd5399d2f1020d08",
  "private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpgIBAAKCAQEA+dk80a7PYILJgNh7/ED1NM9BmHYImnhqj6P2bo/7d2YpD1W5\nXS3GOjwAJ7vEriUiRa1W7NHZRdrgICjfDicE4DlteGQJJBpgz9JWsKIujg87N6fT\nlhX37uW6R7Nt/SOvOLby4xZvzQin/P1m+h5oANzwiuOpB1VNmKVFrNRR7y2x8WOt\nM2WyWdkQHBYnfKVqpij2hgOIedm3rh0Om4IJy350IMoW/R29BgyBXs44iRJbCV8+\nChO3x0VsM51Ck7AYrAdh+z8lvU1u71MbEIVajbzDe/kAJuolIt8bc7D/zVMCVufT\noMl3MBu4oNbwhGclKRF3NktcaZaB+Y/WNom3kwIDAQABAoIBAQDqQJnbZvEcZcOT\nwGWO/0BoASJZVeF/IwOWJX657tkw+2Hn9NHU4UQH+ZWTq2Mee8aEWZ80bxQtgKe+\nv1NTK5ZQvMc8p15CsVCvyWBqP8UygGlfJ0UkZPiOzmk3LK4lNz3kCPP1omW0cTc/\n5j6Up8mPdZc6QXWLYJleUybegjtH6U+8YLhH7YPqs8yM/wR+7Vf950GcCimO6pp4\n3TReaADeIBS/FLZkwrNu0E+6WAPBBR9gGwAtnPpuAT7WwY3jA/ukkpvyoqlptB44\nI6LrNS9CDUwT6mzkX8Qvj8BP5KFzxmKw6SrjLkwF7eDmMEmpPxrvmDrTDSDNRMP4\n5G92UqKhAoGBAP1qK1gDr8SX6Mskm+IMYaXIEMvP5o2CqTJHFbqMAUwyuv7NCywT\n0xikbivkHg3z5Z+pOifc6CdDDZVXVt6GHutd6k7mCUfm5tsthtDtaMf7ELFAZd6V\nzw/y39+0WgFXg8UDA0fBzo5nQQSDhaXw6qBo6lIKi1CmtxaRrjslpC1LAoGBAPxl\nwTd2KbfUHw0gjwm6WY4CZv6X95PL0W7U9f3YeS+gcTAN1pNp6GR/d3XflFGTGKYf\nXcWF/5O0KA3SXa+VvNhPpUz9GhXyeZNdREpGeXHjPKo2e0FNU4qlPoCzVCfDRxNv\nSmmKIirJGKtIIiel7E6PsceodHITF3m/Wreo+RnZAoGBAIkjTXWB+TrAoqBcnWdF\nIArhLAW/6pqmHP4ybdXYMlOUGJIPUH539AMf6Ocjugf+90LiB7DO4Wtt5Anvi/k8\nR7tDxasQ3fDlSgVOq+igsdWXTr89hGNiWv3ch76+EP8s5whUyw+oGCoEQrE4o7jb\nmX1ZiYUAY8gvkGFMUSd9BU3lAoGBAIhVTnDu2sn5QmyM0baneghDM+8BlzG2PoJn\ndhiP/aXEPF+Amg82fdkLITQCeNM3aXESMEypfMwD3D7bCs/1SfRt0RQtAxInz5PS\nJTkZqC/kVrh6hUlYw294orJSK3ru+E1/J+qqOppx1WlvpUNVVLd61sTKMVwNA/k3\na4EZPLTBAoGBAJHbJj5byljN25a96gVzuaK2RULxGY+khfhxoqmHUap8fWrtkTDY\n0kOaCb1OYXc0NP3vrwX63qKtIXWFzO8zduZH3/PdHHk15FN9FYqXRKScO4vq39hE\nLUkAtc/tKDUg6qnAWXF/tWO2oGAhlATSFFCOHf9jG9e+4QtXm7GoFXc/\n-----END RSA PRIVATE KEY-----\n",
  "client_email": "",
  "client_id": "100048893883123634702",
  "auth_uri": "",
  "token_uri": "",
  "auth_provider_x509_cert_url": "",
  "client_x509_cert_url": ""
}

The JWT with the kid properly populated would look more like...

{
  "alg": "RS256",
  "kid": "0643fc8eee2c2c0360dde21ccd5399d2f1020d08",
  "typ": "JWT"
}
[base64 payload]
[signature bytes]


This specification allows claims present in the JWT Claims Set to be replicated as Header Parameters in a JWT ... If such replicated claims are present, the application receiving them SHOULD verify that their values are identical... It is the responsibility of the application to ensure that only claims that are safe to be transmitted in an unencrypted manner are replicated as Header Parameter values in the JWT.
RFC7519 Section 5.3

The iss value could be in the JOSE header instead.

{
  "alg": "RS256",
  "iss": "100048893883123634702",
  "typ": "JWT"
}
[base64 payload]
[signature bytes]

Like the kid version above, the iss value in the JOSE header must be compared and found equal to the iss claim so forgeries are prevented. At least if this exact example were to be followed.
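Putting the pieces together, a verifier that treats the header's key identifier as an opaque lookup index, verifies the signature, and only then compares the signed issuer to the key's owner might look like this sketch (hypothetical key table; HMAC stands in for RS256 since the standard library lacks RSA):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# Hypothetical key table keyed by kid; each entry remembers which issuer
# owns the key.
KEYS = {
    "0643fc8eee2c2c0360dde21ccd5399d2f1020d08": {
        "issuer": "100048893883123634702",
        "secret": b"per-client secret",
    }
}

def verify(jwt: str) -> dict:
    header_b64, payload_b64, sig_b64 = jwt.split(".")
    header = json.loads(b64url_decode(header_b64))
    # kid is untrusted, but it is only an opaque index into our own table.
    entry = KEYS[header["kid"]]
    expected = hmac.new(entry["secret"], f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))  # decoded only after verification
    # The cheap string equality check: the signed issuer must be the key's owner.
    if claims.get("iss") != entry["issuer"]:
        raise PermissionError("issuer does not match the key that signed this token")
    return claims

# Building a token the way a well-behaved client would:
kid = "0643fc8eee2c2c0360dde21ccd5399d2f1020d08"
header_b64 = b64url(json.dumps({"alg": "HS256", "kid": kid}).encode())
payload_b64 = b64url(json.dumps({"iss": "100048893883123634702"}).encode())
sig = hmac.new(KEYS[kid]["secret"], f"{header_b64}.{payload_b64}".encode(),
               hashlib.sha256).digest()
claims = verify(f"{header_b64}.{payload_b64}.{b64url(sig)}")
```

The payload is never parsed until the signature holds, and the issuer comparison closes the "John" versus "Wendy" hole.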

The JOSE header is intended for reading prior to authentication! Like other request parameters, it is considered untrusted.

You have to talk carefully about untrusted data which is necessary to establish trust.


> Using the signing key resolver pattern, you have (safe) access to the header and claims.

@dogeared, not sure if you just misspoke or weren't aware. The JWT header is not safe. It's not signed and comes from the caller, so it can never really be trusted. At best, it can give you an excuse to reject a JWT before doing extra work validating with a trusted algorithm you've already decided on out-of-band.

At worst, it can trick you into doing something you shouldn't. The alg NONE and asymmetric key (RS/EC) vs. symmetric (HMAC) flaws are evidence of that.

A few corrections. The JOSE Header used in practice is in fact signed but it is intended for use prior to verification. If the JOSE Header is tampered with, the signature will fail.

The specification permits an unprotected header available with the JSON serialization but no one uses the JSON serialization because it is such a bad design!

Obviously "alg":"none" is untrustworthy. Likewise, if the type of key is different from the type in the header, then it is untrustworthy.

For example, "alg":"RS256" being changed to "alg":"HS256" is an attack that has shown up across many JWT libraries. See The JWT Handbook section 8.1.2 RS256 Public-Key as HS256 Secret Attack.

Like any user-supplied form parameter, every value must be sanitized and processed carefully. That includes the JWT claims after the JWT passes verification. If you do not carefully handle external data at every exchange, you will introduce broken access control, a cryptographic failure, or any of the other OWASP top ten risks.
If you are not careful, I will get you.
Side note: mixing MACs and digital signatures is a no-no. Yet JWT has this built into it.

In practice though, some experts say ignore the header entirely. I think it still has some utility, just not as much as the specifications suggest is possible.

@bascule @hasheddan The algorithm field can be quarantined (essentially ignoring it or just comparing it to the keys algorithm). In order to avoid parse before validate problems, you need to have a given set of keys (JWK should work), checking that they are either all signature keys or all MAC keys
@bascule @hasheddan This means you can and should ignore the entire header before you find the correct key to validate. This can be a problem if you have several keys in your key set, but to get to the key id you need to parse the header so that's a no go.
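That quarantine can be sketched as follows; the key set is hypothetical, and the kid of "16" echoes the RFC7523 example earlier:

```python
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# Hypothetical key set: every key pins the single algorithm it may verify.
KEYS = {"16": {"alg": "ES256", "key": "...elided public key..."}}

def select_key(header_b64: str) -> dict:
    header = json.loads(b64url_decode(header_b64))
    entry = KEYS[header["kid"]]
    # Quarantine alg: the header's self-description is never trusted, only
    # compared against what the key itself demands. This rejects alg=none
    # and RS256 -> HS256 key-confusion downgrades outright.
    if header.get("alg") != entry["alg"]:
        raise ValueError("algorithm mismatch for this key")
    return entry

# The honest header resolves to the pinned key...
assert select_key(b64url(b'{"alg":"ES256","kid":"16"}'))["alg"] == "ES256"

# ...while downgrade attempts are rejected before any verification happens.
for evil in (b'{"alg":"none","kid":"16"}', b'{"alg":"HS256","kid":"16"}'):
    try:
        select_key(b64url(evil))
        raise AssertionError("downgrade accepted")
    except ValueError:
        pass
```

The point is that the algorithm comes from the server's own key record, never from the attacker-writable header.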

The specifications have consistently demonstrated design choices that lead to unsafe and dangerous implementations.

There are alternatives like PASETO, Macaroons, Biscuits, and more. Unfortunately, not all languages have complete implementations.

Check out API Tokens: A Tedious Survey by Thomas Ptacek for how these different tokens work.

So did Google mess up?

See, the thing about internet specifications that reach maturity is that they usually come after the implementations are already working in production.

Google made their OAuth JWT token handshake before the standard was completed (literally years later).

In 2012, we see Google documented OAuth2 Service Accounts and today ten years later Preparing to make an authorized API call looks the same in substance.

Here's the timeline:

Judging from the timeline, and no other inside information, I think that Google gave up on participating in the standardization process. That said, the finished standard still supports their implementation, and this pattern in particular.

Since Google left the standardization process and already had their work in production, it is hard to say that Google broke the specification when they launched. They made their own implementation, documented it, and launched before the standard was finalized.

It is completely possible to do this check safely and securely. But not by inexperienced hands.

Mimicking this implementation might only be safe if you...

  1. Extract the issuer and only the issuer from the JWT payload
  2. Discard the rest of the parsed payload
  3. Sanitize and reference the issuer to find the key
  4. Discard the issuer and process the JWT with the key
  5. And then finally go through with other claim validation

This approach is not aesthetic and is not something software developers would think necessary. But I think it is the only safe way to replicate Google's specific implementation as an authorization server from the documentation.
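Those five steps, sketched with a hypothetical key table (HMAC again standing in for RS256):

```python
import base64
import hashlib
import hmac
import json
import re

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# Hypothetical key table indexed by issuer.
KEYS_BY_ISSUER = {"100048893883123634702": b"per-client secret"}
ISSUER_SHAPE = re.compile(r"^[0-9]{1,40}$")  # numeric client IDs only

def verify_issuer_in_payload(jwt: str) -> dict:
    header_b64, payload_b64, sig_b64 = jwt.split(".")
    # 1. Extract the issuer and ONLY the issuer from the unverified payload.
    issuer = json.loads(b64url_decode(payload_b64)).get("iss")
    # 2. Everything else from that parse is discarded (nothing is retained).
    # 3. Sanitize the issuer before using it as a lookup index.
    if not isinstance(issuer, str) or not ISSUER_SHAPE.match(issuer):
        raise ValueError("malformed issuer")
    key = KEYS_BY_ISSUER[issuer]
    # 4. Discard the issuer and process the JWT with the key.
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    # 5. Only now parse the payload for real and continue with other claim
    #    validation (exp, aud, and the rest).
    return json.loads(b64url_decode(payload_b64))

# Demo token signed by the known client:
header_b64 = b64url(b'{"alg":"HS256"}')
payload_b64 = b64url(b'{"iss":"100048893883123634702","sub":"demo"}')
sig = hmac.new(b"per-client secret", f"{header_b64}.{payload_b64}".encode(),
               hashlib.sha256).digest()
claims = verify_issuer_in_payload(f"{header_b64}.{payload_b64}.{b64url(sig)}")
```

Note how the first parse yields exactly one string, which is pattern-checked before it touches the key table, and nothing else from that parse survives.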

Google can add a kid, and in fact in the Addendum: Service account authorization without OAuth, they do include the kid JOSE header parameter. But this method is only for directly connecting to a supported API without getting an access token from the authorization server.

Look forward to the Final Checks section below!

Bonus tomatoes

OpenID Connect which is built upon OAuth 2.0 gives a public warning when implementing with Google.

Implementers may want to be aware that, as of the time of this writing, Google's deployed OpenID Connect implementation issues ID Tokens that omit the required https:// scheme prefix from the iss (issuer) Claim Value. Relying Party implementations wishing to work with Google will therefore need to have code to work around this, until such time as their implementation is updated. Any such workaround code should be written in a manner that will not break at such point Google adds the missing prefix to their issuer values.

Added to OpenID Connect Core Draft October 15, 2013.

As you will see below (August 2022), this has been changed and Google now conforms to the spec.

Where the OpenID Connect Spec requires:

iss: REQUIRED. Issuer Identifier for the Issuer of the response. The iss value is a case sensitive URL using the https scheme that contains scheme, host, and optionally, port number and path components and no query or fragment components.
This was originally introduced September 2013 in section 1.2 Terminology.

It seems Google, at the time, chose consistency with their Service Account token implementation rather than to deviate to match the OpenID standard which Google definitely did participate in.

Note that the JWT spec permits any string or url (as a string) for the iss claim.

The iss (issuer) claim identifies the principal that issued the JWT. The processing of this claim is generally application specific. The iss value is a case-sensitive string containing a StringOrURI value. Use of this claim is OPTIONAL.
RFC7519 Section 4.1.1

As I review the OpenID Connect Core specification, it does look like kid is required when key rotation is expected.

I checked Google's OpenID Implementation and I got a token that actually met my expectations unlike the Google Service Account IAM endpoint.

{
  "alg": "RS256",
  "kid": "1549e0aef574d1c7bdd136c202b8d290580b165c",
  "typ": "JWT"
}
{
  "iss": "https://accounts.google.com",
  "nbf": 1659843291,
  "aud": "...",
  "sub": "...",
  "email": "",
  "email_verified": true,
  "azp": "...",
  "name": "Cendyne Naga",
  "picture": "",
  "given_name": "Cendyne",
  "family_name": "Naga",
  "iat": 1659843591,
  "exp": 1659847191,
  "jti": "..."
}

The kid "1549..." is publicly available at OAuth2/certs, a JWKs endpoint.

{
  "keys": [
    {
      "use": "sig",
      "n": "stD2wMn0t...",
      "kty": "RSA",
      "kid": "1549e0aef574d1c7bdd136c202b8d290580b165c",
      "e": "AQAB",
      "alg": "RS256"
    },
    {
      "e": "AQAB",
      "alg": "RS256",
      "kid": "fda1066453dc9dc3dd933a41ea57da3ef242b0f7",
      "n": "4DauU23AE...",
      "kty": "RSA",
      "use": "sig"
    }
  ]
}

Unlike Google's Service Account IAM endpoint, the human accounts OpenID implementation does not force recipients to decode the JWT payload prior to authentication. Instead, the header is treated as an acceptable place to reference a key ID so that the complete JWT can be authenticated.

Google's OpenID implementation does not require dangerous execution by clients when handling their issued tokens. So no bonus tomatoes for Google.

Final checks

Sometimes documentation is wrong and I want to get all the facts straight.

I checked the Java SDK for how it handles the private key id. Well, it turns out they set it now!

protected TokenResponse executeRefreshToken() throws IOException {
  if (serviceAccountPrivateKey == null) {
    return super.executeRefreshToken();
  }
  // service accounts: no refresh token; instead use private key to request new access token
  JsonWebSignature.Header header = new JsonWebSignature.Header();
  header.setKeyId(serviceAccountPrivateKeyId); // <========
  JsonWebToken.Payload payload = new JsonWebToken.Payload();
  long currentTime = getClock().currentTimeMillis();
  payload.setIssuedAtTimeSeconds(currentTime / 1000);
  payload.setExpirationTimeSeconds(currentTime / 1000 + 3600);
  payload.put("scope", Joiner.on(' ').join(serviceAccountScopes));
  try {
    String assertion =
        JsonWebSignature.signUsingRsaSha256(
            serviceAccountPrivateKey, getJsonFactory(), header, payload);
    TokenRequest request =
        new TokenRequest(
            getTransport(),
            getJsonFactory(),
            new GenericUrl(getTokenServerEncodedUrl()),
            "urn:ietf:params:oauth:grant-type:jwt-bearer");
    request.put("assertion", assertion);
    return request.execute();
  } catch (GeneralSecurityException exception) {
    // ...
  }
}

On June 4, 2014 Anthony Moore committed this fix to the Java SDK. Now kid in the JWT header is populated with the private key ID!

It took Google two years to realize that there was a better way. And the specification wasn't even finalized yet...

I am so glad that this was resolved after all!
Do they still accept clients from 2012 that don't identify their keys in the JOSE header? If so they may still have dangerous code in place.
I do not know and I have no contact to find out.


Google's IAM Authorization Server documentation suggests an unsafe design pattern whereby they must process the JWT claims prior to authenticating those claims. This is dangerous, and you should not follow that example. Existing specifications permit Google's previous and current implementations. I see these specifications as dangerous to naively implement. New implementations of these specifications will contain cryptographic failures, broken access control, and several other categories of the OWASP top ten risks.

Implementing IETF specifications like JOSE (JWS, JWE, JWA, JWK, JWT) and OAuth 2.0 and extensions is "rolling your own cryptography." They may still be implemented but the risks should be considered equivalent to creating your own security schemes.
@bascule @hasheddan The biggest issue I have with JWT is that it is a way to trick people into thinking they're not running their own crypto. I think the main reason for its popularity is the fact that there are commonly available libraries that have a somewhat higher level API

Google improved their clients by adding a key identifier (kid) to the JOSE header. By putting this in the header, they removed the need to read authenticated data prior to authentication. Hopefully Google's authorization server also verifies the key identifier and issuer match.


If you've ever heard of Authenticated Encryption with Associated Data (AEAD): while the encrypted payload can technically be decrypted with the key alone, implementations do not decrypt the payload unless the authentication tag (a MAC) matches over the protected content and the additional data from the context of the application's process.
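For illustration only, here is a toy encrypt-then-MAC construction (not a vetted AEAD; use AES-GCM or ChaCha20-Poly1305 in practice) that refuses to produce plaintext before the tag is verified:

```python
import hashlib
import hmac

# Toy scheme: a SHA-256-based keystream plus an HMAC tag over the nonce,
# associated data, and ciphertext. Separate subkeys are derived for
# encryption and authentication from one input key.

def _subkeys(key: bytes) -> tuple:
    return (hashlib.sha256(key + b"enc").digest(),
            hashlib.sha256(key + b"mac").digest())

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, nonce: bytes, plaintext: bytes, aad: bytes) -> bytes:
    enc_key, mac_key = _subkeys(key)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + aad + ct, hashlib.sha256).digest()
    return ct + tag

def open_sealed(key: bytes, nonce: bytes, sealed: bytes, aad: bytes) -> bytes:
    enc_key, mac_key = _subkeys(key)
    ct, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(mac_key, nonce + aad + ct, hashlib.sha256).digest()
    # The tag is checked FIRST; no plaintext is produced unless it matches.
    if not hmac.compare_digest(expected, tag):
        raise ValueError("authentication failed; refusing to decrypt")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))

box = seal(b"key material", b"nonce-0001", b"hello", b"app context")
recovered = open_sealed(b"key material", b"nonce-0001", box, b"app context")
```

The ordering is the whole point: authenticate everything, including the context, then and only then decrypt.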

Likewise, it is not safe to reach into the payload of a JWT before it is authenticated.

As a final reply to Sarai Rosenberg:

@SchmiegSophie @CendyneNaga If I were to venture a guess without looking at the details: probably some form of efficiency. What do you think?
I think there was less awareness at the time (2012) and the developers did not know better when it reached production.

They did fix this issue in 2014! But their documentation here is stale. Oops. And I wrote 80% of this before finding that out!

Well I'm sending some documentation feedback.

That said, developers still ask for unsafe features... like the issue above, where payload data that should be authenticated is processed before authentication.
Google did better with their OpenID implementation, and since they changed the "iss" claim on their tokens to conform with the OpenID specification after its finalization, I feel more confident in their work. It looks like Google has a track record of improving their security and conforming with others.