Device OAuth Flow is Phishable


Have you ever tried to type a username and password on a smart TV with a remote? It sucks. Get the password wrong and it is a bad time.

There is a solution!

Device Authorization enables a second device to bestow access with a user's active consent. In short, the device can present a one-time passcode (OTP) to the user, and they can transcribe that into an authorization web page or app on their phone or computer. Once the user authorizes the device, it appears logged in and can perform its functions with the user's preferences.

One problem: transcribing device codes is phishable, and it trains users to think this is a safe activity. Device auth codes effectively bypass unphishable two-factor security.

In this article, I cover why device authorization codes exist, their benefits, and the superior technologies that replace them. Next, I share how these codes and similar patterns can be, and have been, abused. After that, I share which technologies should be used for local applications, and I wrap up by remarking on how security culture changes over time... slowly.

Before Device Auth Flow

These devices are not made for convenient input. Any means to authenticate is an afterthought to the inputs it offers. After all, a printer shouldn't be an intimidating monolith of buttons — it just needs to be obvious how to use 95% of its functionality. So, we end up in situations where textual input is through number pads, arrow buttons, or even down to a dial and a push button. I have entered the WiFi password into this printer before. It was the definition of a bad time.

A picture of a printer, or more specifically the dial pad on the printer with the alphabet set onto buttons. For example 2 has A B C. What if you had to input a password on this?

The situation isn't much better when you have a remote to do this on a screen. Sure, it might show up alphabetically or with a QWERTY interface, but inputting text is time-consuming, awkward, and error-prone.

Device Authorization Codes

Wouldn't it just be easier if an input-constrained device could borrow another for this infrequent use case? Rather than the complicated machinery of having a cell phone or computer somehow hook up as a keyboard to the constrained device, we have a prompt to input a one-time code into a trusted website or application on a cell phone or computer, where the user is already authenticated.

Spotify

This idea caught on, and we now have an OAuth standard for it! RFC8628 OAuth 2.0 Device Authorization Grant describes a process where the input-constrained device requests a device code, user code, and verification URI (such as spotify.com/pair). The user code is that short passcode for the user to transcribe and the device code acts like a browser cookie, so the service remembers the device. This code only lasts so long, though — usually 5 to 30 minutes. Both the user code and device code are generated by the service, which prevents an attacker from predicting the codes a specific device will use.

The device will regularly ask the service whether it has been authorized yet, until the device code expires. In the meantime, the user is expected to follow those instructions to authorize the device to access their profile and act on their behalf with the service.
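To make that dance concrete, here is a minimal sketch of the device's side of the flow in Python. The endpoint URLs and client_id are placeholders I made up, not any real service's values, and real code should also handle the slow_down and access_denied responses.

```python
# Minimal sketch of the device side of RFC 8628 (Device Authorization Grant).
# The URLs and client_id below are hypothetical placeholders.
import time
import requests

AUTH_SERVER = "https://auth.example.com"  # hypothetical authorization server
CLIENT_ID = "smart-tv-app"                # hypothetical public client

# Step 1: ask the service for a device code, user code, and verification URI.
grant = requests.post(f"{AUTH_SERVER}/device_authorization",
                      data={"client_id": CLIENT_ID}).json()
print(f"Visit {grant['verification_uri']} and enter code {grant['user_code']}")

# Step 2: poll the token endpoint until the user authorizes or the code expires.
deadline = time.time() + grant["expires_in"]
while time.time() < deadline:
    token = requests.post(f"{AUTH_SERVER}/token", data={
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": grant["device_code"],
        "client_id": CLIENT_ID,
    }).json()
    if "access_token" in token:
        # The device is now authorized to act on the user's behalf.
        break
    if token.get("error") != "authorization_pending":
        raise RuntimeError(token)  # denied, expired, etc. (slow_down omitted)
    time.sleep(grant.get("interval", 5))
```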

RFC8628 specifically states what this device authorization flow was designed for:

This OAuth 2.0 [RFC6749] protocol extension enables OAuth clients to request user authorization from applications on devices that have limited input capabilities or lack a suitable browser. Such devices include smart TVs, media consoles, picture frames, and printers, which lack an easy input method or a suitable browser required for traditional OAuth interactions.
The device authorization grant is not intended to replace browser-based OAuth in native apps on capable devices like smartphones.
The operating requirements for using this authorization grant type are: … The user has a secondary device (e.g., personal computer or smartphone) from which they can process the request.

This process is specifically tailored to devices with limited input and limited purpose. These devices delegate the authentication and authorization to a second device. The user code exists solely to match these devices together for this authorization handoff. Afterwards, the device receives an access token and refresh token to continue acting on behalf of the user with the service.

With that high-level view in mind, a familiar pattern emerges. Recall Uber's breach: a privileged user was convinced to grant access to a remote threat. This grant went out of bounds by jumping between the threat actor's client and the Uber contractor's phone. Neither was on the same network, connected by Bluetooth or WiFi, or plugged into one another with USB or Thunderbolt. In other words, the push notification by design facilitates an out-of-bounds authentication. Similarly, any token or password that a user has to transcribe is information that goes out of bounds between devices or processes.

Avoiding out-of-bounds

If you're looking to move on from TOTPs and OTPs, WebAuthn is the choice to make. It binds the authentication to the device or a security key, the website it is authenticating to, and the user who is authenticating. In fact, it may even become the cornerstone of a passwordless future.

🔑How does a FIDO security key limit the hacks we're seeing in the news now?🔑 Beyond fun to work with @Yubico & partner with @Twitter to answer that question + demo how social engineering is used to steal passwords & siphon out MFA codes to gain admin access with @EvanTobac.

That said, what about this two-device scenario? I've seen some interesting designs by Google here. A Chromecast starts up with a custom WiFi name like Chromecast5361, and an app is used to set up and authenticate the device with both the network and Google as a service. Sure, there is a code, but it does not authenticate the device with the service! The code only helps a user confirm they are pairing with the intended nearby device; it looks like authentication, but on its own it grants nothing.

A screenshot of a chromecast setup screen. It shows four characters which are to be verified on another device.

A screenshot from an iphone, showing the second step of linking a chromecast. It says to look and verify the code is the same on this screen and the chromecast.

On the Apple TV, the YouTube app offers two methods to add a user account. The first involves an on-screen keyboard, but that's not interesting for our discussion. The second is another app-based integration which operates in-bounds over the network.

I was amazed at how fluidly this experience flowed. There was no code entry, just confirmation as if logging into any other app. Without a code or password, the chance of being phished is further minimized.

The closest thing I've seen in Apple's ecosystem is their phone migration process. I wish setting up a new device with iCloud was as smooth as this.

Four iphone screenshots showing a migration between two devices. The old device recognizes that there is a new device and there is a complicated particle animation to link the two together. The new iphone asks for the PIN code and how to transfer the data, either by cloud, or directly over wifi.

An in-bounds experience requires a much larger technical lift to be successful. However, the results are clear: it is convenient, more secure, and involves no passwords or passcodes! I have hopes that, like WebAuthn, an in-bounds solution for setting up input-constrained devices will be standardized and made available outside of Google's ecosystem.

Device auth codes gone wrong

Remember what this design was intended for: authorizing devices with limited input and limited purpose using a second device. When these out-of-bounds codes break this contract, the consequences are scary. Jenko Hwong has been presenting on this topic for several years now. At DEFCON 30, I attended his presentation "OAuth-some Security Tricks: Yet more OAuth abuse." Unfortunately, I cannot find a recording of this specific presentation, but a few others have surfaced since.

[Embedded YouTube video: one of Jenko Hwong's OAuth abuse presentations.]

A few of the slides below come from Jenko's presentations.

A slide from Jenko's presentation showing a phishing email pretending to be from Microsoft Office 365. There is an offer to increase disk space to 1 terabyte. It says to go to an official Microsoft link and enter a product code. In truth, the product code is a device authorization user code, and by filling this out, you are giving the remote device or application permission to access your entire account.

While device authorization codes have a limited lifetime by design, opportunistically prompting the user to enter a remote user code into the official device authorization process can be catastrophic. Jenko shows how Office 365 authorization can be phished and then exchanged for Active Directory or Azure cloud access. This is possible because Microsoft considers first-party access tokens special and, for user convenience, they can be used across other applications. By permitting a device authorization to act with unlimited purpose, Microsoft has given phishers a valuable tool that is actively exploited.

Another slide from Jenko's presentation. It shows a logging screen during device code authorization phishing. There is little to no effective logging of device code authorizations available to administrators.

Speaking of lateral movement by a threat actor, Microsoft is opening the door to linking personal Microsoft accounts and Active Directory accounts. Why? So that people who use Bing Rewards can collect them on both their workforce and personal accounts and redeem them on either.

Microsoft now offers the ability to link Azure Active Directory accounts to personal Microsoft accounts. It will be enabled by default, so Threat Actors can compromise both your business and your home life, essentially doubling the capabilities of Threat Actors. Very cool
This is the pettiest case I have ever seen of breaking open a security boundary. There might be a way to do this securely out of band, but the next feature that gets added like this will likely wedge this security risk wide open.

While I praised Google's security for the end user above, their cloud CLI made a terrible blunder by intentionally including an out-of-bounds authorization code that the user must copy and paste from the web browser into the console to authorize the gcloud CLI tool.

Three screenshots side by side. The first says to choose an account. The second says Google Cloud SDK wants to access your account. The third shows a code to copy and paste into a remote application. Note that this copy and paste means that secure material goes out of bounds.

Similar to the Microsoft phish above, but in reverse: you acquire secure material and put it into a Google Cloud-branded (quite poorly branded, I must say) form with instructions and a text box. By entering that secure material out of bounds, the threat actor can access your Google Cloud account and all of its resources as if they were at a command line on your own machine.

When Jenko Hwong (via Netskope) brought this to Google's attention, Google was receptive to eliminating this security risk! They immediately restricted lateral access and announced a sunset period for clients using the out-of-bounds authentication. However, Google did not acknowledge Jenko or Netskope in their announcement.

This out-of-bounds profile is no longer available; it was sunset on October 3rd, 2022. In their sunset announcement, they reference the security considerations in RFC8252, which specifically advises adoption of RFC7636 Proof Key for Code Exchange (PKCE). You'll see why this is important soon.

Authorizing local applications in-bounds

If you are writing or maintaining a CLI tool or desktop application, do not rely on the user to copy a code into your application. Instead, host an HTTP server on a random local port to which the authorization server can redirect the browser. Your local application contacts the authorization server over HTTPS with certificate verification, but since the local server itself speaks plain HTTP, we need one more protection to ensure this process is completely in-bounds and secure against local threats.

To recap: we're talking about the user, the authorization service, and the process that wishes to be authorized. The user may authenticate with the authorization service, but the user is not authenticating with the process. The process performs a handshake with the authorization service, which authenticates the user and confirms their consent to authorize that process. For an in-bounds authorization to occur, a user with an honest client must be able to authorize that client without those credentials being recovered by a threat actor.

CLI tools like wrangler are public clients. There is no way to confirm whether a public client is in the hands of an honest user or a malicious imposter. This is important, but we will be focusing on honest public clients. However, what if a malicious proxy or local packet monitor, such as Wireshark, recovers the data and races to claim the access token first? To prevent this, we need to bind the process asking for authorization to the authorization flow so that no other process can redeem the credentials. This can be done with PKCE!
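Here is a minimal sketch of that in-bounds local flow in Python. The authorization server URL and client_id are hypothetical, and a production tool should lean on a vetted OAuth library rather than hand-rolling this.

```python
# Minimal sketch of an in-bounds local authorization using OAuth with
# PKCE (RFC 7636). The server URL and client_id are hypothetical.
import base64
import hashlib
import http.server
import secrets
import urllib.parse
import webbrowser

import requests

AUTH_SERVER = "https://auth.example.com"  # hypothetical authorization server
CLIENT_ID = "my-cli-tool"                 # hypothetical public client

# 1. PKCE: a random secret verifier, committed as its SHA-256 hash.
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
challenge = base64.urlsafe_b64encode(
    hashlib.sha256(verifier.encode()).digest()).rstrip(b"=").decode()

# 2. Loopback redirect target on a random free port (port 0 = pick one).
auth_code = None

class Callback(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        global auth_code
        query = urllib.parse.urlparse(self.path).query
        auth_code = urllib.parse.parse_qs(query).get("code", [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Authorized. You may close this tab.")

server = http.server.HTTPServer(("127.0.0.1", 0), Callback)
redirect_uri = f"http://127.0.0.1:{server.server_port}/"

# 3. The browser carries the challenge to the authorization server.
webbrowser.open(f"{AUTH_SERVER}/authorize?" + urllib.parse.urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": redirect_uri,
    "code_challenge": challenge,
    "code_challenge_method": "S256",
}))
server.handle_request()  # block until the redirect delivers ?code=...

# 4. Only the process holding the verifier can redeem the code.
token = requests.post(f"{AUTH_SERVER}/token", data={
    "grant_type": "authorization_code",
    "code": auth_code,
    "redirect_uri": redirect_uri,
    "client_id": CLIENT_ID,
    "code_verifier": verifier,
}).json()
```

The key property: the code_verifier never leaves this process until the final HTTPS token exchange, so a proxy that observed the challenge in the browser redirect cannot redeem the authorization code.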

By using OAuth with PKCE, Cloudflare's wrangler CLI achieves an in-bounds authorization on the same device without risk of local threats recovering the credential.

Thankfully, Google Cloud's gcloud CLI has changed to do the same! Both Cloudflare and Google use PKCE with a SHA-256 challenge to bind the honest local process to the authorization flow. This is far safer than Google's previous design, where users copied an authorization code grant from the browser into the console.

A screenshot of google cloud command line interface app logging in. It shows the URL to copy (if needed) and includes a code challenge and a code challenge method in the request query parameters. These parameters are essential to the Proof Key for Code Exchange approach.

By the way, you can spot whether a local OAuth client is using PKCE by looking for code_challenge and code_challenge_method in the authorization request. It is best when code_challenge_method is S256 (a SHA-256 hash) rather than plain. In short, a client using PKCE creates a commitment (code_challenge) and later proves it to the authorization server; only then will the client receive the access token. A sketch of that check follows below.

PKCE technically allows the commitment to be plain, in which case the prover just presents the same plain value again. If the traffic is being intercepted by a proxy, that offers little protection. If the commitment is a hash, however, anyone intercepting traffic cannot guess the verifier that will be revealed at the end in exchange for the access token. At this time, the only options are S256 and plain.

You might be wondering: what if the redirect_uri is changed away from localhost to an attacker's endpoint? OAuth requires that the redirect_uri match an allow list, and these days exact string matching is advised. That is: no other paths, no prefix paths, no hostname suffixes; "abc" must be "abc". The only exception is public clients with a loopback hostname. Why? Local ports may conflict, so the port is the only component permitted to differ. For public clients that are expected to redirect to a local URI, a remote URI (that is, one pointing to an attacker) is never permitted.
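To make the commitment scheme concrete, here is a sketch of the check the authorization server performs at token exchange time. The helper names are mine, not from any particular library.

```python
# Sketch of the server-side PKCE check at token exchange: the revealed
# code_verifier must match the code_challenge committed earlier.
import base64
import hashlib

def s256(verifier: str) -> str:
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def challenge_matches(code_challenge: str, code_verifier: str, method: str) -> bool:
    if method == "S256":
        # An eavesdropper who saw only the hash cannot produce the verifier.
        return s256(code_verifier) == code_challenge
    # "plain": the verifier is the challenge itself; interception defeats it.
    return code_verifier == code_challenge
```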

Vulnerable auth code culture

In the bad old days, users would put their bank credentials into their financial planning software — oh, wait… that's still happening.

Let me try again.

In the bad old days, users reused passwords across many sites, and now — well, that's still happening, too.

In the bad old days, users would get two-factor codes on their phones and no one knew what SIM swapping was… what can I say?

Look at the evolution of our culture around security, and you will find it sluggishly moves forward towards more secure practices — but there's a long tail of poor practices perpetuated by capable organizations. For example, I hear that Mint, a financial planning Software as a Service, would take your bank credentials directly. Hopefully they have some official integration process in place now, but I won't be trying them out. These tech companies lead by example, thereby convincing their users that it is acceptable and safe to enter their usernames and passwords into alternative places.

While many tech leaders are still leading by poor example, we are seeing WebAuthn grow in adoption and accessibility. This improves the authentication story, but what about the authorization story?

OAuth is a step in the right direction for authorization. It specifies patterns for cross-platform, secure implementations resilient to known attacks. But, like any framework, there are ways to extend OAuth that make it insecure. Google found out the hard way by having authorization code grants copied out of bounds into the console. Microsoft is still finding out the hard way by allowing their official clients to use device auth codes for Office 365.

Threats aren't targeting your passwords and SIM cards for the sake of their contents. They are going after what they can access under a privileged authorization. After all, if the information the threat wants were in the clear, attackers wouldn't need to bother with impersonating someone. Authentication is just the entry point to getting authorization to access something. When an attacker can easily attack the authorization phase of accessing a resource, they will gladly take the easier path.

All it takes is for the authorization server to permit a widely-scoped authorization request through and for one fatigued user to make a mistake and copy and paste that code to a threat actor. Or, for a threat actor to send a link tied to an existing device auth code for an official tool. If privileged team members log in every day with a tool that asks them to paste a code into an official device authorization page, how much of a stretch is it to phish them to paste in one more code from somewhere else?

This threat story keeps me up at night. Jenko's talk at DEFCON 30 alerted me to this risk and I have seen device auth codes with my own eyes at work.

Conclusion

Device auth codes solve an important usability problem, but they should only be used for limited-input devices with limited purpose. Just as SMS MFA and TOTP codes can be phished, so can device auth codes. But unlike MFA tokens, device auth codes are cheaper to phish: no passwords, malware, or brute forcing needed.

Google has proven that secure and convenient in-bounds authorization for limited-input devices is possible. I hope that more companies adopt this technology and that it becomes standardized, so that device auth codes are deprecated and, eventually, disabled.

Command line tools are improperly using device auth codes instead of local redirection. This promotes an unsafe cultural expectation that this behavior is secure, when in fact it is vulnerable to phishing. Instead, these tools should use local OAuth with PKCE. Thankfully, some cloud providers and services already use this method.

We need better from tech companies, especially those we trust to run our businesses. If you see a CLI tool that asks you to copy tokens to authorize it, then escalate immediately and demand better from that software as a service or cloud provider.