Device OAuth Flow is Phishable - 2022-12-12
Have you ever tried to type a username and password on a smart TV with a remote? It sucks. Get the password wrong and it is a bad time.
There is a solution!
Device Authorization enables a second device to bestow access with a user's active consent. In short, the device can present a one-time passcode (OTP) to the user, and they can transcribe that into an authorization web page or app on their phone or computer. After authorizing the device, it then appears logged in and can perform its functions with the user's preferences.
One problem: transcribing device codes is phishable, and it trains users to think this is a safe activity. Device auth codes effectively bypass unphishable two-factor security.
In this article, I cover why device authorization codes exist, their benefits, and the superior technologies that replace them. Next, I share how these codes and similar patterns can be and have been abused. After that, I cover which technologies local applications should use, and I wrap up by remarking on how security culture changes over time... slowly.
Before Device Auth Flow
These devices are not made for convenient input. Any means to authenticate is an afterthought to the inputs they offer. After all, a printer shouldn't be an intimidating monolith of buttons — it just needs to be obvious how to use 95% of its functionality. So, we end up in situations where textual input is through number pads, arrow buttons, or even down to a dial and a push button. I have entered a WiFi password on such a printer before. It was the definition of a bad time.
The situation isn't much better when you have a remote to do this on a screen. Sure, it might show up alphabetically or with a QWERTY interface, but inputting text is time-consuming, awkward, and error-prone.
Device Authorization Codes
Wouldn't it just be easier if an input-constrained device could borrow another for this infrequent use case? Rather than the complicated machinery of having a cell phone or computer somehow hook up as a keyboard to the constrained device, we have a prompt to input a one-time code into a trusted website or application on a cell phone or computer where it should already be authenticated.
This idea caught on, and we now have an OAuth standard for it! RFC8628 OAuth 2.0 Device Authorization Grant describes a process where the input-constrained device requests a device code, user code, and verification URI (such as spotify.com/pair). The user code is that short passcode for the user to transcribe and the device code acts like a browser cookie, so the service remembers the device. This code only lasts so long, though — usually 5 to 30 minutes. Both the user code and device code are generated by the service, which prevents an attacker from predicting the codes a specific device will use.
The device will regularly ask the service if it has been authorized yet until the device code expires. In the meantime, the user is expected to follow those instructions to authorize the device to access their profile and act on their behalf with the service.
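At the protocol level, the flow above is just two HTTP calls plus polling. Here is a minimal sketch of what the constrained device does, assuming placeholder endpoint URLs and a hypothetical client_id (substitute your provider's real values); real servers also signal "authorization_pending" via HTTP 400 responses, which this sketch glosses over:

```python
import json
import time
import urllib.parse
import urllib.request

# Placeholder endpoints and client -- assumptions, not a real service.
DEVICE_ENDPOINT = "https://auth.example.com/device/code"
TOKEN_ENDPOINT = "https://auth.example.com/token"
CLIENT_ID = "my-tv-app"


def post_form(url, fields):
    """POST application/x-www-form-urlencoded and decode the JSON reply."""
    data = urllib.parse.urlencode(fields).encode()
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
        return json.loads(resp.read())


def next_interval(interval, error):
    """RFC 8628 section 3.5: on 'slow_down', add 5 seconds to the poll interval."""
    return interval + 5 if error == "slow_down" else interval


def device_flow():
    # Step 1: the constrained device asks for a device_code/user_code pair.
    grant = post_form(DEVICE_ENDPOINT, {"client_id": CLIENT_ID, "scope": "profile"})
    print(f"Visit {grant['verification_uri']} and enter {grant['user_code']}")

    # Step 2: poll the token endpoint until the user approves or the code expires.
    interval = grant.get("interval", 5)
    deadline = time.monotonic() + grant["expires_in"]
    while time.monotonic() < deadline:
        time.sleep(interval)
        reply = post_form(TOKEN_ENDPOINT, {
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": grant["device_code"],
            "client_id": CLIENT_ID,
        })
        if "access_token" in reply:
            return reply  # contains access_token and usually refresh_token
        interval = next_interval(interval, reply.get("error"))
    raise TimeoutError("device code expired before the user authorized it")
```

Note how the user code travels out-of-bounds by design: nothing in this exchange ties the code on the TV screen to the browser session that approves it.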
RFC8628 specifically states what this device authorization flow was designed for:
This OAuth 2.0 [RFC6749] protocol extension enables OAuth clients to request user authorization from applications on devices that have limited input capabilities or lack a suitable browser. Such devices include smart TVs, media consoles, picture frames, and printers, which lack an easy input method or a suitable browser required for traditional OAuth interactions.
The device authorization grant is not intended to replace browser-based OAuth in native apps on capable devices like smartphones.
The operating requirements for using this authorization grant type are: … The user has a secondary device (e.g., personal computer or smartphone) from which they can process the request.
This process is specifically tailored to devices with limited input and limited purpose. These devices delegate the authentication and authorization to a second device. The user code exists solely to match these devices together for this authorization handoff. Afterwards, the device receives an access token and refresh token to continue acting on behalf of the user with the service.
With that high-level view in mind, a familiar pattern emerges. Recall Uber's breach: a privileged user was convinced to grant access to a remote threat. This grant went out of bounds by jumping between the threat actor's client and the Uber contractor's phone. Neither were on the same network, connected by Bluetooth or WiFi, or plugged in to one another with USB or Thunderbolt. In other words, the push notification by design facilitates an out-of-bounds authentication. Similarly, any token or password that a user has to transcribe is information that goes out-of-bounds between devices or processes.
If you're looking to move on from TOTPs and OTPs, WebAuthn is the choice to make. It binds the authentication to the device or a security key, the website it is authenticating to, and the user who is authenticating. In fact, it may even become the cornerstone of a passwordless future.
That said, what about this two-device scenario? I've seen some interesting designs by Google here. A Chromecast starts up with a custom WiFi name like Chromecast5361, and an app is used to set up and authenticate the device with both the network and Google as a service. Sure, there is a code, but it does not authenticate the device with the service! This code merely helps a user distinguish nearby devices from one another — identification, not authentication.
On the Apple TV, the YouTube app offers two methods to add a user account. The first involves an on-screen keyboard, but that's not interesting for our discussion. The second is another app-based integration which operates in-bounds over the network.
I was amazed at how fluidly this experience flowed. There was no code entry, just confirmation as if logging into any other app. Without a code or password, the chances of being phished are further reduced.
The closest thing I've seen in Apple's ecosystem is their phone migration process. I wish setting up a new device with iCloud was as smooth as this.
An in-bounds experience requires a lot more technology lift to be successful. However, the results are clear: it is convenient, more secure, and does not involve passwords or passcodes! I hope that, like WebAuthn, an in-bounds solution for setting up input-constrained devices will be standardized and made available outside of Google's ecosystem.
Device auth codes gone wrong
Remember what this design was intended for: authorizing devices with limited input and limited purpose using a second device. When these out-of-bounds codes break this contract, the consequences are scary. Jenko Hwong has been presenting on this topic for several years now. At DEFCON 30, I attended his presentation "OAuth-some Security Tricks: Yet more OAuth abuse." Unfortunately, I cannot find a recording of this specific presentation, but a few others have surfaced since.
A few of the slides below come from Jenko's presentations.
While device authorization codes have a limited lifetime by design, tricking the user into entering an attacker-initiated user code into the official device authorization page can be catastrophic. Jenko shows how Office 365 authorization can be phished and then exchanged for Active Directory or Azure cloud access. This is possible because Microsoft considers first-party access tokens special and, for user convenience, they can be used across other applications. By permitting a device authorization to act with unlimited purpose, Microsoft has given phishers a valuable tool that is actively exploited.
Speaking of lateral movement by a threat actor, Microsoft is opening the door to linking personal Microsoft accounts and Active Directory accounts. Why? So that people who use Bing rewards can collect them on both their workforce and personal accounts and redeem them on either.
While I praised Google's security for the end user above, their cloud CLI had made a terrible blunder by intentionally including an out-of-bounds authorization code that the user must copy and paste from the web browser into the console to authorize the gcloud CLI tool.
When Jenko Hwong (via Netskope) brought this to Google's attention, Google was receptive to eliminating this security risk! They immediately restricted lateral access and announced a sunset period for clients using the out-of-bounds authentication. However, Google did not acknowledge Jenko or Netskope in their announcement.
Authorizing local applications in-bounds
If you are writing or maintaining a CLI tool or desktop application, do not rely on the user to copy a code into your application. Instead, host an HTTP server on a random loopback port that the authorization server can redirect the browser to. Your local application contacts the authorization server over HTTPS with certificate verification, but since the local listener itself serves plain HTTP, we need one more protection to ensure this process is completely in-bounds and secure against local threats.
To recap: we're talking about the user, the authorization service, and the process that wishes to be authorized. The user may authenticate with the authorization service, but the user is not authenticating with the process. The process performs a handshake with the authorization service, which authenticates the user and confirms consent to authorize that process. For an in-bounds authorization to occur, a user with an honest client must be able to authorize that client without those credentials being recovered by a threat actor.
CLI tools like wrangler are public clients. There is no way to confirm the identity of a public client in the hands of an honest user or a malicious imposter. This is important, but we will be focusing on honest public clients. However, what if a malicious proxy or local packet monitor, such as Wireshark, recovers the redirect data and races to claim the access token first? In this case, we need to bind the process asking for authorization to the authorization service so that no other process can recover the credentials. This can be done with PKCE!
By using OAuth with PKCE, Cloudflare's wrangler CLI achieves an in-bounds authorization on the same device without risk of local threats recovering the credential.
Thankfully, Google Cloud's gcloud CLI has changed to do the same! Both Cloudflare and Google use PKCE with a SHA-256 challenge to bind the honest local process to the authorization flow. This is far safer than Google's previous approach of asking the user to copy an authorization code from the browser into the console.
Vulnerable auth code culture
In the bad old days, users would put their bank credentials into their financial planning software — oh, wait… that's still happening.
Let me try again.
In the bad old days, users reused passwords across many sites, and now — well, that's still happening, too.
In the bad old days, users would get two-factor codes over their phones and no one knew what SIM swapping was… what can I say?
Look at the evolution of our culture around security, and you will find it sluggishly moves forward towards more secure practices — but there's a long tail of poor practices perpetuated by capable organizations. For example, I hear that Mint, a financial planning Software as a Service, would take your bank credentials directly. Hopefully they have some official integration process in place now, but I won't be trying them out. These tech companies lead by example, thereby convincing their users that it is acceptable and safe to enter their usernames and passwords into alternative places.
While many tech leaders are still leading by poor example, we are seeing WebAuthn grow in adoption and accessibility. This improves the authentication story, but what about the authorization story?
OAuth is a step in the right direction for authorization. It specifies patterns for cross platform secure implementations resilient to known attacks. But, like any framework, there are ways to extend OAuth that make it insecure. Google found out the hard way by having authorization code grants copied out of bounds into the console. Microsoft is still finding out the hard way by allowing their official clients to use device auth codes for Office 365.
Threats aren't targeting your passwords and SIM cards for the sake of their contents. They are going after what they can access under a privileged authorization. After all, if the information the threat wants were in the clear, attackers wouldn't need to bother with impersonating someone. Authentication is just the entry point to get authorization to access something. When an attacker can easily attack the authorization phase of accessing a resource, they will gladly take the easier path.
All it takes is for the authorization server to permit a widely-scoped authorization request through and for one fatigued user to make a mistake and copy and paste that code to a threat actor. Or, for a threat actor to send a link tied to an existing device auth code for an official tool. If privileged team members log in every day with a tool that asks them to paste a code into an official device authorization page, how much of a stretch is it to phish them to paste in one more code from somewhere else?
Device auth codes solve an important usability problem, but they should only be used for limited-input devices with limited purpose. Just as SMS MFA and TOTP codes can be phished, so can device auth codes. But unlike MFA tokens, device auth codes are cheaper to phish: no passwords, malware, or brute forcing needed.
Google has proven that secure and convenient in-bounds limited-input device authorization is possible. I hope that more companies adopt this approach and that it becomes standardized so that device auth codes are deprecated and, eventually, disabled.
Command line tools are improperly using device auth codes instead of local redirection. This promotes an unsafe cultural expectation that this behavior is secure when in fact it is vulnerable to phishing. Instead, these tools should use local OAuth with PKCE. Thankfully, some cloud providers and services already do utilize this method.
We need better from tech companies, especially those we trust to run our businesses. If you see a CLI tool that asks you to copy tokens to authorize it, then escalate immediately and demand better from that software as a service or cloud provider.