Breaking Into Secure Facilities With OSDP


Facilities like hospitals, banks, data centers, airports, power and natural gas plants, and government institutions secure their properties with authorization hardware built on the Open Supervised Device Protocol (OSDP). Unfortunately, OSDP has both design weaknesses and common installation practices that can realistically be exploited. OSDP advertises itself as an encrypted protocol, yet many installations use unencrypted modes. While it has defenses against trivial replay attacks, its message counter is so small that, with enough samples, one could replay communications on the wire. It also uses a truncated Message Authentication Code (MAC), which exposes OSDP systems to brute-force attacks. And lastly, OSDP is by design easy to misuse: installers can leave the controller perpetually in "install" mode, which allows any device to ask for another device's secret credentials, unencrypted, on a shared communication line.

An example badge scanner that unlocks a garage door. It uses OSDP.

This talk summary is part of my DEF CON 31 series. The talks this year have sufficient depth to be shared independently and are separated for easier consumption.


This next talk was on the DEF CON official track and I viewed it from the hotel room before setting off for the day. AltF4 and Shad0 from Bishop Fox share several problems with OSDP: how the technology itself can be exploited, and how real-world implementations make matters worse.

The presentation

Several scanners connect to a common communication bus and each scanner controls an individual door. Wiegand, the previous technology of choice, was not encrypted and would allow a threat actor to connect, listen, and replay signals on the wire. At face value, OSDP mitigates simplistic replay attacks: it uses counters and authenticated messages from each door scanner, and the messages are encrypted. It mitigates the low-effort attacks that Wiegand was vulnerable to. With more effort, these attacks are still possible.

RFID badge setup, showing an encrypted tunnel between a reader and controller (which, most of the time, is not the reality)

Encryption is a recommendation, not a requirement, and the recommendation is not broadly followed. A threat actor can connect at one door, collect traffic from all doors on the same wire bus, and use that information for a precise and targeted attack.

In practice, the counter has an incredibly short range of only one and a half bits, as it loops between 1 and 3! When the unit resets, it starts at 0, increments to 1, then 2, then 3, and wraps back to 1. With enough door authorizations collected, a threat actor could replay a message from one door scanner to the authorization server.
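
To see how little room that counter gives, here is a toy model of the wrap-around behavior. This is a simplification I wrote for illustration; the real protocol ties more state to each message, but the sequence numbers themselves cycle just like this:

```python
# Toy model of OSDP's sequence counter: after a reset it starts at 0,
# then cycles 1 -> 2 -> 3 -> 1 forever.
def next_seq(seq: int) -> int:
    return 1 if seq >= 3 else seq + 1

seq = 0
seen = []
for _ in range(10):
    seq = next_seq(seq)
    seen.append(seq)

print(seen)  # [1, 2, 3, 1, 2, 3, 1, 2, 3, 1]
# With only three possible values, any recorded message's sequence number
# comes around again within two messages, so a replayed capture only has
# to wait (or be retried) until the counter lines up.
```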

The authentication is weak, with a 32-bit Message Authentication Code (MAC). It is calculated with AES-CBC using one key for all but the last block, and another key for the last block. While the construction is unique, the bigger issue is how few bits are sent.

… in order to reduce the message size and transmission time overhead, the messages will contain only a partial MAC. For messages … the first four bytes of the computed MAC are actually sent.
you-dense

The full length of the MAC is 128 bits. With such a small counter and only 32 bits of MAC transmitted, these messages can be brute-forced.
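
As a rough illustration of the construction described above, here is a sketch of a two-key AES-CBC MAC truncated to four bytes. The key values, zero IV, padding rule, and function names are my assumptions; the OSDP specification defines the exact derivation and chaining, so treat this as a sketch rather than a faithful implementation.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes

def pad(data: bytes) -> bytes:
    # Pad to a whole number of blocks (0x80 then zeros; the real padding
    # rule in the specification may differ).
    data += b"\x80"
    return data + b"\x00" * (-len(data) % BLOCK)

def two_key_cbc_mac(smac1: bytes, smac2: bytes, message: bytes) -> bytes:
    padded = pad(message)
    iv = b"\x00" * BLOCK
    head, last = padded[:-BLOCK], padded[-BLOCK:]
    if head:
        # All blocks except the last are chained under the first key.
        enc1 = Cipher(algorithms.AES(smac1), modes.CBC(iv)).encryptor()
        iv = (enc1.update(head) + enc1.finalize())[-BLOCK:]
    # The final block is encrypted under the second key.
    enc2 = Cipher(algorithms.AES(smac2), modes.CBC(iv)).encryptor()
    return enc2.update(last) + enc2.finalize()  # full 128-bit MAC

full_mac = two_key_cbc_mac(b"\x01" * 16, b"\x02" * 16, b"osdp_POLL example")
print(full_mac.hex())      # 16 bytes are computed...
print(full_mac[:4].hex())  # ...but only these 4 bytes (32 bits) go on the wire
```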

popcorn
FIPS does allow truncated MACs so long as the number of authentication attempts is limited and the time between attempts is limited. An unmonitored OSDP installation may allow unlimited attempts.
clap
NIST is now considering requiring at least 96 bits to authenticate messages. Should NIST change the requirements to a minimum of 96 bits for authentication tags and MACs in general, we should see an improvement in the hardware and digital products purchased by the U.S.
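
Some back-of-the-envelope arithmetic shows why the gap between 32 and 96 bits matters. The guess rate below is an assumption for illustration only, not a measurement of any real installation:

```python
# Rough cost comparison for guessing an authentication tag by brute force,
# assuming (purely for illustration) 1,000 guesses per second against an
# unmonitored, rate-unlimited installation.
rate = 1_000  # guesses per second; an assumption, not a measurement

days_32 = 2**32 / rate / 86_400
years_96 = 2**96 / rate / 86_400 / 365

print(f"32-bit tag: about {days_32:.0f} days to try every value")    # ~50 days
print(f"96-bit tag: about {years_96:.1e} years to try every value")  # ~2.5e+18 years
```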

Additionally, a mistake in practice is that installers do not always turn off "install mode" on the control servers. When the system is in install mode, any door authorization device can request key material in the clear over the shared data wire. A malicious device that knows the hardware ID of another device can request that device's key material, and any malicious device can passively listen on the shared communication wire to gather key material as it is exchanged.
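
Here is a toy simulation of that failure mode. The message shapes, device IDs, and the Controller and SharedBus classes are all hypothetical, not the real OSDP flow; the point is only that, in install mode, key material is handed to whoever asks and crosses a wire that every connected device can hear:

```python
# Toy simulation of an install-mode controller on a shared bus.
import secrets

class Controller:
    def __init__(self, install_mode):
        self.install_mode = install_mode
        self.keys = {"reader-7": secrets.token_bytes(16)}  # per-device secret

    def handle(self, request):
        if request["type"] == "get_key" and self.install_mode:
            # No authentication of the requester, no encryption on the bus.
            return {"device": request["device"],
                    "key": self.keys.get(request["device"])}
        return None

class SharedBus:
    """Every device on the same RS-485 run sees every frame."""
    def __init__(self):
        self.taps = []
    def send(self, frame):
        for tap in self.taps:
            tap(frame)

bus = SharedBus()
sniffed = []
bus.taps.append(sniffed.append)        # passive malicious listener

controller = Controller(install_mode=True)
request = {"type": "get_key", "device": "reader-7"}  # attacker spoofs an ID
bus.send(request)
bus.send(controller.handle(request))

print("passively captured:", sniffed)  # includes the 16-byte secret key
```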

Their recommendation — which I agree with — is to bring door authorization devices to the server and exchange key material on an isolated control-plane connection in person. In their words: "Don't install in production."

For more detail from the authors, check out Dan Petro's write up: Badge of Shame - Breaking Into Secure Facilities with OSDP.

Final thoughts

OSDP costs $200 if you want the specification. Obscuring designs from researchers and students repeats a history of avoidable vulnerabilities in systems that protect hospitals, infrastructure, and government facilities, the very history OSDP was supposed to improve on over its predecessor, Wiegand. For example, TETRA's radio cryptography is insecure, was inaccessible to researchers, and is used by healthcare, emergency services, law enforcement, and infrastructure.

The Security Industry Association should involve cryptographers as they improve their standards going forward and, at the very least, make the cryptography freely available for inspection. Cryptography students love to examine what is available for free. Just do not treat them like Threema did when researchers published Three Lessons from Threema.

There’s a new paper on Threema’s old communication protocol. Apparently, today’s academia forces researchers and even students to hopelessly oversell their findings. Here’s some real talk: threema.ch/bp/new-paper-on-old-threema-protocol
@ThreemaApp You've got to be kidding, right? Even if the findings are more theoretical in nature and the problems have been fixed, the arrogant way you chose to communicate about this makes me lose a lot of trust in your product. Some humility would have been in order.