Review of TLS Mastery by Michael W Lucas - Part 1


This week I received a physical copy of TLS Mastery by Michael W Lucas, an author with 25 publications attributed to him on Amazon. Of his non-fiction publications, most focus on configuring and maintaining dedicated servers and server software, with a bias towards BSD and software also available on Linux platforms. Reviews on Michael's Amazon author page describe his work as intended for the system administration space.

As a software engineer, as an architect, as a system administrator, as a self-taught cryptographer, I wanted to see if there were gaps in my practical understanding of TLS technology and administration.

TLS Mastery

The following is the description included for the book on the purchasing page:

Transport Layer Security, or TLS, makes ecommerce and online banking possible. It protects your passwords and your privacy. Let’s Encrypt transformed TLS from an expensive tool to a free one. TLS understanding and debugging is an essential sysadmin skill you must have.

Two years ago, I was responsible for bringing 5000+ white label ecommerce websites onto HTTPS; the fear that Google Chrome would mark all HTTP websites as scary motivated this project. However, when researching how to accomplish this in an automated way, I found that Amazon Certificate Manager simply did not have the capacity for this many certificates, and no API-capable appliances existed to terminate TLS for mass amounts of domains in front of an origin.

Further, due to cost concerns and a limited window of development time, Let's Encrypt was chosen as the certificate authority. I wrote a centralized tracking and renewal daemon and a white label onboarding API, which used an RFC 8555 (ACME) library that fulfilled challenges over Route 53 DNS. (DNS challenges are necessary for wildcard subdomains.) Finally, certificates were delivered to HAProxy TLS termination nodes sitting behind an AWS Network Load Balancer. From there, an HTTPS connection was made to an origin cluster that fulfills requests for each white label domain. Each TLS termination node runs a daemon that synchronizes certificates to the local file system when the master daemon has dirtied the certificate pool; upon reloading, that daemon (which is a parent process of HAProxy) issues a reload signal.

This process has remained practically untouched since one month post-deployment and has had flawless uptime since. Although my mail server uses certbot, there were no solutions at the time that could manage this quantity with an API to keep things in sync with a Customer Relationship Management platform.

I'm quite proud of delivering such a reliable product on time with my own design. This project is one of several that led to a promotion to team lead for core platform development at my employer.


But I digress, back to the book!

Chapter 0

As I read the book, I began to highlight, and even cross out text along the way. The first text I highlighted is as follows.

If you're seriously interested in the innards of cryptography, this book will not satisfy you.

As I read through the first half of this book, I found his assertion rang true.

What initially gave me a bad taste was Michael beating the dead horse that is Federal Information Processing Standards (FIPS), a perpetually out of date requirements certification.

Yes, it's out of date; yes, FIPS 140-2 still approves SHA-1, even though DoD PKI policy has prohibited use of SHA-1 since December 2016. FIPS may not have the latest and greatest Ed25519 and ChaCha20-Poly1305, but that was not worth my eye time. Perhaps the passage was intended to charm audiences in less favorable system settings, where the choices they can make are already dictated.

Another nitpick: the last part of Chapter 0 mentions:

You'll learn when to use TLS and when it doesn't matter.

In retrospect, I don't think he covered "when it doesn't matter".

Chapter 1

Ah hah, I know this part, cryptography. Or at least, I made a best effort last year to learn this part.

Michael first introduces the three main principles of cryptography:

  • Integrity
  • Confidentiality
  • Non-repudiation

At a surface level, he began on target.

But then things started to get a bit weird.

Public key cryptography takes thousands of times more computing power to encrypt, decrypt, and validate than symmetric algorithms. It is slow, expensive, and raises everyone's electric bill.
That last sentence sounds like bitcoin.

Several inaccuracies soon follow.

A MAC is a hash encrypted with a symmetric encryption key known only to the sender and the recipient.


No. Although a key is involved, either shared or derived through key agreement, encryption is not part of the definition. Rather, a MAC is a construct resistant to forgery under chosen-message attacks, built from a key generator, a signer (tagger), and a verifier. Further, a MAC provides data integrity and authenticity for the message that the authentication code, also known as a tag, covers.

Two examples: first, where he might be confused; second, where he certainly misuses the term.

Let's take GCM, as in AES-GCM: the key for GHASH is AES(key, [0...0]), but all further authentication computations in GCM (the GHASH polynomial evaluation) do not involve encryption.

And for the second example, let's take HMAC: a shared secret is padded (or hashed down, if too long), then XOR'd with a constant and hashed together with the message; the result is hashed again with the key XOR'd with another constant. No encryption is involved in this MAC.
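To make that concrete, here is a minimal sketch of the HMAC construction in Python, using only the standard library's hashlib (the block size and pad constants come from RFC 2104). The stdlib hmac module is imported purely to cross-check the result. Notice there is no encryption step anywhere:

```python
import hashlib
import hmac  # stdlib reference implementation, used only to cross-check

def hmac_sha256(key: bytes, message: bytes) -> bytes:
    block_size = 64  # SHA-256 block size in bytes
    # A key longer than the block size is first hashed down
    if len(key) > block_size:
        key = hashlib.sha256(key).digest()
    # Then zero-padded up to the block size
    key = key.ljust(block_size, b"\x00")
    # XOR with the two constants from RFC 2104
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    # Nested hash: H(opad || H(ipad || message)) -- no encryption involved
    inner = hashlib.sha256(ipad + message).digest()
    return hashlib.sha256(opad + inner).digest()

tag = hmac_sha256(b"shared secret", b"hello")
assert hmac.compare_digest(
    tag, hmac.new(b"shared secret", b"hello", hashlib.sha256).digest()
)
```

Everything above is hashing and XOR; the key never feeds a cipher.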

He then goes to describe how a digital signature is used in public key cryptography.

A digitally signed message is considered authentic. It has integrity, authentication, and non-repudiation. Software digitally signs a message by generating a HMAC of the message, encrypting that hash with the private key, .... Anyone with the public key (that is, everyone) can decrypt the HMAC, independently compute the hash of the message, and compare the two.
An HMAC involves a shared secret; it cannot be used to verify message integrity in a public setting, only in private ones. If Michael had just kept using "hash" it would have been fine. In fact, RFC 7515 (JWS), Appendix A.2 shows what a digital signature looks like: only public information and a normal hash are used to verify the signature.
Now you might raise a voice and say: what about HMAC JWTs! Yes! Clients cannot verify HMAC JWTs; only servers that share the secret can. For example, a server might set a cookie with an HMAC JWT to attest the user has met some business criteria, to be reviewed upon a future request to the same server. Clients may, however, verify RSA and EC signatures if they know the public key. Clients can receive an HMAC-tagged token and pass it back; if the client modifies it, it will be rejected, per one of the main principles of cryptography: integrity.

The next section that got me concerned was this:

Cryptographers constantly test algorithms. Finding flaws in cryptographic algorithms makes cryptographers happy. They get most happy when they discover a way to break a previously trusted algorithm.
First, this sounds really immature, even a bit projecting.

While some security analysts do in fact enjoy dissecting gadgets used in exploits, or take academic pride in SHAttered, which reduced a SHA-1 collision to a 110 GPU-year problem (achievable with 10-40 thousand USD on Amazon), this isn't what all cryptographers are focused on. Finding a vector like SHAttered in cryptographic primitives and protocols is like winning the lottery.

I certainly can't speak for all cryptographers, especially when I'm not recognized as one. But that left me irritated.

Moving on, some useful things did come up, such as descriptions of openssl subcommands with examples, as well as a few utility websites.

Finally, I think he could have given more detail on Server Name Indication.

Chapter 2

This chapter isn't even 10 pages; it's quite short.

It's titled TLS Connections, and it mostly goes over openssl s_client. Though for whatever reason, Michael, the author, can't make up his mind about the use of the -crlf flag, which sends \r\n instead of \n during the interactive client experience. Instead, he somehow describes -connect as its opposite, when -connect merely specifies the host to connect to.

Modern OpenSSL uses either the -connect or the -crlf options to connect to network ports. The -connect command treats ENTER as a carriage return with a line feed, while the -crlf treats ENTER as a carriage return.

The man page for openssl s_client specifies..

this option translates a line feed from the terminal into CR+LF as required by some servers.

The next bit that struck me as odd was a complaint about HTTP and HTTPS using differing ports, followed by:

Email has selfishly claimed TCP ports 25, 465, and 587

There's 65k ports dude.

If you're annoyed about setting up firewall rules for many ports at once, you're preaching to the choir but say it constructively.

He then goes on to say

Server-to-server SMTP [TLS] exists mostly to prevent large scale email capture by systems like the United States government's Carnivore.
Okay, tin foil hat. Bank websites are secure for reasons other than preventing the government from accessing your bank details. There are vastly more relevant reasons he could have pulled out and planted in this text.

The rest of the chapter discusses the intricacies of using openssl s_client: trying out different ciphers from the client side across TLS versions, and including or excluding TLS versions via the client options.

Perhaps my expectations are excessive, but this appears to be the end of debugging TLS connections. There are several online services that examine TLS behavior and supply feedback, such as Qualys SSL Labs, the CDN77 TLS Checker, or, for other cases such as email, SecureEmail by CheckTLS.

SSL Simulation

If he broke out wireshark and illustrated which bytes correspond to things, I would've been impressed. That said, if you're curious as to the actual byte by byte communication of a TLS connection, check out The Illustrated TLS Connection and The New Illustrated TLS Connection. It might make your eyes widen at the loops TLS 1.3 goes through to appear like TLS 1.2 until the last minute.

Michael, the author, seems to blame this on "middle boxes", describing them like some sort of corporate proxy appliance that all users have their traffic go through. Though, I think the reality is less conspiratorial.

Chapter 3


Around six months ago, my employer's analytics and reporting appliance (used for preparing and presenting data to stakeholders and shareholders) started to fail TLS validation in browsers. It had expired! After bouncing between a whole tech team, through the entire chain of IT, it ultimately went to the director of engineering and back down to me to get them on their way for an upcoming review of our employer's performance in the market.

As I did not have access to the server, and those who did lacked the access or ability to withdraw certificate files from it, I ultimately had to create a Certificate Signing Request (CSR), private key, etc. from scratch and pass it to the person with the financial and company authority to interact with a Certificate Authority (CA).

Once the certificate chain was retrieved, it was not in a compatible format. So part two that day involved me transforming between PEM and DER and handing it back. All teams involved were quite pleased with the end result. Though it'll probably flop when it expires in another six months.


The beginning of this chapter enumerates different types of certificates, from server, client, issuing certificates (that is: certificates which can issue certificates), as well as root certificates.


If you've ever set up a docker image that uses curl or some other process involving HTTPS calls, you may have to install the CA certificates extracted from Mozilla.

Why doesn't curl just come with it? Well, usually the operating system does. But in the case of Docker, the operating system among the container layers is a lightweight collection of binaries, with install processes and default configuration consistent with the distribution it is labeled as. And I've not seen anyone bump container versions over newer CA bundles.

The author begins to describe Certificate standards, the organizations responsible for them, as well as the standard ASN.1. He correctly mentions that ASN.1 is used in other protocols such as LDAP.

Though, his commentary on ASN.1 is a bit odd.

Fortunately, you don't need to understand the innards of ASN.1. Accept that they have been unfavorably compared to the Cliffs of Insanity by more than one developer and move on.

As a developer who has written a universal DER ASN.1 decoder and encoder, this sounded like loser talk to me. ASN.1 is like Protocol Buffers (the schema language); DER, the encoding, is like CBOR (RFC 8949) but invented in or before 1991 and then adopted as a standard. See A Layman's Guide to a Subset of ASN.1, BER, and DER for a useful introduction to the data structure. Beware: this document may have been written before you were born.

As an aside, searching for details in the cryptography space is absurdly riddled with spam and malware on search engines. Contents of A Layman's Guide to a Subset of ASN.1, BER, and DER somehow seep into the weirdest creases of the internet.

Bad Google Search Results


Michael then continues to say

ASN.1 is built by arranging objects into a tree. Each branch and leaf of the tree is identified by a numerical Object Identifier, or OID.

This is incorrect. I believe Michael either came to this notion because every X.509 extension is labeled with an OID, or he is confusing DER tags with OIDs.

The ISO has made this standard (or the latest version) unavailable to the public without payment. I consider the ISO to be a detriment to society with their regressive participation in standards forming and adoption.

To demonstrate what a tag is, let's parse out the contents of the following.

(hex/encode (asn1/encode {:type :octet-string :value "Cendyne"}))
 Octet String     Bytes      C e n d y n e
       04          07
  ┌──┬─┬─────┐ ┌────────┐   ┌──────────────┐
  │00│0│00100│ │00000111│   │43656E64796E65│
  └──┴─┴─────┘ └────────┘   └──────────────┘

    8765 4321
  ┌──┬─┬─────┐ ┌────────┐   ┌──────────────┐
  │CC│X│ tag │ │Length  │   │Content Here  │
  └─┬┴┬┴─────┘ └────────┘   └──────────────┘
    │ │
    │ └─►0 Primitive
    │    1 Constructed
   00  Universal
   01  Application
   10  Context Specific
   11  Private
Inspiration for the above ascii chart can be attributed to Figure 35 of Introduction to the subset of ASN.1 required for MMS. (Manufacturing Message Specification (MMS, ISO/IEC 9506))

In the above ascii chart, the tag is 0x04, though you may see that other sections of the tag, specifically the first 3 bits, are reserved for other meanings. If you're wondering, yes the tag can be larger than 5 bits, see how 34dnB0NlbmR5bmU looks in the ASN.1 JavaScript decoder.
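The chart above can be reproduced with a few lines of Python. This toy encoder is an illustration only: it handles just the universal class, primitive, low tag numbers, and short-form lengths, which is all the "Cendyne" example needs.

```python
def der_primitive(tag: int, content: bytes) -> bytes:
    # Universal class (class bits 00), primitive (bit 6 clear), low tag number.
    # Short-form length only: one length byte for content under 128 bytes.
    assert tag < 0x1F and len(content) < 0x80, "toy encoder: short form only"
    return bytes([tag, len(content)]) + content

# 0x04 = OCTET STRING; "Cendyne" is 7 bytes, so length byte is 0x07
encoded = der_primitive(0x04, b"Cendyne")
assert encoded.hex() == "040743656e64796e65"
```

The output matches the tag, length, and content bytes drawn in the chart: 04, 07, then the ASCII for "Cendyne".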

So, what is this about a tree?

Well if you dump the certificate response from my website, grab the first certificate in the dump, then ask openssl to display it you'll see:

        Version: 3 (0x2)
        Serial Number:
    Signature Algorithm: ecdsa-with-SHA256
        Issuer: C=US, O=Cloudflare, Inc., CN=Cloudflare Inc ECC CA-3
            Not Before: Apr  9 00:00:00 2021 GMT
            Not After : Apr  8 23:59:59 2022 GMT
        Subject: C=US, ST=California, L=San Francisco, O=Cloudflare, Inc.,
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Authority Key Identifier:

            X509v3 Subject Key Identifier:
            X509v3 Subject Alternative Name:
      ,, DNS:*

But really here's what it looks like as an object.

(({:constructed true :tag 0 :type :context-specific :value "2"}
  {:type :sequence :value "1.2.840.10045.4.3.2"}
  ( {:type :set :value ("" {:type :printable-string :value "US"})}
    {:type :set :value ("" {:type :printable-string :value "Cloudflare, Inc."})}
    {:type :set :value ("" {:type :printable-string :value "Cloudflare Inc ECC CA-3"})})
  ( {:type :utc-time :value "210409000000Z"}
    {:type :utc-time :value "220408235959Z"})
  ( {:type :set :value ("" {:type :printable-string :value "US"})}
    {:type :set :value ("" {:type :printable-string :value "California"})}
    {:type :set :value ("" {:type :printable-string :value "San Francisco"})}
    {:type :set :value ("" {:type :printable-string :value "Cloudflare, Inc."})}
    {:type :set :value ("" {:type :printable-string :value ""})})
  (("1.2.840.10045.2.1" "1.2.840.10045.3.1.7")
    {:bits 520 :encoding :base64-url :type :bit-string :value "...."})
  {:constructed true :tag 3 :type :context-specific :value (
    ("" {:encoding :base64-url :type :octet-string :value "MBaAFKXON-rrsHUOlGeItEX62SQQh5Yf"})
    ("" {:encoding :base64-url :type :octet-string :value "BBTLamRKw5PEuj3gLuL_iq0PddlTOA=="})
    ("" {:type :octet-string :value "03\x82\\x82\\x82\r*"})
    ("" {:type :boolean :value true} {:encoding :base64-url :type :octet-string :value "AwIHgA=="})
    ("" {:encoding :base64-url :type :octet-string :value "MBQGCCsGAQUFBwMBBggrBgEFBQcDAg=="})
    ("" {:encoding :base64-url :type :octet-string :value "..."})
    ("" {:encoding :base64-url :type :octet-string :value "..."})
    ("" {:encoding :base64-url :type :octet-string :value "..."})
    ("" {:type :boolean :value true} {:encoding :base64-url :type :octet-string :value "MAA="})
    ("" {:encoding :base64-url :type :octet-string :value "..."}))})
 {:type :sequence :value "1.2.840.10045.4.3.2"}
 {:bits 568 :encoding :base64-url :type :bit-string :value "..."}

Do you spy the Subject Alternative Name in there? It's an OID and has some special binary format in it. That said, you can still make out the certificate DNS in use: and *

The consistent use of OIDs and their payloads for X.509 extensions is also why I think this is a possible way Michael misunderstood ASN.1.
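For illustration: the OID for the Subject Alternative Name extension is 2.5.29.17, which DER packs into the three bytes 55 1D 11. A sketch of the decoding rule (the first byte encodes the first two arcs as 40*x + y; later arcs are base-128 with a continuation bit). This is a simplified decoder and ignores edge cases such as arcs under the 2.x root exceeding 39:

```python
def decode_oid(data: bytes) -> str:
    # First byte packs the first two arcs: 40 * arc1 + arc2
    first, rest = data[0], data[1:]
    arcs = [first // 40, first % 40]
    value = 0
    for byte in rest:
        # Each subsequent arc is base-128; the high bit marks continuation
        value = (value << 7) | (byte & 0x7F)
        if not byte & 0x80:
            arcs.append(value)
            value = 0
    return ".".join(map(str, arcs))

assert decode_oid(bytes.fromhex("551d11")) == "2.5.29.17"  # subjectAltName
# ecdsa-with-SHA256, as seen in the certificate dump above
assert decode_oid(bytes.fromhex("2a8648ce3d040302")) == "1.2.840.10045.4.3.2"
```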

Let's Encrypt actually has a wonderful blog post about understanding the contents of an ASN.1 message as encoded with DER: A Warm Welcome to ASN.1 and DER.

Michael then goes on to describe root certificate bundles and who maintains them. I never knew that Adobe, of all companies, had their own. The Mozilla bundle, which comes with Firefox, is also frequently used for servers.

The closer we get to certificate content and what things mean, like delegated permissions, or the fact that modern browsers reject certificates with a validity period longer than 398 days, the more Michael shows he's returning to his area of competence.

Back in the day, companies could acquire 3-year certificates for less money than three 1-year certificates! Sounds like a deal, right? Except long-lived certificates meant a stolen certificate could remain active for a long time. The infrastructure for certificate revocation and proactive checking (see OCSP) is in a far better state these days than downloading large (70MB) CRL files listing which certificates were revoked. Simply put, downloading the certificate revocation list had become too burdensome for devices and consumers, so software often did not do it at all, or relied on a cached version from months ago that shipped with the browser.

In this chapter, after all the blunders Michael has made, I finally start warming up to his content.

And then a surprise!

Financial institutions often use certificates with OV [organization validated] validation. An application that receives an EV [extended validated] certificate might flag the user somehow; some browsers turn the address bar green on some sites using an EV cert. In the real world, users almost never notice this--and when they do, they get alarmed.

I never thought seeing a green bar with a secure lock logo would scare people.

Browsers no longer make a big deal about EV certificates, but here's how you can at least differentiate them.

Domain Validation

Extended Validation

Anyway, after describing how certificates can sign other things, he introduces Intermediate Certificate Authorities.. the justification is quite outside my expectations, and really pushes my bullshit meter.

The private key is often kept tightly locked up and can only be accessed by select corporate officers. Select corporate officers dislike interrupting their pleasures to carry out their corporate responsibilities, when they should rightfully delegate all the routine labor. That's where intermediate CAs come in.

I don't run a CA, but I at least understand the theory: it's practically impossible to revoke a certificate authority's root certificate, but not terrible (practically speaking) to revoke an intermediate certificate authority. My understanding is that intermediate certificates exist as a practical safety mechanism for continued operations. If a root certificate is compromised, the company running it is hosed: it gets kicked off every certificate bundle and folds. If an intermediate certificate is hosed, it's a controversy; a bunch of certificates have to be reissued, and the CA's customers may grumble about having to update their certificate files. But then again, due to cross signing, this may not always be the case. And the CA can continue with less respect in the world.

Anyway, he continues on, just about to introduce cross signing before he drops this smelly bomb.

Even sysadmins fiercely adamant that all user software must be updated often bear responsibility for decrepit mission critical systems accessible only via Internet Exploder 6.

Okay, well my IPMI controller for my NAS uses a java applet with an RSA 1024 key, which requires extra flags to enable... But the Internet Exploder 6 joke was funny in 2010. It's 2021 man. If you need IE6 you get a VM you only turn on once a year with Windows XP, restore it from a snapshot, use it for the purposes you need in an isolated environment, and then shut down and destroy the VM.

Moving on. While he considers cross signing to create a Twisted Tangle of Trust and later a Tangle of Terror, my reaction was that he had a bad experience due to a bad vendor. This was then validated when he ended the section with:

If your software tells you that the Chain of Trust isn't trusted, but examination of the certificate shows multiple intermediaries and roots, your software needs an update or perhaps flat-out repairing. If your vendor has no update, you need a new vendor.

I should hope to the heavens that no vendor packages certificates as part of the binary of their appliance software...

Well, that's outside of my experience, so I'll have to leave it be.

The next section gave me a chuckle however.

The full certificate validation process is baroque and cumbersome, with a whole bunch of exceptions and loops and dead ends. If you require details, RFC 5280 contains enough of them to destroy any hope that true love exists.
Michael, have you ever looked at code to handle calendars and dates? Check out Jewish calendar calculation in PHP. It'll blow your mind. More on this can be found at Hebrew Calendar.

Later on, in the encoding section, he offers comfort in converting keys, certificates, etc. between formats, be it DER or PEM (RFC 7468), and gives useful openssl commands for surface-level encoding conversion.
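Under the hood there is no magic in that conversion: a PEM file is just the DER bytes base64-encoded between BEGIN/END armor lines (RFC 7468). A minimal sketch in Python, with the caveat that real PEM parsing also handles explanatory text, multiple blocks, and label checking:

```python
import base64
import textwrap

def pem_to_der(pem: str) -> bytes:
    # Drop the -----BEGIN/END----- armor lines, base64-decode the body
    lines = [l for l in pem.strip().splitlines() if not l.startswith("-----")]
    return base64.b64decode("".join(lines))

def der_to_pem(der: bytes, label: str = "CERTIFICATE") -> str:
    body = base64.b64encode(der).decode()
    wrapped = "\n".join(textwrap.wrap(body, 64))  # PEM wraps at 64 columns
    return f"-----BEGIN {label}-----\n{wrapped}\n-----END {label}-----\n"

# Round-trip a small DER blob (the OCTET STRING from earlier)
der = bytes.fromhex("040743656e64796e65")
assert pem_to_der(der_to_pem(der)) == der
```

This is the same transformation `openssl x509 -inform pem -outform der` performs, minus all the validation.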

What's weird to me is that he then describes DER correctly, despite me feeling like he totally missed it earlier when describing ASN.1 (the schema).

DER is a binary format. Everything is encoded with a tag, a length, and then the data.
Thank goodness, my opinion was about to sink into the mud.

He briefly mentions how PGP won over S/MIME in the email encryption space, but I guess that comes down to a proactive set of utilities that just works everywhere. Though it's no longer viewed as very maintainable.

Ah! PKCS#12, the beast I'd love to take on some day! His description is accurate so far as I understand the source material. (which by the way is impossible to find now besides weird links. If the one above breaks, I have a personal cache.)

Now here's a thing I didn't catch. He mentions a -nodes flag; I kept reading it like Node.js, but it turns out it actually means NO DES, as in the encryption cipher. This appears to be an inherited name for simply not encrypting the content when used in conjunction with the -text flag. Good on you, Author!

Though, he seems to think that private keys are all RSA keys these days. At least he labels PKCS#8 and PKCS#1 v1.5 (see section 7.2) correctly.

The next portion of this chapter gets back to information on TLS certificates that I am less familiar with, such as identifying legacy (version 1 and 2) certificates from modern ones.

He then asserts something interesting under the section Wildcard Certificates.

Wildcard certificates are headed towards obsolescence.
There goes my bullshit alarm again.

As a practitioner of software and systems, I say wildcard certificates are staying with us! Just because ACME makes it easy to get per-subdomain certificates as needed, and this is more secure in the event of a breach (assuming the certificates are deployed to different hosts without lateral movement), does not make it fundamentally as user friendly as the wildcard certificate Cloudflare deploys on my behalf for this website. Try adding 5000 certificates to Amazon Certificate Manager (ACM), I dare you. That number is not made up; it's how many I have in production right now.

He then wraps up with CAA records, or Certificate Authority Authorization, which specify the CAs authorized to sign certificates for a domain. For example, let's say a domain has a CAA record saying only DigiCert can sign. If someone hacks into a server and can pass HTTP ACME challenges on it, they could try to get their own certificate for that domain through Let's Encrypt. However, a responsible CA checks the domain's CAA records to see if it is authorized to issue. Given Let's Encrypt is not in the CAA record, it will refuse to issue the certificate.
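The decision the CA makes here can be sketched as pure logic. This is an illustration under stated assumptions: the record format below is a simplified (tag, value) pair list, and a real CA would fetch the CAA record set over DNS per RFC 8659, walking up parent domains until one is found.

```python
def caa_permits(caa_records: list[tuple[str, str]], ca_domain: str) -> bool:
    # caa_records: (tag, value) pairs from the relevant CAA record set,
    # e.g. ("issue", "digicert.com").
    issue = [value for tag, value in caa_records if tag == "issue"]
    # An empty record set places no restriction: any CA may issue.
    if not issue:
        return True
    # Otherwise the CA's identifying domain must appear in an issue record.
    return ca_domain in issue

records = [("issue", "digicert.com")]
assert caa_permits(records, "digicert.com") is True
assert caa_permits(records, "letsencrypt.org") is False  # issuance refused
assert caa_permits([], "letsencrypt.org") is True        # no CAA, no restriction
```

Real CAA records also support "issuewild" for wildcard issuance and "issue ;" to forbid all issuance, which this sketch omits.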

This post has become quite lengthy, so the review will continue in a part two, yet to be written.

Overall, at this point in the book (about 45% done), my opinion is: Michael appears to have experience using the openssl command line functions, but lacks fundamental understanding of the topics that make TLS possible. He then covers this up with some discomforting humor, not fit for me as the audience.

I am most likely not his intended audience.