macOS OCSP “telemetry”—Explainer and Mitigation with Noise

Dongsung "Donny" Kim
10 min read · Nov 18, 2020


tl;dr: Scroll down to the last section.

“Why is Apple spying on us?”

The rumbles from those with tinfoil hats reached their peak over the last couple of days. On macOS Big Sur’s launch day, Apple’s OCSP server got extremely slow, and people noticed. Jeffrey Paul’s post “Your Computer Isn’t Yours” accused Apple of collecting Date, Time, Computer, ISP, City, State, Application Hash whenever you launch an application, on a computer you own, in the form of OCSP requests. They are sent in plaintext, so now internet companies can use them as telemetry! Then oh, PRISM! They are in on it too! You can’t even turn it off, and then they crippled LuLu! When the article reached Hacker News, there was no coming back. It was an uproar. I knew it! Apple doesn’t care about your privacy! Liars! Some pitchforks are in order.

A while later, Jacopo Jannone’s post “Does Apple really log every app you run? A technical look” attempted to debunk some of the claims. There is no Application Hash, only a Developer ID certificate serial number. It is merely there to check whether the app is known malware, by checking whether that specific developer’s certificate is good or bad. Well, that’s good enough then, isn’t it? Relax, there’s nothing to worry about. It’s just metadata. Hacker News, as always, didn’t really calm down.

Meanwhile, TLS PKIs have been relying on OCSP since 1999. For many of the HTTPS websites you visit in Firefox, there are OCSP requests going out in plaintext, for everyone to see, to probably-PRISM-friendly US-based corporations like VeriSign. An eavesdropper can simply query crt.sh with the serial number to find out what website you visited. And don’t forget SNI: the domain name, all in plaintext. South Korea censors porn with this. ISPs and enterprises love it.

An OCSP request for a mozilla.org certificate, sent out by Firefox in plaintext, captured by Wireshark. You can query the serial number like this.

What should we do? Should we just burn the whole thing down? That’d be a fitting response considering we’re still in 2020.

Complexity in trust

Ew, gross.

The main criticism I see is “Why won’t they just slap HTTPS on it?” OCSP exists because there are times you need to distrust an already-issued certificate. Sounds easy enough. But then how do you trust something, and how do you distrust something?

Every Public Key Infrastructure, whether Apple’s Developer ID or HTTPS SSL/TLS, is built on a concept called the chain of trust. It works like this: When you visit an HTTPS website, you are presented with a TLS certificate issued for the domain. The browser validates the certificate’s cryptographic signature using its issuer’s certificate. But if the issuer is not yet trusted, the issuer’s certificate’s signature is validated using the issuer’s issuer’s certificate. If that’s not yet trusted either, then the issuer’s issuer’s issuer’s certificate gets involved. If that’s not trusted… you get the picture.
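To make one link of that chain concrete, here’s a minimal sketch using Python’s cryptography package; leaf.pem and issuer.pem are hypothetical file names, and the issuer is assumed to use RSA.

```python
# Verify one link in a chain of trust: does the issuer's public key validate
# the signature on the leaf certificate?
# Sketch only; "leaf.pem" and "issuer.pem" are hypothetical file names.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

leaf = x509.load_pem_x509_certificate(open("leaf.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

issuer.public_key().verify(
    leaf.signature,                 # signature the issuer placed on the leaf
    leaf.tbs_certificate_bytes,     # the "to-be-signed" portion of the leaf
    padding.PKCS1v15(),             # assumes an RSA issuer; EC issuers differ
    leaf.signature_hash_algorithm,  # e.g. SHA-256
)
print("one link verified; now repeat with the issuer, one level up")
```

A real validator does much more than this (expiry, key usage, name constraints, revocation), but the chain itself is just this step repeated until you hit something you already trust.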

Then where does the chain end? Typically, with a trusted “root CA.” Copies of root stores are stored on the disk itself; each one contains a list of trusted-by-default root CA certificates. Mozilla, Chrome, Microsoft, Apple, and others individually maintain such root store programs. CAs send applications to the programs, then get meticulously audited to be included in the stores. Naturally, there are cases where one CA is trusted by one program, but not by a different one. A great example would be the Government of Korea.

Example: My domain’s certificate was issued by Let’s Encrypt Authority X3, whose certificate was issued (cross-signed) by DST Root CA X3, whose certificate is listed in (trusted by) the root stores, all individually maintained by at least five different parties. Diagram on the right created by crt.sh.

Then you might wonder, “How do you trust a root store?” Every platform takes a different approach, but here we focus on macOS. macOS updates ship /System/Library/Security/Certificates.bundle, which contains the list in certsTable.data. /System is on the read-only SSV, so not even a root user can change the content as long as SIP is on, which it is by default. trustd reads the data, then serves it to other system services over XPC.

Then you wonder, “How do you trust trustd?” Well, code signature, of course! The binary was cryptographically signed by Apple’s Software Signing certificate. How do you trust the certificate? Well, you validate the certificate’s signature using the certificate’s issuer’s certificate: Apple Code Signing Certification Authority. How do you trust that certificate? Well, you validate the issuer’s certificate’s signature using the issuer’s issuer’s certificate: Apple Root CA. Wait, how do you trust a root CA when the code in question is responsible for the root store?

Code signature information of trustd.

At the end of the long chain of trust, the Apple Root CA public key is hardcoded within the Boot ROM of Apple’s chips: T2, A series, and presumably M1. This is the “hardware root of trust,” where “Apple Root CA is trusted” is an axiom. Using this axiom, the Boot ROM verifies the iBoot signature, which verifies the UEFI firmware signature, which verifies the boot.efi signature, which verifies the macOS kernel signature, which verifies the launchd signature, which verifies the trustd signature, which reads the root store data from the system volume where SSV/SIP are enforced by the kernel.

OCSP comes into action after this whole ordeal. For many reasons, previously trusted certificates are constantly being revoked by their issuers. Sometimes they were issued as a test, sometimes they get compromised, and sometimes they get revoked by mistake.

You visit an HTTPS website and are presented with a TLS certificate. Its cryptographic signature looks great; nothing wrong with its chain of trust. But the client still asks the issuer’s OCSP responder. If the issuer says “It bad,” the certificate is now dead to the client.

The request includes the serial number of the certificate and hashes identifying its issuer. The response includes the status (good or bad), when to ask again, and a signature over the response by the issuer. All in plaintext, over standard HTTP on port 80. RFC 6960 merely states that TLS MAY be used. Why is that?
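For a concrete picture of what actually goes over the wire, here is a minimal sketch using Python’s cryptography package that builds such a request and prints those fields; site.pem and issuer.pem are placeholder file names.

```python
# Build an OCSP request for a certificate and show what it contains: the
# certificate's serial number plus hashes identifying its issuer.
# Sketch only; "site.pem" and "issuer.pem" are hypothetical file names.
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

cert = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()

print("serial number:   ", hex(req.serial_number))
print("issuer name hash:", req.issuer_name_hash.hex())
print("issuer key hash: ", req.issuer_key_hash.hex())

# This DER blob is what travels to the responder, unencrypted, typically as a
# POST with Content-Type: application/ocsp-request (or GET-encoded; see below).
print(len(req.public_bytes(serialization.Encoding.DER)), "bytes on the wire")
```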

One reason is the cost. There is a separate RFC 5019, “The Lightweight Online Certificate Status Protocol (OCSP) Profile for High-Volume Environments,” which is what trustd primarily follows.

This document addresses the scalability issues inherent when using OCSP . . . clarifying OCSP client and responder behavior that will permit:
1) OCSP response pre-production and distribution.
2) Reduced OCSP message size to lower bandwidth usage.
3) Response message caching both in the network and on the client.

It’s understandable if you think about it. There are millions of certificates issued by a handful of issuers. Every client periodically re-sends OCSP requests for every certificate it encounters. This is an infrastructure nightmare. You have no choice but to depend on pricey CDNs and network-level caches—something TLS is simply not designed to support.
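That network-level caching is exactly what the plain-HTTP GET form of OCSP enables: RFC 6960 (Appendix A) lets a client base64-encode the DER request straight into the URL, so any ordinary HTTP cache or CDN between you and the responder can serve the same pre-produced, signed response to everyone. A rough sketch, again with Python’s cryptography package and placeholder PEM file names:

```python
# OCSP over a plain HTTP GET: the DER-encoded request is base64-encoded into
# the URL, so ordinary HTTP caches and CDNs can answer for the responder.
# Sketch only; "site.pem" and "issuer.pem" are hypothetical file names.
import base64
import urllib.parse
import urllib.request
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID
from cryptography.hazmat.primitives import hashes, serialization

cert = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

# The responder URL is advertised in the certificate's AIA extension.
aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess).value
responder = next(d.access_location.value for d in aia
                 if d.access_method == AuthorityInformationAccessOID.OCSP)

req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
der = req.public_bytes(serialization.Encoding.DER)

# RFC 6960 Appendix A: GET {responder}/{url-encoding of base64 of DER request}
url = responder.rstrip("/") + "/" + urllib.parse.quote(base64.b64encode(der).decode(), safe="")
with urllib.request.urlopen(url) as resp:   # plain HTTP, cache-friendly
    answer = ocsp.load_der_ocsp_response(resp.read())
print(answer.certificate_status)  # assumes a successful response, e.g. OCSPCertStatus.GOOD
```

Put TLS in front of this and every cache in the middle goes blind; that is the cost problem in a nutshell.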

There’s OCSP stapling to mitigate this problem. Rather than the client sending out OCSP requests, the HTTPS web server periodically contacts the issuer for a signed OCSP response, which then gets sent to the client in the TLS handshake. While it saves a significant amount of cost, it is built as a TLS extension: it fundamentally assumes the certificate is for a server. If there is no server, as with an app, this cannot work.
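From the client’s side, asking for a staple looks roughly like this. A sketch using pyOpenSSL (assuming a reasonably recent version); the callback just parses whatever the server chose to staple:

```python
# Ask a TLS server to staple an OCSP response into the handshake (the
# status_request extension) instead of querying the CA ourselves.
# Sketch using pyOpenSSL; error handling and certificate verification omitted.
import socket
from OpenSSL import SSL
from cryptography.x509 import ocsp

def on_staple(conn, stapled_der, data):
    # Called during the handshake with whatever the server stapled (may be empty).
    if stapled_der:
        resp = ocsp.load_der_ocsp_response(stapled_der)
        print("stapled status:", resp.certificate_status)
    else:
        print("server stapled nothing")
    return True  # accept and continue the handshake either way

ctx = SSL.Context(SSL.TLS_CLIENT_METHOD)
ctx.set_ocsp_client_callback(on_staple)

sock = socket.create_connection(("mozilla.org", 443))
conn = SSL.Connection(ctx, sock)
conn.set_tlsext_host_name(b"mozilla.org")  # SNI, still visible in plaintext
conn.request_ocsp()                        # send the status_request extension
conn.set_connect_state()
conn.do_handshake()
conn.close()
```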

The other possible reason is recursion within a PKI. Let’s say you have a TLS certificate for a website to verify. Looking into it, it directs you to its issuer’s OCSP responder. You go to the issuer’s OCSP responder over HTTPS. Now you have a different certificate to verify: the issuer’s OCSP responder’s TLS certificate. Looking into that certificate, it directs you to its issuer’s OCSP responder, which is itself.

TLS certificate for ocsp.entrust.net is directing its OCSP queries to ocsp.entrust.net.

This problem could also be mitigated by OCSP stapling, in this case done by the issuers’ OCSP responders, preferably with Must-Staple. Since it isn’t a requirement, there will be times when things simply don’t work properly. The client might need to fall back to HTTP, or rely on CRLs.

From WWDC18, I think. I miss a crowd. Any crowd.

It seems to have been like that since forever. You install or launch a signed app. trustd checks its OCSP response cache. If there is no entry, trustd sends a plaintext OCSP request to Apple, with the app’s Developer ID certificate serial number. The response is cached for 5 minutes—later changed to 12 hours. It’s part of what Gatekeeper does. (Probably works the same on iOS as well.)
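If you want to see the exact number that leaves your machine for a given app, you can pull the leaf certificate out of the app’s code signature and read its serial. A rough sketch; the app path is just an example, and it relies on codesign writing the extracted chain as DER files named codesign0 (the leaf), codesign1, and so on:

```python
# Extract the Developer ID leaf certificate from an app's code signature and
# print its serial number -- the value trustd puts into its OCSP request.
# Sketch only; the app path is an example. `codesign -d --extract-certificates`
# writes the chain as DER files codesign0 (leaf), codesign1, ... in the
# current working directory.
import subprocess
from cryptography import x509

app = "/Applications/Example.app"  # hypothetical app path
subprocess.run(["codesign", "-d", "--extract-certificates", app], check=True)

with open("codesign0", "rb") as f:  # certificate 0 is the leaf
    leaf = x509.load_der_x509_certificate(f.read())

print("subject:", leaf.subject.rfc4514_string())
print("serial: ", hex(leaf.serial_number))
```

That serial, not the app’s hash, is what an eavesdropper on port 80 gets to see.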

Here, we can speculate about Apple’s decision process. Apple is handling the Apple PKI much like the TLS PKI. A signed app might as well be as trustworthy as any HTTPS website. Apple should have the capability to distrust any Developer ID at any moment, which means every single app could be distrusted at any moment.

A side note: In that sense, I think trustd traffic being exempted from the NetworkExtension framework is somewhat understandable. A now-distrusted firewall or VPN could reject OCSP requests aimed at itself. Sounds like a rare but plausible scenario in my opinion. But Apple Maps and the App Store being exempted too? I disagree with that decision, along with many others.

Despite all of this, broadcasting a number that can be linked to what I use on my computer for everyone to see sounds like a legitimate privacy concern, especially in this post-Snowden post-consent era. As a response, Apple announced they’ll be changing stuff. Specifically,

* A new encrypted protocol for Developer ID certificate revocation checks
* Strong protections against server failure
* A new preference for users to opt out of these security protections

Now, could they just slap HTTPS on it? The simple answer, I think, is yes. I believe this was simply an oversight. As discussed, there are seemingly two potential problems with OCSP over TLS: cost and recursion. Given that Apple is the wealthiest company in the world, I’m certain they could (albeit not so easily) swallow the cost.

But how about recursion? One thing to note here is that we’re handling two separate PKIs: the Apple PKI and the TLS PKI. So, use HTTPS for Apple PKI OCSP, and HTTP for TLS PKI OCSP. For example: when a new app is installed, trustd would try to verify the Apple Developer ID certificate by sending an OCSP request with its serial number to https://ocsp.apple.com. While trying to establish the HTTPS session, the TLS certificate for ocsp.apple.com would need to be verified. The TLS stack would send an OCSP request with its serial number to http://ocsp.apple.com. No recursion. This is assuming OCSP stapling is not in play—but it is already in place!

Sending a status_request extension request. The payload — the Apple PKI OCSP request — gets encrypted by the TLS layer, and as of TLS 1.3, the extension response — the TLS OCSP response — is also encrypted.

There are still some considerations. For fear of a bad keychain, they could opt to trust only the root store. When a TLS middlebox is detected, they could fall back to HTTP. Or they might opt to trust only the Apple Root CA, probably after moving the old OCSP infrastructure to Amazon. They might want to renew all the Developer ID certificates, since they have HTTP OCSP URIs. In the meantime, the URIs would need to be hard-coded for an upgrade to HTTPS, which is not really desirable. I don’t even know how things will end up with bridgeOS. Etc., etc. It’s never that easy.

But all in all, the problem seems solvable. Although sending Apple OCSP requests over TLS cannot fix other TLS OCSP responders, or the unencrypted TLS SNI (until ECH eventually happens), at least this particular issue can be tackled.

What can you do now?

I think the whole concern boils down to these three perspectives.

I trust Apple, the internet provider, the government, and everything in between.

Nothing to do here. Enjoy your ride on the internet. But please do reconsider.

I don’t trust Apple.

Fair sentiment, to a degree. What you can do is block Apple’s OCSP host by adding this line to /etc/hosts: 0.0.0.0 ocsp.apple.com. But do understand you’re basically opting out of Apple’s built-in malware protection. It puts your machine at risk—or in your full control, depending on who you ask.

I trust Apple, but I don’t trust anything in between.

What we want to defeat here are the eavesdroppers in the middle of the network who want to use the OCSP requests as telemetry. How do we mitigate this until the promised secure channel gets implemented in the near future? Well, if it can be used as telemetry, let’s put some noise into it.

Noise. Can you spot the “real” dots?

Loosely inspired by the Noiszy project, apple-ocsp-noiser sends an OCSP request to http://ocsp.apple.com with a random legitimate or nonexistent serial number at a random interval. Since the issuer hash is always Apple, we don’t need to change that. Whether the serial number exists or not, Apple responds with a status of “good,” since that random serial number has never been revoked, after all. An eavesdropper would not be able to differentiate them: a request could be real or fake, and the serial number in it could be legitimate or nonexistent.
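The whole idea fits in a few lines. Here’s a minimal sketch of the same approach in Python (not the actual apple-ocsp-noiser code): it assumes you have the Developer ID intermediate certificate saved locally as apple_issuer.pem, uses the bare http://ocsp.apple.com as a stand-in for the real responder URL from the certificate’s AIA extension, and needs a fairly recent cryptography package:

```python
# Periodically send OCSP requests with random serial numbers to Apple's
# responder, so that real requests drown in noise for an on-path eavesdropper.
# Minimal sketch of the idea, not the actual apple-ocsp-noiser implementation.
import random
import time
import urllib.request
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

# Hypothetical local copy of the Developer ID intermediate (the issuer of
# Developer ID leaf certificates). The issuer hashes never change.
issuer = x509.load_pem_x509_certificate(open("apple_issuer.pem", "rb").read())

digest = hashes.Hash(hashes.SHA1())
digest.update(issuer.subject.public_bytes())
issuer_name_hash = digest.finalize()
# OCSP's issuerKeyHash is the SHA-1 of the subjectPublicKey bits, the same
# construction SubjectKeyIdentifier.from_public_key() uses.
issuer_key_hash = x509.SubjectKeyIdentifier.from_public_key(issuer.public_key()).digest

OCSP_URL = "http://ocsp.apple.com"  # placeholder; the real path comes from the cert's AIA

while True:
    serial = random.getrandbits(63)  # random, almost certainly nonexistent serial
    req = (
        ocsp.OCSPRequestBuilder()
        .add_certificate_by_hash(issuer_name_hash, issuer_key_hash, serial, hashes.SHA1())
        .build()
    )
    post = urllib.request.Request(
        OCSP_URL,
        data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
    )
    with urllib.request.urlopen(post) as resp:
        answer = ocsp.load_der_ocsp_response(resp.read())
    if answer.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
        print(hex(serial), answer.certificate_status)  # never-revoked serials come back GOOD
    time.sleep(random.uniform(30, 600))  # wait a random while, then do it again
```

From the wire, those look exactly like the requests trustd sends: same issuer hashes, same responder, just a serial number nobody has ever been issued.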

apple-ocsp-noiser in action.

There is still room for improvement. One is to add more legitimate serial numbers to the random pool. A well-equipped eavesdropper might have a partial or full database of (serial number, developer name) pairs. They might simply discard any OCSP request traffic with a number they don’t know, then sort out the real requests. If we knew more legitimate serial numbers, we could send out fewer nonexistent requests, and they wouldn’t be able to discard requests so easily. Your contribution is always welcome.

It’s been a lot. I’d love to hear your thoughts, criticisms, and nitpicks. Thanks for reading.

Thanks to Jeff Johnson, Jeffrey Paul, Jacopo Jannone, Patrick Wardle, Filippo Valsorda, and Betty for proofreading.
