IETF 100: War for the Middle

IETF 100 took place a few weeks ago in Singapore, and across multiple venues, one topic surfaced repeatedly, with opinions so widely divergent that we might almost consider it a holy war: middleboxes.

Some of the people I've tried to describe this problem to hadn't heard the term before, so let me define it broadly:

Middlebox: Anything on the network which is involved in relaying traffic, but isn't in active cooperation with either the sender or the receiver.

These can be malicious - man-in-the-middle attacks, scammers, etc. They can be functional - connection acceleration devices, routers. They can be enforcers of policy - virus scanners, content filters, enterprise data security, or payment portals.

But what they aren't is trusted. I'm explicitly excluding from the category of "middlebox" anything that's actually trusted by an endpoint which wants its traffic examined. I'll explain why in a moment.

The Privacy / Deployability Folks

The first camp, which is more prevalent at the IETF, is made up of those who want to conceal as much as possible. There are two sub-camps here with different motivations, but both lead to the same ideal: a packet layout and content which are, to the greatest extent possible, indistinguishable from random Internet noise. The payload is encrypted, as much of the control information as possible is encrypted, and what can't be encrypted is randomized and integrity-protected.

The first sub-camp is the privacy advocates. From this standpoint, any information revealed to a middlebox becomes available to one or more governments, data-hungry corporations, and so on. While any given piece of leaked information may have little value by itself, the argument goes that the aggregate of seemingly irrelevant data becomes very relevant in unexpected ways. Pointing out that equivalent information is already available in other venues is considered a "leaky boat fallacy": just because the boat already has many leaks doesn't mean more leaks are okay, or that any given leak isn't worth plugging. This sentiment has been latent in the IETF community for a long time, but the Snowden revelations catalyzed this segment of the group.

The second sub-camp (with which I identify) is the anti-ossification advocates. We've previously seen that the network has become "ossified": changes to protocols are challenging because network hardware has been built with assumptions baked in about what those protocols should look like. It is effectively impossible to deploy a protocol on top of IP which isn't either TCP or UDP, even though IPv4 and IPv6 both support a wide variety of next-layer protocols. It is effectively impossible to universally deploy new TCP options, new HTTP Content-Encodings, etc. The result has been new HTTP features usable only over HTTPS, complex fallback strategies for new TCP options, and new transport protocols tunneled over UDP. Anything a middlebox can see, it can (and will) make assumptions about, making those visible parts difficult or impossible to change later.
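To see the ossification problem in a single packet, consider a toy experiment - a hypothetical sketch, where the destination is a documentation placeholder address and the script needs raw-socket privileges: send an IP packet whose protocol number is neither TCP nor UDP and watch how far it gets.

```python
import socket

# Hypothetical experiment: send a packet with an unassigned IP protocol
# number. 253 and 254 are reserved for experimentation (RFC 3692); on the
# open Internet, NATs and firewalls that only understand TCP and UDP will
# typically drop it - the ossification problem in one packet.
EXPERIMENTAL_PROTO = 253

# Raw sockets require root / CAP_NET_RAW; the kernel builds the IP header.
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, EXPERIMENTAL_PROTO)
s.sendto(b"hello, new transport", ("192.0.2.1", 0))  # TEST-NET-1 placeholder
```

On most paths that cross a NAT or stateful firewall, such a packet dies quietly, which is a large part of why new transports end up tunneled over UDP instead of claiming a protocol number of their own.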

Regardless of which philosophy it originates from, the goal is the same: if a piece of information isn't absolutely vital to expose to the network to enable routing, it should be:

  • Encrypted, if possible
  • Obfuscated and integrity-protected otherwise

This has led to such diverse efforts as:

  • QUIC
    • All control data (acknowledgements, flow control, end-of-connection messages, etc.) is encrypted in QUIC after the handshake completes
    • Handshake packets are encrypted with a version-specific key which provides no confidentiality against an attacker, but prevents inspection by middleboxes not built for that specific version of the protocol (see the sketch after this list)
  • TLS
    • Attempts to encrypt the Server Name Indication extension so it can only be read by the server terminating the TLS session
    • Elimination of non-PFS ciphers in TLS 1.3
  • DNS over HTTPS (DoH)
    • Intends to enable clients to make DNS requests over an encrypted channel which cannot be differentiated from regular web traffic
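To make the QUIC bullet concrete, here is a minimal Python sketch of how a handshake key can depend only on values visible on the wire plus a published per-version salt. The constants below are the values QUIC version 1 eventually standardized in RFC 9001, used purely for concreteness; real implementations go on to expand this secret into separate client and server packet-protection keys.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): a single HMAC over the input keying material."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

# Published per-version constant: only software built for this QUIC version
# knows it. (This value is the initial salt QUIC v1 fixed in RFC 9001.)
VERSION_SALT = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")

# The client's Destination Connection ID travels in cleartext, so no secret
# input is involved - this defeats ossification, not attackers.
client_dcid = bytes.fromhex("8394c8f03e515708")  # example DCID from RFC 9001

initial_secret = hkdf_extract(VERSION_SALT, client_dcid)
print(initial_secret.hex())
```

A middlebox hard-coded for this version could perform the same derivation, which is exactly why this protection is framed as anti-ossification rather than as security.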

The Security / Operations Folks

On the other side of the fence, of course, are the middlebox vendors and their customers. Note again that this doesn't include CDNs like Akamai, because we are in direct cooperation with the origin - and from the client's perspective, we are the origin. We terminate the TLS connection with a valid certificate, read only data sent directly to us, and serve only content that we're authorized to serve.

However, many services which depend on access to traffic data are, rightly or wrongly, considered essential to the operation of a network. The ongoing argument in the IETF is over how far existing techniques need to keep working, versus designing new techniques to address the same use-cases in a post-encryption world. The two most vocal stakeholders here are vendors of security-focused infrastructure (Symantec, for example) and network operators.

The security infrastructure folks emphasize situations in which the users have limited rights - libraries, prisons, and schools are the most oft-cited examples. (You don't want your better encryption technology to expose your child to porn at school, do you?) Less-emphasized are the situations where the users' rights are less limited, such as work environments or legally-mandated intercepts. This community is usually bluntly refused at the IETF: RFC 2804 states that the IETF will not build provisions for "wiretapping," briefly defined as obtaining the contents of communications without the knowledge or consent of either party.

I don't foresee that changing, but the stakeholders of the Encrypted Traffic Inspection group within IEEE-SA (not a standards-setting activity, the IEEE is quick to emphasize) made an appearance at IETF 100, holding an event called PATIENT (Protecting against Attacks Tunneled In Encrypted Network Traffic) following the plenary. However, the plenary ran long, so attendance at the event was mixed.

The reason the plenary ran long, ironically, was that the other half of this camp was making an appeal to the IAB and the IESG for greater attention to their needs. The network operators assert that they rely on traffic inspection for various troubleshooting and monitoring requirements, such as identifying high-loss network segments, sources of packet reordering, or unexplained increases in RTT. There was a suggestion the next day that perhaps there needs to be a Security Operations Working Group to discuss the operational aspects of increased encryption, just as there are Operations working groups for other protocols with deployment hurdles (DNS and IPv6 are notable examples).

There was a great outcry on the QUIC mailing list from operators insisting that when a user calls with a problem ("Netflix is glitchy," for example), they need to be able to inspect the problematic flows and identify the source of the performance limitation. This discussion started with support for the proposed "spin bit," which permits passive RTT measurement; as it continued, others wanted loss signals, reordering signals, congestion signals, etc.
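To illustrate why the spin bit satisfies the RTT use-case without exposing anything else: because the endpoints reflect the bit so that the value seen in one direction flips roughly once per round trip, a purely passive observer can estimate RTT just by timing the flips. The class and toy trace below are my own hypothetical illustration, not a real measurement tool.

```python
from typing import Optional

class SpinObserver:
    """Passive on-path observer that estimates RTT by timing spin-bit flips.

    The proposed spin bit is reflected between the endpoints so that it
    toggles roughly once per round trip; the interval between consecutive
    transitions therefore approximates the RTT.
    """

    def __init__(self) -> None:
        self.last_bit: Optional[int] = None
        self.last_flip: Optional[float] = None
        self.rtt_estimate: Optional[float] = None

    def on_packet(self, spin_bit: int, now: float) -> None:
        if self.last_bit is not None and spin_bit != self.last_bit:
            if self.last_flip is not None:
                # Time between consecutive flips ~ one round trip.
                self.rtt_estimate = now - self.last_flip
            self.last_flip = now
        self.last_bit = spin_bit

# Toy trace: a flow with a 120 ms RTT flips the bit every 120 ms.
obs = SpinObserver()
for t in [0.00, 0.04, 0.12, 0.16, 0.24]:
    obs.on_packet(int(t / 0.12) % 2, t)
print(obs.rtt_estimate)  # -> 0.12 (seconds)
```

Note what the observer never sees: payload, acknowledgements, or flow-control state. That narrowness is precisely why the spin bit was more palatable than the loss, reordering, and congestion signals requested later.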

Ted Hardie, IAB chair, made a great analogy in one of his responses here. He said that in the world of Ethernet, network tools often relied on being able to plug into the cable (the "bump in the wire" model). When Wi-Fi came along, with point-to-point encryption between client and AP and no cable to interrupt, new tools had to be designed. We're in the same sort of transition now, where we'll need to build new operations tools to deal with the fact that encryption is moving lower and lower in the network stack.

My Opinion

I believe that avoiding ossification and ensuring the continued ability of the Internet to evolve is the highest goal, particularly in QUIC. Therefore, we encrypt as much as possible, with as much security as possible.

However, operations is important. I don't want us to be in a world in which it's necessary to force traffic back to TCP in order to get the necessary measurements on the network. While each bit of explicit tooling we add to the protocol for network management makes me a little nervous (what are we leaking accidentally by leaking something intentionally?), I'm willing to consider exposing some basic things, like RTT measurement and perhaps path-closure.

I think there's one fundamental tool, however, that the folks on the "Let's Not Encrypt" side are missing - or deliberately ignoring? - and that's endpoint cooperation. Both Chrome and Firefox have the ability to dump the session keys negotiated for TLS connections to a file format which can be imported by Wireshark to permit inspection of encrypted traffic during network analysis. Is it really impossible to ask the customer to send in that file - or is it just a new element of their tech support flow to accommodate? I don't know, but I suspect it's somewhere less than totally impossible.
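For reference, the browser feature in question is the SSLKEYLOGFILE environment variable: when it's set, Chrome and Firefox write session secrets in the NSS key log format, which Wireshark can import via its "(Pre)-Master-Secret log filename" preference. Applications can opt their own connections into the same scheme; below is a minimal Python sketch (the URL and log path are placeholders, and keylog_filename requires Python 3.8+).

```python
import ssl
import urllib.request

# Write session keys in the same NSS key log format that Chrome and
# Firefox emit when SSLKEYLOGFILE is set in their environment.
ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/tls-keys.log"  # placeholder path; Python 3.8+

with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
    resp.read()

# A user can now hand /tmp/tls-keys.log plus a packet capture to support
# staff, granting a chosen party visibility into just this traffic.
```

The point isn't this specific mechanism; it's that the decryption capability moves from anyone-on-path to whomever the endpoint chooses.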

That's why I don't include trusted intermediaries as middleboxes. If you're actively cooperating with someone, you can share your keys with them if you want. I don't want anyone able to remotely view what I'm doing on my computer at will, but if I'm asking Helpdesk for support, I'm accepting the price that they'll have at least some visibility into the thing I'm asking about.

For the security management pieces, I believe the right answer is end-host monitoring. In the lack-of-rights scenarios, the monitoring entity (employer, prison, school) also owns the machines in question. They can choose to permit only their managed devices on the network, then monitor activity from the device itself without any need to compromise the network. (This isn't a complete solution, since applications can view the OS itself as an untrusted intermediary, but the OS also has mechanisms to restrict the user to trusted applications. It's a trade-off of how invasive the monitoring entity chooses to be.) But regardless, transparency to the user of what will be monitored by whom is critical, and best accomplished by presence on the device.

As for operational monitoring, some of these capabilities already exist at the IP layer. True, they don't currently work well across network domains, but perhaps this is the impetus needed to make cross-domain operation part of peering agreements, or of whatever other leverage the community of network operators has over its members.

In short, let's use this as an opportunity to raise the level of security on the Internet, not an excuse to retard it.