
Improving HTTP with structured header fields

Mark Nottingham

Senior Principal Engineer, Fastly

The HTTP community has been busy modernising the web’s protocol over the last decade, with multiple revisions of the core specification, a number of extensions, HTTP/2, and now HTTP/3. Unfortunately, the way we define and use HTTP header fields hasn’t changed much since the beginning, with underspecified headers (and lots of different ways to handle them) causing interoperability issues, developer pain, and even security problems. But help is coming.

What’s wrong with HTTP headers

Most web developers are familiar with HTTP headers like Content-Length, Cache-Control, and Cookie; they carry metadata for requests and responses. Typically, it will be information the sender can’t put into the message body for some reason, or information they want recipients to be able to access without having to search through the body.

Because headers need to be handled by lots of different clients and servers, proxies and CDNs — often more than once in the lifetime of a message — you’d expect them to be easy to handle, efficient to parse, and to have well-defined syntax.

Unfortunately, these things are rarely true. HTTP defines header values (more properly, field values since they can also occur in trailer fields, after the body) as a “sequence of octets” (i.e., bytes) with few constraints, although it recommends they be ASCII bytes. It also suggests headers be defined in ABNF, and that multiple fields with the same name can be combined on the same line if you separate their values with commas.

That’s about it. 

As a result, every header field has its own, unique definition that you need to know to parse its value. Some field authors use ABNF to do this; others use examples. Some just make you guess based upon the values you’ve seen before.

For example, consider the Age header. It’s part of the core HTTP specifications, so it should be well-defined, and it’s just a simple integer.

Age: 42

It is specified by this ABNF:

Age = delta-seconds
delta-seconds  = 1*DIGIT
DIGIT          =  %x30-39   ; 0-9

That seems straightforward at first — one to many instances of the digits between 0 and 9. Consider, though, what an implementation should do if it encounters any of these Age headers:

Age: 0, 60
Age: 60, 0
Age: 50m
Age: abc234
Age: 60;ms=212

It’s not so simple, as tests against real caches (which have to make sense of Age) show.
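To see how that plays out, here is a hypothetical sketch (in Python, not taken from any real cache) of two parsers that are each defensible readings of the ABNF but disagree about the same header:

# Two hypothetical Age parsers; both are defensible readings of the ABNF,
# yet they disagree about "Age: 0, 60".

def parse_age_first(value: str) -> int:
    # Take the first comma-separated member and ignore the rest.
    first = value.split(",")[0].strip()
    return int(first) if first.isdigit() else 0

def parse_age_max(value: str) -> int:
    # Be conservative and take the largest member that parses as digits.
    members = (m.strip() for m in value.split(","))
    return max((int(m) for m in members if m.isdigit()), default=0)

print(parse_age_first("0, 60"))  # 0  -> the response looks fresh
print(parse_age_max("0, 60"))    # 60 -> the response may already be stale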

So, while examples or ABNF might be a sufficient definition when the same person writes the code that generates and consumes the header, it’s horrible for interoperability when multiple implementations generate and parse the value.

Each header author has to remember to address a laundry list of questions about how to handle things like duplicate values, case normalisation, whether it’s a single item or a list, and so on. Often, they don’t address these things, which means implementers choose for themselves — often in different ways.

Insufficiently specified headers are also a source of security issues; if implementations parse headers differently, they can be tricked into behaving differently, leading to attacks like Response Splitting.

Browser vendors have become concerned enough about these issues to start defining headers like CSP algorithmically. That is, they painstakingly define parsing and serialisation algorithms and then create test cases. This approach has less ambiguity about the field’s syntax and fewer differences between implementations. However, it’s still a one-off; it only helps clarify the algorithms for that specific header. It’s also exhausting for the author of the specification to go to the effort and to make sure it’s correct — so most header authors don’t bother. It also creates a lot of busy work for implementers, since they’ll need to implement each new header’s parser separately.

Introducing Structured Fields

These problems have been pretty clear to the HTTP Working Group for a while, and a few years ago we started to try to define something better that people can use for new fields. After a few attempts, we settled on an approach originally called Structured Headers, but we now (more properly) call it Structured Fields.

Structured Fields is a library of well-defined data types that are potentially useful in HTTP headers and trailers, including Strings, Tokens, Booleans, Integers, Decimals, and Byte Sequences as atomic “Item” types, and Lists and Dictionaries of those Items. Importantly, it defines exacting parsing and serialisation algorithms for each type, along with error handling and a detailed test suite — all to help assure interoperability.

That allows the author of a new header field to define it in terms of those types. For example, they can say “it’s a List of Strings”, and people will know how to parse and generate the header unambiguously using an off-the-shelf library, rather than writing header-specific code.

Example-Header: "blue", "sort of red", "green"
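As a sketch of what that looks like in practice, the http_sfv Python library mentioned later in this post can parse the example above; this assumes its List type, its parse method (which takes the field value as bytes), and per-item value attributes:

from http_sfv import List

field = List()
field.parse(b'"blue", "sort of red", "green"')
print([str(item.value) for item in field])   # ['blue', 'sort of red', 'green']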

Each Item can also have Parameters, or key/value pairs of extra information. Parameters are an important extensibility mechanism that allow headers to evolve over time.

Example-Header: "blue"; websafe, "sort of red"; author="sue", "green"

There’s also a limited form of recursion; List members and Dictionary values can contain inner lists too, like:

Example-Header: people=(joanna stacy), places=("new york" "rome")

Each of the items in the inner lists can be parameterised, as can the inner lists themselves.
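Here’s a sketch of reading the dictionary example above with the same http_sfv library (assuming its Dictionary type behaves like a mapping, and that inner-list members are Items exposing a value attribute):

from http_sfv import Dictionary

field = Dictionary()
field.parse(b'people=(joanna stacy), places=("new york" "rome")')
# Each dictionary value here is an inner list of Items.
print([str(item.value) for item in field["places"]])   # ['new york', 'rome']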

You might notice that these headers look a lot like many existing HTTP fields. That’s by design; not only is it comfortable for developers, it allows many existing fields to be generated by a Structured Fields implementation, and often they can be parsed by one too. For example, many Cache-Control headers are valid Structured Fields, even though Cache-Control isn’t defined as one:

Cache-Control: max-age=3600, immutable

Unfortunately, you can’t use Structured Fields for existing headers yet, and you can’t tell if a given field is a Structured Field or not just by looking at it; you have to know its definition, because Structured Fields is just for new fields, at least for now.

Better performance with Structured Fields

Making it easier to specify new fields plus making them safer and more interoperable to handle is a significant improvement to HTTP. What if Structured Fields could help HTTP performance as well? There are two ways they might be able to help. To be clear, these are speculative benefits, but they are nonetheless interesting to talk about.

The first is parsing efficiency. Because traditional HTTP headers are textual, a parser has to touch each byte in the string, sometimes multiple times, and sometimes copying and re-copying it into different parts of memory. This is an inherently inefficient process and one of the reasons HTTP/2 and HTTP/3 are binary rather than textual protocols.

Before Structured Fields, there wasn’t much we could do about that, because HTTP headers are defined so loosely. The well-defined data types in Structured Fields change that. Now, we can define a new, binary serialisation of any headers that use them.

Binary Structured Fields is a straw-man proposal to define such a serialisation. It uses the HTTP/2 (and /3) SETTINGS mechanism to negotiate support for the alternative serialisation, and exploits Structured Fields’ similarity to many existing header fields’ syntax to backport it onto a set of already widely used header fields, falling back to opaque text if they fail to parse.
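Purely as an illustration of the idea (this is not the wire format the draft defines), a binary framing for a couple of Item types might look something like this: a type byte, followed by a fixed-width integer or a length-prefixed string, so a recipient can skip or copy fields without scanning text:

import struct

# Illustrative only: a made-up binary framing for two Item types,
# not the encoding defined by the Binary Structured Fields draft.
TYPE_INTEGER = 0x01
TYPE_STRING = 0x02

def encode_integer(value: int) -> bytes:
    return struct.pack("!Bq", TYPE_INTEGER, value)             # type + 64-bit integer

def encode_string(value: str) -> bytes:
    raw = value.encode("ascii")
    return struct.pack("!BH", TYPE_STRING, len(raw)) + raw     # type + length + bytes

print(encode_integer(3600).hex())        # 010000000000000e10
print(encode_string("immutable").hex())  # 020009696d6d757461626c65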

How much will binary serialisation help performance? It should reduce request-handling latency and improve scalability, thanks to an anticipated reduction in CPU load. We don’t have real-world numbers yet, but consider the path many headers take: from JavaScript to the browser, then to a CDN, through multiple CDN nodes to the origin server, and finally to the application code itself. The potential cumulative savings are attractive.

The second way Structured Fields might help performance is through improved compression efficiency. HTTP/2 introduced HPACK compression for header and trailer fields; its predecessor, SPDY, used GZIP, which was found to be insecure because of the CRIME attack. So, HPACK (and its successor, QPACK) compress fields by referencing the entire field value; if any one part of it changes, a previous reference can’t be used (with sometimes surprising impact on compression efficiency).

That whole-value granularity was chosen because there was no way for a generic parser to understand the structure of a field value; to be secure, we had to be certain an attacker couldn’t probe a secret by guessing parts of it.

With Structured Fields, there’s now potentially a way for the compression algorithm to operate on the individual data types in a field, rather than the entire value. 

For example, consider the following Cache-Control field:

Cache-Control: max-age=3600, s-maxage=7200, must-revalidate

With HPACK and QPACK, the entire field value is stored in the dynamic table, and it can only be referenced by a future message with exactly the same value. If we were to parse it as a Structured Field and store the individual data types, we could store:

  • max-age

  • 3600

  • s-maxage

  • 7200

  • must-revalidate

Each of these could then be referenced separately when they occur in a future header, making the compression algorithm more granular and possibly more efficient.
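As a rough, hypothetical illustration of why that granularity matters (this is not how HPACK or QPACK actually manage their tables), compare how much of a slightly different follow-up value could be reused under each approach:

# Hypothetical sketch: whole-value reuse vs. per-component reuse when only
# max-age changes between two Cache-Control values.
first = "max-age=3600, s-maxage=7200, must-revalidate"
second = "max-age=60, s-maxage=7200, must-revalidate"

# Whole-value granularity: the new value matches nothing previously stored.
print(first == second)                       # False -> no reuse at all

# Component granularity: split into individual structured pieces.
def components(value):
    parts = []
    for member in value.split(","):
        parts.extend(piece.strip() for piece in member.split("="))
    return parts

reused = set(components(first)) & set(components(second))
print(len(reused), "of", len(components(second)), "pieces reusable")   # 4 of 5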

Early prototyping suggests the efficiency gains using this technique are pretty small for web browser connections because their headers tend to be highly repetitive, and replacing what was a 1-byte reference in HPACK with multiple bytes (one for each type in the field value) can actually hurt.

However, for connections that carry traffic from multiple clients — such as traffic seen by reverse proxies and CDNs upstream to the origin server — the benefits may be more apparent; more experimentation is needed.

Improving HTTP in the long term

If the backporting technique described above catches on, a future version of HTTP (or extensions to HTTP/2 and HTTP/3) could reduce the number of unstructured headers in use dramatically. 

The Binary Structured Fields draft describes two ways to do this. If the field’s syntax is compatible with Structured Fields — at least most of the time — it can be sent as one, falling back to a plain-text header when that fails.

Headers that don’t have compatible syntax need another approach. For example, Date, Last-Modified, Expires, and similar headers can never be valid Structured Fields. However, it is possible to represent a date as an integer, and Structured Fields can convey an integer.

So, a header like this:

Date: Thu, 09 Apr 2020 09:06:50 GMT

Might be represented as this on a hop that knows about the appropriate translations:

SF-Date: 1586423210
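That integer is just the Unix timestamp for the same moment; for example, in Python:

from email.utils import parsedate_to_datetime

http_date = "Thu, 09 Apr 2020 09:06:50 GMT"
print(int(parsedate_to_datetime(http_date).timestamp()))   # 1586423210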

That gives us a way to eventually send all common headers and trailers as Structured Fields, letting HTTP’s message metadata catch up with the modernised transport of HTTP/2 and HTTP/3.

Using Structured Fields today

The Structured Fields specification is in the very last stages of standardization, which means it should become an RFC soon. We already have multiple implementations, including in Chrome, since many of their newer security headers (for example, Fetch Metadata) are Structured. There’s also an extensive test suite that’s used by many of the implementations to assure they’re correct.

In the meantime, you can play with many of the implementations to get a sense of how they work. For example, the Python library http_sfv allows you to parse them from the command line.

If you define new headers — whether they’re for the web overall or just your HTTP API — you can start using Structured Fields once the RFC is published.