Cache Me If You Can: HTTP Cache API Edition

Ajay Bharadwaj

Product Manager, Fastly

Fastly's programmable cache is one of the superpowers of our Compute platform. Want to customize an image to delight every user of your application? We've got you covered. And doing that in the context of an HTTP request/response flow shouldn't be a burden. Fastly Compute solves that with the simplicity of our new HTTP Cache APIs, a set of SDK APIs usable from within Compute code.

How does it work?

By default, when HTTP requests pass through Fastly’s edge, origin responses are automatically cached (unless explicitly overridden via Cache-Control). Subsequent requests for that same resource can then be delivered from cache without having to connect to the backend. 
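To make that concrete, here is a minimal sketch of a Compute handler in JavaScript that simply proxies requests and gets this default caching behavior for free; the backend name 'example_backend' is illustrative:

async function handleRequest(event) {
  // A plain proxy fetch: Fastly caches the origin response automatically,
  // honoring any Cache-Control headers the origin sends.
  return fetch(event.request, { backend: 'example_backend' });
}

addEventListener('fetch', (event) => event.respondWith(handleRequest(event)));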

The HTTP Cache APIs let developers do something that was never before possible with Compute: inject customizations directly into this default flow, opening the door to new use cases and possibilities for Compute customers.

What you need to know

HTTP Cache API: The TL;DR

  • Provides first-class HTTP support in your preferred language 

  • Enables fine-grained, programmatic access to Fastly’s cache

  • Supports customizing dynamic content delivered by Fastly Compute

  • Allows developers to customize caching behavior with ease

The HTTP cache interface builds on the foundation established by the Simple Cache and Core Cache interfaces and is optimized for caching requests and responses within the flow of the HTTP specification. It is a fully integrated API that lets developers change the cache properties of an object, adjust headers like Cache-Control, and more, all within the context of an HTTP flow.

Use cases our customers love

Here is a simple JavaScript example that modifies the Fastly TTL while setting a Surrogate-Control header for downstream caches and a Cache-Control header for browser caches:

import { CacheOverride } from 'fastly:cache-override';

addEventListener('fetch', (event) => event.respondWith(handleRequest(event)));

async function handleRequest(event) {
  const backendResp = await fetch(event.request, {
    backend: 'example_backend',
    cacheOverride: new CacheOverride({
      // Runs on the backend response before it is written to the cache
      afterSend(resp) {
        resp.ttl = 300; // Fastly edge cache TTL: five minutes
        resp.headers.set('Cache-Control', 'max-age=86400'); // Rules for browsers
        resp.headers.set('Surrogate-Control', 'max-age=31536000'); // Rules for downstream caches
        resp.headers.delete('Expires'); // Avoid a conflicting freshness signal
      },
    }),
  });
  return backendResp;
}

And, there are countless other architectures you can implement. Here are a few to get you started:

  • Customizing cache controls of a backend response prior to caching. Developers can set the cache TTL based on a non-cache-related response header such as Content-Type (see the first sketch after this list).
    Benefit: This flexibility allows for more intelligent and efficient caching strategies. By setting cache TTLs based on content type, developers can optimize performance and resource usage. For example, static assets like images or CSS files can be cached for longer periods, while dynamic content can have shorter TTLs. This results in improved website speed, reduced origin load, and a better user experience, all while maintaining content freshness appropriate to each asset type.

  • Modifying the response headers or body before storing them in the cache. For example, developers can cache a locally generated HTML version of a response rather than the raw JSON returned by the origin (see the second sketch below).
    Benefit: This capability enables developers to store optimized versions of content in the cache, reducing processing time and improving delivery speed. By caching the HTML version instead of raw JSON, subsequent requests can be served faster, as the transformation step is eliminated. This not only improves performance but also reduces computational load on both the origin server and the edge, leading to cost savings and a more responsive application overall.

  • Determining whether to cache at all, based on the origin content. This is great for situations where you might not want to cache a response even though it arrived as an HTTP 200 OK (see the third sketch below).
    Benefit: This level of control ensures that only valid and useful content is cached, maintaining the integrity and reliability of the cache. By avoiding caching responses that contain errors despite a 200 OK status, developers can prevent the propagation of incorrect data to users. This results in a more consistent user experience, reduces the risk of serving stale or erroneous content, and helps maintain the overall quality and accuracy of the application's output.
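
Here is a minimal sketch of the first pattern in JavaScript. The backend name and TTL values are illustrative; the afterSend hook inspects Content-Type and picks a TTL accordingly:

import { CacheOverride } from 'fastly:cache-override';

addEventListener('fetch', (event) => event.respondWith(handleRequest(event)));

async function handleRequest(event) {
  return fetch(event.request, {
    backend: 'example_backend',
    cacheOverride: new CacheOverride({
      afterSend(resp) {
        const contentType = resp.headers.get('Content-Type') || '';
        // Long TTL for static assets, short TTL for everything else
        if (contentType.startsWith('image/') || contentType.startsWith('text/css')) {
          resp.ttl = 86400; // one day
        } else {
          resp.ttl = 60; // one minute
        }
      },
    }),
  });
}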
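
The second pattern can be sketched with the afterSend hook's body transform, which (in recent JavaScript SDK versions) lets you rewrite the body before it is stored; the renderHtml helper is hypothetical, and the exact hook signature is worth confirming on the developer portal:

import { CacheOverride } from 'fastly:cache-override';

addEventListener('fetch', (event) => event.respondWith(handleRequest(event)));

// Hypothetical helper that renders the origin's JSON into an HTML page
function renderHtml(data) {
  return `<html><body><h1>${data.title}</h1></body></html>`;
}

async function handleRequest(event) {
  return fetch(event.request, {
    backend: 'example_backend',
    cacheOverride: new CacheOverride({
      afterSend(resp) {
        resp.headers.set('Content-Type', 'text/html; charset=utf-8');
        // Rewrite the body before it is written to the cache, so the
        // rendered HTML (not the raw origin JSON) is what Fastly stores.
        return {
          bodyTransformFn: (body) => {
            const data = JSON.parse(new TextDecoder().decode(body));
            return new TextEncoder().encode(renderHtml(data));
          },
        };
      },
    }),
  });
}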
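
And a sketch of the third pattern. The X-App-Error header is a hypothetical stand-in for whatever signal your origin uses to flag a failure inside a 200 OK; returning an uncacheable marker from afterSend serves the response to the client without storing it:

import { CacheOverride } from 'fastly:cache-override';

addEventListener('fetch', (event) => event.respondWith(handleRequest(event)));

async function handleRequest(event) {
  return fetch(event.request, {
    backend: 'example_backend',
    cacheOverride: new CacheOverride({
      afterSend(resp) {
        // Hypothetical origin convention: errors are flagged via a header
        // even when the status line says 200 OK.
        if (resp.headers.get('X-App-Error') === 'true') {
          return { cache: 'uncacheable' }; // serve it, but keep it out of the cache
        }
      },
    }),
  });
}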

For more information and additional code examples, visit our developer portal.

Time to start caching in

Fastly’s cache APIs are perfect for extending the boundaries of personalization and rendering cutting-edge experiences. These new cache controls smoothly integrate with existing workflows, making simple customizations extremely easy to add.

Ready to take the HTTP Cache APIs for a spin? Start using them today or join the conversation in the forum to learn how we can help you get more out of your cache!