In June, we hosted Fastly Altitude, our first customer summit. You can read more about it on our blog, which includes a recap and videos of all the talks.
At the event, we chatted with existing customers — some large, some small, some old, and some new — about features they'd like to see and ways we could improve Fastly. We heard that we should be more communicative about our feature and product releases, so we're going to be doing quarterly release notes on everything that's happened over the previous 12 weeks. Depending on our release schedule, we may alter the reporting time period.
Here’s our update for Q2-Q3 2015.
Traffic growth has continued at a steady pace; we've gone from 1.3 million requests per second and 380 gigabits per second at the end of March 2015 to 2 million requests per second and 900 gigabits per second as of today.
Over the last six months, the engineering team has focused on making sure that all our systems scale as traffic continues to grow.
In addition to upgrading our entire fleet to our latest POP design, our networking team has been hard at work bringing up new POPs. In the last quarter, we've added POPs in:
Stockholm (hometown of CEO Artur Bergman)
Osaka (we also opened up a new office in Tokyo)
Seattle (we’re huge fans of rain — and Grey's Anatomy)
Denver (go Broncos — go creepy laser devil horse by the airport)
Melbourne (gives us an excuse to go see the Melbourne Formula 1 race)
Miami (we have a weakness for Cubanos)
We will continue to expand the network, with more POPs coming soon in São Paulo, Cape Town, Perth, and other exciting locations. Check out our network map for more info.
A lot of the last quarter has been taken up with performance enhancements and architecture upgrades, including changes to geographical redundancy so the platform is faster and more reliable.
We've also rolled out new features. Some of the big ticket items include:
We’re always seeking ways to give our customers more control at the edge. With this in mind, we recently announced Edge Dictionaries, which give you the ability to create dictionaries (key/value pairs that your VCL can reference) inside your Fastly services. This allows you to make real-time decisions from every server in the Fastly network.
Read more: Announcing Edge Dictionaries: Make faster decisions at the edge
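To give a flavor of what this looks like in VCL, here's a minimal sketch. The table name, keys, and header are invented for illustration, and with a real Edge Dictionary the table contents are created and updated through the Fastly API rather than hard-coded in your VCL:

table feature_flags {
  "beta_banner": "on",
  "new_checkout": "off"
}

sub vcl_recv {
#FASTLY recv
  /* read a flag at the edge; the third argument is the default if the key is missing */
  set req.http.X-Beta-Banner = table.lookup(feature_flags, "beta_banner", "off");
}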
To help with large files and media streaming, we increased our client keepalive value.
Previously, we'd close the keepalive connection to a client after 10 seconds. After talking to our various media customers, we upped that value to 10 minutes, which helps massively for things like chunked video and music streaming.
We improved the algorithm we use for maintaining keepalives to your origin servers. You’ll now see better throughput and lower latency on cache misses. You can set your origin keepalive timeout to very high values; we have customers who’ve gone over 60 minutes.
We added support for the tentative HTTP caching directives Stale-While-Revalidate and Stale-If-Error.
Stale-While-Revalidate allows customers to dictate that even though an object's time to live (TTL) has expired, it will be served stale while Fastly’s caches fetch a fresh copy in the background.
Stale-If-Error works exactly the same as our existing serve-stale feature — we simply added support for understanding the cache directive in an origin response.
Read more: Serving stale content
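If you'd rather control this from VCL than from your origin's Cache-Control header, a sketch along these lines should do it (the durations are arbitrary examples):

sub vcl_fetch {
#FASTLY fetch
  /* keep serving a stale copy for up to a minute while we revalidate in the background */
  set beresp.stale_while_revalidate = 60s;
  /* and for up to a day if the origin is returning errors */
  set beresp.stale_if_error = 86400s;
}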
Soft Purge differs from our regular purge in that it doesn't immediately remove the file from our cache — it just sets its TTL to zero. This means that you can purge while still getting the benefits of serve-stale.
Read more: Soft purges
In addition, we expose various variables in VCL that allow you to make smart choices about serving your content depending on the state of the cache.
For example, you can now do something like this:
sub vcl_error {
#FASTLY error
  /* handle 503s */
  if (obj.status >= 500 && obj.status < 600) {
    /* deliver stale object if it is available */
    if (stale.exists) {
      return(deliver_stale);
    }
    /* otherwise, return a synthetic */
    /* include your HTML response here */
    synthetic {"<!DOCTYPE html><html>Please replace this text with the error page you would like to serve to clients if your origin is offline.</html>"};
    return(deliver);
  }
}
The variables are documented under “Miscellaneous VCL features” in the Fastly docs.
Previously, all cache nodes would independently verify whether or not an origin server was available by making a health check request. As the number of POPs (and hence cache nodes) in our network increased, that meant more and more traffic was sent back to our customers’ origin servers.
We’ve changed the way this works: the results of a health check are now shared among the machines in a POP, massively reducing the number of health check requests sent back to origin.
A commonly requested feature for VCL is the ability to handle multiple cookie headers, especially among users who work with layered technology stacks that inject multiple Set-Cookie headers. This doesn't fit in well with the standard VCL syntax, so we've added some extensions to help our customers.
Read more: Response Cookie handling
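As a rough sketch of the kind of thing this enables (the header name is invented, and treat the function name here as an assumption; check the linked guide for the exact extension and signature):

sub vcl_fetch {
#FASTLY fetch
  /* assumption: setcookie.get_value_by_name() pulls a single cookie's value
     out of the Set-Cookie header(s) on the origin response */
  set beresp.http.X-Session-ID = setcookie.get_value_by_name(beresp, "session_id");
}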
Similarly, we've been asked for help dealing with sub-fields in HTTP headers (such as Accept-Encoding), so customers can parse and manipulate them without resorting to a jumble of regular expressions, which end up making your VCL look like a cat walked over your keyboard.
Read more: Isolating header values without regular expressions
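As a quick illustration (the header being set is invented for the example), sub-fields of list-style headers can now be read directly with a colon accessor instead of a regex:

sub vcl_fetch {
#FASTLY fetch
  /* grab just the max-age sub-field of the origin's Cache-Control header */
  set beresp.http.X-Origin-Max-Age = beresp.http.Cache-Control:max-age;
}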
Being able to deal with multiple languages is incredibly important to many businesses. Trying to manage internationalization and text localization using GeoIP means that customers travelling to Japan will suddenly be presented with a page full of Kanji.
It’s better to use the Accept-Language header, and we give you the tools to do that sanely.
Read more: Accept-Language header VCL features
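A minimal sketch of what that looks like, assuming the accept.language_lookup() helper described in the linked guide (the language list and header name are just examples):

sub vcl_recv {
#FASTLY recv
  /* pick the best match between the languages we publish and what the client
     asks for, falling back to English */
  set req.http.X-Language = accept.language_lookup("en:de:fr:ja", "en", req.http.Accept-Language);
}

You'd typically also want to key the cache on the resulting header (for example via Vary) so each language gets its own cached copy.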
Varnish Modules (VMODs) are incredibly useful extensions to core Varnish. We've ported various VMODs to our distributed Varnish, including:
Std, which has a variety of useful functions
Boltsort, which allows you to sort query params
Date and time, which provide various date- and time-related variables and functions
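Here's a rough sketch of these in use (the header name is invented for illustration):

sub vcl_recv {
#FASTLY recv
  /* normalize the Host header with std, and sort query parameters with boltsort
     so that ?b=2&a=1 and ?a=1&b=2 cache as a single object */
  set req.http.Host = std.tolower(req.http.Host);
  set req.url = boltsort.sort(req.url);
}

sub vcl_deliver {
#FASTLY deliver
  /* stamp the response with the delivery time using the date/time helpers */
  set resp.http.X-Delivered-At = strftime({"%Y-%m-%dT%H:%M:%SZ"}, now);
}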
Let us know if there are other VMODs you’d like to see by emailing support@fastly.com.
We expose various properties about the backend being used (specifically name, IP, and port), which can be helpful in debugging.
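For example, a sketch like this (header names invented, assuming the beresp.backend.* variables described in the docs) makes it easy to see which origin served a given miss:

sub vcl_fetch {
#FASTLY fetch
  /* surface which backend actually served this request, handy when debugging
     multi-origin configurations */
  set beresp.http.X-Backend-Name = beresp.backend.name;
  set beresp.http.X-Backend-IP = beresp.backend.ip;
  set beresp.http.X-Backend-Port = beresp.backend.port;
}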
Fastly's CDN is certified as a PCI DSS Level 1 Service Provider. If you’re interested in caching content in a manner that fully meets PCI requirements, and other key enterprise offerings in our ecommerce package, please contact sales-ecommerce@fastly.com.
We upped the file size limit for cacheable objects to 5GB and rolled out Streaming Miss, which means that when there's a cache miss, we'll start streaming the origin response straight back to the client rather than waiting until we've loaded the whole object and written it to cache. Technically, Streaming Miss was released last fall, but we wanted to include it here as a reminder of the Fastly features that are available to all customers.
Read more: Improving the delivery of large files with streaming miss and large file support
We’ve completely overhauled http://docs.fastly.com so that it’s more user-friendly. As always, any feedback can be sent to support@fastly.com and is greatly appreciated by the documentation team toiling daily in the word mines.
Ongoing performance improvements, memory reductions, and edge case fixes
Upgrading OpenSSL whenever a CVE horror show surfaces to ruin the day of SysAdmins everywhere
Fixing some ugly weirdness regarding “100 Continue” responses and Expect headers
We attended and sponsored many conferences over the past few months. In June 2015, we did a roadshow with our partners at Google Cloud Platform that had us speaking at one-day events in New York, San Francisco, Tokyo, London, and Amsterdam in the space of 13 days. Sadly, the team involved have all died of terminal jetlag, but we've erected a tasteful memorial to them in our new offices.
Follow our events page to keep track of which conferences we’ve attended and which ones we’ll be going to in the coming months.
We’re continuing our efforts to support open source projects by donating our services. Here are some projects that have recently started using Fastly:
As always, if you have an open source project that can use Fastly services, please reach out to community@fastly.com. Want to chat with Fastly engineers and other customers using our product? Check out our Community Forum.