Integration with backend technologies
Most of the time, when Fastly receives a request from an end user, we deliver a response that we fetch from your server, which we call a backend, or origin. Fastly interacts with thousands of different backend technologies and supports any backend that is an HTTP/1.1-compliant web server. A wide variety of software products and platforms can be used, including:
- Traditional web servers: You install and run your own operating system and web server, such as Apache, NGINX, or Microsoft IIS (on your own physical hardware, or on a virtualized infrastructure provider such as AWS's EC2 or Google Compute Engine).
- Platforms as a service: Heroku's platform-as-a-service (or equivalent products such as Google App Engine or DigitalOcean App Platform) manage routing, operating systems and virtualization, providing a higher-level environment in which to run web server apps.
- Serverless platforms: Serverless functions (such as Google Cloud Functions or AWS Lambda, sometimes known as 'functions as a service') can be extremely cost-effective backends that only charge you when they are invoked and can scale effortlessly, but as an even higher-level abstraction, they offer less flexibility.
- Static bucket storage: Services such as Amazon S3 or Google Cloud Storage are popular and relatively inexpensive ways to connect Fastly to a set of static resources with no compute capability at all.
Creating backends
Backends can be configured statically in multiple ways. Refer to core concepts for instructions. Dynamic backends can be configured at runtime in Compute services.
IMPORTANT: When setting up a static backend, configure the host header override and SSL hostname. These should almost always be set to the same hostname as you are using to identify the location of the backend. Learn more.
Dynamic backends
The ability to send requests to a backend defined at runtime creates the potential for an open proxy, which can be a security risk, so we recommend that for most applications, Fastly services should have backends defined statically. However, it is possible to define backends at runtime in Compute services, using methods available in Compute language SDKs:
- Rust
- JavaScript
- Go
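For example, in Rust a dynamic backend can be constructed at runtime with the SDK's backend builder and then used as the target of a fetch. The following is a minimal sketch; the backend name, target hostname, and builder options are illustrative, and dynamic backends must be enabled for your service:

```rust
use fastly::backend::Backend;
use fastly::{Error, Request, Response};
use std::time::Duration;

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Register a backend at runtime (illustrative name and target hostname).
    let backend = Backend::builder("dynamic_example", "example.com")
        .override_host("example.com")
        .connect_timeout(Duration::from_secs(1))
        .enable_ssl()
        .finish()?;

    // Send the client request to the backend we just defined.
    Ok(req.send(backend)?)
}
```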
Selecting backends
Static backends are defined in the same way in all Fastly services, but the way you select the backend to use for a particular fetch operation differs significantly between VCL and the Compute platform.
VCL
In VCL services, the req.backend variable indicates which backend to use when forwarding a client request. By default, Fastly will generate VCL that will assign the request to the backend for you, so if you have only one backend, there's nothing more to do. If you have more than one backend and don't want to write custom VCL, you can configure all backends to have automatic load balancing, or assign Conditions to each backend in the web interface.
If you do want to use custom VCL, you first need to know what the "VCL name" is for the backend. This name is a normalized version of the name given to the backend in the web interface or API, usually (but not always) prefixed with F_. To discover the names assigned to the backends in your service, click Show VCL in the web interface, or download the generated VCL via the API, and locate and examine the backend definitions. For example, if your backend is called "Host 1", the VCL name would most likely be F_Host_1.
Add your VCL code to select a backend after the #FASTLY... line in the appropriate subroutine of your VCL. Usually, backend assignment is done in vcl_recv, but it can also be done in vcl_miss or vcl_pass.
```vcl
#FASTLY RECV
if (req.url.path ~ "^/account(?:/.*)?\z") {
  set req.backend = F_Account_Microservice;
}
```
WARNING: It is not possible to override the default backend using VCL snippets because VCL snippets are inserted into generated VCL before the default backend is assigned, so the default assignment would overwrite your custom one.
Interaction with shielding
In VCL services, backends may be configured to perform shielding, in which a fetch from a Fastly POP to a backend will first be forwarded to a second nominated Fastly POP, if the request is not already being processed by that nominated "shield" POP. When shielding is used, it is important to allow Fastly to choose the shield POP instead of the backend server when appropriate. This happens automatically if you use conditions to select backends, but if you use custom VCL, see shielding with multiple origins in our shielding guide.
Fastly Compute
In Compute services, each fetch that uses a static backend must explicitly specify that backend, and the identifier for the backend is exactly as it appears in the API or web interface.
- Rust
- JavaScript
```rust
use fastly::{Error, Request, Response};

const BACKEND_NAME: &str = "custom_backend";

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    Ok(req.send(BACKEND_NAME)?)
}
```
HINT: If you are using a Compute service with a static bucket host like Google Cloud Storage or Amazon S3, consider using a starter kit designed to work with static hosting services.
Dynamic backends can be referenced in the same way as static backends, using the backend constructor specific to the language SDK you are using. In JavaScript, dynamic backends can also be used implicitly, by omitting a backend property in the fetch() call:
```javascript
/// <reference types="@fastly/js-compute" />
import { allowDynamicBackends } from "fastly:experimental";

allowDynamicBackends(true);

async function app() {
  // For any request, return the fastly homepage -- without defining a backend!
  return fetch('https://www.fastly.com/');
}

addEventListener("fetch", event => event.respondWith(app(event)));
```
The Compute platform does not currently support automatic load balancing or shielding.
Overriding the Host header
If you use a hostname (rather than an IP address) to define your backend, Fastly will only use the hostname to look up the IP address of the server, not to set the Host header or negotiate a secure connection. By default, the Host header on backend requests is copied from the client request. For example, if you own www.example.com and point it to Fastly, and create a second domain of origin.example.com that points to your origin, then the web server running on your origin and serving origin.example.com must also be able to serve requests that have a Host: www.example.com header. Fastly will also use the client-forwarded hostname to establish a secure connection using Server Name Indication.
This is often undesirable behavior and may not be compatible with static bucket hosts or serverless platforms. Therefore, when creating backends on your Fastly service, consider setting all of the properties address, override_host, ssl_sni_hostname and ssl_cert_hostname to the same value: the hostname of the backend (e.g., "example.com").
The CLI command fastly backend create will do this automatically. Using the API, web interface or VCL code, you must set these properties separately.
This is also essential for service chaining, and for many hosting providers, such as Heroku and AWS S3.
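In a Compute program you can also achieve the equivalent of override_host for an individual request by setting the Host header explicitly before forwarding it. This is a minimal Rust sketch; the backend name and bucket hostname are placeholders:

```rust
use fastly::http::header;
use fastly::{Error, Request, Response};

// Placeholder backend name and backend hostname.
const BACKEND_NAME: &str = "bucket_origin";
const BACKEND_HOST: &str = "example-bucket.s3.us-east-1.amazonaws.com";

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    // Replace the client-supplied Host header with the backend's own hostname.
    req.set_header(header::HOST, BACKEND_HOST);
    Ok(req.send(BACKEND_NAME)?)
}
```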
Static bucket providers
Most bucket providers require a Host header that identifies the bucket, and often the region in which the bucket is hosted:
Service | Host header |
---|---|
Amazon S3 | {BUCKET}.s3.{REGION}.amazonaws.com |
Alibaba Object Storage Service | {BUCKET}.{REGION}.aliyuncs.com |
Backblaze (S3-Compatible mode) | {BUCKET}.s3.{REGION}.backblazeb2.com |
DigitalOcean Spaces | {SPACE}.{REGION}.digitaloceanspaces.com |
Google Cloud Storage | {BUCKET}.storage.googleapis.com |
Microsoft Azure Blob Storage | {STORAGE_ACCOUNT_NAME}.blob.core.windows.net |
Wasabi Hot Cloud Storage | {BUCKET}.s3.{REGION}.wasabisys.com |
Serverless and PaaS platforms
Most platform-as-a-service providers require that requests carry a Host header with the hostname of your app, not the public domain of your Fastly service.
Service | Host header |
---|---|
Heroku | {app-name}.herokuapp.com |
Modifying the request path
In some cases, you may need to modify the path of the request URL before it is passed to a backend. There are a few possible reasons for this, the two most common of which result from using a static bucket provider:
- Bucket selection: Where the bucket provider requires the URL path to be prefixed with the bucket name.
- Directory indexes: Some providers do not support automatically loading directory index files for directory-like paths. For example, the path /foo/ may return an "Object not found" error, even though /foo/index.html exists in the same bucket. If your provider doesn't support automatic directory indexes, you can add the appropriate index filename to the path.
The following providers require path modifications to select the right bucket:
Service | Path modification |
---|---|
Backblaze (B2 mode) | /file/{BUCKET}/{PATH} |
HINT: If a bucket provider supports selecting a bucket using both a path and a hostname, we recommend using the hostname method.
In VCL services, path modifications are best performed in vcl_miss, which has access to the bereq object, to avoid mutating the original client request. In a Compute program, the modification can generally be done on a request instance or a clone of it, before sending it to a backend:
- Fastly VCL
- Rust
- JavaScript
```rust
let path = req.get_path();
let page = if path.ends_with('/') { "index.html" } else { "" };
let path_with_bucket = format!("/{}{}{}", BUCKET_NAME, path, page);
req.set_path(&path_with_bucket);
```
In VCL services with shielding enabled or which use restart, care should be taken to do path modifications only once. To ensure that the modification only affects the request just before it is sent to the origin, check the value of the req.backend.is_origin variable.
Redirecting for directory indexes
Some static bucket providers do not support automatically redirecting a directory request that doesn't end with a /. For example, a request for /foo where the bucket contains a /foo/index.html object will often return an "Object not found" 404 error. If you wish, you can configure Fastly so that in such cases, we retry the origin request, theorizing that 'foo' might be a directory, and if we find an object there, redirect the client to it:
- Fastly VCL
- Rust
- JavaScript
```rust
if resp.get_status() == StatusCode::NOT_FOUND && !path_with_bucket.ends_with("/index.html") {
    let orig_path = retry_req.get_path().to_string();
    path_with_bucket = format!("/{}{}/index.html", BUCKET_NAME, &orig_path);
    retry_req.set_path(&path_with_bucket);

    // Send the retry request to the backend
    let resp_retry = retry_req.send(BACKEND_NAME)?;

    if resp_retry.get_status() == StatusCode::OK {
        // Retry for a directory page has succeeded; redirect externally to the directory URL.
        let new_location = format!("{}/", orig_path);
        let resp_moved = Response::from_status(StatusCode::MOVED_PERMANENTLY)
            .with_header(header::LOCATION, new_location);
        Ok(resp_moved)
    } else {
        Ok(resp_retry)
    }
} else {
    Ok(resp)
}
```
Customizing error pages
When a backend is not working properly or a request is made for a non-existent URL, the backend may return an error response such as a 404, 500, or 503, the content of which you may not be able to control (or predict in advance). If you wish, you can replace these bad responses with a custom, branded error page of your choice. You can encode these error pages directly into your Fastly configuration or, if your service has a static bucket origin, you could use an object from your static bucket to replace the platform provider's error page.
This code example demonstrates both of these mechanisms:
- Fastly VCL
- Rust
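A minimal Rust sketch of both mechanisms might look like the following, assuming a backend named bucket_origin and a designated /errors/404.html object in the bucket (both names are illustrative):

```rust
use fastly::http::StatusCode;
use fastly::{Error, Request, Response};

// Illustrative backend name for a static bucket origin.
const BACKEND_NAME: &str = "bucket_origin";

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    let resp = req.send(BACKEND_NAME)?;
    let status = resp.get_status();

    // Pass non-error responses straight through.
    if !(status == StatusCode::NOT_FOUND || status.is_server_error()) {
        return Ok(resp);
    }

    // Mechanism 1: replace a 404 with a designated error object from the bucket.
    if status == StatusCode::NOT_FOUND {
        let error_page = Request::get("https://example.com/errors/404.html").send(BACKEND_NAME)?;
        if error_page.get_status() == StatusCode::OK {
            return Ok(error_page.with_status(StatusCode::NOT_FOUND));
        }
    }

    // Mechanism 2: fall back to an error page encoded directly in the service.
    Ok(Response::from_status(status)
        .with_content_type(fastly::mime::TEXT_HTML_UTF_8)
        .with_body("<html><body><h1>Sorry, something went wrong.</h1></body></html>"))
}
```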
HINT: Some static bucket providers will allow you to designate a particular object in your bucket to serve in the event that an object is not found. If they don't support that, this is a good way to implement the same behavior using Fastly and get support for a range of other error scenarios at the same time.
IMPORTANT: If you are implementing directory redirects and custom error pages, ensure the directory redirect happens first.
Setting cache lifetime (TTL)
In general, it makes sense for the server that generates a response to attach a caching policy to it (e.g., by adding a Cache-Control response header). This allows the server to apply precise control over caching behavior without having to apply blanket policies that may not be suitable in all cases. However, if you do prefer to apply caching policies based on patterns in the URL or content-type, or indeed a blanket policy for all resources, you can use your Fastly configuration to set the TTL. See HTTP caching semantics for more details.
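For example, in a Rust Compute program you can override the TTL for matching requests before sending them to the backend. This sketch assumes a backend named origin_0 and uses the SDK's set_ttl convenience method to apply a one-hour TTL to image paths:

```rust
use fastly::{Error, Request, Response};

// Illustrative backend name.
const BACKEND_NAME: &str = "origin_0";

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    // Apply a blanket caching policy from the edge: cache image paths for an
    // hour, regardless of what the origin's Cache-Control header says.
    if req.get_path().starts_with("/images/") {
        req.set_ttl(3600);
    }
    Ok(req.send(BACKEND_NAME)?)
}
```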
Static bucket providers
Static bucket providers often allow caching headers to be configured as part of the metadata of the objects in your bucket. Ideally, use this feature to tell Fastly how long you want to keep objects in cache. For example, when uploading objects to Google Cloud Storage, use the gsutil command:
```sh
$ gsutil -h "Content-Type:text/html" -h "Cache-Control:public, max-age=3600" cp -r images gs://bucket/images
```
Setting caching metadata in this way, at the object level, allows for precise control over caching behavior, but you can often also configure a single cache policy to apply to all objects in the bucket.
HINT: If your bucket provider can trigger events when objects in your bucket change, and you can attach a serverless function to those events, consider using that mechanism to purge the Fastly cache when your objects are updated or deleted. This allows you to set a very long cache lifetime across the board, and benefit from a higher cache hit ratio and corresponding increased performance. We wrote about how to do this for Google Cloud Platform on our blog.
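As an illustrative sketch of that approach, a function triggered by a bucket-change event could issue Fastly's purge-by-URL request (an HTTP PURGE for the affected URL). The helper below uses the reqwest crate; the Fastly-Key header is only needed if your service enforces authenticated purging:

```rust
use reqwest::blocking::Client;
use reqwest::Method;

// Purge a single URL from the Fastly cache by issuing an HTTP PURGE request
// for it. `api_token` is only required when authenticated purging is enabled.
fn purge_url(url: &str, api_token: &str) -> Result<(), reqwest::Error> {
    let client = Client::new();
    client
        .request(Method::from_bytes(b"PURGE").expect("valid method"), url)
        .header("Fastly-Key", api_token)
        .send()?
        .error_for_status()?;
    Ok(())
}
```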
Web servers
If using your own hardware, or an infrastructure provider on which you install your own web server (such as AWS's EC2 or Google Compute Engine), you will have a great deal more flexibility than with a static bucket host, and somewhat more than with a platform-as-a-service provider. The most important thing to consider when using your own web server installation is the caching headers that you set on responses that you serve to Fastly, most commonly Cache-Control and Surrogate-Control.
- Apache: Consider making use of the mod_expires module. For example, to cache GIF images for 75 minutes after the image was last accessed, add the following to a directory .htaccess file or to the global Apache config file:

```apache
ExpiresActive On
ExpiresByType image/gif "access plus 1 hours 15 minutes"
```

- NGINX: Add the expires directive:

```nginx
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
  expires 1h;
}
```

Alternatively, if you need more flexibility in modifying headers, you can try the HttpHeadersMore module.
Removing metadata
Some hosting providers, particularly static bucket providers, include additional headers when serving objects over HTTP. You may want to remove these before you serve the file to your end users. This is best done in vcl_fetch, where the changes to the object can be made before it is written to the cache:
- Fastly VCL
- Rust
- JavaScript
- Go
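For example, in a Rust Compute program you can strip provider metadata from the backend response before it reaches the client. The backend name and the exact header names below are illustrative (they are typical of Amazon S3 and Google Cloud Storage responses):

```rust
use fastly::{Error, Request, Response};

// Illustrative backend name for a static bucket origin.
const BACKEND_NAME: &str = "bucket_origin";

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    let mut resp = req.send(BACKEND_NAME)?;

    // Remove provider metadata headers before delivering the response.
    for header_name in ["x-amz-id-2", "x-amz-request-id", "x-goog-generation", "x-goog-hash"] {
        resp.remove_header(header_name);
    }

    Ok(resp)
}
```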
Ensuring backend traffic comes only from Fastly
Putting Fastly in front of your backends offers many resilience, security and performance benefits, but those benefits may not be realized if it is also possible to send traffic to the backend directly. Depending on the capabilities of your backend, there are various solutions to ensure that there is no route to your origin except through Fastly.
IP restriction
We publish a list of the IP addresses that make up the Fastly IP space.
Restricting access to requests coming only from Fastly IPs is not by itself an effective way to protect your origin because all Fastly customers share the same IP addresses when making requests to origin servers. However, since IP restriction can often be deployed at an earlier point in request processing, it may be useful to combine this with one of the other solutions detailed in this section.
Shared secret
A simple way to restrict access to your origin is to set a shared secret in a custom header in the Fastly configuration:
- Fastly VCL
- Rust
- JavaScript
```rust
use fastly::{Error, Request, Response};

const BACKEND_NAME: &str = "backend_name";

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    req.set_header("edge-auth", "some-pre-shared-secret-string");
    Ok(req.send(BACKEND_NAME)?)
}
```
To make this solution work, you must configure your backend server to reject requests that don't contain the secret header. This is an effective but fragile solution: if a single request is accidentally routed somewhere other than your origin, the secret will be leaked and can then be used by a bad actor to make any number of requests of any kind to your origin.
Per-request signature
Consider constructing a one-time, time-limited signature within your Fastly service, and verify it in your origin application:
- Fastly VCL
- Rust
- JavaScript
```rust
use fastly::{ConfigStore, Error, Request, Response};
use hmac_sha256::HMAC;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

const BACKEND_NAME: &str = "origin_0";

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    let config = ConfigStore::open("config");
    if let Some(secret) = config.get("edge_auth_secret") {
        let pop = std::env::var("FASTLY_POP").unwrap_or_else(|_| String::new());
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap_or(Duration::ZERO)
            .as_secs();
        let data = format!("{},{}", now, pop);
        let sig = HMAC::mac(data.as_bytes(), secret.as_bytes());
        let signed = format!("{}, {}", data, base64::encode(sig));
        req.set_header("edge-auth", signed);
    }
    Ok(req.send(BACKEND_NAME)?)
}
```
This is slightly harder to verify than a constant string, but if a request leaks and a signature is compromised, it provides only short-term access to make a single kind of request.
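On the origin side, verification means recomputing the HMAC over the timestamp and POP, and rejecting stale or mismatched signatures. The following is an illustrative sketch (not Fastly SDK code) that assumes the origin is also written in Rust, uses the hmac-sha256 and base64 crates, and shares the same secret as the edge code above:

```rust
use hmac_sha256::HMAC;
use std::time::{SystemTime, UNIX_EPOCH};

// Verify a header of the form "{timestamp},{pop}, {base64(hmac)}" produced by
// the edge code above. Returns false for malformed, stale, or forged values.
fn verify_edge_auth(header_value: &str, secret: &[u8], max_age_secs: u64) -> bool {
    let Some((data, sig_b64)) = header_value.rsplit_once(", ") else { return false };
    let Some((ts, _pop)) = data.split_once(',') else { return false };
    let Ok(ts) = ts.parse::<u64>() else { return false };

    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    if now.saturating_sub(ts) > max_age_secs {
        return false; // signature is too old
    }

    // Recompute the HMAC and compare (a production check should use a
    // constant-time comparison).
    base64::encode(HMAC::mac(data.as_bytes(), secret)) == sig_b64
}
```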
Proprietary signatures for cloud service providers
Static bucket providers like Amazon S3 cannot be programmed to support arbitrary signature algorithms like the one above, but they do support a specific type of signature for authentication to protected buckets and individually protected objects.
Although Amazon's signature was created for its S3 service, it is widely supported as a compatibility convenience by many other bucket providers including Backblaze (in S3-Compatible mode), DigitalOcean Spaces, Google Cloud Storage, and Wasabi Hot Cloud Storage. See the AWS documentation for more details.
This and other proprietary signatures can be constructed in Fastly services using VCL or the Compute platform. The following examples in our solutions gallery provide reference implementations: