Pub/Sub at the edge with Fanout

IMPORTANT: The content on this page uses the following versions of Compute SDKs: Rust SDK: 0.10.4 (current is 0.11.2, see changes), JavaScript SDK: 3.21.2 (current is 3.28.0, see changes), Go SDK: 0.1.7 (current is 1.3.3, see changes)

Fanout is a publish/subscribe message broker operating at the Fastly edge, powered by Pushpin. This makes it easy to build and scale real-time/streaming applications which push data instantly to browsers, mobile apps, servers, and other devices.

When Fanout is not involved, an HTTP request from a client that arrives at Fastly's network results in a short-lived response, formed from some combination of content retrieved from the cache, fetched from a backend server, and/or synthesized by edge code. When edge code passes a request to Fanout, Fanout keeps that connection open indefinitely and subscribes the connection to one or more channels, waiting for data to be published to those channels. You publish data to a channel by calling an API, and Fanout distributes that data to all the clients which have a subscription to that channel.

Fanout can be used to upgrade any regular HTTP request into an event-driven response using a transport such as HTTP streaming (e.g., Server-Sent Events) or long polling. Fanout can also be used to handle a WebSocket request, resulting in a bidirectional stream.

This architecture has some advantages over more proprietary streaming services:

  • Integrated into your Fastly service, so any pre-processing you apply to regular requests (e.g., authentication) can also be applied to streaming requests.
  • Any HTTP client or server can be used - including serverless / functions-as-a-service.
  • Any HTTP response can be turned into an event stream (e.g. a progressive JPEG, a log file, or an API endpoint that performs asynchronous operations).
  • No need to use a separate domain for streaming data.

Quick start

To use Fanout, you need a paid Fastly account containing a compatible Compute service. A free trial can be enabled in the 'Fanout' section of the service configuration options in the web interface.

There are many different ways of using Fanout, but to quickly see what it can do, use one of the fully-featured Compute starter kits:

Create a project from template

To create a Compute project with Fanout pre-configured, use fastly compute init:

  1. Rust
  2. JavaScript
  3. Go
$ fastly compute init --from=https://github.com/fastly/compute-starter-kit-rust-fanout

You can then compile and publish it to a live Fastly service using fastly compute publish:

$ fastly compute publish
Create new service: [y/N] y
Domain: [some-funky-words.edgecompute.app]
Backend (hostname or IP address, or leave blank to stop adding backends):
✓ Creating domain 'some-funky-words.edgecompute.app'...
✓ Uploading package...
✓ Activating version...
SUCCESS: Deployed package (service 0eBOC1x5Q0HHadAlpeKbvt, version 1)

Add a backend

Fanout communicates with a backend server to get instructions on what to do with each new connection. To make it easier to get started, the starter kit is configured to allow the Compute service to itself act as the backend. To make use of this, add a backend called self to your service that directs requests back to the service itself:

$ fastly backend create --name self -s {SERVICE_ID} --address {PUBLIC_DOMAIN} --port 443 --version latest --autoclone
$ fastly service-version activate --version latest -s {SERVICE_ID}

The {SERVICE_ID} and {PUBLIC_DOMAIN} should be replaced by the values shown in the output from the publish step.

IMPORTANT: If you use the web interface or API to create the backend, be sure to set a host header override if your server's hosting is name-based. Learn more.

Enable Fanout

Fanout is an optional upgrade to Fastly service plans. If you have not yet purchased access, contact sales, or start a free trial by enabling the toggle in the web interface on any Compute service.

If your Fastly account has full access to Fanout, it can be enabled on an individual service in the web interface or by enabling the fanout product using the product enablement API or the Fastly CLI:

$ fastly products --enable=fanout

Authenticating

To make the API calls to perform the publishing actions described in the section below, you'll need a Fastly API Token that has the global scope for your service.

Test the service

The starter kit project is set up to create long-lived connections for requests to /test/stream, /test/sse, /test/long-poll, and /test/websocket. The project uses itself as the backend to start the streams, and subscribes all clients to a channel called test.

You can now test the starter kit using any of the supported transports:

  1. HTTP Streaming (incl. Server-Sent Events)
  2. HTTP Long polling
  3. WebSockets

In one terminal window, make an HTTP request for /test/stream:

$ curl -i "https://some-funky-words.edgecompute.app/test/stream"
HTTP/2 200
content-type: text/plain
x-served-by: cache-lhr7380-LHR
date: Tue, 23 Aug 2022 12:48:05 GMT

You'll see output such as the above but you won't return to the shell prompt. Now, in another terminal window, run:

$ curl -H "Fastly-Key: {YOUR_FASTLY_TOKEN}" -d '{"items":[{"channel":"test","formats":{"http-stream":{"content": "hello\n"}}}]}' https://api.fastly.com/service/{SERVICE_ID}/publish/

The published data includes an http-stream representation of your data, which Fanout can use for streaming connections. The event you published appears on your curl output:

hello

You can continue to publish more messages, and they will be appended to the streaming response.

Server-Sent Events (SSE) is a specialized application of HTTP Streaming that uses the Content-Type value of text/event-stream on the response.

The starter kit provides a test endpoint for SSE. Make an HTTP request for /test/sse:

$ curl -i "https://some-funky-words.edgecompute.app/test/sse"
HTTP/2 200
content-type: text/event-stream
x-served-by: cache-lhr7380-LHR
date: Tue, 23 Aug 2022 12:48:05 GMT

Now, in another terminal window, run:

$ curl -H "Fastly-Key: {YOUR_FASTLY_TOKEN}" -d '{"items":[{"channel":"test","formats":{"http-stream":{"content": "event: message\ndata: {\"text\": \"hello world\"}\n\n"}}}]}' https://api.fastly.com/service/{SERVICE_ID}/publish/

The event you published appears on your curl output:

event: message
data: {"text": "hello world"}

The pattern created by the starter kit is well suited to use cases where you'll know at the edge what channels the client should be subscribed to, and means your origin only deals with publishing events, rather than also negotiating the setup of streams.

Next steps

Now you have an operational Fanout message broker operating on a Fastly service. Consider how you might want to modify this setup to suit your needs:

  • Learn more about subscribing, including examples of the front-end JavaScript code you need to interact with streams.
  • Learn more about publishing, including simple libraries that can abstract the complexity of message formatting for you.
  • If you only need one kind of transport (e.g., WebSockets, and not SSE), feel free to remove the code that enables the other transports.
  • If you prefer to have your origin server do the stream setup, then most of the edge code is no longer needed. See Connection setup below.
  • If you intend to use the new service in production, you'll want to add at least one origin server and a domain, and consider how you want the Fastly platform to cache your non-streamed content.

Connection setup

Fanout connections are created by explicitly calling the appropriate handoff method in your preferred Compute language SDK. Fanout then queries a nominated backend to find out what to do with the request. It's up to the backend to tell Fanout to treat the request as a stream and to provide a list of channels that client should subscribe to.

Workflow for connection setup

IMPORTANT: The Compute application decides what kinds of requests to hand off to Fanout (3), and the backend application decides what channels to subscribe the client to (5). This backend can also be a Compute service - it can even be the exact same service.

What to hand off to Fanout

You should be selective about requests you hand off to Fanout, e.g., by checking its URL path or headers. Handing off a request that isn't intended to be a stream may still work, because Fanout will relay that request to origin, and if the response is not GRIP or WebSocket-over-HTTP, Fanout will simply relay it back to the client and close the connection. However, passing all requests through Fanout is not recommended, for a number of reasons:

  • Only limited changes to requests (e.g., to request headers) will be reflected when handing off to Fanout.
  • The request will not interact with the Fastly cache, so even content that is not intended to be streamed will not be cached.
  • Responses from origin will be delivered directly to the client by Fanout, and will not be available to the Compute program.

As a result it usually makes sense to hand off requests only when they target a known path or set of paths on which you want to stream responses:

  1. Rust
  2. JavaScript
  3. Go
use fastly::{Error, Request};

fn main() -> Result<(), Error> {
    let req = Request::from_client();
    if req.get_path().starts_with("/test/") {
        // Hand off stream requests to the stream backend
        return Ok(req.handoff_fanout("stream_backend")?);
    }
    // Forward all non-stream requests to the primary backend
    Ok(req.send("primary_backend")?.send_to_client())
}

Responding to Fanout requests

Fanout communicates with backends by forwarding client requests and interpreting instructions in the response formatted using Generic Realtime Intermediary Protocol (GRIP). When a client request is handed off to Fanout, Fanout will forward the request to the backend specified in the handoff. The backend response can tell Fanout how to handle the connection lifecycle, using GRIP instructions.

  1. HTTP Streaming (incl. Server-Sent Events)
  2. HTTP Long polling
  3. WebSockets

Fanout forwards regular HTTP requests to the backend. Client request headers that are added, removed, or modified on your Request will be reflected in the Fanout handoff.

If the backend wants to use the request for HTTP streaming (including Server-Sent Events), it should use GRIP headers to instruct Fanout to hold that response as a stream:

HTTP/1.1 200 OK
Content-Type: text/plain
Grip-Hold: stream
Grip-Channel: mychannel
Grip-Channel: anotherchannel

The GRIP headers that are relevant to initiating HTTP streams are:

  • Grip-Hold: Set to stream to tell Fanout to deliver the headers of the response to the client immediately, and then deliver messages as they are published to subscribed channels.
  • Grip-Channel: A channel to subscribe the request to. Multiple Grip-Channel headers may be specified in the response, to subscribe multiple channels to the request.
  • Grip-Keep-Alive: Data to be sent to the client after a certain period of inactivity. The timeout parameter specifies the length of time a request must be idle before the keep-alive data is sent (default 55 seconds). The format parameter specifies the format of the keep-alive data. Allowed values are raw, cstring, and base64 (default raw). For example, if a newline character should be sent to the client after 20 seconds of inactivity, the following header could be used: Grip-Keep-Alive: \n; format=cstring; timeout=20.

Messages to be published to a client in a Grip-Hold: stream state must have an http-stream format available. Learn more about publishing.

Server-Sent Events (SSE) is a specialized application of HTTP Streaming that uses the Content-Type value of text/event-stream on the response.

HTTP/1.1 200 OK
Content-Type: text/event-stream
Grip-Hold: stream
Grip-Channel: mychannel

Compliant Server-Sent Events clients (such as the EventSource API built into web browsers) will send a Last-Event-ID header with new connection requests. If you care about ensuring clients do not miss events during reconnects, consider parsing this header and including missed events in the initial response along with the Grip-Hold header, allowing subsequent events provided via the publishing API to be appended by Fanout later to the same response.
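
As a sketch of that approach, an Express-style backend handler might look like the following (getEventsSince() is a hypothetical lookup of events published after the given ID):

// Express-style handler for a Fanout-relayed SSE request: replay any missed
// events in the initial response, then instruct Fanout to hold the stream open.
app.get('/test/sse', (req, res) => {
  const lastId = req.header('Last-Event-ID');
  const missed = lastId ? getEventsSince(lastId) : []; // hypothetical lookup

  res.set({
    'Content-Type': 'text/event-stream',
    'Grip-Hold': 'stream',
    'Grip-Channel': 'test',
  });
  // Subsequent events published to the "test" channel are appended by Fanout.
  res.send(missed.map((e) => `id: ${e.id}\nevent: message\ndata: ${e.data}\n\n`).join(''));
});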

Validating GRIP requests

If the backend is running on a public server, then it's a good idea to validate that the request is actually coming through Fastly Fanout. The Grip-Sig header value can be used to do this. Grip-Sig is provided as a JSON Web Token, and tokens signed by Fastly Fanout can be validated using the following public key. If the token cannot be fully verified for any reason, including expiration, then the backend should behave as if the header wasn't present.

This is the public key we use for signing GRIP requests:

-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAECKo5A1ebyFcnmVV8SE5On+8G81Jy
BjSvcrx4VLetWCjuDAmppTo3xM/zz763COTCgHfp/6lPdCyYjjqc+GM7sw==
-----END PUBLIC KEY-----

Many language ecosystems provide libraries for validating JWTs. If your backend uses JavaScript, for example, the jsonwebtoken module can be used:

import * as jwt from 'jsonwebtoken';

const FANOUT_PUBLIC_KEY = `-----BEGIN PUB...`;

// Assuming ExpressJS or similar
app.get('/chat-stream/:user', (req, res) => {
  // verify() throws if the signature is invalid or the token has expired
  jwt.verify(req.header('Grip-Sig'), FANOUT_PUBLIC_KEY);
});

NOTE: If your backend is running on JavaScript on Fastly Compute, the runtime does not support PEM-formatted keys at this time. Use the equivalent JSON Web Key (JWK) format instead:

const FANOUT_PUBLIC_KEY = {
  "kty": "EC",
  "crv": "P-256",
  "x": "CKo5A1ebyFcnmVV8SE5On-8G81JyBjSvcrx4VLetWCg",
  "y": "7gwJqaU6N8TP88--twjkwoB36f-pT3QsmI46nPhjO7M"
};

You can import this key and then use it with a library such as jose to validate a JWT:

import * as jose from 'jose';

const cryptoKey = await crypto.subtle.importKey(
  'jwk', FANOUT_PUBLIC_KEY, { name: 'ECDSA', namedCurve: 'P-256' },
  false, ['verify']
);
const result = await jose.jwtVerify(req.headers.get('Grip-Sig'), cryptoKey);

Using a Fastly Compute service as a Fanout backend

The starter kits for Fanout illustrate using a single Compute service as both a normal Fastly service that receives end-user traffic and also as the backend used by Fanout to negotiate streams. This is achieved by adding a backend called self to the Compute service using the public domain of the service and then specifying that backend name when handing off to Fanout. You could also use another, different Compute service as the Fanout backend.

Depending on your use case, it may make sense to use a Fastly Compute service as the Fanout backend, or it may be better to use the same origin server you use for non-streaming requests.

Consider using a Fastly Compute service to provide the Fanout backend if:

  • you have a small number of channels
  • the request from the client specifies the channels they want to subscribe to
  • the server publishing messages doesn't need to know how many subscribers there are

Consider using your own origin server as the Fanout backend if:

  • it's not possible to know ahead of time whether a request will turn into a stream or not
  • you need to know within your origin infrastructure how many subscribers there are
  • it's important that clients don't miss messages that are published while the client is reconnecting
  • you want to apply an authentication check to stream requests (and the authentication layer is implemented at your origin)
  • you are using long-polling

When Fanout relays requests to the backend, the path is preserved, and a Grip-Sig header is added. The path therefore remains a good way to identify requests related to streaming endpoints, and the Grip-Sig header can be used to differentiate between requests from a client and requests relayed from Fanout:

  1. Rust
  2. JavaScript
  3. Go
use fastly::{Error, Request};

fn main() -> Result<(), Error> {
    let req = Request::from_client();
    // Request is a stream request
    if req.get_path().starts_with("/test/") {
        if req.get_header("Grip-Sig").is_some() {
            // Request is from Fanout
            return Ok(handle_fanout(req, "test").send_to_client());
        }
        // Not from Fanout, route back to self through Fanout first
        return Ok(req.handoff_fanout("self")?);
    }
    // Forward all non-stream requests to the primary backend
    Ok(req.send("primary_backend")?.send_to_client())
}

The handle_fanout function (handleFanout in JavaScript and Go) invoked in the example above is a user-provided function that should return a GRIP HTTP or a WebSockets-over-HTTP response. The Fanout starter kits contain example implementations.
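
As a simplified sketch of what such a function might return for the HTTP streaming case (the starter kits also handle long polling and WebSockets), a JavaScript version could look like this:

// Simplified sketch of a GRIP response for the HTTP streaming transport:
// hold the connection open as a stream and subscribe it to `channel`.
// A full implementation would also inspect the request to pick the transport.
function handleFanout(req, channel) {
  return new Response('', {
    status: 200,
    headers: {
      'Content-Type': 'text/plain',
      'Grip-Hold': 'stream',
      'Grip-Channel': channel,
    },
  });
}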

Subscribing

Fanout is designed to allow push messaging to integrate seamlessly into your domain. When clients make HTTP requests or WebSocket connections that arrive at Fastly's network, what happens next depends on the instructions provided by your backend. These instructions can include subscribing the client to one or more channels.

For HTTP-based transports (such as Server-Sent Events and long polling), this is done with response headers. For example:

Grip-Hold: stream
Grip-Channel: mychannel

For the WebSocket transport, this is done by sending GRIP control messages as part of a WebSockets-over-HTTP response. For example:

c:{"type": "subscribe", "channel": "mychannel"}

It's important to understand that clients don't assert their own subscriptions. Clients make arbitrary HTTP requests or send arbitrary WebSocket messages, and it is your backend that determines whether clients should be subscribed to anything. Your channel schema remains private between Fanout and your backend server, and in fact clients may not even be aware that publish-subscribe activities are occurring.

HINT: Your application's design may still allow for client requests to specify channel names. A path such as /stream/departure-KR4N81 to get a real time stream of departure status for a flight booking, for example, is passing the name of the desired channel in the path. If the backend deems the client to be entitled to data from that channel, it could extract this token from the path and pass it to Fanout in a GRIP subscription instruction.

If your client is a web browser, you will use JavaScript to initiate streaming requests to the backend:

  1. HTTP Streaming (incl. Server-Sent Events)
  2. HTTP Long polling
  3. WebSockets

Modern web browsers have support for reading from streams via the fetch function and ReadableStream interface:

const eventList = document.querySelector('ul');
const decoder = new TextDecoder();
const response = await fetch('/test/stream');
const reader = response.body.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) {
    break;
  }
  // value is a Uint8Array of bytes; decode it to text before displaying
  const newElement = document.createElement("li");
  newElement.textContent = decoder.decode(value, { stream: true });
  eventList.appendChild(newElement);
}

Web browsers have built-in support for Server-Sent Events via the EventSource API:

const evtSource = new EventSource('/test/sse');
const eventList = document.querySelector('ul');
evtSource.onmessage = (event) => {
  const newElement = document.createElement("li");
  newElement.textContent = `message: ${event.data}`;
  eventList.appendChild(newElement);
};

If your SSE events include an id: property, the EventSource will add a Last-Event-ID header to each request, which can be used to deliver missed messages when a new stream begins.
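
For example, a published http-stream payload whose content begins with an id: field (the value 42 here is illustrative) gives each event an identifier that reconnecting EventSource clients will echo back in Last-Event-ID:

{"items":[{"channel":"test","formats":{"http-stream":{"content":"id: 42\nevent: message\ndata: {\"text\": \"hello world\"}\n\n"}}}]}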

Publishing

Messages are published to Fanout channels using the publishing API. To publish events, send an HTTP POST request to https://api.fastly.com/service/{SERVICE_ID}/publish/. You'll need to authenticate with a Fastly API Token that has the global scope for your service.

Messages can also be delivered during connection setup (often to provide events that the client missed while not connected), and in response to inbound WebSocket messages. Events delivered in this way go to the client making the request (or sending the inbound WebSocket message), and do not use pub/sub channel subscriptions.

IMPORTANT: Unlike other Fastly APIs, the publishing endpoint requires a trailing slash: publish/.

Publish requests include the messages to be published in a JSON data model:

Property                   Type    Description
items                      Array   A list of messages to publish
└─ [i]                     Object  Each member of the array is a single message
   └─ id                   String  A string identifier for the message. See de-duplication.
   └─ prev-id              String  Identifier of the previous message that was published to the channel. See sequencing.
   └─ channel              String  The name of the Fanout channel to which to publish the message. One channel per message.
   └─ formats              Object  A set of representations of the message, suitable for different transports.
      └─ ws-message        Object  A message representation suitable for delivery to WebSockets clients.
         └─ content        String  Content of the WebSocket message.
         └─ content-bin    String  Base64-encoded content of the WebSocket message (use instead of content if the message is not a string).
         └─ action         String  A publish action.
      └─ http-stream       Object  A message representation suitable for delivery to Server-Sent Events clients.
         └─ content        String  Content of the SSE message. Must be compatible with the text/event-stream format.
         └─ content-bin    String  Base64-encoded content of the SSE message (use instead of content if the message is not a string).
         └─ action         String  A publish action.
      └─ http-response     Object  A message representation suitable for delivery to long-polling clients.
         └─ action         String  A publish action.
         └─ code           Number  HTTP status code to apply to the response.
         └─ reason         String  Informational label for the HTTP status code (delivered only over HTTP/1.1).
         └─ headers        Object  A key-value map of headers to set on the response.
         └─ body           String  Complete body of the HTTP response to deliver.
         └─ body-bin       String  Base64-encoded body content (use instead of body if the body is not a string).
Minimally, a publish request must contain one message in at least one format, with the content property (for http-stream or ws-message) or the body property (for http-response) specified. An example of a valid publish payload is:

{
  "items": [
    {
      "channel": "test",
      "formats": {
        "ws-message": {
          "content": "hello"
        }
      }
    }
  ]
}

This can be sent using curl as shown:

$ curl -H "Fastly-Key: {YOUR_FASTLY_TOKEN}" -d '{"items":[{"channel":"test","formats":{"ws-message":{"content":"hello"}}}]}' https://api.fastly.com/service/{SERVICE_ID}/publish/

WARNING: If you are migrating to Fastly Fanout from self-hosted Pushpin or fanout.io, you may be using a GRIP library in your server application. These libraries are not currently compatible with Fastly Fanout.

Publish actions

Published items can optionally specify one of three actions:

  • send: The included content should be delivered to subscribers. This is the default if unspecified.
  • hint: The content to be delivered to subscribers must be externally retrieved. No content is included in the published item.
  • close: The request or connection associated with the subscription should be ended/closed.
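
For example, publishing an item whose http-stream format carries only a close action would end any held streaming responses subscribed to that channel (a sketch; the channel name is illustrative):

{
  "items": [
    {
      "channel": "test",
      "formats": {
        "http-stream": { "action": "close" }
      }
    }
  ]
}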

Sequencing

If Fanout receives a message with a prev-id that doesn't match the id of an earlier message, Fanout will buffer it until it receives a message whose id matches that value, at which point both messages will be delivered in the right order. If the expected message is never received, the buffered message will eventually be delivered anyway (around 5-10 seconds later).
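
For example, given the following two publish payloads (the ids are illustrative), if the second reached Fanout before the first, Fanout would buffer its message until the message with id "1" arrived, then deliver both in order:

{"items":[{"channel":"test","id":"1","formats":{"http-stream":{"content":"first\n"}}}]}
{"items":[{"channel":"test","id":"2","prev-id":"1","formats":{"http-stream":{"content":"second\n"}}}]}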

De-duplication

If Fanout receives a message with an id that it has seen recently (within the last few seconds), the publish action will be accepted but no message will be created. This happens even if the message content is different from any prior message with the same id.

This feature is typically useful if you have an architecture with redundant publish paths. For example, you could have two publisher processes handling event triggers and have both send each message for high availability. Fanout would receive every message twice, but only process each message once. If one of the publishers fails, messages would still be received from the other.

Limits

By default, messages are limited to 32,767 bytes for the “content” portion of the format being published. For the normal HTTP and WebSocket transports, the content size is the number of HTTP body bytes or WebSocket message bytes (TEXT frames converted to UTF-8).

Inbound WebSockets messages

Unlike HTTP-based push messaging (e.g. server-sent events), WebSockets is bidirectional. When clients send messages to Fastly over an already-established WebSocket, Fanout will make a WebSockets-over-HTTP request to the Fanout backend, with a TEXT or BINARY segment containing the message from the client.

POST /test/websocket HTTP/1.1
Sec-WebSocket-Extensions: grip
Content-Type: application/websocket-events
Accept: application/websocket-events
TEXT 16\r\n
Hello from the client!\r\n

The response from the backend may include TEXT or BINARY segments, which will be delivered to the client that sent the message (disregarding the channel-based pub/sub brokering). TEXT segments may also include GRIP control messages to instruct Fanout to modify the client stream, for example to change which channels it subscribes to.

HTTP/1.1 200 OK
Content-Type: application/websocket-events
TEXT 0C\r\n
You said Hi!\r\n
TEXT 45\r\n
c:{"type": "subscribe", "channel": "additional-channel-subscription"}\r\n

The starter kits for Fanout include an example of handling inbound WebSockets messages by echoing the content of the message back to the client.

Libraries and SDKs

Libraries created to make it easier to interact with servers using GRIP exist for many languages, and make initializing streams and publishing easier in your preferred software platform or framework.

Best practices

To get the most out of using Fanout on Fastly, consider the following tips:

  • Avoid stateful protocol designs: for example, keeping a client's last-received message position on the server instead of on the client. These patterns will work, but they will be hard to reason about. It's best if the client asserts its own state.
  • Don't keep track of connections on the server: It is very rarely important to know about connections, rather than users. If you're implementing presence detection, it's better to do so at the user/device level, using heartbeats, independently of connections.