Get started with Fastly logging and Compute@Edge
Looking to explore edge computing? We recently launched a free trial to give you the opportunity to try out our serverless platform without commitment.
Developing Compute WebAssembly (or Wasm) applications for Fastly edge caches requires a paradigm shift — from configuration of CDN behavior via VCL (the Varnish Configuration Language) to a form of programming that’s closer to building traditional software, whether in Rust, JavaScript, or AssemblyScript.
Whether you’ve arrived here from VCL or are new to Compute altogether, you will eventually want to know how to emit diagnostic text output from your Wasm applications, whether for one-off troubleshooting or for continuously capturing key indicators.
There are two ways to emit output from Compute Wasm apps, and they happen to correspond to two distinct categories of target destination for capturing or retaining the emitted messages.
Traditional STDIO
Output to the stdout and stderr file descriptors is captured by the host environment and directed to a destination that you can tail, similar to output redirected to a terminal or to a file. Compute calls this functionality “log tailing,” and it’s supported by the Fastly CLI. Log tailing will also capture and display unhandled runtime errors from your Wasm app.
Logging framework
The Compute logging framework supports the objectives of adding formal structure, codifying intent, and configuring a destination system for the emitted output. The destinations for your log messages can be any of the integrations available as part of our log streaming platform, targeting Apache Kafka, S3, and Splunk, among many others. You can classify your logs by importance, e.g. as informational, warning, or error messages. Once at their destination, the logs can be processed automatically according to these criteria and others.
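To preview the distinction, here is a minimal sketch, assuming the fastly, log-fastly, and log crates, that contrasts the two approaches; "MyEndpoint" is a placeholder for a logging endpoint name, and full working examples follow later in this post:
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // 1) Traditional STDIO: captured by Fastly CLI log tailing.
    println!("Request received for path {}", req.get_path());

    // 2) Logging framework: route structured messages to a configured
    //    log streaming endpoint ("MyEndpoint" is hypothetical here).
    log_fastly::init_simple("MyEndpoint", log::LevelFilter::Info);
    log::info!("Structured log entry for path {}", req.get_path());

    Ok(Response::from_body("Hello from the edge"))
}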
In this blog post, we’ll walk you through the basic steps of outputting messages to STDIO and tailing that output with the Fastly CLI, as well as configuring a log streaming endpoint, emitting logs in your application, and confirming the delivery of those logs to your target logging destination. For log streaming, we’ll use Azure Event Hubs, a service bus that is compatible with the Apache Kafka protocol, as the targeted log aggregator. If you choose an alternative target, be sure to adjust the log endpoint configuration accordingly, as discussed in our log streaming documentation.
Initialize a project
To make this tutorial a bit easier to follow, we’ll stick to CLI tools exclusively. We assume here you have created a Fastly customer account, installed the Fastly CLI, and installed Rust. If not, now is a great time to do so! Please note, in this tutorial, we’ll be referencing version 1.0.0 of the Fastly CLI. If you are using a different version, some command flags may differ. And although we’ll stick to Rust, the same objectives are achievable with JavaScript and AssemblyScript.
First, we’ll create a CLI token with sufficient scope. Assuming you are the administrator of your account, execute the command below to generate the API token used by all subsequent CLI configuration commands in this tutorial. Be sure to specify an appropriate expiration date and time for your token.
fastly auth-token create --password=<your-Fastly-acct-password> \
--name='My WASM API Token' --scope=global \
--expires 2021-12-31T23:59:59Z
You should see output starting with something like this:
SUCCESS: Created token '<token-string>' ...
The string between the two single quotes is the value of your Fastly API token. Now, run the following command and specify the created token when prompted for it:
fastly configure
You should now be set for creating, configuring, and administering your Fastly services and Compute Wasm applications.
Create a Wasm service with the following command:
fastly service create --name=MyService --type=wasm
Note, the above command specifically creates a Wasm Fastly service, as opposed to a traditional VCL service. If you are interested, you should now also be able to view this new service within the Fastly web app.
At a minimum, your service will need a dedicated domain name in order to have its service configuration activated. First, we’ll need to know the service ID of your new service. Run this command to obtain the service identifier:
fastly service list
The response will contain the service identifier in the ID column. Also, note your service’s type in the generated output; it should be wasm.
Run the command below, providing a concrete name for the subdomain, to assign a domain to your Wasm service:
fastly domain create --name=<your-name>.edgecompute.app \
--version=latest --service-id=<your-Fastly-service-ID>
Note, our platform requires Compute Wasm services to have domain names under the edgecompute.app suffix, because this domain is associated with our CDN infrastructure. However, for your production Compute deployments, you can assign whichever aliases fit your needs by means of CNAME records with your DNS provider.
To facilitate a more interesting example, we’ll assign a couple of backend origin hosts to our Wasm service, namely www.w3.org and www.iana.org. These will stand in for your actual origin server, which would serve your web content. Run the following two commands to add these two backends:
fastly backend create --name=backend_name --address=www.w3.org \
--override-host=www.w3.org --use-ssl --ssl-sni-hostname=www.w3.org \
--ssl-cert-hostname='*.w3.org' --version=latest \
--service-id=<your-Fastly-service-ID>
fastly backend create --name=other_backend_name \
--address=www.iana.org --override-host=www.iana.org --use-ssl \
--ssl-sni-hostname=www.iana.org \
--ssl-cert-hostname='*.iana.org' --version=latest \
--service-id=<your-Fastly-service-ID>
With the backend origin services configured, the remaining step for this initial Wasm app setup is to create and upload actual Wasm code. Fortunately, the CLI makes this task incredibly easy for you via its Wasm starter kits.
Run the following command, confirm your user name when prompted, and select the “Default starter for Rust” (or a similarly named option):
fastly compute init --language=Rust --directory=./MyEdgeComputeApp \
--name=MyEdgeComputeApp \
--description='My first Compute@Edge app'
Change directories into MyEdgeComputeApp and review its contents, particularly the files under ./src.
Edit ./fastly.toml in the project root directory to match this listing for its service backend setup:
[setup]
  [setup.backends]
    [setup.backends.backend_name]
      address = "www.w3.org"
      description = "A backend able to serve `/articles` path"
      port = 443
    [setup.backends.other_backend_name]
      address = "www.iana.org"
      description = "A backend able to serve `/anything` path"
      port = 443
You can remove the above configuration altogether and rely on the CLI exclusively to manage your backends, but the CLI Wasm publishing process can also use the ./fastly.toml file to configure the backends automatically for your Wasm app.
Run the following commands to build and deploy your Wasm app:
fastly compute build
fastly compute deploy --service-id=<your-Fastly-service-ID>
At this point, your Compute Wasm service will be activated and available to serve requests.
You can fetch any of the following URLs for your service and observe their responses:
https://<your-name>.edgecompute.app/
https://<your-name>.edgecompute.app/articles
https://<your-name>.edgecompute.app/anything
The /articles and /anything paths should result in HTTP 404 “landing page” responses from www.w3.org and www.iana.org. Figuring out how to make the Rust code in ./src/main.rs resolve these paths to valid pages, and to resolve paths for page artifacts at the two sites, is left to you as an exercise.
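As one possible starting point, here is a minimal sketch, assuming the starter kit’s BACKEND_NAME constant refers to the www.w3.org backend; the rewritten path is illustrative, so verify that the origin actually serves it:
"/articles" => {
    // Hypothetical rewrite: map /articles to a path the origin serves.
    req.set_path("/standards/");
    // Forward the modified request to the configured backend.
    Ok(req.send(BACKEND_NAME)?)
}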
Congrats! You have deployed your Compute Wasm app and are now ready to move onto the next steps of emitting program output and logging to external log collectors from your application. We’ll achieve each of these sequentially in the next three sections.
Print debugging with the CLI
In this section, we’ll walk through emitting and monitoring simple print statements from your Compute Wasm application.
Switch back to your MyEdgeComputeApp from the previous step and add the following line to ./src/main.rs, under fn main(mut req: Request) -> Result<Response, Error> {:
fn main(mut req: Request) -> Result<Response, Error> {
    // Log request path to stdout.
    println!("Request received for path {}", req.get_path());
    // Filter request methods...
    match req.get_method() {
    ...
Build and re-deploy your Wasm app:
fastly compute build
fastly compute deploy --service-id=<your-Fastly-service-ID>
Note, this deployment step will automatically advance your Wasm service configuration to the next version and activate that new version with the newly published Wasm app package.
In your terminal, enter the following command to start tailing output from your Wasm app:
fastly log-tail --service-id=<your-Fastly-service-ID>
You may immediately see a few log messages notifying you that output tailing has been enabled for your Wasm service, but this is a mere consequence of the initial setup.
To see application output tailing in action, fetch the URL for your Wasm service, e.g. https://<your-name>.edgecompute.app/. Then, fetch the other two virtual paths for that same subdomain, i.e. /articles and /anything.
At the very least, the above fetch requests should yield output similar to the following:
stdout | 5ee29b16 | Request received for path /
stdout | f438ba38 | Request received for path /articles
stdout | 44c848ca | Request received for path /anything
If you issued your requests from a web browser, you may see additional entries for JavaScript and image artifacts. If you do not see the anticipated output initially, retry the fetch requests. The hash value next to each output message is a correlation ID for its request; a single request may result in multiple output messages. We will see this aspect in action next. Press ^C to quit.
Return to your MyEdgeComputeApp and add the following code under "/articles" => {:
"/articles" => {
eprintln!("Request will be redirected to {}", BACKEND_NAME);
// Request handling logic could go here... E.g., send the request to an origin backend
...
Build and re-deploy your Wasm app once more:
fastly compute build
fastly compute deploy --service-id=<your-Fastly-service-ID>
Now, again, tail the output from your Wasm app:
fastly log-tail --service-id=<your-Fastly-service-ID>
Then, issue a fetch request to https://<your-name>.edgecompute.app/articles and observe the monitored output. You should see lines akin to these:
stdout | ad3d576e | Request received for path /articles
stderr | ad3d576e | Request will be redirected to backend_name
Nothing seemingly spectacular, right? However, if you look closer, you’ll see a couple of things happening here:
You have two messages emitted to two separate output streams, stdout and stderr.
Both outputs were generated from handling a single request, identified in the example above by ad3d576e.
In case you’re wondering, the CLI also supports filtering the output by stream type.
What have we observed in this section? Using Rust STDOUT and STDERR output streams, you can emit meaningful information from your Compute Wasm application, and using fastly log-tail, you can track and display that emitted output locally on your system. Last but not least, fastly log-tail also allows you to go back in time and display emitted output that was captured within a specified time interval; be certain to explore the documentation for this functionality.
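For instance, assuming the --stream, --from, and --to flags of CLI version 1.0.0 (verify against fastly log-tail --help), a command like the following would display only stderr output captured within a given time window:
fastly log-tail --service-id=<your-Fastly-service-ID> --stream stderr \
--from <start-Unix-timestamp> --to <end-Unix-timestamp>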
Okay, you’re all excited about your Compute Wasm application, but what if you want to deploy it to production and either don’t want to tap into standard output streams from your terminal or would like better ways of storing and querying the emitted log information? Moreover, what if your objective is to automate the log analysis process, whether to identify anomalies or to identify critical business opportunities based on the requests that your Wasm app receives?
Well, you are in luck! The solution for this challenge is our log streaming platform, and it’s available to Wasm services and applications just as it has been available to traditional VCL configurations. In the next sections, we’ll cover the details of configuring and logging to sophisticated log aggregation endpoints from your Wasm app, and we’ll even make certain the logs are available to your analytics platform on the other end.
Configure logging endpoints
In order to target aggregation systems with our logs, we first need to set up a few actual endpoints for this purpose. We’ll configure a cloud Kafka-compatible endpoint in Azure with Event Hubs. We’ll use the most primitive of configurations for our example and defer further information about the service to the documentation on the Microsoft Azure platform. Also, our usage of Kafka tooling will be very rudimentary here, as the actual end-to-end integration is our single focus.
Assuming you have installed the Azure CLI and created an Azure account (a trial account should suffice), log into your account via the CLI. Note, you may have to sign in with your tenant ID to authenticate correctly:
az login --tenant <your-tenant-ID>
The output of the above command should show your active Azure subscriptions, for which a type of Basic is sufficient. Note your subscription ID from the command’s output. Run the following commands to stand up an Event Hubs cluster in Azure. The below commands use the “West Central US” region as the target location, but you can substitute a location of your own choosing from running az account list-locations.
az group create --location westcentralus --name MyAzureGroup \
--subscription <your-subscription-ID>
az eventhubs namespace create --name <your-unique-namespace> \
--resource-group MyAzureGroup --enable-kafka true \
--location westcentralus --sku Standard \
--subscription <your-subscription-ID> \
--enable-auto-inflate false
az eventhubs eventhub create --resource-group MyAzureGroup \
--namespace-name <your-unique-namespace> --name MyEventHub \
--message-retention 1 --partition-count 1 --enable-capture false \
--subscription <your-subscription-ID>
az eventhubs eventhub authorization-rule create \
--eventhub-name MyEventHub --name Fastly \
--namespace-name <your-unique-namespace> \
--resource-group MyAzureGroup --rights Listen Send \
--subscription <your-subscription-ID>
az eventhubs eventhub authorization-rule keys list \
--eventhub-name MyEventHub --name Fastly \
--namespace-name <your-unique-namespace> \
--resource-group MyAzureGroup \
--subscription <your-subscription-ID>
If your Event Hubs namespace name is unavailable, try a different name. The last of the above commands will display the secure Azure Event Hubs connection string ("primaryConnectionString"), which you will need for your Kafka Fastly log streaming endpoint configuration in the next steps.
The above commands set up a single Event Hub, akin to a Kafka topic. But, what if we have two different data processing pipelines for different kinds of events and wish to analyze these categories of events independently? Well, let’s set up another Event Hub just for this scenario. Run the following commands to initialize a secondary event bus for different events:
az eventhubs eventhub create --resource-group MyAzureGroup \
--namespace-name <your-unique-namespace> --name MyOtherEventHub \
--message-retention 1 --partition-count 1 --enable-capture false \
--subscription <your-subscription-ID>
az eventhubs eventhub authorization-rule create \
--eventhub-name MyOtherEventHub --name Fastly \
--namespace-name <your-unique-namespace> \
--resource-group MyAzureGroup --rights Listen Send \
--subscription <your-subscription-ID>
az eventhubs eventhub authorization-rule keys list \
--eventhub-name MyOtherEventHub --name Fastly \
--namespace-name <your-unique-namespace> \
--resource-group MyAzureGroup \
--subscription <your-subscription-ID>
As you observed previously, the last of the above commands will display the secure Azure Event Hubs connection string ("primaryConnectionString"), which you will need for your Fastly log streaming Kafka endpoint configuration, specifically for the second endpoint.
Run the following CLI command to add the first created Azure Event Hubs endpoint to your Compute Wasm service:
fastly logging kafka create --name=AzureEventHubs --version=latest \
--topic=MyEventHub \
--brokers=<your-unique-namespace>.servicebus.windows.net:9093 \
--service-id=<your-Fastly-service-ID> --required-acks=-1 --use-tls --max-batch-size=2048 \
--use-sasl --auth-method=plain --username='$ConnectionString' \
--password='<value-of-first-Azure-primaryConnectionString>' --autoclone --placement=none
Note, after executing the above command, a new draft version of your service’s configuration will be created, but it won’t be active yet. We’ll activate it later with the publication of updated code for your Wasm app. Also, note the above command named this logging endpoint “AzureEventHubs”; you’ll need to reference this same logging endpoint name within your Wasm application code.
Run the following CLI command to add the second created Azure Event Hubs endpoint to your Compute Wasm service:
fastly logging kafka create --name=AnotherAzureEventHubs --version=latest \
--topic=MyOtherEventHub \
--brokers=<your-unique-namespace>.servicebus.windows.net:9093 \
--service-id=<your-Fastly-service-ID> --required-acks=-1 --use-tls --max-batch-size=2048 \
--use-sasl --auth-method=plain --username='$ConnectionString' \
--password='<value-of-second-Azure-primaryConnectionString>' --autoclone --placement=none
As previously emphasized, note the name of this second logging endpoint, “AnotherAzureEventHubs”. You’ll reference it in your code to direct specific events exclusively to the associated logging destination.
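To make the mapping concrete, here is a hedged preview of how an endpoint name is referenced from Rust once the logging framework (set up in the next section) is in place; the string must exactly match the endpoint name configured above:
// Directs this message only to the "AnotherAzureEventHubs" endpoint.
log::warn!(target: "AnotherAzureEventHubs", "An event for the second hub");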
You can confirm the successful configuration of the two logging endpoints for your Compute Wasm service by executing the following command:
fastly logging kafka list --version=latest --service-id=<your-Fastly-service-ID>
You should see both endpoints, “AzureEventHubs” and “AnotherAzureEventHubs”, side by side in the output.
Add production-ready logging
Now that we have the logging endpoints for your Wasm service configured, we can reference them from within the Wasm application code.
First off, we need to add the necessary dependencies to our Rust project, MyEdgeComputeApp. Find and edit the file ./Cargo.toml to include the dependencies as follows:
[dependencies]
fastly = "^0.8.0"
log-fastly = "0.8.0"
log = "0.4.14"
fern = "0.6"
chrono = "0.4"
Documentation for the log-fastly crate is available online.
If your editor is integrated with the Rust toolchain, it may pull these dependencies immediately; otherwise, they will be downloaded when you next build this Wasm app. Note, by the time you read this tutorial, the listed dependency versions may have advanced. Confirm what the latest versions are and use those.
Within ./src/main.rs, add the following initialization code under fn main(mut req: Request) -> Result<Response, Error> {:
fn main(mut req: Request) -> Result<Response, Error> {
    let logger: Box<dyn log::Log> = Box::new(
        log_fastly::Logger::builder()
            .max_level(log::LevelFilter::Info)
            .default_endpoint("AzureEventHubs")
            .endpoint("AnotherAzureEventHubs")
            .build()
            .expect("Unable to init Fastly logger"),
    );
    fern::Dispatch::new()
        .format(|out, message, record| {
            out.finish(format_args!(
                "{}[{}] {}",
                chrono::Local::now().format("[%Y-%m-%d][%H:%M:%S]"),
                record.level(),
                message
            ))
        })
        .chain(logger)
        .apply()?;
    ...
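As a side note on the design, fern’s Dispatch accepts multiple chained outputs, so a hedged variation like the one below would duplicate every formatted message to stdout (making it visible via fastly log-tail) in addition to the Fastly endpoints:
// Variation: chain stdout alongside the Fastly logger so every message
// is also captured by `fastly log-tail`.
fern::Dispatch::new()
    .format(|out, message, record| {
        out.finish(format_args!("[{}] {}", record.level(), message))
    })
    .chain(std::io::stdout())
    .chain(logger)
    .apply()?;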
Locate the “/articles” match clause and add a log statement under it, like so:
"/articles" => {
log::info!("Processing request to {}", req.get_path());
...
Locate the “/anything” path prefix match clause and add a log statement under it, like so:
path if path.starts_with("/anything") => {
    log::warn!(target: "AnotherAzureEventHubs", "Processing request to {}", req.get_path());
    ...
Build and re-deploy your Wasm app:
fastly compute build
fastly compute deploy --service-id=<your-Fastly-service-ID>
Note, this deployment will go to the latest draft version of your Wasm service’s configuration, and the deployment will also activate that version.
Issue four or five fetch requests against https://<your-name>.edgecompute.app/articles. Then, issue four or five fetch requests against https://<your-name>.edgecompute.app/anything.
We’ll now proceed to fetch the messages that your Wasm app should have sent to your different Azure Event Hubs endpoints.
Download and extract Apache Kafka from https://kafka.apache.org/downloads; note, we’ll use only the Kafka client out of that distribution. The Kafka distribution requires a Java installation on your system, but you are welcome to follow along with a different Kafka client, such as kafkacat/kcat, and adjust your command-line flags accordingly. After extracting the Kafka artifacts, change your current directory to the ./bin directory of the extracted archive.
Create a file, ./azure_client.properties, with the following contents:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="$ConnectionString" \
password="<value-of-first-Azure-primaryConnectionString>";
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
Run the following command to read published Kafka events from your first Azure Event Hubs instance:
./kafka-console-consumer.sh --bootstrap-server \
<your-unique-namespace>.servicebus.windows.net:9093 \
--topic MyEventHub \
--consumer.config ./azure_client.properties --from-beginning
Output akin to the following should appear:
[2021-09-16][21:00:08][INFO] Processing request to /articles
[2021-09-16][21:00:08][INFO] Processing request to /articles
[2021-09-16][21:00:09][INFO] Processing request to /articles
[2021-09-16][21:00:09][INFO] Processing request to /articles
[2021-09-16][21:00:09][INFO] Processing request to /articles
Press ^C to exit kafka-console-consumer.sh.
Please note, if you do not see the expected messages as above and instead see output indicating the Kafka client has disconnected, the kafka-console-consumer.sh connectivity to Azure Event Hubs may be blocked by either your operating system’s firewall or the firewall on your local network appliance. If this is the case, allow access to port 9093 on the windows.net domain in the respective firewall settings.
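For a quick connectivity check, assuming openssl is installed on your system, you could attempt a TLS handshake against the broker port:
openssl s_client -connect <your-unique-namespace>.servicebus.windows.net:9093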
The timestamp in each log message reflects the time when the message was created. As you can see, the log line that we added to our Rust code for handling “/articles” requests appears as the message in our first Azure Event Hub, logged at the INFO level. Also note, we do not see log messages associated with handling “/anything” requests.
Now, create another file, ./another_azure_client.properties, with the following contents:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="$ConnectionString" \
password="<value-of-second-Azure-primaryConnectionString>";
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
Run the following command to read published Kafka events from your second Azure Event Hubs instance:
./kafka-console-consumer.sh --bootstrap-server \
<your-unique-namespace>.servicebus.windows.net:9093 \
--topic MyOtherEventHub \
--consumer.config ./another_azure_client.properties --from-beginning
Output akin to the following should appear:
[2021-09-16][21:02:06][WARN] Processing request to /anything
[2021-09-16][21:02:07][WARN] Processing request to /anything
[2021-09-16][21:02:07][WARN] Processing request to /anything
[2021-09-16][21:02:10][WARN] Processing request to /anything
[2021-09-16][21:02:10][WARN] Processing request to /anything
Press ^C to exit kafka-console-consumer.sh. Notice, we see only the log messages associated with handling the “/anything” requests, logged at the WARN level. We do not see messages for handling “/articles” requests.
This targeting of different logging endpoints can be extended to different cloud and service providers, not just Kafka-based ones. You can direct different log messages, uniquely or redundantly, to any of the integration targets that our log streaming supports, such as Amazon S3 storage, Google BigQuery, Splunk, and many others. With Compute Wasm services’ use of our log streaming feature, you are fully able to segment your log message flows to different destinations on the basis of specific intent, such as anomaly detection or business intelligence.
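As an illustration, here is a hedged sketch of intent-based routing within a single request handler; the endpoint names are hypothetical and would each need to be configured on your service and registered via Logger::builder().endpoint(...), just like the Kafka endpoints above:
// Hypothetical endpoints: "AnomalyDetection" and "BusinessIntelligence".
log::warn!(target: "AnomalyDetection", "Unexpected request rate for {}", req.get_path());
log::info!(target: "BusinessIntelligence", "Catalog view for {}", req.get_path());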
Upon completing this tutorial, please refer to the documentation for the Azure CLI to tear down the provisioned resources, such as the resource group. The underlying Event Hub and Event Hubs namespace may need to be removed prior to removing the group.
Conclusion
Congratulations! In this tutorial, you have learned how to configure a Compute Wasm service and how to build and publish your Rust application to that service. You have also learned how to emit output to STDOUT and STDERR and how to follow that output from your command line. Finally, you have configured Azure Event Hubs as a target destination for log streaming, embellished your Rust Wasm app with formal log statements, and verified the successful emission of logs to their Azure Event Hubs destination. You’ve done great!
Now, take the time to explore the different aspects of our CLI, Compute Wasm services, and log streaming in greater detail. And if you're not yet using Compute, we're offering a limited-time opportunity to try it out for yourself with a free three-month test and configuration period and up to $100k a month in credit for six months!