Get $100k/month in edge compute credits for nine months
Since we released Compute in late 2019, we’ve helped customers launch global production workloads for use cases that include A/B testing frameworks, website personalization, backend service porting for latency reduction and scale, edge authorization and JWT enrichment, security checks, edge redirects, various manifest manipulation use cases, and much more.
The results have been impressive — and they’re not limited to just these use cases. At its core, Compute is a very fast, very secure, general computing environment — almost any logic can be better executed at the edge. Check out this demo of how the iconic video game DOOM can run interactively on Compute, for example.
However, even with the proven advantages of edge computing, I still hear from developers that they’re hesitant to give it a shot because of two barriers to entry: cost and learning curve. We’ve done a lot to ease the learning curve, and today, I’m happy to announce that we’re making it more cost-effective to try Compute as well.
For a limited time, you can get $100k/month in edge compute credits for nine months, as an incentive to start building now. Check out the details at the link, but two notes: 1. This offer won’t last forever, and 2. Each customer who contracts with us can get close to $1 million in free compute credits.
In the rest of this post, I’ll cover some of the most-asked questions I get from edge-curious developers, but if you’re ready to try it out now, get started!
Why Compute?
Edge compute development moves you from a monolithic architecture to a decoupled microservices architecture, giving you more flexibility, control, and velocity to ship projects. This relatively new development paradigm lets developers deploy code closer to their end users than typical centralized clouds like AWS, GCP, and Azure allow. In fact, we've deployed Compute capacity to all of our global POPs.
Unique to our solution, code (written in Rust, AssemblyScript, or JavaScript) is pre-compiled into a WebAssembly binary and ready to run within microseconds of being requested by a user at any of our POPs. It’s a very secure and speedy approach to edge computing that is proving valuable to companies with latency-sensitive workloads. We care so deeply about latency ourselves that we are building our next generation of products on Compute. You can expect to see many more applications leveraging this core technology in the future.
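To make that concrete, here is a minimal sketch of a Compute program using the Rust SDK. The `#[fastly::main]` entrypoint and the `fastly` crate types come from the public SDK; the backend name `origin_0` is a placeholder you would replace with a backend configured on your own service.

```rust
use fastly::http::StatusCode;
use fastly::{Error, Request, Response};

/// Every request to a POP invokes this function in a fresh, isolated
/// WebAssembly sandbox that starts in microseconds.
#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Answer a simple health-check path entirely at the edge.
    if req.get_path() == "/ping" {
        return Ok(Response::from_status(StatusCode::OK).with_body_text_plain("pong\n"));
    }

    // Everything else is proxied to a configured backend
    // ("origin_0" is a placeholder name, not a real backend).
    Ok(req.send("origin_0")?)
}
```

The Fastly CLI (`fastly compute build` and `fastly compute publish`) compiles a program like this to Wasm and deploys it to every POP.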
Plus, Compute is one of only two solutions named a “Leader” in The Forrester New Wave™: Edge Development Platforms, Q4 2021, solidifying its position as a major player in the growing edge computing space.
"Compute is simple to develop on and a breeze to deploy. At Edgemesh, we are always focused on two things: performance and stability. When we went live with Fastly in production, our customers saw an instant performance across every metric — with the median time to first byte improving 5x on average. The average improvements are important, but the stability of that speed is a major win. Customers are seeing significant decreases in latency across every KPI, driven by consistently stable response time we get from the Fastly Edge. Deploying our joint technology solution is like watching a hurricane blow over our customers’ performance metrics — every metric instantly drops and remains stable. With the new solution live now for our customers, they have seen the immediate revenue impact that speed makes.” — Jake Loveless, CEO, Edgemesh
When should you use Compute?
Now that you understand some of the reasons driving Compute adoption, let’s talk about when to use it. While this doesn’t encompass everything, here are a few common patterns I’ve observed that typically lead to developers choosing Compute as the appropriate development environment.
Body access and modifications: Compute lets you access and modify the entire body of the request and response. Many current customers do exactly this at the edge, rather than sending requests back to an origin for body manipulation, content stitching, and personalization.
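For illustration, here is a hedged sketch of that pattern in Rust: fetch a page from the origin, rewrite part of the body in memory, and return the edited response. The backend name `origin_0` and the `<!-- greeting -->` marker are hypothetical stand-ins, and the body helpers reflect the Rust SDK at the time of writing.

```rust
use fastly::{Error, Request, Response};

/// Body modification at the edge: fetch from the origin, edit the HTML,
/// and return the stitched result without an extra origin round trip.
#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // "origin_0" is a placeholder backend name.
    let mut beresp = req.send("origin_0")?;

    // Pull the full response body into memory and edit it.
    // The marker and replacement text are hypothetical examples.
    let body = beresp.take_body_str();
    let edited = body.replace("<!-- greeting -->", "<p>Welcome back!</p>");

    // Return the original status and headers with the modified body.
    Ok(beresp.with_body(edited))
}
```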
Backend bottlenecks: The user experience depends on a call to a backend that is a latency pain point. While the page is meant to load quickly, the end user is left waiting on a backend response for authentication, personalization, or data that could all live at the edge.
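One hedged sketch of moving such a check to the edge: if a request arrives without a session cookie, answer it immediately from the POP instead of waiting on an authentication backend. The cookie name, the `/login` path, and the `origin_0` backend are all hypothetical.

```rust
use fastly::http::{header, StatusCode};
use fastly::{Error, Request, Response};

/// Short-circuit a blocking backend check at the edge.
#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Does the request carry a session cookie? ("session=" is a hypothetical name.)
    let has_session = req
        .get_header_str(header::COOKIE)
        .map_or(false, |cookies| cookies.contains("session="));

    if !has_session {
        // Redirect straight from the edge; no backend call, no added latency.
        return Ok(Response::from_status(StatusCode::FOUND)
            .with_header(header::LOCATION, "/login"));
    }

    // Authenticated traffic continues to the origin ("origin_0" is a placeholder).
    Ok(req.send("origin_0")?)
}
```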
Serialized backend calls: Users are forced to wait for a serialized chain of backend calls, with each response adding to the time it takes to compute or render the final experience. In contrast, Compute can fan out asynchronously and call multiple backends in parallel. The responses are held in memory while functions and computation cycles run, and content is stitched together right at the edge, much faster.
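Here is a sketch of that fan-out, assuming the Rust SDK's `send_async` and `wait` primitives (check the current SDK docs for exact signatures); the backend names and URLs below are hypothetical.

```rust
use fastly::{Error, Request, Response};

/// Fan out to two backends in parallel instead of calling them serially,
/// then stitch the results together at the edge.
#[fastly::main]
fn main(_req: Request) -> Result<Response, Error> {
    // Kick off both backend requests without blocking on either.
    let pending_user = Request::get("https://api.example.com/user").send_async("users_api")?;
    let pending_recs = Request::get("https://api.example.com/recommendations").send_async("recs_api")?;

    // Both calls are now in flight; wait for each result.
    let user = pending_user.wait()?.into_body_str();
    let recs = pending_recs.wait()?.into_body_str();

    // Stitch the two responses into one payload right at the edge.
    let body = format!(r#"{{"user":{},"recommendations":{}}}"#, user, recs);
    Ok(Response::from_body(body).with_header("content-type", "application/json"))
}
```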
Slow existing edges: We’ve delighted many customers with the blazing speed of Compute. It turns out not all edges are equal: because we’ve effectively eliminated cold starts with our implementation of Wasm and provide a generous amount of CPU for each function run, we deliver compelling speed to enhance your applications.
Clunky and expensive infrastructure: Several customers have been able to completely eliminate cloud servers and hardware they manage for specific parts of the stack. You don’t have to maintain infrastructure when it can be spun up on demand in a more cost-efficient manner.
Need for fast and secure processes: With Compute, we completely isolate the runtime for every response and immediately spin down the environment as soon as the function completes. This effectively mitigates the opportunity for side-channel attacks and the worry about noisy or insecure neighbors. Unlike solutions that took the easier path of making Chrome’s V8 Isolates available on the edge (then rebranding it as serverless edge compute), we don’t keep the runtime open for subsequent processes. Each response gets its own secure runtime, and we can do this because the cold starts are effectively eliminated (on the order of microseconds). Many customers need this level of security to adopt edge computing at scale.
How can you predict costs?
Plainly put, predicting costs for Compute can be challenging since they’re based on the number of requests and the amount of memory and CPU used. As a result, the code you write and the speed of your backend calls affect costs under our current pricing model. However, once a workload has been built and tested, we can observe its properties and provide a calculator that helps forecast the costs clearly.
The uncertainty around cost is part of the reason we introduced this limited-time offer — to help lower the barrier to testing and building. With the offer, you have 90 days to try out Compute for free. If you're not convinced, cancel with no strings attached, but if you stick around, you get an additional six months of free compute up to $100,000 per month. We only bill you the final six months of the 15-month contract, and by then, you should know if your edge compute use cases are performing well and also have a clear idea of the costs.
Give it a try
The developer ecosystem is still early in the adoption cycle for edge computing, but we are no longer at the beginning. Many customers are already taking advantage of Compute to differentiate their offerings and compete more effectively in their respective markets. Join them and get started today.