03 Apr 2018, 19:37

Cancelling Requests with Abortable Fetch

There are often times in a web application where you need to send a request reflecting the latest user input or interaction. Two examples might be an autocomplete and zooming in and out of a map. Let’s think about each of these for a moment. First, autocomplete: every time we type (or less often, if we debounce) we might send out a request. As we keep typing, the old requests become irrelevant (e.g. a request for ‘java’ while we are on our way to typing ‘javascript’). That’s potentially a lot of redundant requests before we get to what we’re interested in!

Now the web map case; we’re zooming and panning around the map. As we zoom in and out, we are no longer interested in the tiles from the previous zoom levels. Again, lots of requests might be pending for redundant data.

Taking the first example, let’s set the scene by looking at some naive code showing how we might implement an autocomplete. For the purposes of this article we will use the more modern fetch rather than XMLHttpRequest for making network requests. Here’s the code:

    autocompleteInput.addEventListener('keydown', () => {

        const url = "https://api.example.com/autocomplete";

        fetch(url)
            .then((response) => {
                // Do something with the response
            })
            .catch((error) => {
                // Something went wrong
            });
    });

The problem in this case is that each one of these requests will complete, even if it is no longer relevant. We could add some extra logic in the response handler to prevent unnecessary code execution, but this won’t actually stop the request. It’s also worth noting that browsers limit the number of concurrent outgoing requests and queue any beyond that limit (the exact limit varies by browser), so redundant requests can delay the ones we actually care about.
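Since debouncing was mentioned above, here is a minimal sketch of a debounce helper (the helper itself is my addition, not part of the original code) that would cut down how many requests we fire in the first place:

```javascript
// Minimal debounce: only invoke fn after `wait` ms with no new calls
const debounce = (fn, wait) => {
    let timer;
    return (...args) => {
        clearTimeout(timer);
        timer = setTimeout(() => fn(...args), wait);
    };
};

// Usage sketch: wrap the keydown handler so rapid typing
// results in a single trailing request
// autocompleteInput.addEventListener('keydown', debounce(sendRequest, 300));
```

Debouncing reduces the number of requests, but the ones that do go out can still become stale, which is where aborting them comes in.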

Abortable Fetch

A new browser technology that we can leverage to solve the aforementioned issue is Abortable Fetch. Abortable Fetch relies on the AbortController browser specification. The controller has a property called signal, which we can pass to our fetch as an option (also named signal) and then use at our later convenience to cancel the request via the controller’s abort method.

An example might look a little like this:

    const url = "https://api.example.com/autocomplete";
    let controller;
    let signal;

    autocompleteInput.addEventListener('keyup', () => {

        if (controller !== undefined) {
            // Cancel the previous request
            controller.abort();
        }

        // Feature detect
        if ("AbortController" in window) {
            controller = new AbortController();
            signal = controller.signal;
        }

        // Pass the signal to the fetch request
        fetch(url, {signal})
            .then((response) => {
                // Do something with the response
            })
            .catch((error) => {
                // Something went wrong
            });
    });

Here we do feature detection to determine if we can use AbortController (it’s supported in Edge, Firefox, Opera and coming in Chrome 66!). We also determine if a controller has already been created, and if so we call controller.abort() which will cancel the previous request. You can also use the same signal in multiple fetches to cancel multiple fetches at once.
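To sketch that last point (the endpoints here are placeholders, not a real API), a single controller can govern several in-flight fetches at once:

```javascript
const controller = new AbortController();
const { signal } = controller;

// Both request helpers share the one signal
// (api.example.com is a made-up endpoint for illustration)
const fetchTiles = () => fetch("https://api.example.com/tiles", { signal });
const fetchLabels = () => fetch("https://api.example.com/labels", { signal });

// A single abort() cancels every fetch the signal was passed to
controller.abort();
console.log(signal.aborted); // true
```

This fits the map example nicely: one controller per zoom level, aborted wholesale when the user zooms again.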

A little demo

I’ve created a small demo showing how to use Abortable Fetch, loosely based on the autocomplete idea (without any of the implementation details!). Every time you type, it makes a network request; if you make a new keystroke before the old request has completed, it aborts the previous fetch. It looks a little something like this in practice:

You can check the code out here.

Thinking beyond fetch

Perhaps the coolest part about AbortController is that it has been designed as a generic mechanism for aborting asynchronous tasks. It is part of the WHATWG specification, meaning it is a DOM specification rather than a language (ECMAScript) one, but for frontend development it is still a useful feature. You could leverage it as a cleaner async control flow mechanism whenever you implement asynchronous tasks (e.g. when using Promises). Feel free to take a look at Bram Van Damme’s super article for a more detailed example of what I’m talking about.
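To make that concrete, here is a sketch of a generic cancellable task — a delay, in this case — that honours an AbortSignal the same way fetch does (the delay helper is my own illustration, not from the specification):

```javascript
// A promise-returning delay that can be cancelled via an AbortSignal
const delay = (ms, signal) =>
    new Promise((resolve, reject) => {
        const timer = setTimeout(resolve, ms);
        // On abort, clean up and reject with the conventional AbortError
        signal.addEventListener('abort', () => {
            clearTimeout(timer);
            reject(new DOMException('The operation was aborted.', 'AbortError'));
        });
    });

const controller = new AbortController();
delay(5000, controller.signal)
    .catch((error) => console.log(error.name)); // "AbortError"
controller.abort();
```

Any async task that can be torn down mid-flight — timers, animations, streamed reads — can be wired up to a signal in the same way.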

18 Feb 2018, 17:56

Easier Web Workers

Ever been on a web page and everything feels a bit slow? Delays typing, scrolling, and general interactions with the page? One of the main causes of this is ‘blocking the main thread’. Browsers do their best to keep the rendered contents of a page in sync with the refresh rate of a monitor (generally this is about 60 frames per second). However doing expensive operations in your main thread (i.e. where your everyday JavaScript is executed) has the potential to block it, preventing efficient page rendering and in turn delaying response to user interactivity such as scrolling, inputs, etc.

Thankfully, with the power of Web Workers, we can offload heavy computations to another thread, leaving the main thread free to handle rendering and user interactions. Web Workers run a JavaScript file as a background thread in a separate context from the browser’s main thread. So how do we construct a worker? Like so:

    const worker = new Worker('worker.js');

Here worker.js is the code that will listen for messages from the main thread and perform the specified work.

Workers are pretty flexible, but one core thing you can’t do in them is access or manipulate the DOM. They also require you to pass data to them, and that data is copied rather than shared (unless you’re using Transferables). You can natively pass any data supported by the Structured Clone Algorithm to a worker; in practice this means most things minus functions, errors and DOM elements. Serialising with JSON.stringify first may bring performance benefits in some cases, although that’s worth testing for your use case. Bear in mind that JSON.stringify also fails to convert various types, including functions, Date objects, regular expressions and undefined.
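The structuredClone global follows the same algorithm postMessage uses, so it is a handy way to check what will survive the trip (a sketch, assuming a runtime recent enough to expose structuredClone):

```javascript
// Dates, RegExps and nested structures clone fine
const cloned = structuredClone({ when: new Date(0), pattern: /abc/g, nested: [1, 2] });
console.log(cloned.when instanceof Date);      // true
console.log(cloned.pattern instanceof RegExp); // true

// Functions are not cloneable and throw a DataCloneError
try {
    structuredClone({ fn: () => {} });
} catch (error) {
    console.log(error.name); // "DataCloneError"
}
```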

Since data is not shared, there is a performance overhead in copying data to the worker. The exception is the previously mentioned Transferables, which are ‘zero-copy’, meaning ownership of the data is transferred to the worker’s context instead. This can be an order of magnitude faster than copying.
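A sketch of what a transfer looks like (worker.js is a placeholder file, and the Worker construction is guarded so the snippet also reads outside a browser):

```javascript
// An 8 MB buffer we'd rather move than copy
const buffer = new Float64Array(1_000_000).buffer;
console.log(buffer.byteLength); // 8000000

if (typeof Worker !== 'undefined') {
    // 'worker.js' is a placeholder worker file. The second argument
    // to postMessage lists Transferables: ownership of the buffer
    // moves to the worker rather than being copied
    const worker = new Worker('worker.js');
    worker.postMessage({ buffer }, [buffer]);
    // After the transfer the buffer is detached on this side
    console.log(buffer.byteLength); // 0
}
```

The detachment is the telltale sign of a transfer: the sending side can no longer read the buffer at all.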

There is a cost to instantiating a Web Worker which will vary by browser and device, but this Mozilla article suggests you’re looking at around the 40ms mark. Communicating with a Web Worker via postMessage is fast, however, at around 0.5ms of latency.

Passing Messages

So what does the code for passing data (a message) to and from a Web Worker look like?

    // In our main JavaScript file

    // Post data
    worker.postMessage("Hello from the main thread!");

    // Receive data
    worker.addEventListener('message', (event) => {
        console.log("Data from worker received: ", event.data);
    }, false);

And then in the Web Worker (worker.js) we need a way to receive the message:

    self.addEventListener('message', (event) => {
        console.log("Worker data received from the main thread", event.data);
        // Do something with event.data, then send a result back
        self.postMessage(
            `Hello from the Web Worker thread!
             The message received had length: ${event.data.length}`
        );
    }, false);

Here we can see that once the message is received we can manipulate the incoming data as we see fit and send it back with postMessage.

A simple Web Worker example

To give a more tangible example, I have created an example repository which shows how we can produce large numbers of primes in a Worker whilst maintaining interactivity with the page.

Are there any nice abstraction libraries?

Yes! I have compiled a list of Hello World examples using various popular libraries. Namely:

  • Greenlet - Turn async functions into Web Workers
  • Comlink - Modern abstraction of Web Workers
  • Operative - Simpler callback oriented workers

You can see all of those examples in my GitHub repo here. There are others that I haven’t added which might be worth checking out depending on your use case:

  • promise-worker by Nolan Lawson for simpler promise based workers.
  • Workerize by Jason Miller which is the module level version of greenlet
  • Clooney by Surma; an actor library which builds upon Comlink.

Let’s take a little look at how Greenlet might work. Using async/await syntax, we get readable code without sacrificing functionality. Under the hood greenlet does something pretty cool: it generates an inline Web Worker using URL.createObjectURL and Blob. This allows us to do something like the following:

    const asyncSieveOfEratosthenes = greenlet(async (limit) => {
        // Code redacted for brevity
    });

    const calculate = document.getElementById("calculatePrimes");
    const message = document.getElementById("showPrimes");

    calculate.addEventListener("click", async () => {
        const n = 100000000;
        message.innerHTML = "Main thread not blocked!";
        // The following async function won't block:
        const totalPrimes = await asyncSieveOfEratosthenes(n);
        calculate.innerText = "Done!";
        message.innerHTML = `${totalPrimes.length} prime numbers calculated!`;
    });

Pretty cool if you ask me!

What about support?

Web Workers are very well supported by all major browsers, so support shouldn’t be an issue.

When to use Web Workers?

Some people may be tempted to start moving all their app logic over to a Web Worker, but there is no guarantee this will be any more performant. Web Workers make the most sense when you have heavy processing that would otherwise block the main thread, stalling rendering and user interaction. For example, imagine you want to do some intensive number crunching, geometry processing (see for example Turf.js) or deep tree traversal and manipulation. The most useful piece of advice I can give here is to profile and benchmark it. If you’re new to profiling, check out this piece on CPU profiling in Chrome.


I am currently working on a library called Fibrelite which is based off of Jason Miller’s fantastic greenlet library. The aim is to produce a general purpose library for spinning out async functions as Web Workers, but with a variety of approaches to handling those function calls, for example pooling, prioritising or debouncing calls where necessary. This would be beneficial for any situation where user interactions and intensive calculations happen in tandem. I will write a more detailed blog post at a later date; in the meantime, check out a demo here.

18 Feb 2018, 17:56

Implementing the Web Share API in Your App

I was looking into how I might improve the sharing experience on https://scratchthe.world, a digital scratch map PWA that I have been working on. One thing I came across recently is the Web Share API, which I thought might be a strong contender for integration. The Web Share API allows for a smoother integration of sharing with popular applications on your mobile device (WhatsApp, Facebook Messenger etc.). You can see the documentation on MDN here.

The cool thing about the Web Share API is that it has such a minimal interface, making it easy to use and implement without complicating your code. To elaborate, it only has one method, share. The method takes an object with the properties title, text and url:

  • title refers to the title of the thing you’re trying to share, you could for example use document.title here if you’re struggling.
  • text is the text that will be shared with the link in the target application
  • url is the link itself.

Unfortunately mobile support is not ideal; it’s missing on iOS Safari, Samsung Internet and Opera Mini as it stands.

However depending on who you trust (i.e. StatCounter which powers caniuse.com) Chrome for Android accounts for 50%+ of mobile traffic globally. That’s a lot of people that can benefit!

Another nice aspect of the Share API is that it is relatively straightforward to implement a fallback, hence my choice to implement it in Scratch the World. In Scratch the World I provide an input with a link to the state of the map for people to share with their friends. If the Web Share API is available, I replace that with a share button.

This isn’t the exact code I use (it is based on an example from Google Developers), but it should give you a feel for the rough approach:

    const setupShareButton = () => {
        const shareButton = document.getElementById("share-button");

        // Add an onclick listener to the element
        shareButton.addEventListener("click", () => {
            console.log("on click");
            if (navigator.share) {
                // If we have Web Share enabled use that
                useWebShare();
            } else {
                // Else do something else to help people share
                // your content
                useFallback();
            }
        });
    };

    const useFallback = () => {
        console.log("Using fallback!");
        const copyPasteUrl = document.getElementById("copy-paste-url");
        copyPasteUrl.style.visibility = "visible";
        // You could add a bar with share buttons for various
        // social media sites here as another idea
    };

    const useWebShare = () => {
        console.log("Using Web Share!");
        // Web Share is promise based so we chain .then and .catch
        // to handle success or failure
        navigator.share({
            title: 'Using webshare!',
            text: 'This is an example of using web share',
            url: 'https://developers.google.com/web',
        })
            .then(() => console.log('Successful share'))
            .catch((error) => console.log('Error sharing', error));
    };


And that’s it! The more complicated bit here is coming up with a nice fallback approach. You could implement sharing workflows for common mediums like Twitter and Facebook for example. Many providers have sharing URLs that you can use to help share your content. You could combine these with Terence Eden’s SuperTinyIcons to create buttons as a performant solution. On a more experimental note, Phil Nash released a Web Component wrapper for links that will default to the Web Share API when available, which is an interesting idea.
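As one concrete fallback, Twitter’s “web intent” endpoint is such a sharing URL; a small helper (my own sketch, not from the article’s code) might build the link like this:

```javascript
// Build a Twitter web intent share link from text and a URL
const buildTwitterShareUrl = (text, url) => {
    // URLSearchParams handles the percent-encoding for us
    const params = new URLSearchParams({ text, url });
    return `https://twitter.com/intent/tweet?${params}`;
};

console.log(buildTwitterShareUrl('Scratch the World', 'https://scratchthe.world'));
// https://twitter.com/intent/tweet?text=Scratch+the+World&url=https%3A%2F%2Fscratchthe.world
```

Opening that link (e.g. via window.open or an anchor tag) drops the user into a pre-filled tweet composer, no API keys required.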

Lastly, some of you might be wondering: what does the Share API look like in practice? Here’s a video of how it shapes up in Scratch the World on Android Chrome: