31 Jan 2021, 21:28

Measuring the World with JavaScript

If I asked you to measure the distance between two points on a table, you might get out a ruler or a tape measure. However, what if I asked you to measure the distance between two capital cities? Suddenly it becomes more complicated, not just logistically but also in terms of the assumptions we have to make about that measurement.

One big change, apart from the scale, is that we now have to account for the roundness of the world. Thankfully, mathematicians far smarter than me have come up with formulas that account for this in various ways. One such formula is the haversine formula, credited as far back as 1801 to the Spanish astronomer and mathematician José de Mendoza y Ríos.

The Haversine formula is useful in a lot of spherical trigonometry problems, as it allows a person to determine the great-circle distance between two given points on a sphere. Of particular relevance to what we are looking at today, it is useful for determining the approximate distance between two latitude/longitude pairs on the globe.
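Written out, with φ for latitude, λ for longitude and R for the Earth's mean radius, the formula boils down to three steps, which the code that follows mirrors directly:

```latex
a = \sin^2\!\left(\tfrac{\Delta\varphi}{2}\right) + \cos\varphi_1 \cdot \cos\varphi_2 \cdot \sin^2\!\left(\tfrac{\Delta\lambda}{2}\right)
c = 2 \cdot \operatorname{atan2}\!\left(\sqrt{a},\ \sqrt{1 - a}\right)
d = R \cdot c
```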

When it comes to translating this to code, Chris Veness, a UK-based programmer, has written a healthy amount of code in the space; a lot of it is featured in the popular geospatial library Turf.js. Here’s a stand-alone adaptation of some of his code for the Haversine formula, written in TypeScript:

export function haversineDistance(
  pointOne: { lng: number; lat: number },
  pointTwo: { lng: number; lat: number }
): number {
  const toRadians = (latOrLng: number) => (latOrLng * Math.PI) / 180;

  const phiOne = toRadians(pointOne.lat);
  const lambdaOne = toRadians(pointOne.lng);
  const phiTwo = toRadians(pointTwo.lat);
  const lambdaTwo = toRadians(pointTwo.lng);
  const deltaPhi = phiTwo - phiOne;
  const deltalambda = lambdaTwo - lambdaOne;

  const a =
    Math.sin(deltaPhi / 2) * Math.sin(deltaPhi / 2) +
    Math.cos(phiOne) *
      Math.cos(phiTwo) *
      Math.sin(deltalambda / 2) *
      Math.sin(deltalambda / 2);
  const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));

  const radius = 6371e3;
  const distance = radius * c;

  return distance;
}

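As a quick sanity check, here is the same calculation condensed into a standalone snippet, measuring between approximate city-centre coordinates for London and Paris (the coordinates are values I've picked purely for illustration):

```typescript
// Condensed copy of the haversineDistance function above, inlined so
// this snippet runs on its own
function haversineDistance(
  pointOne: { lng: number; lat: number },
  pointTwo: { lng: number; lat: number }
): number {
  const toRadians = (latOrLng: number) => (latOrLng * Math.PI) / 180;
  const phiOne = toRadians(pointOne.lat);
  const phiTwo = toRadians(pointTwo.lat);
  const deltaPhi = toRadians(pointTwo.lat - pointOne.lat);
  const deltaLambda = toRadians(pointTwo.lng - pointOne.lng);
  const a =
    Math.sin(deltaPhi / 2) ** 2 +
    Math.cos(phiOne) * Math.cos(phiTwo) * Math.sin(deltaLambda / 2) ** 2;
  return 6371e3 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Approximate city-centre coordinates, for illustration only
const london = { lat: 51.5074, lng: -0.1278 };
const paris = { lat: 48.8566, lng: 2.3522 };

const metres = haversineDistance(london, paris);
console.log(`${(metres / 1000).toFixed(0)} km`); // roughly 344 km
```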
The Haversine formula suffers from one core problem: it assumes that the world is a perfect sphere. Unfortunately for us, it is not; it’s closer to what we’d call an oblate spheroid, a sphere that is flattened at the poles. Because the Haversine formula makes this assumption, it cannot be guaranteed to provide results with better than 0.5% accuracy.

This is where another formula comes in: Vincenty’s inverse formula. It comes from Thaddeus Vincenty, a Polish-American geodesist who worked with the U.S. Air Force. Vincenty’s inverse formula works with the idea that the Earth is an oblate spheroid. This approach is more complex and is also iterative, unlike the Haversine formula.

Again, I’ve taken some of Chris’s code and made it standalone (you can just copy and paste this function). It works on the WGS84 ellipsoid, as this is probably the most common use case. Let’s take a look at the code, again in TypeScript:

export function inverseVincentyDistance(
  pointOne: { lng: number; lat: number },
  pointTwo: { lng: number; lat: number }
): number {
  const toRadians = (latOrLng: number) => (latOrLng * Math.PI) / 180;

  const phiOne = toRadians(pointOne.lat);
  const lambda1 = toRadians(pointOne.lng);
  const phiTwo = toRadians(pointTwo.lat);
  const lambda2 = toRadians(pointTwo.lng);

  const wgs84ellipsoid = {
    a: 6378137,
    b: 6356752.314245,
    f: 1 / 298.257223563,
  };
  const { a, b, f } = wgs84ellipsoid;

  const L = lambda2 - lambda1; // L = difference in longitude, U = reduced latitude, defined by tan U = (1-f)·tanphi.
  const tanU1 = (1 - f) * Math.tan(phiOne),
    cosU1 = 1 / Math.sqrt(1 + tanU1 * tanU1),
    sinU1 = tanU1 * cosU1;
  const tanU2 = (1 - f) * Math.tan(phiTwo),
    cosU2 = 1 / Math.sqrt(1 + tanU2 * tanU2),
    sinU2 = tanU2 * cosU2;

  const antipodal =
    Math.abs(L) > Math.PI / 2 || Math.abs(phiTwo - phiOne) > Math.PI / 2;

  let lambda = L,
    sinLambda = null,
    cosLambda = null; // lambda = difference in longitude on an auxiliary sphere
  let sigma = antipodal ? Math.PI : 0,
    sinSigma = 0,
    cosSigma = antipodal ? -1 : 1,
    sinSqsigma = null; // sigma = angular distance P₁ P₂ on the sphere
  let cos2sigmaM = 1; // sigmaM = angular distance on the sphere from the equator to the midpoint of the line
  let sinalpha = null,
    cosSqAlpha = 1; // alpha = azimuth of the geodesic at the equator
  let C = null;

  let lambdaʹ = null,
    iterations = 0;
  do {
    sinLambda = Math.sin(lambda);
    cosLambda = Math.cos(lambda);
    sinSqsigma =
      cosU2 * sinLambda * (cosU2 * sinLambda) +
      (cosU1 * sinU2 - sinU1 * cosU2 * cosLambda) *
        (cosU1 * sinU2 - sinU1 * cosU2 * cosLambda);

    if (Math.abs(sinSqsigma) < Number.EPSILON) {
      break; // co-incident/antipodal points (falls back on lambda/sigma = L)
    }

    sinSigma = Math.sqrt(sinSqsigma);
    cosSigma = sinU1 * sinU2 + cosU1 * cosU2 * cosLambda;
    sigma = Math.atan2(sinSigma, cosSigma);
    sinalpha = (cosU1 * cosU2 * sinLambda) / sinSigma;
    cosSqAlpha = 1 - sinalpha * sinalpha;
    cos2sigmaM =
      cosSqAlpha != 0 ? cosSigma - (2 * sinU1 * sinU2) / cosSqAlpha : 0; // on equatorial line cos²alpha = 0 (§6)
    C = (f / 16) * cosSqAlpha * (4 + f * (4 - 3 * cosSqAlpha));
    lambdaʹ = lambda;
    lambda =
      L +
      (1 - C) *
        f *
        sinalpha *
        (sigma +
          C *
            sinSigma *
            (cos2sigmaM + C * cosSigma * (-1 + 2 * cos2sigmaM * cos2sigmaM)));
    const iterationCheck = antipodal
      ? Math.abs(lambda) - Math.PI
      : Math.abs(lambda);
    if (iterationCheck > Math.PI) {
      throw new Error("lambda > Math.PI");
    }
  } while (Math.abs(lambda - lambdaʹ) > 1e-12 && ++iterations < 1000);
  if (iterations >= 1000) {
    throw new Error("Vincenty formula failed to converge");
  }

  const uSq = (cosSqAlpha * (a * a - b * b)) / (b * b);
  const A = 1 + (uSq / 16384) * (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)));
  const B = (uSq / 1024) * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)));
  const deltaSigma =
    B *
    sinSigma *
    (cos2sigmaM +
      (B / 4) *
        (cosSigma * (-1 + 2 * cos2sigmaM * cos2sigmaM) -
          (B / 6) *
            cos2sigmaM *
            (-3 + 4 * sinSigma * sinSigma) *
            (-3 + 4 * cos2sigmaM * cos2sigmaM)));

  const distance = b * A * (sigma - deltaSigma); // distance = length of the geodesic

  return distance;
}

I’ll be honest, Vincenty’s formula is a bit lost on me, but it promises more accurate results! In fact, it is very commonly used in geodesy because its expected accuracy is within 0.5mm on the Earth ellipsoid. Pretty cool, right? The one downside is that it is computationally slower, due to the increased number of operations and the iterative approach.

Hopefully, you’ve enjoyed this short blog post about how to measure distances on the Earth with JavaScript (…okay, TypeScript). Let me know if you would be interested in follow-ups on similar topics!

04 Jul 2018, 20:37

Service Worker State Management

With this post I want to build upon a previous technique I wrote about: message passing with Service Workers. Here I will look at how you might use this concept to manage application state across tabs for a site. For those of you unfamiliar, a Service Worker is a separate thread that sits in the background at the browser level (rather than at the page level like a Web Worker), allowing pages and workers on the same domain scope (i.e. example.com) to interact with it. This makes Service Workers a great host for things such as intercepting and caching requests, handling offline requests, triaging push notifications and managing background syncing. This is even more true now that they are supported in all modern browsers!

Off the Beaten Track with Service Workers

As well as these more run-of-the-mill Service Worker patterns, there are also some more experimental ideas, like on-the-fly WebP support, or caching from a ZIP file. It’s definitely cool that Service Workers enable these interesting applications. Previously I wrote about passing messages between tabs using a Service Worker, inspired by some tweets and in turn a blog post by Craig Russell. I also recently realised that Arnelle Balane has written about some similar ideas, albeit with a slightly different approach, which are worth a read.

In this post I want to take this further by exploring the idea of state management in a Service Worker. Although many of you might be well versed in writing complex applications, I wanted to run through state management at a high level before we kick off. For the TL;DR, skip to the Mixing State Management with Service Workers section.

State Management

We can think of state management as covering how we organise and update the state of our application. For example, if our web application were a simple counter, state management would entail questions like:

  • What is its default value?
  • Where do we store its value?
  • How do we update the counter?
  • In which ways can the counter increment/decrement?

You can see how, even for an arguably straightforward application such as a counter, the cognitive load we have to endure as developers can quickly stack up. This is where state management libraries come in.

State management libraries are tools that allow you to centralise your state management, making updates to your data more predictable as they are funnelled through very specific and narrow channels. This in theory reduces bugs, decreases cognitive load, and in turn makes it quicker and easier to scale a web application. That said, although state management can simplify scaling complex applications, it may actually make smaller applications more complicated than necessary (see this great post from Dan Abramov for a deeper insight on that). Many state management libraries are based, or loosely based, on the concepts of Facebook’s Flux pattern. Of these, Redux is perhaps the most popular. Another popular state management library is MobX, which takes a more reactive, observer-based approach to state management.
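To make that concrete, here is what the counter example from earlier might look like as a minimal Redux-style reducer. This is a sketch of the pattern rather than code from any particular library:

```typescript
// Minimal Flux/Redux-style reducer for the counter example: all updates
// are funnelled through explicitly named actions
type CounterState = { count: number };
type CounterAction = { type: "INCREMENT" } | { type: "DECREMENT" };

const initialState: CounterState = { count: 0 };

function counterReducer(
  state: CounterState,
  action: CounterAction
): CounterState {
  switch (action.type) {
    case "INCREMENT":
      return { count: state.count + 1 };
    case "DECREMENT":
      return { count: state.count - 1 };
    default:
      return state;
  }
}
```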

Mixing State Management with Service Workers

Now for the interesting part: putting state management in a Service Worker. Because Service Workers exist outside of a page and/or worker context for a given domain scope, we can use them to pass data between pages. So what if we took this a step further and stored the app’s state in the Service Worker? I know what you’re potentially thinking: ‘is this a good idea?’ In honesty I’m not even sure, but it’s definitely fun and foreseeably useful in specific cases.

Initially I tried to use Redux as the demonstration state manager, however I hit a hurdle. My proof of concept appeared to work great in Chrome, but in Firefox it would fail when changing between tabs. What was going on? As it currently stands (June 2018), it looks like Firefox kills off idle Service Workers after 30 seconds, although from my experimenting it seems even less than that. This means that when the tab is idle for a certain period, the worker script is re-executed when a new message is sent to it. State is wiped during this process, making it a non-viable approach. There is some potential that Chrome might do the same in the future.

So, what to do? One suggestion in the above issue is sending some sort of message on a timer to keep the Service Worker alive. I’m not a massive fan of this approach though, as it feels a bit flakey, and in general I think timers should be avoided where possible. So what else can we do? Jeff Posnick recommends using IndexedDB for persisting state, which got me looking into IndexedDB-backed Redux libraries. I came across a Redux library called redux-persist. However this didn’t work out, as it didn’t seem to persist the data in a way that was conducive to syncing state the way I wanted. So instead, I rolled my own state library based on idb by Jake Archibald.

The Web Page

Let’s start with the web page first. Let’s assume we are building our counter application and we have a standard HTML page. Next we’re going to want to register our Service Worker (let’s assume the following is wrapped in a ('serviceWorker' in navigator) feature check):

	navigator.serviceWorker.register('/sw.js') // path to your worker script
		.then((reg) => {

			// Here we add the event listener for receiving messages
			navigator.serviceWorker.addEventListener('message', function (event) {
				// Some function that renders the state to the page
				renderState(event.data.state);
			});

			messageServiceWorker({ GET_STATE: true });

		}).catch(function (error) {
			console.error('Service Worker registration error: ', error);
		});

	// When a new SW registration becomes available
	navigator.serviceWorker.oncontrollerchange = function () {
		messageServiceWorker({ GET_STATE: true });
	};

Here we are going to do something interesting; we’re going to tell the page that when it closes, we want to fire an event to the Service Worker letting it know that the tab died. Because Service Workers can exist even when the page isn’t open, we need a way to reset the state when no tabs are open. Again, let’s assume we use feature detection for the Service Worker:

	// Event on tab/window closed, so we can then check for no tabs/windows.
	// If we wanted we could skip sending this to permanently persist state
	window.onunload = function () {
		// postMessage should be synchronous in this context?
		messageServiceWorker({
			TAB_KILLED: true
		});
	};

We’ll also need a way to post our actions to our Service Worker so the Redux-style store can dispatch them, so let’s add that:

	// Send generic messages to the Service Worker
	function messageServiceWorker(data) {
		if (navigator.serviceWorker && navigator.serviceWorker.controller) {
			navigator.serviceWorker.controller.postMessage(data);
		}
	}

	// Pass actions specifically to the Service Worker
	function actionToServiceWorker(action) {
		messageServiceWorker({ ACTION: action });
	}

Let’s also say, for the sake of simplicity, that we only want to increment the counter. We could do it like this:

	// Assuming a button element for incrementing the counter
	document.querySelector('#increment')
		.addEventListener('click', function () {
			actionToServiceWorker({ type: 'INCREMENT' });
		});

The Service Worker

A Service Worker exists as a single file, although it may import others with the importScripts function. Let’s set up a Service Worker that can handle our state changes and persist them. Because Service Workers are only supported in modern browsers, I’ve written these in ES6 syntax. First, let’s handle the incoming messages to the worker:

	initialiseOnMessage() {
		if (!self) {
			console.error("Self undefined, are you sure this is a worker context?");
			return;
		}
		self.onmessage = (message) => {
			if (message.data.GET_STATE) {
				this.store.getState().then((state) => this.syncTabState(state));
			} else if (message.data.TAB_KILLED) {
				this.checkIfAllTabsKilled();
			} else if (message.data.ACTION) {
				this.dispatchToStore(message.data.ACTION, message.source.id);
			}
		};
	}

Next, let’s handle syncing state to the tabs. Here we need to be able to dispatch events to our store, sync that store with new state, and also reset that store when all the tabs have been closed. Let’s see how we can do that:

		// Get all the tabs for the current domain scope
		getTabs() {
			return self.clients.claim().then(() => {
				return self.clients.matchAll({
					includeUncontrolled: true,
					type: "window"
				});
			});
		}

		// Dispatch a store event and sync that back to the tabs
		dispatchToStore(action, clientId) {
			this.store.dispatch(action).then(() => {
				this.store.getState().then((state) => {
					this.syncTabState(state);
				});
			});
		}

		// Check if all the tabs have died and if so reset the state
		checkIfAllTabsKilled() {

			this.getTabs().then((clients) => {

				// Sometimes the new client exists before we can check if there
				// are no tabs open at all. Essentially we need to handle the refresh case
				const isRefresh = clients.length === 1 && this.lastKnownNumClients < 2;
				const shouldReset = clients.length === 0 || isRefresh;

				if (shouldReset) {
					// Reset state back to normal (handled by the omitted store logic)
					this.store.reset();
				}

				this.lastKnownNumClients = clients.length;
			});
		}

		// Sync the state back to all the available tabs and windows
		syncTabState(newState) {

			this.getTabs().then((clients) => {
				// Loop over all available clients
				clients.forEach((client) => {
					const data = { state: newState };
					client.postMessage(data);
				});

				this.lastKnownNumClients = clients.length;
			});
		}


This code misses out the logic for actually updating our IndexedDB store, but under the hood it’s a mix of a Redux-esque pattern and the idb library I mentioned for persisting that store. The state will only update if the persistence part is successful. You can find the full code for the store logic in the GitHub link below.
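To give a feel for the shape of that store, here is a rough standalone sketch. The names and the adapter interface are mine for illustration; the real version backs the adapter with IndexedDB via idb, but an in-memory adapter keeps the sketch self-contained:

```typescript
// Minimal promise-based store in the spirit of the one described above.
// The adapter interface and names here are my own for illustration; the
// real code persists through IndexedDB (via idb) rather than memory.
interface PersistenceAdapter<S> {
  read(): Promise<S | undefined>;
  write(state: S): Promise<void>;
}

// In-memory stand-in for the IndexedDB-backed adapter
function createMemoryAdapter<S>(): PersistenceAdapter<S> {
  let saved: S | undefined;
  return {
    read: async () => saved,
    write: async (state: S) => {
      saved = state;
    },
  };
}

class PersistedStore<S, A> {
  constructor(
    private reducer: (state: S, action: A) => S,
    private state: S,
    private adapter: PersistenceAdapter<S>
  ) {}

  async getState(): Promise<S> {
    const saved = await this.adapter.read();
    if (saved !== undefined) this.state = saved;
    return this.state;
  }

  // The state only updates if the persistence step succeeds
  async dispatch(action: A): Promise<S> {
    const next = this.reducer(this.state, action);
    await this.adapter.write(next);
    this.state = next;
    return next;
  }
}
```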

Pulling it All Together

Now I’ve explained the page and Service Worker parts, let’s see how it looks in practice! You can find a link to a live demo here, and a link to the full code here.


Is it possible to put your state management in a Service Worker? Totally, if you’re willing to persist state. Is it sensible? I’m not entirely sure. Some obvious shortcomings are that you’ll have to write a fallback for browsers without Service Worker support, you can’t use it in incognito mode in Firefox, and it’s going to increase the complexity of the app with the message passing and asynchronicity involved. In theory there are also more points of failure, as you’re introducing a Service Worker and IndexedDB into the mix. This being said, if having tabs in sync is a critical part of your application, this may be a reasonable approach to solving that specific problem. Another way might be to just broadcast the actions to all other pages, which in theory should keep them in sync whilst keeping the Service Worker stateless.

22 May 2018, 19:37

Examining Web Worker Performance

Recently I’ve been writing about Web Workers and the various options developers have for leveraging them in their applications. For those unfamiliar, Web Workers allow you to create a separate thread for execution in a web browser. This is powerful as it allows work to be done off the main thread, which is responsible for rendering and responding to user events. Over recent times we’ve seen a growth in Web Worker libraries such as greenlet, Workerize and Comlink, to name a few.

One thing that’s been swirling around in my head is the question: what is the tradeoff of using a Web Worker? They are great because we can free up the main thread for rendering and responding to user interactions, but at what cost? The purpose of this post is to examine empirically where Web Workers make sense and where they might improve an application.

Benchmarking Web Workers

I set out to benchmark the performance of Web Workers within the browser. The data was collected using code I wrote, which manifested as a hosted app that can be found here. All the performance numbers given are from my Dell XPS (Intel Core i7-4500 CPU @ 1.80GHz, 8GB of RAM) running Xubuntu. References to Chrome are version 66, and to Firefox version 59. To preface, there is some possibility that the numbers are slightly skewed by garbage collection, which is automated by the browser.

Web Worker Performance

At a high level, creation and termination of Web Workers is relatively cost free, depending on your tolerance for main-thread work:

  • Creation normally comes in at sub 1ms on Chrome
  • Termination is sub 0.5ms on Chrome

The real cost of Web Workers comes from the transfer of data from the main thread to the Web Worker (worker.postMessage) and the return of data to the main thread (postMessage → onmessage).

This graph reflects that cost: increased data transfer sizes result in increased transfer times. More usefully, we can deduce:

  • Sending an object of 1,000 keys or fewer via postMessage comes in sub-millisecond, and 10,000 keys at ~2.5ms on Chrome
  • Above this we see more noticeable transfer costs to the worker; 100,000 keys comes in at ~35ms and 1,000,000 at ~550ms, again on Chrome
  • onmessage timings are fairly comparable, although coming in slightly higher
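If you want a rough feel for this cost on your own machine without spinning up a worker, you can time the structured clone step directly, since postMessage cost is dominated by structured cloning. This is only a proxy, not a full postMessage round trip, and the helper names are mine:

```typescript
// Time the structured clone of objects with increasing key counts, as a
// rough proxy for postMessage transfer cost. Real postMessage timings
// also include queueing and delivery to the other thread.
function makeObject(keys: number): Record<string, number> {
  const obj: Record<string, number> = {};
  for (let i = 0; i < keys; i++) obj[`key${i}`] = i;
  return obj;
}

function timeClone(keys: number): number {
  const obj = makeObject(keys);
  const start = performance.now();
  structuredClone(obj); // same copying mechanism postMessage uses
  return performance.now() - start;
}

for (const keys of [1000, 10000, 100000]) {
  console.log(`${keys} keys: ${timeClone(keys).toFixed(2)}ms`);
}
```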

Greenlet Performance

There has been an open issue on Jason Miller’s greenlet library for a while now which asks about the performance implications of using the library. As such I extended my research to also explore the library.

Overall, greenlet performance is slower than inlined Web Workers when you combine posting to and from the worker thread. This comes to ~850ms vs ~1700ms (i.e. around double) in Chrome at the 1,000,000 key level, but is slightly less pronounced in Firefox at ~1500ms vs ~2300ms. It’s difficult to deduce why this is the case; it may have something to do with the ES6+ to ES5 transpilation process in Webpack, or some other factor I’m unaware of (please feel free to let me know if you have an idea!). Overall, however, it’s a substantially easier abstraction for developers to deal with, so this needs to be taken into consideration. The main takeaways for people interested in using greenlet in anger are:

  • For objects with under 10,000 entries, greenlet appears to come in under 50ms on Chrome and Firefox
  • Costs increase fairly linearly after that point (~150ms for 100,000 vs ~1500ms for 1,000,000)

Data Transfer Using JSON.stringify and JSON.parse

Using stringify and parse appears to yield fairly comparable results on Chrome, but generally performs better than passing raw objects on Firefox. As such, I would recommend having a look at the demo here to draw your own conclusions, or testing with your own data.

Browser Differences

On average Chrome outperforms Firefox, especially under heavier data transfers. Interestingly, Chrome performs substantially better at posting messages back to the main thread from the worker, by up to a factor of three, although I am unsure why. I would love to hear how this works on Safari, Edge and other browsers. JSON.stringify and JSON.parse might also behave differently on those browsers.


Transferables

Transferables behave more or less as expected, with a near-constant transfer cost; transfers of all sizes to and from the worker were sub-10ms.

The underlying idea here is that you can transfer values of type ArrayBuffer, MessagePort and ImageBitmap comparatively cheaply, which is going to be a large performance boost if you’re using Web Workers, especially if your data is of any considerable size. For example, you might transfer geometries as a Float32Array for speedier transfer.
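As a sketch of that idea (the helper name is mine), packing coordinate pairs into a Float32Array means the underlying ArrayBuffer can be handed over as a Transferable rather than cloned:

```typescript
// Pack [lng, lat] pairs into a Float32Array; its underlying buffer can
// then be transferred to a worker (zero copy) instead of structured cloned
function packCoordinates(coords: Array<[number, number]>): Float32Array {
  const packed = new Float32Array(coords.length * 2);
  coords.forEach(([lng, lat], i) => {
    packed[i * 2] = lng;
    packed[i * 2 + 1] = lat;
  });
  return packed;
}

const packed = packCoordinates([
  [-0.1278, 51.5074],
  [2.3522, 48.8566],
]);

// In the browser you would then transfer ownership of the buffer:
// worker.postMessage(packed, [packed.buffer]);
// After that call, the buffer is detached (unusable) on the main thread.
```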

Talking Points

This is the point at which we look at ways of making decisions around when to use Web Workers. Ultimately this is not a simple question, but we can make some inferences from the data collected.

Smaller Workloads and Render Blocking

Objects of sub-50,000 entries (or equivalent complexity) will, on average in Chrome, take less than 16ms for a postMessage and shouldn’t have too much noticeable effect on render performance for the user (i.e. there is some possibility that a frame render or two is skipped). However, in a worst case, using a Web Worker in this situation will add up to ~50ms of overall processing time on top of the work the worker actually has to do. The trade-off is not blocking the main thread with heavy work, but taking a little extra time for the results to come back.

Large Workloads

Transferring an object of over 100,000 entries (or equivalent) is most likely going to cause a noticeable blockage on the main thread because of the cost of postMessage. A recommendation could be to batch heavy work into multiple postMessage calls, so that the chance of frame rendering being blocked at any point is substantially reduced. You could even spin up a pool of workers and implement prioritisation strategies (see the fibrelite library I worked on for inspiration). Furthermore, it may be worth considering whether the data can be turned into Transferables, which have a fairly minimal constant cost and could in turn be a massive performance boost.
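A minimal version of that batching idea might look like this (the chunk size is illustrative; the right value depends on your data and frame budget):

```typescript
// Split a large workload into chunks so each postMessage payload stays
// small enough that its transfer cost fits within a frame budget
function chunkEntries<T>(entries: T[], chunkSize: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < entries.length; i += chunkSize) {
    chunks.push(entries.slice(i, i + chunkSize));
  }
  return chunks;
}

// In the browser you might then drip-feed the chunks to a worker:
// for (const chunk of chunkEntries(bigArray, 10000)) {
//   worker.postMessage(chunk);
// }
```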

The Trade Off

Ultimately, we are trading off transfer time to and from the Web Worker in exchange for preventing long render-blocking tasks on the main thread. You may make the overall time to results longer, but prevent poor user experience in the process, for example a janky input or scroll experience.

If you can avoid your object transfers to and from the worker being render blocking, you can move complex processing over to a Web Worker with the only cost being the transfer times. We have shown that for simple objects (1,000 keys or fewer) these should be sub-millisecond.

Final Thoughts On Web Worker Performance

Hopefully this data and commentary has helped explore the costs and benefits of Web Workers. Overall, we have shown how they can be a big win in the right situations. Blocking the main thread with heavy work is never going to be great for user experience, so we can make use of Web Workers here to prevent that, especially when transfer times are low (smaller data loads). For larger loads it might be worth batching work to prevent extensive blocking postMessage calls.

Although Web Workers are very useful, they do have a cost; the transfer time increases the overall time to finished work, and they can add complexity to a code base. In some cases this tradeoff might be undesirable. In those situations it might be worth exploring requestAnimationFrame and requestIdleCallback with batched workloads on the main thread, to keep rendering and user interactions fluid.

It’s also worth concluding with the idea that Web Workers can be used for things other than simply running long tasks. David East recently wrote an article about wrapping the Firebase JavaScript SDK in a Web Worker, which means the importing and parsing of the SDK is handled in the worker, leaving the main thread able to handle user input and reducing First Input Delay (FID).

Lastly, I think there is strong potential for Web Workers to become core elements of some web frameworks. We’ve seen the start of this recently in React Native DOM by Vincent Riemer, which tries to move work off the main thread into worker threads, leaving the main thread for rendering and handling user input. Time will tell if this approach takes off!