Web Workers: Multithreading in JavaScript

JavaScript runs on a single thread, and the moment you throw something heavy at it, the whole page locks up. Web Workers give you actual background threads that run in parallel with the main thread. Here is how to use dedicated workers, shared workers, transferable objects, and worker pools — with real examples like image processing and large file parsing.

// Go ahead, paste this into your console
function freezeEverything() {
  let result = 0;
  for (let i = 0; i < 2_000_000_000; i++) {
    result += Math.sqrt(i) * Math.sin(i);
  }
  return result;
}

freezeEverything();
// Your tab is now a brick. No scrolling, no clicking, nothing.

Try that in production. I dare you.

Why Your Page Freezes

That code block above is not some contrived example — it is a stand-in for any heavy work you might throw at the main thread. Parsing a giant CSV. Running a complex filter across thousands of data points. Crunching numbers for a visualization. The specifics do not matter. What matters is that while JavaScript is doing that work, your browser cannot do anything else.

This is because the browser's main thread handles everything: parsing HTML, computing styles, running layout, painting pixels, responding to clicks, and executing your JavaScript. All of it shares one thread. When your code hogs that thread for two seconds, the user sees a frozen page. Hold it for five seconds and the browser starts showing "this page is not responding" dialogs.

The problem is not that JavaScript is slow. The problem is that JavaScript execution and UI rendering are fighting for the same thread, and only one of them can run at a time. You literally cannot update a progress bar while a computation is running, because the browser cannot repaint until your code finishes.

Web Workers solve this by giving you separate threads. Real OS-level threads, running real JavaScript, completely independent of the main thread. Your heavy computation runs in a worker, the main thread stays free to handle user interaction, and everyone is happy.

Dedicated Workers and How Message Passing Works

The most common type of Web Worker is a dedicated worker. You create one by pointing the Worker constructor at a JavaScript file. That file runs in its own thread, and the only way it can talk to your main thread is through messages.

// main.js
const worker = new Worker('worker.js');

worker.postMessage({ type: 'compute', data: [1, 2, 3, 4, 5] });

worker.onmessage = (event) => {
  console.log('Result from worker:', event.data);
};

worker.onerror = (error) => {
  console.error('Worker error:', error.message);
};

// worker.js
self.onmessage = (event) => {
  const { type, data } = event.data;

  if (type === 'compute') {
    // Perform heavy computation
    const result = data.map(n => {
      let sum = 0;
      for (let i = 0; i < 100_000_000; i++) {
        sum += Math.sqrt(n * i);
      }
      return sum;
    });

    self.postMessage(result);
  }
};

This is the core of how workers operate. You send data in, work happens on another thread, results come back. The main thread never blocks.
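The raw onmessage callback style gets awkward once you want to treat a worker round-trip like an ordinary async call. Here is a small sketch that wraps one request/response cycle in a Promise — runTask is my own name, not a platform API, and it assumes only one request is in flight at a time (multiple concurrent requests would need correlation IDs, or the pool we build later):

```javascript
// Hypothetical helper: wrap a single postMessage round-trip in a Promise.
// Works with anything exposing the EventTarget-style worker interface.
function runTask(worker, payload) {
  return new Promise((resolve, reject) => {
    const onMessage = (event) => { cleanup(); resolve(event.data); };
    const onError = (event) => { cleanup(); reject(event); };
    function cleanup() {
      worker.removeEventListener('message', onMessage);
      worker.removeEventListener('error', onError);
    }
    worker.addEventListener('message', onMessage);
    worker.addEventListener('error', onError);
    worker.postMessage(payload);
  });
}

// Usage:
// const result = await runTask(worker, { type: 'compute', data: [1, 2, 3] });
```

Using addEventListener instead of assigning onmessage means the helper does not clobber any handler you have already attached to the worker.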

The worker runs in a completely isolated environment. No document, no window, no DOM of any kind. It does get access to fetch, IndexedDB, WebSocket, setTimeout, and most other non-DOM browser APIs. The isolation is not an oversight — it is the whole point. If two threads could touch the DOM at the same time, you would need locks and mutexes, and that is a category of bugs nobody wants in their frontend code.

Sometimes you do not want a separate file for your worker. You can create one inline using a Blob URL:

const workerCode = `
  self.onmessage = (e) => {
    const result = e.data.reduce((sum, n) => sum + n, 0);
    self.postMessage(result);
  };
`;

const blob = new Blob([workerCode], { type: 'application/javascript' });
const worker = new Worker(URL.createObjectURL(blob));

worker.postMessage([10, 20, 30, 40, 50]);
worker.onmessage = (e) => console.log('Sum:', e.data); // Sum: 150

This is convenient for small workers, but for anything substantial, separate files are easier to maintain and debug.

Now, about the data you send back and forth. By default, postMessage creates a structured clone — a deep copy of whatever you pass. For small objects, you will not notice. For a 100 MB ArrayBuffer, you absolutely will notice, because cloning 100 MB takes real time and real memory.

That is where transferable objects come in. Instead of copying the data, you transfer ownership of the underlying memory. The sending side loses access completely, and the receiving side gets it with zero copy overhead.

// main.js - Transferring an ArrayBuffer
const buffer = new ArrayBuffer(1024 * 1024 * 100); // 100 MB
console.log(buffer.byteLength); // 104857600

// Transfer instead of clone
worker.postMessage({ buffer }, [buffer]);

// After transfer, the buffer is neutered
console.log(buffer.byteLength); // 0

// worker.js - Receiving the transferred buffer
self.onmessage = (event) => {
  const { buffer } = event.data;
  console.log(buffer.byteLength); // 104857600

  // Process the buffer
  const view = new Float64Array(buffer);
  for (let i = 0; i < view.length; i++) {
    view[i] = view[i] * 2;
  }

  // Transfer it back
  self.postMessage({ buffer }, [buffer]);
};

Transferable objects include ArrayBuffer, MessagePort, ImageBitmap, OffscreenCanvas, and ReadableStream. The transfer is a move, not a copy. After you transfer a buffer, trying to read it on the sending side gives you a zero-length buffer. This is how JavaScript avoids data races without locks — you simply cannot have two threads holding the same memory at the same time.
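You can watch both behaviors without spinning up a worker at all: the global structuredClone function uses the same serialization algorithm as postMessage, and it accepts the same transfer list:

```javascript
const buffer = new ArrayBuffer(16);

// Default: a deep copy. The original stays usable.
const copy = structuredClone(buffer);
console.log(buffer.byteLength, copy.byteLength); // 16 16

// With a transfer list: ownership moves, the original is detached.
const moved = structuredClone(buffer, { transfer: [buffer] });
console.log(moved.byteLength);  // 16
console.log(buffer.byteLength); // 0
```

This is a handy way to sanity-check what a given object will do before you start wiring up actual worker messaging.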

Worker Pools and Real-World Usage

Spinning up a new worker is not free. Each one gets its own JavaScript execution context, its own memory space, and there is startup cost involved in fetching and parsing the script. If you create a fresh worker for every small task, the overhead will eat your performance gains.

The fix is a worker pool: create a fixed number of workers once, then hand them tasks as needed.

class WorkerPool {
  constructor(workerScript, poolSize = navigator.hardwareConcurrency || 4) {
    this.workers = [];
    this.queue = [];
    this.activeWorkers = new Map();

    for (let i = 0; i < poolSize; i++) {
      const worker = new Worker(workerScript);
      worker.busy = false;
      worker.onmessage = (event) => this.handleResult(worker, event);
      worker.onerror = (error) => this.handleError(worker, error);
      this.workers.push(worker);
    }
  }

  exec(data) {
    return new Promise((resolve, reject) => {
      const freeWorker = this.workers.find(w => !w.busy);
      if (freeWorker) {
        this.runTask(freeWorker, data, resolve, reject);
      } else {
        this.queue.push({ data, resolve, reject });
      }
    });
  }

  runTask(worker, data, resolve, reject) {
    worker.busy = true;
    this.activeWorkers.set(worker, { resolve, reject });
    worker.postMessage(data);
  }

  handleResult(worker, event) {
    const { resolve } = this.activeWorkers.get(worker);
    resolve(event.data);
    this.activeWorkers.delete(worker);
    worker.busy = false;
    this.processQueue();
  }

  handleError(worker, error) {
    const { reject } = this.activeWorkers.get(worker);
    reject(error);
    this.activeWorkers.delete(worker);
    worker.busy = false;
    this.processQueue();
  }

  processQueue() {
    if (this.queue.length === 0) return;
    const freeWorker = this.workers.find(w => !w.busy);
    if (freeWorker) {
      const { data, resolve, reject } = this.queue.shift();
      this.runTask(freeWorker, data, resolve, reject);
    }
  }

  terminate() {
    this.workers.forEach(w => w.terminate());
  }
}

// Usage
const pool = new WorkerPool('compute-worker.js', 4);

const tasks = Array.from({ length: 20 }, (_, i) => i);
const results = await Promise.all(tasks.map(t => pool.exec(t)));
console.log(results);

Set your pool size to navigator.hardwareConcurrency, which tells you how many logical CPU cores the machine has. Creating more workers than cores does not help — the OS just context-switches between them, which adds overhead without adding throughput. Most machines report between 4 and 16.
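When you fan a big job out across a pool, you usually want to cut it into roughly equal slices, one per worker. A minimal sketch — splitIntoChunks is a made-up helper, not part of any API:

```javascript
// Hypothetical helper: split an array into at most `parts` near-equal chunks.
function splitIntoChunks(items, parts) {
  const chunks = [];
  const size = Math.ceil(items.length / parts);
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Size the split to the machine, falling back to 4 outside the browser.
const cores = (typeof navigator !== 'undefined' && navigator.hardwareConcurrency) || 4;
const work = Array.from({ length: 10 }, (_, i) => i);
console.log(splitIntoChunks(work, 4)); // [ [0,1,2], [3,4,5], [6,7,8], [9] ]
```

Each chunk then becomes one pool.exec() call, so every worker gets one substantial task instead of many tiny ones.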

Here are the use cases where I have seen workers make a real difference in production:

Image processing is the classic example. Applying filters to pixel data is pure CPU work with no DOM involvement — a perfect fit.

// image-worker.js
self.onmessage = (event) => {
  const { imageData, filter } = event.data;
  const data = imageData.data;

  switch (filter) {
    case 'grayscale':
      for (let i = 0; i < data.length; i += 4) {
        const avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
        data[i] = avg;     // Red
        data[i + 1] = avg; // Green
        data[i + 2] = avg; // Blue
      }
      break;

    case 'sepia':
      for (let i = 0; i < data.length; i += 4) {
        const r = data[i], g = data[i + 1], b = data[i + 2];
        data[i] = Math.min(255, r * 0.393 + g * 0.769 + b * 0.189);
        data[i + 1] = Math.min(255, r * 0.349 + g * 0.686 + b * 0.168);
        data[i + 2] = Math.min(255, r * 0.272 + g * 0.534 + b * 0.131);
      }
      break;

    case 'invert':
      for (let i = 0; i < data.length; i += 4) {
        data[i] = 255 - data[i];
        data[i + 1] = 255 - data[i + 1];
        data[i + 2] = 255 - data[i + 2];
      }
      break;
  }

  self.postMessage({ imageData }, [imageData.data.buffer]);
};

Parsing large files is the other big one. When a user uploads a 50 MB CSV, you do not want the page to lock up while you split a million lines. Move it to a worker and report progress back:

// csv-worker.js
self.onmessage = (event) => {
  const csvText = event.data;
  const lines = csvText.split('\n');
  const headers = lines[0].split(',').map(h => h.trim());
  const results = [];

  for (let i = 1; i < lines.length; i++) {
    if (lines[i].trim() === '') continue;
    const values = lines[i].split(',');
    const row = {};
    headers.forEach((header, idx) => {
      row[header] = values[idx]?.trim();
    });
    results.push(row);

    // Report progress every 10,000 rows
    if (i % 10000 === 0) {
      self.postMessage({
        type: 'progress',
        percent: Math.round((i / lines.length) * 100)
      });
    }
  }

  self.postMessage({ type: 'complete', data: results });
};

Other strong candidates: cryptographic hashing, spell checking, physics simulations in games, search indexing over large client-side datasets, and running WebAssembly modules. The general rule is simple — if the task takes more than about 16 milliseconds and does not need the DOM, consider putting it in a worker.
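The cleanest way to apply that rule is to measure rather than guess. A quick sketch, using performance.now(), which is available on the main thread, in workers, and in Node — timeTask and the 16 ms default (one 60 fps frame) are illustrative choices, not a standard API:

```javascript
// Hypothetical helper: time one run of a function and report whether it
// exceeds a frame budget.
function timeTask(fn, budgetMs = 16) {
  const start = performance.now();
  const result = fn();
  const elapsed = performance.now() - start;
  return { result, elapsed, overBudget: elapsed > budgetMs };
}

const report = timeTask(() => [1, 2, 3].reduce((a, b) => a + b, 0));
console.log(report.result);     // 6
console.log(report.overBudget); // false for trivial work like this
```

Run it on a representative input, not a toy one — a CSV parser that finishes in 2 ms on a 10-row sample can still take seconds on a real upload.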

SharedWorkers and OffscreenCanvas

There are two more specialized types of workers worth knowing about, even if you will use them less often.

A SharedWorker is accessible from multiple tabs or windows of the same origin. Where a dedicated worker belongs to one page, a SharedWorker can serve several at once. This is useful for things like maintaining a single WebSocket connection across tabs, or sharing a cache so each tab does not duplicate the same data.

// shared-worker.js
const connections = [];

self.onconnect = (event) => {
  const port = event.ports[0];
  connections.push(port);

  port.onmessage = (e) => {
    if (e.data.type === 'broadcast') {
      // Send message to all connected tabs
      connections.forEach(conn => {
        conn.postMessage({
          type: 'update',
          payload: e.data.payload,
          from: e.data.tabId
        });
      });
    }
  };

  port.start();
};

// page.js (used in multiple tabs)
const shared = new SharedWorker('shared-worker.js');

shared.port.onmessage = (event) => {
  console.log('Received:', event.data);
};

shared.port.start();

// Broadcast a message to all tabs
shared.port.postMessage({
  type: 'broadcast',
  payload: { count: 42 },
  tabId: crypto.randomUUID()
});

The big difference is the port — each connecting page gets its own MessagePort, and the SharedWorker juggles all of them. The worker stays alive as long as at least one tab holds a connection. Browser support is less universal than dedicated workers, so check compatibility before depending on it.

OffscreenCanvas lets you do canvas rendering entirely inside a worker. You transfer control of a canvas element to the worker, and from that point on, all drawing happens off the main thread.

// main.js
const canvas = document.getElementById('myCanvas');
const offscreen = canvas.transferControlToOffscreen();

const worker = new Worker('render-worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);

// render-worker.js
self.onmessage = (event) => {
  const canvas = event.data.canvas;
  const ctx = canvas.getContext('2d');
  let frame = 0;

  function draw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);

    // Draw animated particles
    for (let i = 0; i < 1000; i++) {
      const x = Math.sin(frame * 0.01 + i) * 200 + canvas.width / 2;
      const y = Math.cos(frame * 0.013 + i * 0.7) * 200 + canvas.height / 2;
      const hue = (frame + i) % 360;

      ctx.fillStyle = `hsl(${hue}, 80%, 60%)`;
      ctx.beginPath();
      ctx.arc(x, y, 3, 0, Math.PI * 2);
      ctx.fill();
    }

    frame++;
    requestAnimationFrame(draw);
  }

  draw();
};

This is genuinely useful for data visualization dashboards, games, or anything where canvas rendering needs to stay smooth regardless of what the main thread is doing. If the main thread briefly stalls handling a click event, the animation in the worker keeps running.

Limitations and Gotchas

I would be doing you a disservice if I did not lay out the sharp edges, because there are several and they will bite you if you are not expecting them.

No DOM access. This is the big one. Workers cannot read or modify the DOM. No document, no window, no parent. If you need to update the UI based on worker results, you send a message back to the main thread and let it handle the update. There is no workaround for this, and there should not be.

Structured clone is expensive. Unless you use transferable objects, every postMessage call deep-copies its payload. I have seen people pass megabytes of JSON back and forth between workers and wonder why performance is worse than running everything on the main thread. If you are moving large data, always transfer it — do not clone it.

Same-origin policy applies. Worker scripts must come from the same origin as the page. You cannot load a worker from a different CDN domain. The Blob URL trick gets around this when you need to inline worker code, but it is worth knowing the restriction exists.

No shared memory by default. Each worker has its own memory. If you need two threads to see the same data without copying it, you need SharedArrayBuffer and Atomics, which require your server to send specific HTTP headers: Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp. These headers break some third-party embeds, so it is not a decision you make lightly.
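The API itself is small. Once the headers are in place, a SharedArrayBuffer posted to a worker is shared rather than cloned or transferred, and Atomics gives you race-free reads and writes. Here is the shape of it, shown on one thread just to illustrate the calls:

```javascript
// A 4-byte shared buffer viewed as a single 32-bit counter.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

// Atomic read-modify-write: safe even if another thread does the same
// to the same index at the same time.
Atomics.add(counter, 0, 1);
Atomics.add(counter, 0, 5);

console.log(Atomics.load(counter, 0)); // 6

// In real code you would hand the buffer to a worker:
// worker.postMessage(sab); // shared memory — not cloned, not transferred
```

Both sides then see every write immediately, which is exactly why the cross-origin isolation headers are required before browsers will give you this primitive.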

Debugging is harder. Browser DevTools do support worker inspection, but stepping through message-passing flows across threads is just harder than tracing single-threaded code. Bugs in workers can also be silent — if a worker throws and you forgot to attach an onerror handler, you might never see the error.

Startup cost is real. Each worker needs to fetch its script, parse it, and set up a new execution context. For a task that takes 10 milliseconds, the worker overhead could easily be 50 milliseconds. This is why worker pools matter — you pay the startup cost once and amortize it across hundreds of tasks.

As a rule of thumb: if the work takes less than about 50 milliseconds, the overhead of worker communication will probably make it slower, not faster. Profile your actual workload before committing to a worker-based architecture. The browser performance panel will tell you exactly how long your tasks take, and that number — not a guess — should drive the decision.

// Feature detection — always check before using
if (typeof Worker !== 'undefined') {
  // Use Web Worker
} else {
  // Fallback to main thread execution
}

if (typeof SharedWorker !== 'undefined') {
  // Use SharedWorker
}

if (typeof OffscreenCanvas !== 'undefined') {
  // Use OffscreenCanvas in worker
}

Workers are a tool, and like any tool, they solve specific problems well and other problems poorly. The specific problem they solve is: CPU-heavy work that blocks the main thread. If that is your problem, workers are the answer. If your performance issue is network latency, render-heavy CSS, or too many DOM nodes, workers will not help you at all. Know what you are solving before you reach for them.

Written by Anurag Kumar

Full-stack developer passionate about Node.js and building fast, scalable web applications. Writing about what I learn every day.
