for (var i = 0; i < 5; i++) {
setTimeout(() => console.log(i), 100);
}
// Expected: 0, 1, 2, 3, 4
// Got: 5, 5, 5, 5, 5
That bug right there -- I hit it during a technical interview in 2022 and completely blanked. I knew closures were "a thing" but I couldn't explain what was happening or how to fix it. Walked out of that interview feeling terrible, went home, and finally sat down to actually understand closures instead of just hand-waving through them.
The One-Sentence Version
A closure is a function that remembers the variables from where it was defined, even after that surrounding code has finished running.
That's it. Not complicated at all, once you strip away the academic language. A function grabs onto its surroundings and carries them around like a backpack.
function createGreeting(greeting) {
return function(name) {
return `${greeting}, ${name}!`;
};
}
const sayHello = createGreeting('Hello');
const sayNamaste = createGreeting('Namaste');
console.log(sayHello('Anurag')); // Hello, Anurag!
console.log(sayNamaste('Priya')); // Namaste, Priya!
createGreeting('Hello') runs and finishes. The greeting variable should be gone, right? Garbage collected? Nope. The returned function still holds a reference to it. That reference is the closure.
What the Engine Actually Does
I think closures become much less mysterious once you understand what V8 (or SpiderMonkey, or whichever engine you are running) is actually doing behind the scenes. When a function is created, the engine does not just store the function body -- it also stores a reference to the surrounding lexical environment. That environment is basically a record of all the variables that were in scope when the function was born.
When createGreeting('Hello') executes, the engine creates an execution context with a variable greeting set to 'Hello'. Normally, when createGreeting finishes, that execution context would be torn down and the memory reclaimed. But because the inner function references greeting, the engine sees that reference and keeps the environment alive. The inner function has an internal [[Environment]] slot that points to it. As long as somebody has a reference to that inner function, the environment sticks around.
You can actually see this in Chrome DevTools. Set a breakpoint inside a closure, and in the Scope panel you will see a section labeled "Closure" listing the closed-over variables. Try it sometime -- it makes the concept feel much more concrete than reading about it in an article. I actually keep a snippet bookmarked in my DevTools that creates a simple closure, just so I can show it to people during pair programming sessions when the topic comes up.
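If you want to try it yourself, any small closure will do -- here is the kind of snippet I mean (makeAdder is just an illustrative name):

```javascript
function makeAdder(x) {
  return function add(y) {
    debugger; // pause here with DevTools open and check the Scope panel
    return x + y; // x lives in the "Closure (makeAdder)" section
  };
}

makeAdder(40)(2); // while paused, you can inspect the captured x = 40
```

With DevTools open, execution pauses at the debugger statement and the Scope panel shows `x: 40` under a "Closure" heading, even though makeAdder has already returned.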
Making Private Variables (Before We Had #private)
JavaScript didn't have private class fields until recently. But closures gave us privacy for years. Here's a pattern I still use:
function createCounter(initialValue = 0) {
let count = initialValue;
return {
increment() { return ++count; },
decrement() { return --count; },
getCount() { return count; },
reset() { count = initialValue; return count; }
};
}
const counter = createCounter(10);
console.log(counter.increment()); // 11
console.log(counter.increment()); // 12
console.log(counter.decrement()); // 11
console.log(counter.getCount()); // 11
console.log(counter.reset()); // 10
There's no way to touch count from outside. Can't do counter.count. Can't do counter.__proto__.count. It's genuinely hidden. The only way in is through those four methods. I used this pattern to build a rate limiter at my last job -- worked perfectly.
Why Not Just Use a Class?
Fair question. You could absolutely do the counter example with a class, and to be fair, modern #private fields are also enforced at runtime by the language, not just by convention -- there is no Reflect trick or proxy hack that can reach them from outside the class body either. The gap was much bigger in the years before #private, when "private" meant an underscore prefix like this._count that any code could read or overwrite. Closure-based privacy never had that weakness: the variable does not exist on any object, so there is literally no property to find and no prototype walk that can reach it.
I still reach for the closure pattern when I am building something small and self-contained -- a debouncer, a rate limiter, a state machine for a specific UI component. If the thing needs inheritance or is going to be instantiated hundreds of times, a class is probably better because the engine can optimize shared prototypes. But for one-off utility factories? Closures are simpler and more private.
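For comparison, here is a sketch of the same counter as a class with #private fields -- the shape of the alternative, not a drop-in replacement:

```javascript
class Counter {
  #count;   // hard-private: unreachable outside the class body
  #initial;

  constructor(initialValue = 0) {
    this.#count = initialValue;
    this.#initial = initialValue;
  }

  increment() { return ++this.#count; }
  decrement() { return --this.#count; }
  getCount() { return this.#count; }
  reset() { this.#count = this.#initial; return this.#count; }
}

const c = new Counter(10);
console.log(c.increment()); // 11
console.log(c.count);       // undefined -- #count is not a normal property
```

Same public surface as createCounter, but now you get instanceof, inheritance, and shared methods on the prototype.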
Back to That Interview Bug
So about that code from the top. Here's why it prints 5 five times:
var is function-scoped, not block-scoped. So there's one single i variable shared across all five iterations. By the time the setTimeout callbacks fire (100ms later), the loop is done and i is 5. Every callback closes over the same variable.
The old-school fix uses an IIFE to create a new scope per iteration:
for (var i = 0; i < 5; i++) {
(function(j) {
setTimeout(() => console.log(j), 100);
})(i);
}
// Output: 0, 1, 2, 3, 4
Each call to the IIFE gets its own j. Five function calls, five separate variables, five correct closures.
Or, you know, just use let:
for (let i = 0; i < 5; i++) {
setTimeout(() => console.log(i), 100);
}
// Output: 0, 1, 2, 3, 4
let is block-scoped, so each iteration gets its own binding. Under the hood, it's doing the same thing as the IIFE version -- creating a fresh variable per loop. Less typing though.
Stale Closures: The Bug That Took Me Two Days
If you have used React hooks, you have probably encountered this one even if you did not have a name for it. A stale closure happens when a function captures a variable, but the variable has since been updated and the function is still using the old snapshot.
Here is a simplified version of a bug I spent two full days on in a React project:
function ChatRoom({ roomId }) {
const [messages, setMessages] = useState([]);
useEffect(() => {
const socket = connectToRoom(roomId);
socket.on('message', (msg) => {
// BUG: messages is always [] here
setMessages([...messages, msg]);
});
return () => socket.disconnect();
}, [roomId]);
}
The callback passed to socket.on closes over messages. But messages is captured at the time the effect runs -- which is when it is still an empty array. Every time a new message arrives, the callback spreads an empty array plus the new message. You only ever see the latest message.
The fix is to use the functional form of the setter:
socket.on('message', (msg) => {
setMessages(prev => [...prev, msg]);
});
Now instead of reading from the closed-over messages, we are asking React to give us the current value through the prev parameter. This pattern completely sidesteps the stale closure issue because we are never relying on the captured variable at all.
I see this bug in code reviews at least once a month. It is the modern version of the var loop problem -- same root cause (closing over a variable that changes), different context.
Places You're Already Using Closures
After my deep embarrassment at that interview, I went through my own codebase looking for closures. They were everywhere:
- Every event handler that uses a variable from its surrounding function
- Express middleware that captures config -- like cors({ origin: '...' }) returning a function that closes over the options
- React hooks -- every useEffect callback and useState updater closes over the values from the render it was created in
- Curried functions and partial application
- Any callback passed to map, filter, reduce that references an outside variable
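The curried-functions bullet deserves a quick illustration. Currying is just closures stacked: each call captures one argument and returns a function waiting for the rest. A minimal curry helper (my own sketch, not from any particular library):

```javascript
// Each intermediate function closes over the arguments collected so far.
const curry = (fn) => {
  const collect = (...args) =>
    args.length >= fn.length
      ? fn(...args)                          // enough arguments: call through
      : (...more) => collect(...args, ...more); // otherwise: keep collecting
  return collect;
};

const add3 = curry((a, b, c) => a + b + c);
console.log(add3(1)(2)(3)); // 6
console.log(add3(1, 2)(3)); // 6
```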
Honestly, if you write JavaScript, you're using closures all day. You might just not call them that.
Closures as Function Factories
One pattern I keep coming back to is using closures to build specialized functions from general ones. I had a project where we needed to format currencies for different locales across the app. Instead of passing locale and currency to every call, I made a factory:
function createFormatter(locale, currency) {
const formatter = new Intl.NumberFormat(locale, {
style: 'currency',
currency: currency,
});
return (amount) => formatter.format(amount);
}
const formatUSD = createFormatter('en-US', 'USD');
const formatINR = createFormatter('en-IN', 'INR');
const formatEUR = createFormatter('de-DE', 'EUR');
console.log(formatUSD(1234.5)); // $1,234.50
console.log(formatINR(1234.5)); // ₹1,234.50
console.log(formatEUR(1234.5)); // 1.234,50 €
The closure captures both the locale and the already-constructed Intl.NumberFormat object. That means the formatter is created once and reused every time you call formatUSD. No repeated object construction. And the calling code is dead simple -- just pass the amount, everything else is baked in.
I have used this same approach for date formatters, API clients configured for specific endpoints, and logging functions that include context like the module name.
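The logging version looks roughly like this -- createLogger and its prefix format are my own sketch, not code lifted from that project:

```javascript
// Factory: each logger closes over its module name and minimum level.
const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };

function createLogger(moduleName, minLevel = 'info') {
  return function log(level, message) {
    if (LEVELS[level] < LEVELS[minLevel]) return; // filtered out
    const line = `[${moduleName}] ${level.toUpperCase()}: ${message}`;
    console.log(line);
    return line;
  };
}

const authLog = createLogger('auth');
authLog('info', 'user logged in'); // [auth] INFO: user logged in
authLog('debug', 'token payload'); // below minLevel, logs nothing
```

Call sites never repeat the module name or the level threshold -- both ride along in the closure.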
My Favorite Closure Pattern: Memoization
We had an API endpoint that computed recommendations. Same user, same input, same result -- but it took 800ms every time. I wrapped the computation in a memoizer and response times dropped to single digits on cache hits.
function memoize(fn) {
const cache = new Map();
return function(...args) {
const key = JSON.stringify(args);
if (cache.has(key)) {
return cache.get(key);
}
const result = fn.apply(this, args);
cache.set(key, result);
return result;
};
}
const expensiveCalculation = memoize((n) => {
console.log('Computing...');
return n * n;
});
console.log(expensiveCalculation(5)); // Computing... 25
console.log(expensiveCalculation(5)); // 25 (cached, no "Computing...")
The returned function closes over cache. Each call checks the Map first. The Map persists between calls because the closure keeps it alive. Clean, simple, effective.
One gotcha: if your function takes objects as arguments, JSON.stringify might not produce consistent keys -- two objects with the same properties can stringify differently if those properties were added in a different order. For simple primitive arguments though, this works great.
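If you do need object arguments, one option is to sort properties before stringifying. A sketch -- stableKey is a hypothetical helper, and it only covers plain JSON-safe values:

```javascript
// Produce the same key for { a: 1, b: 2 } and { b: 2, a: 1 }.
// The replacer runs recursively, so nested plain objects get sorted too.
function stableKey(args) {
  return JSON.stringify(args, (key, value) =>
    value && typeof value === 'object' && !Array.isArray(value)
      ? Object.keys(value).sort().reduce((acc, k) => {
          acc[k] = value[k];
          return acc;
        }, {})
      : value
  );
}

console.log(stableKey([{ a: 1, b: 2 }]) === stableKey([{ b: 2, a: 1 }])); // true
```

Swap `JSON.stringify(args)` for `stableKey(args)` inside memoize and property order stops mattering.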
The Dark Side: Accidental Memory Leaks
Closures keep variables alive. That's the whole point. But it also means those variables can't be garbage collected as long as the closure exists.
I once had a Node process whose memory kept climbing. Turned out we had an event listener that closed over a large buffer, and we kept adding new listeners without removing old ones. Each listener held onto its own copy of the buffer through the closure. Memory climbed until the process crashed.
The fix was obvious in hindsight: remove listeners when you're done with them. But it's a good reminder that closures have a cost. They're not free.
Here is a subtler version of this problem that bit me in a different project:
function processData() {
const hugeDataSet = loadGigabyteFile();
const summary = computeSummary(hugeDataSet);
return function getSummary() {
return summary;
};
}
You might expect only summary to be retained, since getSummary only references summary. And in modern engines, that is usually true -- V8 is smart enough to figure out that hugeDataSet is not referenced by the returned function and will let it be collected. But older engines, or certain patterns involving eval or with statements, can prevent this optimization. The engine cannot be sure what variables might be accessed if eval is in play, so it keeps everything alive just in case. One more reason to avoid eval.
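When you do not want to rely on that optimization, you can be explicit about it: null out the big reference once you are done with it, so the closure's environment only retains a null. A sketch, with the article's hypothetical loadGigabyteFile and computeSummary stubbed out:

```javascript
// Stubs standing in for the hypothetical functions from the example above.
const loadGigabyteFile = () => new Array(1000).fill(42);
const computeSummary = (data) => data.reduce((a, b) => a + b, 0);

function processData() {
  let hugeDataSet = loadGigabyteFile();
  const summary = computeSummary(hugeDataSet);
  hugeDataSet = null; // release explicitly; only summary survives in the closure
  return function getSummary() {
    return summary;
  };
}

const getSummary = processData();
console.log(getSummary()); // 42000
```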
Anyway, That's How I Think About Them
Closures aren't a feature you choose to use. They're baked into how JavaScript works. Every function closes over its surrounding scope. Once that clicked for me, a bunch of patterns that used to seem like magic -- memoization, module patterns, middleware factories -- suddenly just made sense. If you're still fuzzy on them, my honest advice is to open your own codebase and look for functions that reference variables from an outer scope. You'll find dozens. Those are your closures. Add a console.log inside one and check what it can access -- you will see the closure in action, holding onto variables that by all rights should have been garbage collected long ago. That moment of recognition is when closures go from an interview topic to a tool you actually understand.
Comments (3)
Finally someone explains closures without jumping into abstract theory. The counter example is perfect. Sharing this with my team.
The loop problem section saved me. I had this exact bug in my code and could not figure out why all my timeouts printed the same value. Now I know!
That memoization pattern is gold. I used this approach in a data-heavy dashboard and the performance improvement was immediately noticeable.