React Rendering Is Not the Problem You Think It Is
Let me tell you what I used to believe about React performance: I thought re-renders were the enemy. Every blog post, every conference talk, every Twitter thread hammered the same message -- fewer re-renders equals faster app. So I'd wrap everything in React.memo, sprinkle useMemo everywhere, and pat myself on the back.
Then I actually profiled one of my apps. Turns out, the thing making it slow wasn't re-renders at all. It was a single unvirtualized list of 2,000 items creating 2,000 DOM nodes on mount. The re-renders I'd been obsessing over? They took 0.3ms each. Nobody could perceive that.
So before we get into optimization techniques, let's get the mental model right. React's rendering has two phases that people constantly conflate:
The render phase is where React calls your component functions, builds the virtual DOM tree, and diffs it against the previous one. Pure JavaScript computation -- no DOM touching.
The commit phase is where React actually mutates the DOM. If the diff comes back clean, React skips the commit entirely. Your component "re-rendered" but the user sees zero impact.
This reframes the whole conversation. A re-render that takes 0.1ms and produces no DOM changes is effectively free. A re-render that takes 50ms because you're sorting a 10,000-item array inline? That's your actual problem. The goal isn't "eliminate re-renders" -- it's "make sure no single render blocks the main thread long enough for the user to notice."
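The two phases can be made concrete with a toy model in plain JavaScript (not React's real implementation, just the shape of the idea): rendering is pure computation, and the commit is skipped entirely when the diff comes back clean.

```javascript
// Toy model of the two phases (illustrative, not React's actual code):
// "render" is pure computation, "commit" only mutates when the diff
// found a change.
function render(props) {
  return { tag: 'p', text: `Count: ${props.count}` }; // render phase: pure JS
}

let dom = null;
let commits = 0;

function commit(nextVdom, prevVdom) {
  if (prevVdom && prevVdom.text === nextVdom.text) return; // clean diff: skip
  dom = { ...nextVdom }; // commit phase: mutate the (fake) DOM
  commits++;
}

let prev = null;
for (const count of [1, 1, 2]) {
  const next = render({ count });
  commit(next, prev);
  prev = next;
}

console.log(commits); // 2 -- the second render of count=1 committed nothing
```

Three renders, two commits: the middle render ran your component function but produced zero DOM work, which is exactly the "effectively free" case.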
React.memo: The Tool Everyone Uses Wrong
React.memo is probably the most misunderstood API in React. You wrap a component, React skips re-rendering it if the props haven't changed. Simple enough. The misunderstanding is about when it helps and the dozen ways you accidentally break it.
import { memo } from 'react';

function ExpensiveChart({ data, width, height }) {
  console.log('ExpensiveChart rendered');
  return (
    <svg width={width} height={height}>
      {data.map((point, i) => (
        <circle key={i} cx={point.x} cy={point.y} r={3} />
      ))}
    </svg>
  );
}

const MemoizedChart = memo(ExpensiveChart);
Looks great. Ship it. Except here's what happens in 90% of codebases:
function Dashboard({ apiData }) {
  const [refreshCount, setRefreshCount] = useState(0);

  // NEW array on every render. Memo sees a new reference
  // and re-renders the chart anyway.
  const chartData = apiData.map(item => ({
    x: item.timestamp,
    y: item.value,
  }));

  return (
    <div>
      <button onClick={() => setRefreshCount(c => c + 1)}>Refresh</button>
      <MemoizedChart data={chartData} width={800} height={400} />
    </div>
  );
}
The memo does nothing because chartData is a brand-new array every render. Shallow comparison sees a new reference and re-renders anyway. This is why React.memo almost always needs useMemo or useCallback on the parent side. You can also pass a custom comparison for complex props:
const MemoizedChart = memo(ExpensiveChart, (prevProps, nextProps) => {
  // Return true to SKIP the re-render (the opposite of shouldComponentUpdate).
  return (
    prevProps.width === nextProps.width &&
    prevProps.height === nextProps.height &&
    prevProps.data.length === nextProps.data.length &&
    prevProps.data.every((point, i) =>
      point.x === nextProps.data[i].x && point.y === nextProps.data[i].y
    )
  );
});
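To see concretely why the broken Dashboard defeats memo, here's a plain-JavaScript sketch of the shallow comparison it performs (shallowEqual is a hypothetical stand-in; React's internal check behaves similarly):

```javascript
// Sketch of the shallow prop comparison React.memo performs.
// (shallowEqual is a hypothetical stand-in for React's internal check.)
function shallowEqual(prev, next) {
  const prevKeys = Object.keys(prev);
  if (prevKeys.length !== Object.keys(next).length) return false;
  return prevKeys.every((key) => Object.is(prev[key], next[key]));
}

const data = [{ x: 1, y: 2 }];

// Same array reference: the check passes, the render is skipped.
console.log(shallowEqual({ data, width: 800 }, { data, width: 800 })); // true

// Re-mapped array: structurally identical, referentially new -- re-render.
const remapped = data.map((p) => ({ ...p }));
console.log(shallowEqual({ data, width: 800 }, { data: remapped, width: 800 })); // false
```

Structural equality never enters into it: one fresh `.map()` in the parent and the comparison fails every time.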
My rule: don't reach for React.memo until you've profiled. A component that renders in 0.5ms doesn't need it. One that takes 30ms and gets triggered by unrelated parent state? That's your candidate.
useMemo and useCallback: Your Reference Stability Toolkit
Here's the conversation I have roughly once a month: "I added React.memo but it's still re-rendering every time." The culprit is always an unstable reference being passed as a prop.
useMemo caches a computed value. useCallback caches a function. Both return the cached version as long as dependencies haven't changed.
import { useMemo, useCallback, useState, memo } from 'react';

function ProductList({ products, taxRate }) {
  const sortedProducts = useMemo(() => {
    return [...products].sort((a, b) => a.price - b.price);
  }, [products]);

  const totalWithTax = useMemo(() => {
    const subtotal = products.reduce((sum, p) => sum + p.price, 0);
    return subtotal * (1 + taxRate);
  }, [products, taxRate]);

  const [selectedId, setSelectedId] = useState(null);
  const handleSelect = useCallback((id) => setSelectedId(id), []);

  return (
    <div>
      <p>Total (with tax): ${totalWithTax.toFixed(2)}</p>
      {sortedProducts.map((product) => (
        <ProductCard
          key={product.id}
          product={product}
          isSelected={product.id === selectedId}
          onSelect={handleSelect}
        />
      ))}
    </div>
  );
}

const ProductCard = memo(function ProductCard({ product, isSelected, onSelect }) {
  return (
    <div
      className={isSelected ? 'card selected' : 'card'}
      onClick={() => onSelect(product.id)}
    >
      <h3>{product.name}</h3>
      <p>${product.price}</p>
    </div>
  );
});
The chain: handleSelect is stable via useCallback. ProductCard is memo-wrapped. When selectedId changes, only two cards re-render -- the previously selected one and the newly selected one. Without useCallback, every card in the list re-renders, because each one receives a "new" onSelect function on every parent render.
The part I feel strongly about: don't memoize everything. Every hook adds a dependency comparison, stores a previous value, and makes code harder to read. If dependencies change every render, you're paying the comparison cost and still recomputing. Only memoize when the computation is genuinely expensive, or the value goes to a memo-wrapped child.
Virtualization: The Optimization That Actually Matters
If I had to pick one technique that gives the biggest bang for the buck, it's virtualization. Not close.
You have a list. 50 items in dev, fine. Then production data hits and you've got 5,000 items. The page takes 4 seconds to become interactive. Virtualization fixes this by only rendering visible items plus a small buffer -- maybe 25 DOM nodes instead of 5,000.
import { FixedSizeList as List } from 'react-window';

function VirtualizedUserList({ users }) {
  const Row = ({ index, style }) => {
    const user = users[index];
    return (
      <div style={style} className="user-row">
        <img src={user.avatar} alt="" width={40} height={40} />
        <div>
          <strong>{user.name}</strong>
          <p>{user.email}</p>
        </div>
      </div>
    );
  };

  return (
    <List height={600} itemCount={users.length} itemSize={72} width="100%">
      {Row}
    </List>
  );
}
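Under the hood, the core of virtualization is just index arithmetic. Here's a minimal sketch of the window calculation for fixed-size rows (illustrative; react-window's internals are more involved, and `visibleRange` is a hypothetical helper):

```javascript
// Sketch of the index-window calculation a list virtualizer performs
// for fixed-size rows. (Hypothetical helper, not react-window's API.)
function visibleRange(scrollTop, viewportHeight, itemSize, itemCount, overscan = 3) {
  // First row touching the viewport, minus a small overscan buffer.
  const first = Math.max(0, Math.floor(scrollTop / itemSize) - overscan);
  // Last row touching the viewport, plus the buffer.
  const last = Math.min(
    itemCount - 1,
    Math.ceil((scrollTop + viewportHeight) / itemSize) + overscan
  );
  return { first, last, rendered: last - first + 1 };
}

// 5,000 rows of 72px in a 600px viewport, scrolled to row 50:
console.log(visibleRange(3600, 600, 72, 5000));
// → { first: 47, last: 62, rendered: 16 }
```

Sixteen DOM nodes instead of five thousand; the rest is absolute positioning and a tall spacer element so the scrollbar still reflects the full list.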
For variable heights, use VariableSizeList:
import { VariableSizeList } from 'react-window';

function ChatMessages({ messages }) {
  // Rough height estimate: ~60 characters per wrapped line at 20px each,
  // plus 40px for padding and the sender name.
  const getItemSize = (index) => {
    const lineCount = Math.ceil(messages[index].text.length / 60);
    return 40 + lineCount * 20;
  };

  const MessageRow = ({ index, style }) => (
    <div style={style} className="chat-message">
      <strong>{messages[index].sender}:</strong>
      <p>{messages[index].text}</p>
    </div>
  );

  return (
    <VariableSizeList
      height={500}
      itemCount={messages.length}
      itemSize={getItemSize}
      width="100%"
    >
      {MessageRow}
    </VariableSizeList>
  );
}
Critical gotcha: you must apply the style prop. It contains the absolute positioning that makes virtualization work. Forget it and items stack on top of each other. I've debugged this for other developers more times than I can count.
If any list could exceed 100 items in production, start with virtualization from day one. Retrofitting is always more painful.
Code Splitting: Stop Shipping JavaScript Users Don't Need
I once inherited a React app with a 2.8MB initial bundle. The landing page loaded a rich text editor, charting library, admin panel, and emoji picker. Every user downloaded code they'd never execute.
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));
// For named exports, map to a default export in the import thunk:
const Analytics = lazy(() =>
  import('./pages/Analytics').then(m => ({ default: m.AnalyticsPage }))
);

function App() {
  return (
    <Suspense fallback={<div className="spinner">Loading...</div>}>
      <Routes>
        <Route path="/dashboard" element={<Dashboard />} />
        <Route path="/settings" element={<Settings />} />
        <Route path="/analytics" element={<Analytics />} />
      </Routes>
    </Suspense>
  );
}
Route-level splitting is table stakes. The more interesting pattern is interaction-level splitting -- loading heavy components only when needed:
import { lazy, Suspense, useState } from 'react';

const HeavyEditor = lazy(() => import('./components/HeavyEditor'));

function DocumentPage({ doc }) {
  const [isEditing, setIsEditing] = useState(false);
  return (
    <div>
      <h1>{doc.title}</h1>
      <button onClick={() => setIsEditing(true)}>Edit</button>
      {isEditing && (
        <Suspense fallback={<p>Loading editor...</p>}>
          <HeavyEditor content={doc.content} />
        </Suspense>
      )}
    </div>
  );
}
The pro move: preload on hover. The 200-400ms hover-to-click delay is often enough to fetch a chunk:
let editorPromise = null;
const preload = () => {
  if (!editorPromise) editorPromise = import('./components/HeavyEditor');
};

<button onMouseEnter={preload} onClick={() => setIsEditing(true)}>Edit</button>
Bundle Analysis: Seeing What You're Shipping
You can't fix what you can't see. Every time I've analyzed a production bundle, I've found at least one surprise.
// vite.config.js
import { defineConfig } from 'vite';
import { visualizer } from 'rollup-plugin-visualizer';

export default defineConfig({
  plugins: [visualizer({ open: true, gzipSize: true })],
});
The recurring patterns: the lodash trap (import _ from 'lodash' ships 72KB; import debounce from 'lodash/debounce' ships 1.5KB), duplicate dependencies from version conflicts, dead polyfills for browsers that don't need them, and non-tree-shakeable icon libraries.
// 72KB you don't need
import _ from 'lodash';
const result = _.debounce(fn, 300);
// 1.5KB. Just the function.
import debounce from 'lodash/debounce';
const result = debounce(fn, 300);
Set a bundle budget in CI. Fail the build if it's exceeded. It's easier to keep a bundle small than to shrink a bloated one.
Keys: The Silent Performance Killer
Everyone knows you need key on list items. Most think it's to suppress a console warning. It's not. Keys are how React tracks identity, and getting them wrong has real performance and correctness consequences.
// Bad: array index as key
{items.map((item, index) => <TodoItem key={index} item={item} />)}
// Good: stable unique ID
{items.map((item) => <TodoItem key={item.id} item={item} />)}
"But it works with index!" Sure, until someone sorts or filters the list, or adds an item at the beginning. With index keys, React updates the content of every DOM node instead of moving one. For 500 items with one insertion at the top, that's 500 DOM updates instead of 1.
Worse: with stateful components, index keys cause state to "stick" to positions rather than follow data. Checkboxes get the wrong state, inputs show wrong values. If your data lacks IDs, generate them at creation time with crypto.randomUUID(). And never use Math.random() as a key at render time -- it tells React to destroy and recreate everything every render.
The React Profiler: Your Performance Microscope
Everything above assumes you know where your problems are. The React DevTools Profiler tells you. Record, interact, stop. You get a flame chart colored by render duration. The tall red towers are your problems. The "Why did this render?" feature tells you exactly what triggered each render.
For production, use the programmatic Profiler component:
import { Profiler } from 'react';

function onRender(id, phase, actualDuration, baseDuration) {
  if (actualDuration > 16) { // longer than one 60fps frame
    analytics.track('slow_render', { component: id, phase, duration: actualDuration });
  }
}

<Profiler id="App" onRender={onRender}>
  <Profiler id="MainContent" onRender={onRender}><MainContent /></Profiler>
  <Profiler id="Sidebar" onRender={onRender}><Sidebar /></Profiler>
</Profiler>
Compare baseDuration (time without memoization) to actualDuration to see if your React.memo calls are earning their keep.
useTransition and useDeferredValue
React 18's concurrent features changed how I think about performance. Not all state updates are equally urgent. Typing in a search box is urgent. Filtering 10,000 items based on that query can wait.
import { useState, useTransition } from 'react';

// allData and filterAndRankResults stand in for your dataset and
// filtering logic.
function SearchPage() {
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);
  const [isPending, startTransition] = useTransition();

  const handleSearch = (e) => {
    const value = e.target.value;
    setQuery(value); // urgent: the input updates instantly
    startTransition(() => {
      // non-urgent: React can interrupt this to keep typing responsive
      setResults(filterAndRankResults(allData, value));
    });
  };

  return (
    <div>
      <input value={query} onChange={handleSearch} />
      {isPending && <div>Filtering...</div>}
      <ResultsList results={results} />
    </div>
  );
}
Type a character before filtering finishes? React abandons the in-progress render and starts fresh. No wasted work, no blocked UI.
useDeferredValue is the flip side -- for when you don't control the state update but want to defer your response:
import { useState, useDeferredValue, useMemo } from 'react';

function FilterableList({ items }) {
  const [filter, setFilter] = useState('');
  const deferredFilter = useDeferredValue(filter);

  const filteredItems = useMemo(
    () =>
      items.filter(item =>
        item.name.toLowerCase().includes(deferredFilter.toLowerCase())
      ),
    [items, deferredFilter]
  );

  return (
    <div>
      <input value={filter} onChange={e => setFilter(e.target.value)} />
      {/* Dim the stale list while the deferred value catches up */}
      <div style={{ opacity: filter !== deferredFilter ? 0.7 : 1 }}>
        {filteredItems.map(item => <ItemCard key={item.id} item={item} />)}
      </div>
    </div>
  );
}
I use useTransition ~80% of the time (when I own the state update) and useDeferredValue when the value comes from props or context. Both work for tab switches, chart redraws, and any state change triggering a heavy render.
The Re-render Rogues Gallery
After reviewing hundreds of React codebases, these anti-patterns come up constantly.
Inline objects in JSX. Every style={{ marginTop: 10 }} is a new reference that breaks memo on children. Hoist to constants or memoize.
The mega-context. One context holding all app state means every consumer re-renders on any change. Split by update frequency and memoize provider values.
Fetching without caching. useEffect + fetch without a cache means every mount fires a request. Use React Query, SWR, or your framework's data layer instead.
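The fix for the last one can be seen in miniature: a module-level cache keyed by URL, where concurrent mounts share a single in-flight promise. This is a bare-bones sketch of what React Query and SWR do, minus invalidation, deduping windows, and revalidation (`cachedFetch` and the injectable `fetcher` parameter are hypothetical names):

```javascript
// Minimal request cache: every caller for the same key shares one promise.
// (Sketch only -- no invalidation, expiry, or error retry.)
const cache = new Map();

function cachedFetch(url, fetcher = fetch) {
  if (!cache.has(url)) {
    // Store the promise itself, so concurrent mounts never double-fetch.
    cache.set(url, fetcher(url).then((res) => res.json()));
  }
  return cache.get(url);
}
```

Two components mounting at once both get the same promise, so the network sees one request -- which is exactly the behavior a naive useEffect-plus-fetch loses.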
The Performance Optimization Playbook
After years of profiling, here's my process:
Start by measuring. Open the React Profiler, record typical user interactions, and look at the actual numbers. Don't guess where bottlenecks are — you'll be wrong more often than not.
If you have lists over 100 items, virtualize them. Seriously, this single change beats every other optimization combined in most apps I've worked on.
Next, look at your bundle. Kill bloated imports, code-split your routes and heavy components, and set a size budget in CI so things don't creep back up.
After that, stabilize your references. Wrap expensive components in React.memo and make sure parents pass stable props using useMemo and useCallback.
If you're using React context, split it. Group state by how frequently it updates so that a theme toggle doesn't re-render your entire app.
Finally, reach for the concurrent features. useTransition and useDeferredValue can smooth out whatever input jank is left after the previous steps.
Notice what I didn't say: "memoize everything from the start." Premature optimization makes code harder to read and sometimes actually slower. Clean code first, profiling second, targeted optimization third. Your users don't care about theoretical render counts. They care about whether the app feels fast.