A colleague asked me last week what packages I install in basically every Node project. I started listing them off and realized I had opinions about each one -- not just "use this" but "use this because I tried three alternatives and they all had problems." So here's my list. These aren't theoretical picks from some "awesome-node" repo. They're packages I've shipped to production and would install again tomorrow.
A quick note before we get into it: I'm not listing React, Express, or any framework here. Those are architecture choices, not utility packages. Everything below is something you'd drop into an existing project regardless of what framework you're running.
1. dotenv -- Keep Your Secrets Out of Git
I hardcoded an API key in source code exactly once. It got pushed to a public repo. I rotated the key within an hour but the panic was enough to make me religious about environment variables.
require('dotenv').config();
const dbHost = process.env.DB_HOST;
const apiKey = process.env.API_KEY;
dotenv reads a .env file and loads its values into process.env. Ship a .env.example with placeholder values so your team knows what variables they need. And put .env in your .gitignore. Please.
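To make that concrete, a .env.example for the snippet above might look like this (the variable names are whatever your app actually reads; these are just illustrative):

```
# .env.example -- copy to .env and fill in real values
DB_HOST=localhost
API_KEY=your-api-key-here
JWT_SECRET=change-me
```

Anyone cloning the repo copies it to .env, fills in real values, and is running in a minute.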
One thing I've started doing recently is validating the env vars at startup rather than letting the app blow up later when some variable is missing. Combine dotenv with a small check at the top of your entry file:
require('dotenv').config();
const required = ['DB_HOST', 'API_KEY', 'JWT_SECRET'];
const missing = required.filter(key => !process.env[key]);
if (missing.length > 0) {
console.error('Missing env vars:', missing.join(', '));
process.exit(1);
}
This has saved me more than once. You deploy to staging, realize someone forgot to set a variable in the dashboard, and the app tells you immediately instead of dying halfway through a request handler twenty minutes later.
2. zod -- Runtime Validation That Plays Nice With TypeScript
TypeScript catches type errors at build time. Zod catches them at runtime. You need both, because user input, API responses, and config files don't care about your type definitions.
const { z } = require('zod');
const UserSchema = z.object({
name: z.string().min(1).max(100),
email: z.string().email(),
age: z.number().int().positive().optional()
});
const result = UserSchema.safeParse(req.body);
if (!result.success) {
return res.status(400).json({ errors: result.error.flatten() });
}
We switched from Joi to Zod about a year ago. The TypeScript inference is so much better -- you define the schema once and get the type for free. z.infer<typeof UserSchema> and you're done.
Where Zod really shines is in the edge cases. Say you've got an API that accepts dates as strings. You can chain transforms right into the schema:
const EventSchema = z.object({
title: z.string(),
date: z.string().datetime().transform(str => new Date(str)),
attendees: z.array(z.string().email()).min(1)
});
// result.data.date is now a Date object, not a string
That transform step means your validated data is already in the shape you want. No separate mapping step, no conversion logic scattered around your controllers. The schema is both the gatekeeper and the transformer.
3. winston -- When console.log Isn't Enough
I love console.log for debugging. I hate it for production. No timestamps, no log levels, no way to send logs to a file or external service.
const winston = require('winston');
const logger = winston.createLogger({
level: 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
transports: [
new winston.transports.File({ filename: 'error.log', level: 'error' }),
new winston.transports.File({ filename: 'combined.log' })
]
});
Structured JSON logs are a lifesaver when you're searching through thousands of lines trying to find what happened at 3am. Winston handles that out of the box.
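With that configuration, each line in combined.log is a self-contained JSON object, roughly like this (field values are illustrative):

```json
{"level":"info","message":"User logged in","timestamp":"2026-02-23T03:12:45.118Z"}
```

One JSON object per line is exactly what log aggregators like Loki or CloudWatch expect, so there's no parsing step between your app and your search box.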
One pattern I've settled on: add a request-id to every log line. When a user reports a problem, you can filter your logs by that single ID and see the entire request lifecycle. Here's a middleware that does it:
const { nanoid } = require('nanoid');
app.use((req, res, next) => {
req.requestId = nanoid(12);
req.log = logger.child({ requestId: req.requestId });
next();
});
// Then in your route handlers:
app.get('/users/:id', (req, res) => {
req.log.info('Fetching user', { userId: req.params.id });
// ...
});
Every log from that request now includes the same requestId field. Debugging distributed issues goes from "dig through 50,000 lines" to "grep for one string." Winston's child logger method makes this painless.
4. helmet -- Security Headers in One Line
One line. That's it.
const helmet = require('helmet');
app.use(helmet());
Sets Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, and a bunch of other headers that protect against common attacks. I can't think of a reason not to use it. The effort-to-benefit ratio is absurd.
The default configuration is sensible for most apps, but you'll probably need to tweak Content-Security-Policy if your frontend loads scripts from CDNs or uses inline styles. Here's a more realistic setup:
app.use(helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
scriptSrc: ["'self'", "https://cdn.jsdelivr.net"],
styleSrc: ["'self'", "'unsafe-inline'"],
imgSrc: ["'self'", "data:", "https:"],
}
}
}));
I've seen teams skip helmet because "we're behind a reverse proxy." That proxy might set some headers, sure, but helmet costs you nothing and catches the gaps. Defense in layers.
5. day.js -- Dates Without the Bloat
Moment.js is in maintenance mode (its own docs call it a legacy project) and weighs close to 300KB with locales bundled. Day.js covers most of the same API at around 2KB.
const dayjs = require('dayjs');
const relativeTime = require('dayjs/plugin/relativeTime');
dayjs.extend(relativeTime);
dayjs('2026-02-08').fromNow(); // e.g. "15 days ago" (relative to today)
dayjs().format('MMMM D, YYYY'); // "February 23, 2026"
The plugin system means you only load what you need. I use relativeTime, utc, and timezone plugins. That covers probably 95% of date work.
6. nanoid -- Short IDs That Won't Collide
UUID v4 gives you something like 550e8400-e29b-41d4-a716-446655440000. That's fine for databases but ugly in URLs. Nanoid gives you shorter strings with the same collision resistance.
const { nanoid } = require('nanoid');
const id = nanoid(); // "V1StGXR8_Z5jdHi6B-myT"
const short = nanoid(10); // "IRFa-VaY2b"
URL-friendly by default. No special characters that need encoding. I use the 10-character version for things like invite codes and short links. One gotcha: nanoid v4 and later are ESM-only, so if your project uses CommonJS require, stay on v3 or load it with a dynamic import().
Worth mentioning: in our benchmarks, nanoid was also noticeably faster than crypto.randomUUID() for generating IDs at scale. We hit this on an event-tracking service that generates thousands of IDs per second; the difference was measurable. Nanoid also gives you control over the alphabet if you need numeric-only codes or case-insensitive IDs:
const { customAlphabet } = require('nanoid');
const numericId = customAlphabet('0123456789', 8);
numericId(); // "48293017" -- great for order numbers
const lowerId = customAlphabet('abcdefghijklmnopqrstuvwxyz0123456789', 12);
lowerId(); // "a8k3mx9qz2fb" -- case-insensitive safe
7. pino -- The Speed Freak's Logger
I mentioned Winston above. Pino is the alternative when you care about raw speed. In benchmarks it logs several times faster, largely because it does the bare minimum in-process and can push transport formatting and shipping into a worker thread.
const pino = require('pino');
const logger = pino({ level: 'info' });
logger.info({ userId: 123, action: 'login' }, 'User logged in');
Winston vs Pino is one of those debates that never ends. My take: Pino if you're logging a lot and latency matters. Winston if you want more built-in transports and don't mind the overhead. We use both across different services.
8. cors -- Because Browsers Are Picky
If your API and frontend live on different domains, the browser will block requests unless you set CORS headers. This package handles it:
const cors = require('cors');
app.use(cors({
origin: ['https://namastenode.com', 'http://localhost:3000'],
methods: ['GET', 'POST', 'PUT', 'DELETE']
}));
Be specific with your origins. I've seen cors({ origin: '*' }) in production code and it makes me uncomfortable every time.
For more complex setups -- say you've got multiple staging environments and you want to allow them all -- you can pass a function instead of an array:
app.use(cors({
origin: function (origin, callback) {
const allowed = [
'https://namastenode.com',
'https://staging.namastenode.com'
];
// Allow requests with no origin (like mobile apps or curl)
if (!origin || allowed.includes(origin)) {
callback(null, true);
} else {
callback(new Error('Not allowed by CORS'));
}
},
credentials: true
}));
That credentials: true part is important if you're sending cookies cross-origin. Without it, the browser silently drops them and you'll spend an hour wondering why auth isn't working. Ask me how I know.
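Since the origin check is just a predicate, it's easy to pull out and unit-test on its own (the function name here is mine, not part of the cors API):

```javascript
// Extracted allow-list check: true for allowed or absent origins.
function isAllowedOrigin(origin, allowed) {
  // No Origin header: curl, server-to-server calls, some mobile clients.
  if (!origin) return true;
  return allowed.includes(origin);
}

const allowed = ['https://namastenode.com', 'https://staging.namastenode.com'];
console.log(isAllowedOrigin(undefined, allowed));              // true
console.log(isAllowedOrigin('https://evil.example', allowed)); // false
```

The cors origin function then shrinks to calling this predicate and invoking the callback with the result, which keeps the policy testable without spinning up Express.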
9. compression -- Shrink Your Responses
Gzip your HTTP responses. Smaller payloads, faster transfers.
const compression = require('compression');
app.use(compression());
Two lines and your JSON responses get 60-80% smaller. If you're behind Nginx or a CDN that handles compression, you might not need this. But for direct Express-to-client setups, it's an easy win.
10. nodemailer -- Sending Email Without the Headache
Every app eventually needs to send email. Password resets, welcome messages, notifications. Nodemailer handles SMTP, and works with Gmail, SendGrid, Mailgun, whatever you use.
const nodemailer = require('nodemailer');
const transporter = nodemailer.createTransport({
host: 'smtp.gmail.com',
port: 587,
auth: { user: process.env.EMAIL, pass: process.env.EMAIL_PASS }
});
await transporter.sendMail({
from: '[email protected]',
to: '[email protected]',
subject: 'Welcome to NamasteNode',
  html: 'Welcome aboard!'
});
A hard-earned tip: never send email synchronously inside your request handler. A slow SMTP server will hold up the response. Push the email job to a queue (Bull, BullMQ, even a simple Redis list) and process it in the background. Your users get a fast response, and the email goes out within seconds anyway.
// In your route handler
await emailQueue.add('welcome-email', {
to: user.email,
template: 'welcome',
data: { name: user.name }
});
res.json({ message: 'Account created' });
// In your worker process
emailQueue.process('welcome-email', async (job) => {
const { to, template, data } = job.data;
const html = renderTemplate(template, data);
await transporter.sendMail({ from: '[email protected]', to, html });
});
This pattern also gives you automatic retries if the SMTP server is temporarily down. The queue worker will pick up the failed job and try again.
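The retry behavior itself is nothing magic. Stripped of the queue machinery, it reduces to a loop like this (a simplified, synchronous sketch; real queue workers do this asynchronously with backoff between attempts):

```javascript
// Minimal retry wrapper: re-run the handler until it succeeds or
// attempts run out, then rethrow the last error.
function processWithRetry(handler, job, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return handler(job);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Simulate an SMTP server that fails twice, then recovers.
let calls = 0;
const flakySend = () => {
  calls++;
  if (calls < 3) throw new Error('SMTP timeout');
  return 'sent';
};

console.log(processWithRetry(flakySend, {})); // "sent" on the third attempt
```

Bull layers scheduling, persistence, and exponential backoff on top of this idea, so a transient SMTP outage costs you a delay instead of a lost email.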
11. sharp -- Image Processing That's Actually Fast
We needed to resize user-uploaded avatars. Tried a couple of pure-JS image libraries. They were painfully slow on anything above 2MP. Sharp uses libvips under the hood and it's night-and-day faster.
const sharp = require('sharp');
await sharp('input.jpg')
.resize(800, 600)
.webp({ quality: 80 })
.toFile('output.webp');
WebP output with quality control in three lines. Our image upload pipeline went from 4 seconds to under 200ms after switching to Sharp.
12. PM2 -- Keep Your App Alive
Your Node process will crash eventually. PM2 restarts it automatically. It also gives you clustering (use all CPU cores), log management, and a monitoring dashboard.
npm install -g pm2
pm2 start server.js -i max --name "namastenode"
pm2 monit
The -i max flag forks one process per CPU core. On our 4-core production server, that means 4 Node processes behind a built-in load balancer. Throughput roughly quadrupled.
Something people miss about PM2: the ecosystem file. Instead of passing a bunch of CLI flags, you define everything in a config file and check it into your repo:
// ecosystem.config.js
module.exports = {
apps: [{
name: 'namastenode',
script: './server.js',
instances: 'max',
exec_mode: 'cluster',
env: {
NODE_ENV: 'production',
PORT: 3000
},
max_memory_restart: '500M',
error_file: './logs/err.log',
out_file: './logs/out.log',
log_date_format: 'YYYY-MM-DD HH:mm:ss'
}]
};
Then just pm2 start ecosystem.config.js and everything's consistent across deployments. No more "wait, what flags were we using again?" conversations in Slack.
Honorable Mentions
A few packages that didn't make the top 12 but deserve a quick shout-out:
- rate-limiter-flexible -- I've used this to throttle login attempts and API calls. More flexible than express-rate-limit when you need per-user or per-IP limits backed by Redis.
- cron -- For scheduled jobs. It handles cron syntax and doesn't fight you. I run nightly database cleanups and report generation with it.
- ms -- Converts human-readable time strings to milliseconds. ms('2 days') returns 172800000. Tiny utility, but it makes timeout and TTL values readable.
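What ms does is trivial to sketch yourself, which is a good way to see that the readability win is the whole point (a toy version handling just a few units, not the real library's full grammar):

```javascript
// Toy ms(): parses "<number> <unit>" strings into milliseconds.
const UNIT_MS = {
  second: 1000, seconds: 1000,
  minute: 60000, minutes: 60000,
  hour: 3600000, hours: 3600000,
  day: 86400000, days: 86400000,
};

function toMs(str) {
  const match = /^(\d+(?:\.\d+)?)\s*(\w+)$/.exec(str.trim());
  if (!match || !(match[2] in UNIT_MS)) throw new Error('Unparseable: ' + str);
  return Number(match[1]) * UNIT_MS[match[2]];
}

console.log(toMs('2 days'));     // 172800000
console.log(toMs('90 minutes')); // 5400000
```

Seeing '2 days' in a config file beats counting zeros in 172800000 every time.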
Things I Still Want to Figure Out
This list isn't final. There are packages I'm actively evaluating but haven't committed to yet:
- Whether Fastify should replace Express as our default (benchmarks look great, ecosystem is smaller)
- If tsx is stable enough to replace ts-node for TypeScript execution in dev
- How Drizzle ORM compares to Prisma for SQL databases -- Prisma's cold starts bug me
- Whether we should drop Winston and Pino entirely for OpenTelemetry's logging
- If Bun's built-in test runner is good enough to replace Jest and Vitest
The packages above aren't going to revolutionize your codebase. That's not the point. They're the boring infrastructure that keeps your app running, your data validated, your logs searchable, and your team sane. I'd rather have twelve boring dependencies I understand than one magical framework I don't.