File uploads broke our app twice before I figured out Multer properly. The first time, we were saving everything to disk with random filenames, ran out of space on a tiny EC2 instance over a weekend, and nobody noticed until Monday. The second time, someone uploaded a 200MB video through an endpoint that was only supposed to accept profile pictures. Both times, the fix took twenty minutes. Knowing what to fix took much longer. So here's what I wish I'd read before any of that happened.
Installation and Basic Configuration
Multer is middleware for handling multipart/form-data, which is the encoding type browsers use when a form includes file inputs. Express doesn't parse this on its own -- it just ignores it. Multer hooks into the request, pulls out the file data, and attaches it to req.file or req.files so you can actually work with it.
Install it:
npm install multer

The minimum viable setup looks like this:
const express = require('express');
const multer = require('multer');
const path = require('path');
const app = express();
// Basic configuration with destination folder
const upload = multer({ dest: 'uploads/' });
app.post('/upload', upload.single('avatar'), (req, res) => {
console.log(req.file);
// {
// fieldname: 'avatar',
// originalname: 'profile.jpg',
// encoding: '7bit',
// mimetype: 'image/jpeg',
// destination: 'uploads/',
// filename: 'a1b2c3d4e5f6...',
// path: 'uploads/a1b2c3d4e5f6...',
// size: 234567
// }
res.json({ message: 'File uploaded successfully', file: req.file });
});
app.listen(3000, () => console.log('Server running on port 3000'));

This works, barely. Files go into uploads/ with auto-generated names and no extension. You can't tell what's a JPEG and what's a PDF just by looking at the directory. It's fine for prototyping. It's not fine for anything with users.
Storage Engines: Disk and Memory
Multer has two built-in storage engines. DiskStorage writes files to the filesystem and gives you control over the folder and filename. MemoryStorage holds the file in a Buffer in RAM and never touches disk. Picking the wrong one was my first real mistake with Multer -- I used MemoryStorage on an endpoint that accepted large PDFs, and the Node process kept crashing under load because it was holding dozens of 30MB buffers simultaneously.
DiskStorage is what you want for most cases:
const storage = multer.diskStorage({
destination: function (req, file, cb) {
// Dynamic destination based on file type
let uploadPath = 'uploads/';
if (file.mimetype.startsWith('image/')) {
uploadPath = 'uploads/images/';
} else if (file.mimetype === 'application/pdf') {
uploadPath = 'uploads/documents/';
}
cb(null, uploadPath);
},
filename: function (req, file, cb) {
// Generate unique filename with original extension
const uniqueSuffix = Date.now() + '-' + Math.round(Math.random() * 1E9);
const ext = path.extname(file.originalname);
cb(null, file.fieldname + '-' + uniqueSuffix + ext);
}
});
const upload = multer({ storage: storage });

The destination callback runs per file, so you can sort images and documents into different folders on the fly. The filename callback gives you a place to generate something unique. I use a timestamp plus random digits -- collisions are basically impossible.
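To put "basically impossible" in rough numbers (my own back-of-envelope, not anything Multer guarantees): two files collide only if they arrive in the same millisecond and draw the same random suffix.

```javascript
// Rough collision estimate for the timestamp-plus-random scheme above.
// Math.round(Math.random() * 1E9) yields an integer in 0..1e9 inclusive.
const suffixSpace = 1e9 + 1;

// Birthday-style approximation: with k uploads landing in the SAME
// millisecond, there are k*(k-1)/2 pairs that could collide.
function collisionProbability(uploadsPerMs) {
  const pairs = (uploadsPerMs * (uploadsPerMs - 1)) / 2;
  return pairs / suffixSpace; // valid while the result is << 1
}

console.log(collisionProbability(2));   // ~1e-9 for two simultaneous uploads
console.log(collisionProbability(100)); // still ~5e-6 at 100 uploads per ms
```

Even at absurd traffic levels, the odds stay negligible -- which is why timestamp plus random digits is usually good enough without reaching for UUIDs.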
MemoryStorage makes sense when you need to process the file before storing it (like resizing an image) or when you're streaming straight to cloud storage and don't want a local copy:
const memoryStorage = multer.memoryStorage();
const uploadToMemory = multer({ storage: memoryStorage });
app.post('/process-upload', uploadToMemory.single('image'), (req, res) => {
// req.file.buffer contains the file data as a Buffer
console.log('File size in memory:', req.file.buffer.length);
// Process the buffer (resize, upload to S3, etc.)
res.json({ message: 'File processed', size: req.file.size });
});

Just be careful with it. If you accept files over a few megabytes and you might get concurrent uploads, MemoryStorage can eat through your available RAM fast. I learned this the hard way.
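A quick sanity check before choosing MemoryStorage (my own arithmetic, not a Multer feature): the worst case is your fileSize limit multiplied by however many uploads can be in flight at once.

```javascript
// Worst-case RAM estimate for MemoryStorage: every concurrent upload can
// hold a buffer up to the configured fileSize limit.
function worstCaseUploadMemoryMB(fileSizeLimitBytes, concurrentUploads) {
  return (fileSizeLimitBytes * concurrentUploads) / (1024 * 1024);
}

// 30 MB PDFs with 50 concurrent uploads -> 1500 MB of buffers, which is
// roughly how my Node process fell over.
console.log(worstCaseUploadMemoryMB(30 * 1024 * 1024, 50)); // 1500
```

If that number is anywhere near the RAM your process actually has, use DiskStorage instead.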
Single and Multiple File Uploads
Multer gives you four methods depending on how many files you expect and from which fields.
upload.single(fieldname) is the simplest. One file, one field. Result lands on req.file:
// Single file upload
app.post('/profile-picture', upload.single('avatar'), (req, res) => {
if (!req.file) {
return res.status(400).json({ error: 'No file uploaded' });
}
res.json({
message: 'Profile picture uploaded',
filename: req.file.filename,
size: req.file.size
});
});

upload.array(fieldname, maxCount) takes multiple files from one field. Results on req.files as an array:
// Multiple files from the same field (max 5)
app.post('/gallery', upload.array('photos', 5), (req, res) => {
if (!req.files || req.files.length === 0) {
return res.status(400).json({ error: 'No files uploaded' });
}
const fileDetails = req.files.map(f => ({
originalName: f.originalname,
size: f.size,
path: f.path
}));
res.json({
message: `${req.files.length} files uploaded`,
files: fileDetails
});
});

upload.fields(fields) handles multiple named fields, which is what you want for a form with separate inputs for avatar, cover photo, and gallery. Results on req.files as an object keyed by field name:
// Multiple fields with different file types
const cpUpload = upload.fields([
{ name: 'avatar', maxCount: 1 },
{ name: 'coverPhoto', maxCount: 1 },
{ name: 'gallery', maxCount: 8 }
]);
app.post('/user-media', cpUpload, (req, res) => {
const avatar = req.files['avatar'] ? req.files['avatar'][0] : null;
const cover = req.files['coverPhoto'] ? req.files['coverPhoto'][0] : null;
const gallery = req.files['gallery'] || [];
res.json({
avatar: avatar?.filename,
coverPhoto: cover?.filename,
galleryCount: gallery.length
});
});And upload.none() is for multipart forms that only have text fields. No files expected, and Multer will reject any that show up.
File Filtering and Size Limits
This is where the second outage came from. We had no file filter and no size limit. Someone uploaded a huge video, Multer happily accepted it, and our disk filled up. The fix was embarrassingly simple.
The fileFilter function runs for every incoming file. Call the callback with cb(null, true) to accept the file, cb(null, false) to silently skip it, or pass an error as the first argument to reject the whole request:
const imageFilter = function (req, file, cb) {
// Accept only image files
const allowedTypes = ['image/jpeg', 'image/png', 'image/gif', 'image/webp'];
if (allowedTypes.includes(file.mimetype)) {
cb(null, true);
} else {
cb(new Error('Only JPEG, PNG, GIF, and WebP images are allowed'), false);
}
};
const uploadImages = multer({
storage: storage,
fileFilter: imageFilter,
limits: {
fileSize: 5 * 1024 * 1024, // 5 MB max file size
files: 10, // Max 10 files per request
fieldNameSize: 100, // Max field name length
fieldSize: 1024 * 1024 // Max non-file field size (1MB)
}
});

One thing to know: MIME types can be spoofed. A user can rename malware.exe to malware.jpg and the MIME type might still say image/jpeg. Check the extension too:
const strictImageFilter = function (req, file, cb) {
const allowedMimes = ['image/jpeg', 'image/png', 'image/webp'];
const allowedExts = ['.jpg', '.jpeg', '.png', '.webp'];
const ext = path.extname(file.originalname).toLowerCase();
if (allowedMimes.includes(file.mimetype) && allowedExts.includes(ext)) {
cb(null, true);
} else {
cb(new Error(
`Invalid file type. Received ${file.mimetype} with extension ${ext}. ` +
`Allowed: JPEG, PNG, WebP.`
), false);
}
};

Is this bulletproof? No. Someone determined enough can still get past MIME + extension checks. But it stops the casual stuff and that's 99% of what you're dealing with. For true file content validation, you'd need to read the file's magic bytes, which is a different rabbit hole entirely.
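If you do want to peek down that rabbit hole, the idea is to read the first few bytes of the file and compare them against known format signatures. Here's a minimal hand-rolled sketch -- the signature table is mine and deliberately only covers the three formats accepted above; a library that does this properly is the better choice in production:

```javascript
// Minimal magic-byte check on a Buffer (e.g. req.file.buffer from
// MemoryStorage). Not exhaustive -- just the formats we accept.
const SIGNATURES = [
  { format: 'jpeg', bytes: [0xff, 0xd8, 0xff] },
  { format: 'png',  bytes: [0x89, 0x50, 0x4e, 0x47] },
  // WebP: 'RIFF' at offset 0, then 'WEBP' at offset 8
  { format: 'webp', bytes: [0x52, 0x49, 0x46, 0x46],
    also: { offset: 8, bytes: [0x57, 0x45, 0x42, 0x50] } }
];

function sniffImageFormat(buffer) {
  for (const sig of SIGNATURES) {
    const headMatches = sig.bytes.every((b, i) => buffer[i] === b);
    const extraMatches = !sig.also ||
      sig.also.bytes.every((b, i) => buffer[sig.also.offset + i] === b);
    if (headMatches && extraMatches) return sig.format;
  }
  return null; // unknown or unsupported format
}

// Every real PNG starts with these eight bytes:
const pngHeader = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
console.log(sniffImageFormat(pngHeader)); // 'png'
```

Run this against req.file.buffer (with MemoryStorage) before you trust the file, and reject anything that comes back null or disagrees with the claimed MIME type.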
Image Resizing with Sharp
Users upload 4000x3000 photos from their phone when your UI only ever shows them at 600px wide. You're storing (and serving) five times the data you need. Sharp fixes this, and it's fast -- it uses libvips under the hood, which is significantly quicker than ImageMagick.
npm install sharp

Here's a middleware that takes an uploaded image and spits out three sizes. I convert everything to WebP because it's typically 25-35% smaller than equivalent JPEG:
const sharp = require('sharp');
const fs = require('fs').promises;
const uploadForResize = multer({
storage: multer.memoryStorage(),
fileFilter: imageFilter,
limits: { fileSize: 10 * 1024 * 1024 }
});
const resizeImages = async (req, res, next) => {
if (!req.file) return next();
const filename = `${Date.now()}-${Math.round(Math.random() * 1E9)}`;
const ext = '.webp'; // Convert all images to WebP for better compression
req.file.resized = {};
try {
// Create thumbnail (150x150, cropped)
const thumbPath = `uploads/thumbnails/${filename}${ext}`;
await sharp(req.file.buffer)
.resize(150, 150, { fit: 'cover', position: 'center' })
.webp({ quality: 80 })
.toFile(thumbPath);
req.file.resized.thumbnail = thumbPath;
// Create medium size (600px wide, maintain aspect ratio)
const mediumPath = `uploads/medium/${filename}${ext}`;
await sharp(req.file.buffer)
.resize(600, null, { fit: 'inside', withoutEnlargement: true })
.webp({ quality: 85 })
.toFile(mediumPath);
req.file.resized.medium = mediumPath;
// Create large size (1200px wide, maintain aspect ratio)
const largePath = `uploads/large/${filename}${ext}`;
await sharp(req.file.buffer)
.resize(1200, null, { fit: 'inside', withoutEnlargement: true })
.webp({ quality: 90 })
.toFile(largePath);
req.file.resized.large = largePath;
// Get metadata for original image
const metadata = await sharp(req.file.buffer).metadata();
req.file.metadata = {
width: metadata.width,
height: metadata.height,
format: metadata.format
};
next();
} catch (err) {
next(err);
}
};
app.post('/upload-image',
uploadForResize.single('image'),
resizeImages,
(req, res) => {
res.json({
message: 'Image uploaded and resized',
original: req.file.metadata,
versions: req.file.resized
});
}
);

Notice withoutEnlargement: true -- that prevents Sharp from blowing up a 400px image to 1200px, which would just add file size and blur. MemoryStorage is the right call here because we need the buffer to feed into Sharp. The tradeoff is RAM usage, but since we're capping uploads at 10MB, it stays manageable.
Upload Progress Tracking
For files over a megabyte or two, users want to see progress. Multer doesn't emit progress events itself, but you can track bytes on the raw request stream. Honestly though, the easier approach is to do it client-side with XMLHttpRequest's upload progress event, since the browser already knows how many bytes it has sent:
// Client-side JavaScript
function uploadWithProgress(file, onProgress) {
return new Promise((resolve, reject) => {
const formData = new FormData();
formData.append('file', file);
const xhr = new XMLHttpRequest();
xhr.upload.addEventListener('progress', (event) => {
if (event.lengthComputable) {
const percent = Math.round((event.loaded / event.total) * 100);
onProgress(percent);
}
});
xhr.addEventListener('load', () => {
if (xhr.status >= 200 && xhr.status < 300) {
resolve(JSON.parse(xhr.responseText));
} else {
reject(new Error(`Upload failed with status ${xhr.status}`));
}
});
xhr.addEventListener('error', () => reject(new Error('Upload failed')));
xhr.open('POST', '/upload-with-progress');
xhr.send(formData);
});
}
// Usage
const fileInput = document.getElementById('fileInput');
const progressBar = document.getElementById('progressBar');
fileInput.addEventListener('change', async (e) => {
const file = e.target.files[0];
try {
const result = await uploadWithProgress(file, (percent) => {
progressBar.style.width = `${percent}%`;
progressBar.textContent = `${percent}%`;
});
console.log('Upload complete:', result);
} catch (err) {
console.error('Upload error:', err);
}
});

No server-side changes needed for this. The browser tracks upload progress locally and fires events on xhr.upload. Works everywhere.
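That said, if you do want server-side numbers -- say, to log uploads that crawl along for minutes -- the raw-stream approach mentioned earlier can be sketched as a small middleware. This is my own sketch, not a Multer feature: it counts bytes as chunks arrive and compares against the Content-Length header. Attach it before the Multer middleware, and treat the percentages as approximate (Content-Length covers the whole multipart body, not just the file).

```javascript
// Express-style middleware: counts raw request bytes against Content-Length.
// onProgress receives an integer percentage for each arriving chunk.
function trackUploadProgress(onProgress) {
  return function (req, res, next) {
    const total = parseInt(req.headers['content-length'], 10) || 0;
    let received = 0;
    req.on('data', (chunk) => {
      received += chunk.length;
      if (total > 0) {
        onProgress(Math.round((received / total) * 100), req);
      }
    });
    next(); // Multer attaches in the same tick, before any data events fire
  };
}

// Usage (route path is mine):
// app.post('/upload-with-progress',
//   trackUploadProgress((percent) => console.log(`upload: ${percent}%`)),
//   upload.single('file'),
//   (req, res) => res.json({ done: true })
// );
```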
Error Handling
Multer errors come with a code property that tells you exactly what went wrong. Catch them in one place and return something useful to the client:
const multer = require('multer');
// Centralized error handling middleware for uploads
function handleUploadError(err, req, res, next) {
if (err instanceof multer.MulterError) {
const errorMessages = {
LIMIT_FILE_SIZE: 'File is too large. Maximum size is 5MB.',
LIMIT_FILE_COUNT: 'Too many files. Maximum is 10 files.',
LIMIT_UNEXPECTED_FILE: 'Unexpected file field name.',
LIMIT_FIELD_KEY: 'Field name is too long.',
LIMIT_FIELD_VALUE: 'Field value is too long.',
LIMIT_FIELD_COUNT: 'Too many fields.',
LIMIT_PART_COUNT: 'Too many parts in the multipart request.'
};
const message = errorMessages[err.code] || 'File upload error.';
return res.status(400).json({
error: 'UPLOAD_ERROR',
code: err.code,
message: message,
field: err.field
});
}
if (err.message && (err.message.includes('Only') || err.message.includes('Invalid file type'))) {
// Custom file filter errors
return res.status(415).json({
error: 'INVALID_FILE_TYPE',
message: err.message
});
}
// Unknown error
console.error('Unexpected upload error:', err);
return res.status(500).json({
error: 'INTERNAL_ERROR',
message: 'An unexpected error occurred during upload.'
});
}
// Usage with a route
app.post('/secure-upload',
(req, res, next) => {
const singleUpload = uploadImages.single('file');
singleUpload(req, res, (err) => {
if (err) return handleUploadError(err, req, res, next);
next();
});
},
(req, res) => {
if (!req.file) {
return res.status(400).json({ error: 'No file provided.' });
}
res.json({ message: 'Upload successful', file: req.file.filename });
}
);
// Apply as global error handler
app.use(handleUploadError);

One more thing: create your upload directories at startup. If the folder doesn't exist when Multer tries to write, it throws. I've seen this happen after a fresh deploy to a new server where nobody ran the setup script:
const fs = require('fs');
const uploadDirs = [
'uploads',
'uploads/images',
'uploads/documents',
'uploads/thumbnails',
'uploads/medium',
'uploads/large'
];
uploadDirs.forEach(dir => {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
console.log(`Created directory: ${dir}`);
}
});

And if your upload succeeds but the database save fails, clean up the orphan file. Otherwise you accumulate dead files forever:
app.post('/upload-and-save',
upload.single('document'),
async (req, res, next) => {
try {
// Try to save file metadata to database
await db.documents.create({
filename: req.file.filename,
originalName: req.file.originalname,
size: req.file.size,
userId: req.user.id
});
res.json({ message: 'Document uploaded and saved' });
} catch (dbError) {
// Database save failed, clean up the uploaded file
try {
await fs.promises.unlink(req.file.path);
console.log('Cleaned up orphaned file:', req.file.path);
} catch (unlinkError) {
console.error('Failed to clean up file:', unlinkError);
}
next(dbError);
}
}
);

Cloud Storage with Amazon S3
Once you're running more than one server -- or using any kind of ephemeral hosting -- local disk storage stops making sense. Files saved on server A aren't visible from server B. The standard answer is S3, and multer-s3 makes the integration pretty painless:
npm install @aws-sdk/client-s3 multer-s3

const { S3Client } = require('@aws-sdk/client-s3');
const multerS3 = require('multer-s3');
const s3 = new S3Client({
region: process.env.AWS_REGION,
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
}
});
const s3Storage = multerS3({
s3: s3,
bucket: process.env.S3_BUCKET_NAME,
acl: 'public-read',
contentType: multerS3.AUTO_CONTENT_TYPE,
metadata: function (req, file, cb) {
cb(null, {
fieldName: file.fieldname,
uploadedBy: req.user?.id || 'anonymous'
});
},
key: function (req, file, cb) {
const folder = file.mimetype.startsWith('image/') ? 'images' : 'documents';
const uniqueName = `${folder}/${Date.now()}-${file.originalname}`;
cb(null, uniqueName);
}
});
const uploadToS3 = multer({
storage: s3Storage,
fileFilter: imageFilter,
limits: { fileSize: 10 * 1024 * 1024 } // 10 MB
});
app.post('/upload-s3', uploadToS3.single('image'), (req, res) => {
res.json({
message: 'File uploaded to S3',
url: req.file.location, // Public URL of the uploaded file
key: req.file.key, // S3 object key
bucket: req.file.bucket,
size: req.file.size
});
});