PostgreSQL with Node.js: Getting Started with pg and Connection Pooling

I switched from MongoDB to PostgreSQL after years of document queries, and it stuck. This is the practical walkthrough covering the pg library setup, connection pooling, parameterized queries, error handling, transactions, and the pool config I actually run in production.

I was a MongoDB person for years before I touched PostgreSQL. The switch felt weird at first — writing SQL again after years of document queries. But it grew on me fast.

What got me was the reliability. With MongoDB, I'd sometimes get bitten by schema drift — a field that was a string in some documents and a number in others, queries silently returning nothing because of a typo in a field name. PostgreSQL just wouldn't let that happen. You define your schema, and the database holds you to it. After getting burned a few times, I started to appreciate that strictness.

The Node.js side of things turned out to be straightforward too. The pg library (node-postgres) is stable, well-maintained, and doesn't try to do too much. No ORM magic, no surprise query generation — you write SQL, you get results. Here's how I set things up and what I've learned along the way.

Setting Up the pg Library

Install pg, plus the type definitions if you're using TypeScript:

npm install pg
npm install --save-dev @types/pg  # For TypeScript users

The quickest way to test your connection is with a single client:

const { Client } = require('pg');

const client = new Client({
  connectionString: 'postgresql://user:password@localhost:5432/mydb'
});

await client.connect();
const res = await client.query('SELECT NOW()');
console.log(res.rows[0]);
await client.end();

That's it. If you see a timestamp come back, you're connected.

For anything beyond a quick test, you'll want to pass config options separately so you can pull from environment variables:

const client = new Client({
  host: process.env.DB_HOST || 'localhost',
  port: parseInt(process.env.DB_PORT, 10) || 5432,
  database: process.env.DB_NAME || 'mydb',
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD || 'secret',
  // rejectUnauthorized: false skips certificate verification; some managed
  // hosts require it, but prefer a proper CA certificate when you can
  ssl: process.env.NODE_ENV === 'production' ? { rejectUnauthorized: false } : false
});

One thing I want to stress: never hardcode credentials in your source code. I've seen it in production codebases more than I'd like. Use a .env file locally and inject variables through your deployment platform in production.

# .env file
DATABASE_URL=postgresql://postgres:secret@localhost:5432/myapp_dev
DB_POOL_MIN=2
DB_POOL_MAX=10
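
Here's a minimal sketch of consuming those variables with fallbacks. parseInt on an unset variable returns NaN, which is falsy, so the || defaults kick in (a loader like dotenv can populate process.env during local development):

```javascript
// Pull pool settings from the environment with sane fallbacks.
// An unset variable parses to NaN, which is falsy, so || applies the default.
const dbConfig = {
  connectionString: process.env.DATABASE_URL,
  min: parseInt(process.env.DB_POOL_MIN, 10) || 2,
  max: parseInt(process.env.DB_POOL_MAX, 10) || 10
};
```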

Connection Pooling (and Why You Need It)

When I first started, I was creating a new connection for every request. My app worked fine in development with 5 users. Then it hit staging with actual traffic, and database connections started timing out.

The problem is that each new connection is expensive. There's a TCP handshake, SSL negotiation if you've got that on, and PostgreSQL's own authentication step. Do that for every query on a server handling hundreds of requests per second, and you've got a bottleneck.

Connection pooling fixes this. You create a pool of connections upfront, and your code borrows one when it needs to run a query, then gives it back. Same connections, reused over and over.

const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                // Maximum connections in the pool
  idleTimeoutMillis: 30000, // Close idle connections after 30 seconds
  connectionTimeoutMillis: 2000 // Fail if a connection isn't available in 2s
});

// Listen for errors on idle clients — don't let these crash your app silently
pool.on('error', (err, client) => {
  console.error('Unexpected error on idle client', err);
  process.exit(-1);
});

pool.on('connect', (client) => {
  console.log('New client connected to the pool');
});

For most queries, you can call pool.query() directly and the pool handles checkout and release for you:

async function getUsers() {
  const result = await pool.query('SELECT id, name, email FROM users');
  return result.rows;
}

If you need to run multiple queries on the same connection (for transactions, which I'll get to), check out a client manually:

async function performMultipleQueries() {
  const client = await pool.connect();
  try {
    const users = await client.query('SELECT * FROM users');
    const posts = await client.query('SELECT * FROM posts');
    return { users: users.rows, posts: posts.rows };
  } finally {
    client.release(); // Always release back to the pool
  }
}

A word on pool sizing: I've seen people set max to 100 thinking more connections equals better performance. It doesn't. PostgreSQL has overhead per connection, and past a certain point you're just adding contention. A good starting point is (CPU cores * 2) + number of disks. For most cloud setups, 10-20 per app instance works well. If you're running multiple instances, remember that the total across all of them can't exceed PostgreSQL's max_connections.

For really high-concurrency situations, look into PgBouncer. It sits between your app and Postgres, managing thousands of incoming connections with just a few real database connections. It saved me once on a project where we had 12 app instances all hitting the same database.
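
A minimal pgbouncer.ini sketch for that setup; the database name and addresses here are placeholders, and note that older PgBouncer versions don't support named prepared statements in transaction pooling mode:

```ini
[databases]
; Clients connect to PgBouncer on 6432; it fans them into a small
; pool of real Postgres connections.
myapp = host=127.0.0.1 port=5432 dbname=myapp_prod

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000
```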

Something I now set up early on every project is pool monitoring. The pool object exposes some useful properties: pool.totalCount, pool.idleCount, and pool.waitingCount. I log these on a 30-second interval in production. When waitingCount starts climbing, your pool is saturated — queries are queuing up waiting for a free connection. That's your signal to either increase the pool size or investigate why connections are being held too long.

setInterval(() => {
  console.log({
    total: pool.totalCount,
    idle: pool.idleCount,
    waiting: pool.waitingCount
  });
}, 30000);

I have caught connection leaks this way — a missing client.release() in an error path that slowly drained the pool until everything ground to a halt. The monitoring showed totalCount at max and idleCount at zero, which pointed straight to the problem.
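
If you want to catch leaks closer to the source, one option is a small checkout wrapper that complains when a client is held too long. This isn't part of pg's API, just a sketch around pool.connect():

```javascript
// Wrap pool.connect() so a client held longer than maxHoldMs logs a warning:
// the usual symptom of a client.release() missing from some error path.
function trackedConnect(pool, maxHoldMs = 5000) {
  return pool.connect().then((client) => {
    const timer = setTimeout(() => {
      console.warn(`Client held longer than ${maxHoldMs}ms: possible leak`);
    }, maxHoldMs);
    const originalRelease = client.release.bind(client);
    client.release = (...args) => {
      clearTimeout(timer); // released in time, cancel the warning
      return originalRelease(...args);
    };
    return client;
  });
}
```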

Parameterized Queries and Avoiding SQL Injection

If you've worked with any database from Node.js, you've probably heard the SQL injection lecture. I'll keep it brief, but the pattern is worth showing because I still see string concatenation in code reviews.

// DANGEROUS: Never do this!
const unsafeQuery = `SELECT * FROM users WHERE email = '${userInput}'`;
// If userInput is: ' OR '1'='1' --
// The query becomes: SELECT * FROM users WHERE email = '' OR '1'='1' --'

The pg library makes the safe version just as easy to write:

// SAFE: Use parameterized queries
const result = await pool.query(
  'SELECT * FROM users WHERE email = $1',
  [userInput]
);

// Multiple parameters use $1, $2, $3...
const result2 = await pool.query(
  'INSERT INTO users (name, email, role) VALUES ($1, $2, $3) RETURNING *',
  [name, email, role]
);

// Filtering with multiple conditions
const result3 = await pool.query(
  'SELECT * FROM products WHERE price BETWEEN $1 AND $2 AND category = $3 ORDER BY price',
  [minPrice, maxPrice, category]
);

Beyond safety, parameterized queries are faster too. PostgreSQL can cache execution plans for them, so repeated queries with different values skip the planning step. On a high-traffic endpoint, that adds up.
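
node-postgres exposes this through named queries: pass a name alongside the text and the driver prepares the statement once per connection, reusing the plan on subsequent calls. The query and table below are illustrative:

```javascript
// A named query: pg prepares it once per connection, and later calls
// with the same name skip the parse and plan steps.
const findUserByEmail = {
  name: 'find-user-by-email',
  text: 'SELECT id, name, email FROM users WHERE email = $1'
};

async function getUserByEmail(pool, email) {
  const result = await pool.query({ ...findUserByEmail, values: [email] });
  return result.rows[0];
}
```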

The trickier case is dynamic queries — search endpoints where filters are optional. Here's the pattern I use:

async function searchUsers(filters) {
  const conditions = [];
  const values = [];
  let paramIndex = 1;

  if (filters.name) {
    conditions.push(`name ILIKE $${paramIndex++}`);
    values.push(`%${filters.name}%`);
  }
  if (filters.role) {
    conditions.push(`role = $${paramIndex++}`);
    values.push(filters.role);
  }
  if (filters.minAge) {
    conditions.push(`age >= $${paramIndex++}`);
    values.push(filters.minAge);
  }

  const whereClause = conditions.length > 0
    ? 'WHERE ' + conditions.join(' AND ')
    : '';

  const query = `SELECT * FROM users ${whereClause} ORDER BY created_at DESC`;
  const result = await pool.query(query, values);
  return result.rows;
}

It's a bit verbose, but it keeps every value parameterized regardless of which filters are active. I've tried query builders like Knex for this, and they're fine too — but for straightforward cases, this manual approach is easy to follow and has no extra dependencies.

Error Handling Worth Getting Right

The pg library throws errors with useful properties that most tutorials skip over. Every error from PostgreSQL includes a code field — a five-character string defined in the Postgres docs. The ones I check for most often: 23505 is a unique constraint violation (someone tried to insert a duplicate email), 23503 is a foreign key violation (referencing a row that does not exist), and 42P01 means the table does not exist.

try {
  await pool.query(
    'INSERT INTO users (email, name) VALUES ($1, $2)',
    [email, name]
  );
} catch (err) {
  if (err.code === '23505') {
    // Unique violation - this email already exists
    return { error: 'Email already registered' };
  }
  throw err; // Re-throw anything unexpected
}

Handling these explicitly instead of letting them bubble up as generic 500 errors makes a real difference in user experience. Your API can return a clear email-already-taken message instead of internal server error.
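
One way to keep that logic in one place is a small translator from Postgres error codes to responses; the mapping below is just an example of the shape:

```javascript
// Map the PostgreSQL error codes we handle explicitly to user-facing
// responses; anything unmapped falls through as a generic 500.
const PG_ERROR_RESPONSES = {
  '23505': { status: 409, message: 'Resource already exists' },
  '23503': { status: 400, message: 'Referenced resource does not exist' },
  '23502': { status: 400, message: 'Missing required field' }
};

function toHttpError(err) {
  return PG_ERROR_RESPONSES[err.code]
    || { status: 500, message: 'Internal server error' };
}
```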

Transactions

Transactions are where I really started to appreciate PostgreSQL coming from MongoDB. The classic example is transferring money: debit one account, credit another. If the credit fails, you need to undo the debit. MongoDB added multi-document transactions eventually, but in Postgres, they just work and they've worked for decades.

async function transferFunds(fromAccountId, toAccountId, amount) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    const debitResult = await client.query(
      'UPDATE accounts SET balance = balance - $1 WHERE id = $2 AND balance >= $1 RETURNING balance',
      [amount, fromAccountId]
    );

    if (debitResult.rowCount === 0) {
      throw new Error('Insufficient funds or account not found');
    }

    const creditResult = await client.query(
      'UPDATE accounts SET balance = balance + $1 WHERE id = $2 RETURNING balance',
      [amount, toAccountId]
    );

    if (creditResult.rowCount === 0) {
      throw new Error('Destination account not found');
    }

    await client.query(
      'INSERT INTO transfers (from_account, to_account, amount) VALUES ($1, $2, $3)',
      [fromAccountId, toAccountId, amount]
    );

    await client.query('COMMIT');
    return { success: true, newBalance: debitResult.rows[0].balance };
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

The pattern is always the same: BEGIN, do your work, COMMIT if everything's fine, ROLLBACK if anything throws. The finally block releases the client back to the pool no matter what.

I got tired of writing that BEGIN/COMMIT/ROLLBACK boilerplate, so I made a helper:

async function withTransaction(pool, callback) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await callback(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

// Usage
const result = await withTransaction(pool, async (client) => {
  const order = await client.query(
    'INSERT INTO orders (user_id, total) VALUES ($1, $2) RETURNING id',
    [userId, total]
  );
  await client.query('UPDATE inventory SET stock = stock - $1 WHERE product_id = $2', [qty, productId]);
  return { orderId: order.rows[0].id };
});

This is one of those small abstractions that pays for itself immediately. Every transaction in the codebase follows the same structure, and you never forget a ROLLBACK.

The Production Config I Actually Use

Here's the database module I drop into most of my Node.js projects. Nothing fancy — just the patterns above wired together with some logging for slow queries and a clean shutdown hook.

// db.js
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: parseInt(process.env.DB_POOL_MAX, 10) || 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
  statement_timeout: 10000
});

pool.on('error', (err) => {
  console.error('Unexpected pool error:', err);
});

async function query(text, params) {
  const start = Date.now();
  const result = await pool.query(text, params);
  const duration = Date.now() - start;
  if (duration > 1000) {
    console.warn('Slow query detected:', { text, duration, rows: result.rowCount });
  }
  return result;
}

async function getClient() {
  return pool.connect();
}

async function withTransaction(callback) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await callback(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

async function shutdown() {
  await pool.end();
}

module.exports = { query, getClient, withTransaction, shutdown, pool };
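
The shutdown function only helps if something calls it. In the server entry point I wire it to process signals; db here is the module above, and server stands in for whatever HTTP server you run (hypothetical):

```javascript
// Hook pool shutdown into process signals so in-flight work finishes
// and connections close cleanly before the process exits.
// `db` is the database module exporting shutdown(); `server` is your
// HTTP server (hypothetical; pass null if you don't have one).
function registerShutdown(db, server) {
  const shutdown = async (signal) => {
    console.log(`${signal} received, draining pool...`);
    if (server) server.close();  // stop accepting new requests
    await db.shutdown();         // pool.end() waits for checked-out clients
    process.exit(0);
  };
  process.on('SIGTERM', () => shutdown('SIGTERM'));
  process.on('SIGINT', () => shutdown('SIGINT'));
}
```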

Written by Anurag Kumar

Full-stack developer passionate about Node.js and building fast, scalable web applications. Writing about what I learn every day.
