I built a CLI tool last weekend because I was tired of doing the same git commands over and over. Turns out, Node makes this pretty straightforward.
I'd been manually running the same sequence of git add, commit, push, branch, merge for every feature I worked on. Copy-pasting the same boilerplate into new projects. Running four commands when one would do. So I figured, why not just make a thing that does it for me? Node.js felt like the obvious choice since I already think in JavaScript, and npm makes it dead simple to share your tool with the world.
Getting the Bones in Place
The whole thing starts with a folder and a package.json. Nothing fancy.
mkdir my-cli-tool && cd my-cli-tool
npm init -y
Convention says you put your executable in a bin directory, so I went with that:
mkdir bin
touch bin/index.js
The first line of your entry file needs a shebang. This tells your OS to run it with Node:
#!/usr/bin/env node
console.log('Hello from my CLI tool!');
console.log('Arguments:', process.argv);
process.argv is the raw way Node gives you command-line arguments. It's an array: the first element is the path to the Node binary, the second is the path to your script, and everything after that is whatever the user typed. So if someone runs my-tool build --output dist, you'd get ['/usr/bin/node', '/path/to/bin/index.js', 'build', '--output', 'dist'].
To actually make this runnable as a command, you need to tell npm about it in your package.json:
{
"name": "my-cli-tool",
"version": "1.0.0",
"bin": {
"my-tool": "./bin/index.js"
}
}
Run npm link in your project directory and boom, you can type my-tool from anywhere. Try my-tool hello world and you'll see the arguments printed out.
Now, parsing process.argv by hand works for trivial stuff. But the moment you want subcommands, flags, defaults, and validation? You'll want a library.
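For reference, here's roughly what hand-rolling looks like. This is a sketch, not production code: it assumes flags come as --key value or bare --key booleans, and treats everything else as a positional argument.

```javascript
// A bare-bones flag parser over process.argv -- fine for one or two
// options, painful beyond that.
function parseArgs(argv) {
const args = { _: [] }; // positionals collect under "_"
for (let i = 0; i < argv.length; i++) {
const token = argv[i];
if (token.startsWith('--')) {
const key = token.slice(2);
const next = argv[i + 1];
if (next !== undefined && !next.startsWith('--')) {
args[key] = next;
i++; // consume the value
} else {
args[key] = true; // bare flag -> boolean
}
} else {
args._.push(token);
}
}
return args;
}

// process.argv.slice(2) drops the node binary and script paths
const parsed = parseArgs(['build', '--output', 'dist', '--verbose']);
// parsed -> { _: ['build'], output: 'dist', verbose: true }
```

It works, but notice there's no validation, no aliases, no help text, and negative numbers or `--` separators would confuse it. That's the slope that leads to a library.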
Parsing Arguments Without Losing Your Mind
There are two big players here: Commander and Yargs. Commander is fluent and chainable. Yargs is more config-driven. I've used both and honestly, pick whichever feels right. They both work great.
Here's Commander:
npm install commander
#!/usr/bin/env node
const { program } = require('commander');
program
.name('my-tool')
.description('A scaffolding tool for web projects')
.version('1.0.0');
program
.command('create <project-name>')
.description('Create a new project')
.option('-t, --template <name>', 'Project template', 'default')
.option('--typescript', 'Use TypeScript', false)
.option('-p, --port <number>', 'Dev server port', '3000')
.action((projectName, options) => {
console.log(`Creating project: ${projectName}`);
console.log(`Template: ${options.template}`);
console.log(`TypeScript: ${options.typescript}`);
console.log(`Port: ${options.port}`);
});
program
.command('serve')
.description('Start the development server')
.option('-p, --port <number>', 'Port to listen on', '3000')
.action((options) => {
console.log(`Starting server on port ${options.port}`);
});
program.parse();
You get --help for free. You get version flags for free. Unknown commands throw errors automatically. It's nice.
And here's the same thing in Yargs, if that's more your style:
npm install yargs
#!/usr/bin/env node
const yargs = require('yargs/yargs');
const { hideBin } = require('yargs/helpers');
yargs(hideBin(process.argv))
.command(
'create <project-name>',
'Create a new project',
(yargs) => {
yargs
.positional('project-name', {
describe: 'Name of the project',
type: 'string'
})
.option('template', {
alias: 't',
type: 'string',
default: 'default',
describe: 'Project template'
})
.option('typescript', {
type: 'boolean',
default: false,
describe: 'Use TypeScript'
});
},
(argv) => {
console.log(`Creating: ${argv.projectName}`);
console.log(`Template: ${argv.template}`);
}
)
.demandCommand(1, 'You must provide a valid command')
.strict()
.help()
.argv;
Yargs has some neat things like .demandCommand() to force at least one command and .strict() to reject anything unknown. Both libraries are battle-tested. You won't regret either choice.
Asking the User Questions
Flags are great for automation, but sometimes you just want to walk someone through a setup. That's where inquirer comes in. It's the go-to for interactive prompts in the terminal.
npm install inquirer
I set up a little scaffolding wizard for my tool:
import inquirer from 'inquirer';
async function scaffoldProject() {
const answers = await inquirer.prompt([
{
type: 'input',
name: 'projectName',
message: 'What is your project name?',
default: 'my-app',
validate: (input) => {
if (/^[a-z0-9-]+$/.test(input)) return true;
return 'Project name must be lowercase with hyphens only';
}
},
{
type: 'list',
name: 'template',
message: 'Choose a template:',
choices: [
{ name: 'Minimal - Basic setup', value: 'minimal' },
{ name: 'API - Express REST API', value: 'api' },
{ name: 'Fullstack - React + Express', value: 'fullstack' }
]
},
{
type: 'checkbox',
name: 'features',
message: 'Select additional features:',
choices: [
{ name: 'TypeScript', value: 'typescript' },
{ name: 'ESLint + Prettier', value: 'linting' },
{ name: 'Docker support', value: 'docker' },
{ name: 'CI/CD (GitHub Actions)', value: 'ci' },
{ name: 'Testing (Jest)', value: 'testing' }
]
},
{
type: 'confirm',
name: 'installDeps',
message: 'Install dependencies now?',
default: true
}
]);
console.log('Configuration:', answers);
// Proceed to create project with the collected answers
}
scaffoldProject();
Inquirer supports a bunch of prompt types: text input, passwords, single-select lists, multi-select checkboxes, confirmations, and even one that opens your default text editor. There's a when property you can use to conditionally show questions based on earlier answers, which is handy for branching wizards.
The pattern I like is: accept flags for the scripted/CI use case, but fall back to interactive prompts when flags are missing. Best of both worlds.
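One way to sketch that pattern is to build the question list so that inquirer's when property skips anything already supplied as a flag. To be clear, questionsFor is my own hypothetical helper here, not a library API:

```javascript
// Build the question list for inquirer.prompt(), skipping anything
// already supplied as a flag. inquirer only shows questions whose
// `when` callback returns true.
function questionsFor(flags) {
return [
{
type: 'input',
name: 'projectName',
message: 'What is your project name?',
when: () => flags.projectName === undefined
},
{
type: 'list',
name: 'template',
message: 'Choose a template:',
choices: ['minimal', 'api', 'fullstack'],
when: () => flags.template === undefined
}
];
}

// Usage (inside an async function):
//   const answers = await inquirer.prompt(questionsFor(options));
//   const config = { ...answers, ...options }; // flags win
```

Run it with all flags set and it asks nothing; run it bare and you get the full wizard. CI stays scriptable, humans get hand-holding.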
Making It Look Like You Tried
A CLI tool that just dumps plain text feels unfinished. A little color and a spinner go a long way.
npm install chalk ora
Chalk handles colors and text styling:
import chalk from 'chalk';
// Basic colors
console.log(chalk.green('Success!'));
console.log(chalk.red('Error: Something went wrong'));
console.log(chalk.yellow('Warning: Deprecated feature'));
console.log(chalk.blue('Info: Processing files...'));
// Combine styles
console.log(chalk.bold.underline('Project Configuration'));
console.log(chalk.bgGreen.black(' PASS '), 'All tests passed');
console.log(chalk.bgRed.white(' FAIL '), 'Connection refused');
// Template literals for complex output
const name = 'my-app';
console.log(`
${chalk.bold('Project created successfully!')}
${chalk.cyan('cd')} ${name}
${chalk.cyan('npm install')}
${chalk.cyan('npm start')}
${chalk.dim("You're all set.")}
`);
Ora gives you those nice little loading spinners:
import ora from 'ora';
async function deployProject() {
const spinner = ora('Building project...').start();
try {
await buildStep();
spinner.succeed('Build complete');
spinner.start('Running tests...');
await testStep();
spinner.succeed('All tests passed');
spinner.start('Deploying to production...');
await deployStep();
spinner.succeed('Deployed successfully!');
} catch (err) {
spinner.fail(`Deployment failed: ${err.message}`);
process.exit(1);
}
}
If you're processing a bunch of files, a progress bar is better than a spinner. cli-progress (npm install cli-progress) does that:
const cliProgress = require('cli-progress');
const bar = new cliProgress.SingleBar({
format: 'Processing |{bar}| {percentage}% | {value}/{total} files',
barCompleteChar: '\u2588',
barIncompleteChar: '\u2591'
});
bar.start(files.length, 0);
for (const file of files) {
await processFile(file);
bar.increment();
}
bar.stop();
One thing to keep in mind: not every terminal supports colors. Chalk and Ora both detect this automatically, but if you're doing something custom, check process.stdout.isTTY first.
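If you do roll your own styling, here's a minimal sketch of that guard. colorize is my own name for it, and the escape codes are the standard ANSI ones:

```javascript
// Wrap text in a raw ANSI escape code only when output is going to a
// real terminal. Piped or redirected output stays plain, so logs and
// grep don't fill up with escape garbage.
const ANSI = { red: '31', green: '32', yellow: '33' };

function colorize(color, text, isTTY = process.stdout.isTTY) {
if (!isTTY || !ANSI[color]) return text;
return `\u001b[${ANSI[color]}m${text}\u001b[0m`;
}

console.log(colorize('green', 'Success!')); // colored in a terminal, plain when piped
```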
Dealing with Files
Most CLI tools need to create files, read templates, or move stuff around. The built-in fs module works fine, but fs-extra saves you from writing a lot of boilerplate.
Here's the file scaffolding function I wrote for my tool:
const fs = require('fs-extra');
const path = require('path');
const chalk = require('chalk');
async function createProject(projectName, template) {
const targetDir = path.resolve(process.cwd(), projectName);
// Check if directory already exists
if (await fs.pathExists(targetDir)) {
console.error(chalk.red(`Directory "${projectName}" already exists!`));
process.exit(1);
}
// Create project directory structure
await fs.ensureDir(path.join(targetDir, 'src'));
await fs.ensureDir(path.join(targetDir, 'tests'));
await fs.ensureDir(path.join(targetDir, 'config'));
// Copy template files
const templateDir = path.join(__dirname, '..', 'templates', template);
await fs.copy(templateDir, targetDir);
// Generate package.json dynamically
const packageJson = {
name: projectName,
version: '0.1.0',
description: '',
main: 'src/index.js',
scripts: {
start: 'node src/index.js',
dev: 'nodemon src/index.js',
test: 'jest'
},
keywords: [],
license: 'MIT'
};
await fs.writeJson(
path.join(targetDir, 'package.json'),
packageJson,
{ spaces: 2 }
);
// Create a .gitignore
const gitignore = 'node_modules/\n.env\ndist/\ncoverage/\n';
await fs.writeFile(path.join(targetDir, '.gitignore'), gitignore);
return targetDir;
}
When you need to find files in a directory tree, fast-glob is your friend:
const fg = require('fast-glob');
async function findAllComponents(projectDir) {
const files = await fg('**/*.component.{js,ts,jsx,tsx}', {
cwd: projectDir,
ignore: ['node_modules/**', 'dist/**']
});
return files;
}
Two rules I learned the hard way: always use path.join() or path.resolve() instead of string concatenation (Windows paths will bite you otherwise), and always check if a file or directory exists before trying to overwrite it. Your users will thank you for the clear error messages.
Shipping It
So you've got a working tool. Time to put it on npm so other people can use it (or so future-you can npx it on a fresh machine).
First, make sure your package.json is filled out properly:
{
"name": "create-awesome-app",
"version": "1.0.0",
"description": "Scaffold awesome web applications in seconds",
"bin": {
"create-awesome-app": "./bin/index.js"
},
"files": [
"bin/",
"src/",
"templates/"
],
"keywords": ["cli", "scaffold", "generator", "web"],
"author": "Your Name",
"license": "MIT",
"repository": {
"type": "git",
"url": "https://github.com/you/create-awesome-app"
},
"engines": {
"node": ">=16.0.0"
}
}
The files field matters. Without it, npm publishes everything that isn't in your .npmignore, which means tests, docs source, random scripts -- stuff nobody downloading your package needs. The engines field tells people the minimum Node version.
Make sure your entry file is executable and has the shebang:
chmod +x bin/index.js
# Verify the first line is: #!/usr/bin/env node
Before you publish, do a dry run to see what'll end up in the package:
npm pack --dry-run # Preview what will be published
npm pack # Create the .tgz file
# Install it globally from the tarball to test
npm install -g ./create-awesome-app-1.0.0.tgz
If everything looks right:
npm login
npm publishFor scoped packages like @yourname/my-tool, add the --access public flag if you want it publicly visible:
npm publish --access public
And just like that, anyone can install it:
# Install globally
npm install -g create-awesome-app
create-awesome-app my-project
# Or use npx for one-time usage without installing
npx create-awesome-app my-project
Tip: For subsequent releases, check out np (install it with npm install -g np). It handles version bumping, git tagging, and two-factor auth in one interactive flow. Way less error-prone than doing it manually.
Error Handling and Exit Codes
A CLI tool that crashes with a stack trace looks broken, even if the error was the user's fault. You need to catch errors and present them cleanly. I wrap my entire top-level action in a try/catch and use process.exit() with meaningful exit codes:
program
.command('create <project-name>')
.action(async (projectName, options) => {
try {
await createProject(projectName, options);
} catch (err) {
if (err.code === 'EACCES') {
console.error(chalk.red('Permission denied. Try running with sudo or check directory permissions.'));
process.exit(126);
}
if (err.code === 'EEXIST') {
console.error(chalk.red(`Directory "${projectName}" already exists.`));
process.exit(1);
}
// Unexpected error -- show the stack trace for debugging
console.error(chalk.red('Unexpected error:'));
console.error(err);
process.exit(1);
}
});
Exit code conventions matter if your tool might be used in scripts or CI pipelines. Zero means success. Non-zero means failure. Some tools use specific codes for specific errors (like 126 for permission issues, 127 for command not found). At minimum, exit 0 on success and 1 on failure.
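To keep those conventions in one place instead of scattered across catch blocks, I like a central mapping. This is a sketch; the specific code-to-exit-code choices here are mine, not a standard:

```javascript
// Map Node error codes to process exit codes in one place, so every
// command reports failures the same way. 1 is the generic catch-all.
const EXIT_CODES = {
EACCES: 126, // permission denied
EEXIST: 1,   // target already exists
ENOENT: 1    // file or directory not found
};

function exitCodeFor(err) {
return EXIT_CODES[err.code] ?? 1;
}

// In the top-level catch:
//   console.error(chalk.red(err.message));
//   process.exit(exitCodeFor(err));
```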
The first time I published a CLI tool, I forgot to set exit codes entirely. Everything exited 0, even on error. A coworker put it in a bash script with set -e and the script kept running past failures. Took us a while to figure out why deployments were "succeeding" with broken builds.
Testing Your CLI
Testing a CLI tool is awkward because the "interface" is stdin/stdout. The approach that works best for me: extract all the business logic into regular functions, test those with normal unit tests, and then write a few integration tests that actually spawn your CLI as a child process and check the output:
const { execSync } = require('child_process');
const path = require('path');
test('create command generates project', () => {
const cliPath = path.resolve(__dirname, '../bin/index.js');
const output = execSync(
`node ${cliPath} create test-project --template minimal`,
{ encoding: 'utf8', cwd: '/tmp' }
);
expect(output).toContain('Project created');
});
test('unknown command shows help', () => {
const cliPath = path.resolve(__dirname, '../bin/index.js');
let threw = false;
try {
execSync(`node ${cliPath} foobar`, { encoding: 'utf8' });
} catch (err) {
threw = true;
expect(err.stderr).toContain('unknown command');
}
expect(threw).toBe(true); // the CLI must exit non-zero on bad input
});
It's not the prettiest test setup, but it catches regressions that unit tests miss -- like a broken shebang line, a missing dependency, or an argument parsing bug. I run these in CI alongside the unit tests.
Small Things That Make a Big Difference
A few quality-of-life features that separate a decent CLI from a good one:
Config files. For tools people use repeatedly, let them put defaults in a config file instead of passing the same flags every time. cosmiconfig is a library that searches for config in .myapprc, .myapprc.json, myapp.config.js, and the package.json "myapp" key, all automatically. It's what ESLint and Prettier use under the hood.
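The part cosmiconfig doesn't decide for you is precedence. A common order is built-in defaults, then the config file, then explicit CLI flags, each layer overriding the last. Here's a sketch; mergeConfig is my own helper name:

```javascript
// Later sources win: defaults < config file < explicit CLI flags.
// Undefined flag values are dropped first so an unset flag doesn't
// clobber a value from the config file.
function mergeConfig(defaults, fileConfig, flags) {
const definedFlags = Object.fromEntries(
Object.entries(flags).filter(([, value]) => value !== undefined)
);
return { ...defaults, ...fileConfig, ...definedFlags };
}

const config = mergeConfig(
{ port: 3000, template: 'default' },  // built-in defaults
{ port: 8080 },                        // e.g. loaded from .myapprc
{ template: 'api', port: undefined }   // from the command line
);
// config -> { port: 8080, template: 'api' }
```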
Auto-updates. The update-notifier package checks npm for newer versions and shows a non-intrusive message. It caches the check so it doesn't slow down every run:
const updateNotifier = require('update-notifier');
const pkg = require('../package.json');
updateNotifier({ pkg }).notify();
Verbose mode. Add a --verbose or --debug flag that logs extra information. When something goes wrong, you can tell users "run it again with --verbose and send me the output." It beats asking them to read your source code to figure out where it failed.
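A minimal version of that gate might look like this (createLogger is my name for it, not a library API):

```javascript
// Debug output goes to stderr so it never pollutes stdout, which
// scripts may be piping or parsing.
function createLogger(verbose) {
return {
info: (...args) => console.log(...args),
debug: (...args) => {
if (verbose) console.error('[debug]', ...args);
}
};
}

// const log = createLogger(options.verbose);
// log.debug('resolved template dir:', templateDir); // only with --verbose
```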
That's basically it. My CLI started as a weekend project to automate some git commands, and it's turned into something I actually use daily. It's not fancy, but it saves me real time and that's all that matters.