Publishing 10+ npm Packages as a Solo Developer: What I Learned
Hard-won lessons from publishing across two npm scopes — monorepo trade-offs, versioning, CI/CD, the GeoKit pipeline architecture, and download numbers that surprised me.
Gagan Deep Singh
Founder | GLINR Studios
I didn't set out to publish ten npm packages. I set out to solve specific problems, and the packages were what happened when I decided the solutions were good enough to share. Two scopes, two different philosophies, one person maintaining all of it. Here's what that experience actually looks like.
The Two Scopes and Why They're Different
@glincker packages are infrastructure utilities that power GLINCKER and related projects. They're pragmatic, internal-first tools that got extracted because they're genuinely general-purpose: @glincker/geokit for geographic computation, glin-profanity for content moderation, @glincker/queue-utils for job queue abstractions.
@typeweaver packages are developer tooling focused on the TypeScript/Node ecosystem. @typeweaver/commitweave is a conventional commits validator and changelog generator. The scope exists because I wanted a clean namespace for tools aimed at other developers' workflows rather than application infrastructure.
The philosophies differ. @glincker packages optimize for reliability and API stability — breaking changes in production infrastructure are expensive. @typeweaver packages optimize for developer experience and can move faster, since a broken CLI tool in someone's local workflow is painful but not a 3am incident.
Monorepo vs. Multi-Repo: I've Done Both, Here's the Truth
My first four packages were separate repos. Seemed fine at the time. Then I needed to share a validateSchema utility across three of them and suddenly I was copying code and forgetting to sync fixes. That's when I moved to a monorepo with pnpm workspaces.
The monorepo setup for @glincker packages:
```text
glincker-packages/
├── packages/
│   ├── geokit/
│   │   ├── src/
│   │   ├── package.json      # name: "@glincker/geokit"
│   │   └── tsconfig.json
│   ├── queue-utils/
│   └── shared/               # internal shared utilities
├── pnpm-workspace.yaml
├── tsconfig.base.json
└── .github/workflows/
    └── release.yml
```
What the monorepo bought me: shared TypeScript config, shared testing setup, atomic cross-package changes, and a single CI pipeline that only builds affected packages. What it cost me: more upfront configuration, occasional pnpm workspace quirks, and the psychological overhead of "this one repo owns a lot."
For @typeweaver packages I kept multi-repo because the tools are genuinely independent — commitweave has no shared code with anything else I publish. The overhead of a monorepo isn't worth it when there's nothing to share.
My rule of thumb: if packages share more than one utility function, monorepo. If they're truly independent, multi-repo is simpler.
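The workspace wiring itself is tiny. A sketch of the one file that matters (the `packages/` layout matches the tree above):

```yaml
# pnpm-workspace.yaml — every directory under packages/ is a workspace member
packages:
  - "packages/*"
```

Inside a package, depending on the shared code as `"@glincker/shared": "workspace:*"` makes pnpm link the local copy during development; on publish, pnpm rewrites `workspace:*` to the actual version number.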
Versioning Strategy: Conventional Commits All the Way Down
All my packages use semantic-release with conventional commits. Every repository has this in CI:
```yaml
# .github/workflows/release.yml
- name: Release
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
  run: npx semantic-release
```

The `.releaserc.json` is minimal:
```json
{
  "branches": ["main"],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/changelog",
    "@semantic-release/npm",
    "@semantic-release/github"
  ]
}
```

`feat:` commits bump minor, `fix:` bumps patch, `feat!:` or `BREAKING CHANGE:` in the footer bumps major. Zero manual version management. I never think about version numbers — I just write descriptive commits.
The one gotcha: in a monorepo, you need multi-semantic-release or similar to handle changed-package detection. The standard semantic-release doesn't know about workspace packages. I use multi-semantic-release in the @glincker monorepo and it handles the dependency graph — if geokit changes, it releases geokit. If shared changes, it can cascade releases to everything that depends on it.
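The monorepo release step is the same shape with the binary swapped — a sketch (multi-semantic-release discovers the workspace packages itself):

```yaml
# .github/workflows/release.yml (monorepo variant)
- name: Release changed packages
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
  run: npx multi-semantic-release
```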
CI/CD for npm Publishing
The pipeline has three jobs that gate each other:
- Test — unit tests, type checking, lint. No test skips, ever. A package that ships with failing tests is worse than no package.
- Build — compile TypeScript to both `esm` and `cjs`. I dual-publish everything because in 2024 the ecosystem still isn't uniformly ESM.
- Release — only runs on `main`, only after test and build pass, uses `NPM_TOKEN` from GitHub secrets.
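The gating is plain GitHub Actions `needs:` chaining. A trimmed sketch — job and script names are my conventions, and a real workflow would also pin Node and set up pnpm:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install --frozen-lockfile
      - run: pnpm test && pnpm typecheck && pnpm lint
  build:
    needs: test            # won't start unless test passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install --frozen-lockfile && pnpm build
  release:
    needs: [test, build]   # gated on both
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install --frozen-lockfile
      - run: npx semantic-release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```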
The dual-publish setup in package.json:
```json
{
  "exports": {
    ".": {
      "types": "./dist/types/index.d.ts",
      "import": "./dist/esm/index.js",
      "require": "./dist/cjs/index.js"
    }
  },
  "main": "./dist/cjs/index.js",
  "module": "./dist/esm/index.js",
  "types": "./dist/types/index.d.ts"
}
```

Note that `"types"` comes first inside the exports entry — TypeScript requires the `types` condition to precede `import` and `require`, or it's ignored. Two separate tsconfig files — one targeting ES2022 with `module: NodeNext` for ESM, one targeting CommonJS. The build script runs both. Worth the setup overhead because you stop getting "cannot use import statement" issues from users on older toolchains.
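The build script just runs the compiler twice. The file names `tsconfig.esm.json`/`tsconfig.cjs.json` are my convention, not a standard:

```jsonc
// package.json (excerpt)
{
  "scripts": {
    "build": "tsc -p tsconfig.esm.json && tsc -p tsconfig.cjs.json"
  }
}
```

One wrinkle: if the root package.json declares `"type": "module"`, Node will read the `.js` files in `dist/cjs` as ESM anyway — dropping a one-line `{ "type": "commonjs" }` package.json into `dist/cjs` as a build step fixes that.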
The GeoKit Pipeline Architecture
@glincker/geokit is the package I'm most proud of architecturally. It's a geographic computation toolkit built around three pipeline stages: audit → generate → convert.
- audit: validates coordinate data, detects projection issues, flags precision problems
- generate: computes derived geographic data (bounding boxes, centroids, convex hulls, Voronoi regions)
- convert: transforms between coordinate systems, formats (GeoJSON, WKT, H3 hex), and precision levels
The pipeline API is composable:
```ts
import { audit, generate, convert } from '@glincker/geokit';

const result = await audit(coordinateSet)
  .then(audited => generate.centroid(audited))
  .then(withCentroid => convert.toH3(withCentroid, { resolution: 8 }));
```

Each stage returns a typed result that the next stage accepts. TypeScript infers the chain. Invalid operations (converting before auditing, for example) are type errors, not runtime errors. The explicit pipeline makes the intent obvious and makes testing trivial — each stage is a pure function.
Why this architecture instead of a grab-bag of utility functions? Because geographic data problems compound. Bad input produces subtly wrong output five transforms later. Making audit an explicit, required step means users can't accidentally skip it.
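The "invalid operations are type errors" property comes from branded result types. A minimal, hypothetical illustration — none of these names are geokit's actual internals — where only `audit()` can produce an `Audited` value, so downstream stages can't be fed raw coordinates:

```typescript
type Coord = { lat: number; lng: number };

// The brand field makes Audited a distinct type; audit() is its sole producer.
type Audited = { coords: Coord[]; __audited: true };

function audit(coords: Coord[]): Audited {
  for (const c of coords) {
    if (Math.abs(c.lat) > 90 || Math.abs(c.lng) > 180) {
      throw new Error(`out-of-range coordinate: ${c.lat},${c.lng}`);
    }
  }
  return { coords, __audited: true };
}

// centroid accepts only Audited input — passing Coord[] directly is a compile error.
function centroid(input: Audited): Coord {
  const n = input.coords.length;
  const sum = input.coords.reduce(
    (acc, c) => ({ lat: acc.lat + c.lat, lng: acc.lng + c.lng }),
    { lat: 0, lng: 0 }
  );
  return { lat: sum.lat / n, lng: sum.lng / n };
}

const c = centroid(audit([{ lat: 10, lng: 20 }, { lat: 30, lng: 40 }]));
// c → { lat: 20, lng: 30 }
```

Skipping the audit stage isn't a convention the user has to remember — the compiler rejects it.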
Packaging MCP Servers
The most recent category I've added: Model Context Protocol (MCP) servers as npm packages. MCP is the protocol that lets AI tools like Claude connect to external context providers. Packaging an MCP server as an npm package means users can run it with npx with zero setup:
```sh
npx @glincker/geokit-mcp
```

The package exposes geographic tools to AI assistants — give Claude a set of coordinates and ask it to compute the convex hull, convert to H3, or validate the data. The MCP packaging adds a thin stdio transport layer over the existing geokit core:
```ts
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
import { audit, generate, convert } from '@glincker/geokit';

const server = new Server(
  { name: 'geokit-mcp', version: '1.0.0' },
  { capabilities: { tools: {} } }
);

// Register tools that map to geokit pipeline stages
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  // ... tool dispatch
});

const transport = new StdioServerTransport();
await server.connect(transport);
```

The interesting constraint: MCP servers run as subprocesses communicating over stdio. No HTTP, no WebSocket. The entire API surface is tool calls and responses in JSON. Designing good tool schemas — what parameters to expose, how to describe them so the AI uses them correctly — is its own discipline.
Writing READMEs That Actually Get Read
Nobody reads a wall of text. My README structure for every package:
- One-sentence description at the top (not the package name, the purpose)
- Install command — copy-pasteable, immediately
- 30-second example — the simplest useful thing the package can do
- API reference — generated from JSDoc via TypeDoc, linked not inlined
- Edge cases and gotchas — the things that will bite you if you don't know them
The gotchas section is the most valuable part and the most skipped. I've fielded a dozen GitHub issues that were answered in the gotchas section. Now I put them in a callout block so they're visually impossible to miss.
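On GitHub, the callout is just a GFM alert block, which renders with an icon and colored border. The gotcha text here is invented for illustration:

```markdown
> [!WARNING]
> Coordinates are `{ lat, lng }` objects, not `[lng, lat]` arrays — GeoJSON uses the
> opposite order. Mixing them up produces plausible-looking but wrong output.
```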
Handling Issues from Strangers
The first GitHub issue from someone I don't know is a milestone. The second one is a test of your patience. Most issues fall into four categories:
- Legitimate bugs — fix promptly, thank them, add a test case
- Documentation gaps — fix the docs, not just the response
- Feature requests — evaluate against the package's stated scope, say no clearly and kindly when out of scope
- "This doesn't work" with no reproduction — ask for a minimal reproduction, wait, close if no response in two weeks
I added issue templates to every repo. The bug template requires a minimal reproduction. Issues opened without it get automatically commented with a gentle reminder. This one change cut "this doesn't work" reports by about 60%.
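GitHub issue forms can make the reproduction field literally required rather than a polite request. A sketch — the field ids and wording are mine:

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml
name: Bug report
description: Something is broken
body:
  - type: textarea
    id: reproduction
    attributes:
      label: Minimal reproduction
      description: Link to a repo or paste the smallest snippet that shows the bug.
    validations:
      required: true
```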
Download Numbers That Surprised Me
glin-profanity consistently outperforms everything else by a wide margin. A content moderation utility being the most downloaded package makes sense in retrospect — it's solving a problem every social app has and nobody wants to build themselves. The narrow, specific utility beats the ambitious general-purpose toolkit.
@glincker/geokit downloads are lower but the issues are more interesting. The users are building real geographic applications with genuine technical depth. Quality of engagement matters more than quantity.
The @typeweaver/commitweave numbers plateau around teams that have 5-20 engineers — small enough that one person's tool recommendation propagates the whole team, large enough that commit discipline matters. That's useful market signal.
What I'd Tell Myself at Package Number One
Solve your own problem first. The packages that work best are the ones I built because I needed them, not the ones I built because I thought the ecosystem needed them. Genuine use in production is the best quality signal.
Stability is a feature. I've broken @glincker package APIs once in three years. Every breaking change cost me more in downstream migration time than the API improvement saved. Deprecation cycles, not breaking changes.
A package without tests is a liability. The packages I published early without robust test suites are the ones I'm afraid to touch. 100% coverage is cargo-culting, but meaningful tests for every public API surface is non-negotiable.
Publishing open source as a solo developer is a different discipline than writing open source. The code is the easy part. The documentation, the issue triage, the backward compatibility promises, the release automation — that's the maintenance surface you're signing up for. Sign up deliberately, and it's genuinely rewarding. Sign up accidentally, and it's a burden.