Part 2 of a multi-year migration case study. After building a homegrown monorepo toolchain, we adopted Nx — and discovered that its opinionated approach solves some problems elegantly while creating new ones when your apps can't truly be isolated.
This is Part 2 of a case study covering the migration of a white-label credit card platform. Part 1 covers the move from a legacy Angular/C# stack to a React monorepo with single-spa.
By 2023, the homegrown monorepo was doing its job — but showing its seams. The LPT/matrix build distribution had been running for four months before it was replaced, Nx had been evaluated and shelved twice, and the case for adopting it had grown stronger each time CI maintenance crept back onto the team's plate. This is the story of that adoption: what Nx got right, what it got wrong, and the structural tension that no toolchain can fully paper over.
Why Nx
The immediate trigger was the same one it always is: CI time. The LPT/matrix solution had brought build times down from 45 minutes to ~12 minutes, but the maintenance overhead — keeping build-times.json accurate, tuning the worker count when new packages were added — was a recurring tax. Every quarter or so, someone would notice that one worker was consistently finishing 8 minutes before the others and the numbers needed updating.
Nx offered the correct solution to the problem the LPT script had been approximating: a dynamic task queue backed by a remote cache. Workers pull tasks from a shared queue rather than having tasks assigned statically. If a worker finishes early, it pulls the next task immediately. The queue drains in the minimum possible time regardless of how build durations shift.
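The difference between static assignment and a dynamic queue can be sketched in a few lines. This is an illustrative model, not Nx's implementation; the task names and durations are made up, and setTimeout stands in for real build work:

```typescript
// Sketch of a dynamic task queue: workers pull from a shared queue
// instead of receiving a static, precomputed assignment.
type Task = { name: string; durationMs: number };

async function runWithDynamicQueue(
  tasks: Task[],
  workerCount: number,
): Promise<string[]> {
  const queue = [...tasks];
  const log: string[] = [];

  async function worker(id: number): Promise<void> {
    // A worker that finishes early immediately pulls the next task, so the
    // queue drains in near-minimal time no matter how durations shift.
    for (let task = queue.shift(); task !== undefined; task = queue.shift()) {
      await new Promise((resolve) => setTimeout(resolve, task.durationMs));
      log.push(`worker ${id} finished ${task.name}`);
    }
  }

  await Promise.all(
    Array.from({ length: workerCount }, (_, i) => worker(i)),
  );
  return log;
}
```

With a static split, a rebalancing file like build-times.json has to predict durations up front; with the pull model above, no prediction is needed, which is exactly the maintenance tax that disappeared.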
The remote cache was equally compelling. Nx hashes every input to a task — source files, config files, environment variables — and stores the output. If the same task runs again with identical inputs, the output is replayed from cache without re-executing. On a large monorepo where most PRs touch only a small slice of the graph, cache hit rates above 90% are routine.
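The mechanism is content-addressed caching. A minimal sketch of the idea (greatly simplified relative to Nx, which hashes file contents, task configuration, and more; the types and names here are hypothetical):

```typescript
import { createHash } from 'node:crypto';

// Everything that can change a task's output is part of its cache key.
type TaskInputs = {
  sources: Record<string, string>; // file path -> file contents
  config: string;
  env: Record<string, string>;
};

const cache = new Map<string, string>();

function hashInputs(inputs: TaskInputs): string {
  // Sketch only: JSON.stringify is key-order sensitive; a real
  // implementation serialises inputs deterministically.
  return createHash('sha256').update(JSON.stringify(inputs)).digest('hex');
}

function runTask(
  inputs: TaskInputs,
  execute: () => string,
): { output: string; cacheHit: boolean } {
  const key = hashInputs(inputs);
  const cached = cache.get(key);
  if (cached !== undefined) {
    return { output: cached, cacheHit: true }; // replay, skip execution
  }
  const output = execute();
  cache.set(key, output);
  return { output, cacheHit: false };
}
```

Point the cache Map at shared remote storage instead of process memory and you have the remote-cache behaviour: any machine that has seen the same inputs can replay the output.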
The combination brought CI times from ~12 minutes to ~6 minutes and removed the build-times.json maintenance overhead entirely.
What Nx Gets Right
An Opinionated CLI for New Projects
Before Nx, creating a new React app in the monorepo meant copying a starter template, updating import aliases, wiring up Jest, configuring the Webpack dev server, adding the app to the CI matrix, and hoping nothing was missed. The starter templates drifted: the app created six months ago had a different Jest config than the one created last week. Bike-shedding discussions about which version of ESLint config to use were a recurring time sink.
nx generate @nx/react:app my-new-app produces a project that is correctly configured for building, testing, linting, and type-checking from the first commit. The generator is maintained by the Nx team and the community — not by whoever had 20 minutes spare last Tuesday.
Upgrading "starter kits" went from a manual diff across a dozen files to a single nx migrate command.
Nx's migration tooling (nx migrate) applies codemods when upgrading between versions, updating configuration files and resolving breaking changes automatically. This replaced a class of manual upgrade work that had previously cost one engineer a full sprint every six months.
Per-Package Change Detection and Task Orchestration
The homegrown change detection used TypeScript path aliases and Jest's moduleNameMapper to build a dependency graph. It worked, but it was a file-level graph assembled at test time. Nx's dependency graph operates at the project level: it knows which projects depend on which, and can compute the minimal set of projects affected by any change with a single command:
```shell
nx affected --target=build
nx affected --target=test
```

Combined with the remote cache, this means that a PR touching only packages/utils will build and test utils and its dependents — and replay cached results for everything else. The graph is visualised interactively via nx graph, which proved genuinely useful for onboarding new engineers who had inherited a large, underdocumented dependency tree.
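The core of the affected computation is a reverse reachability walk over the project graph. A sketch under an invented graph shape (not Nx internals — Nx builds the graph from real imports and configuration):

```typescript
// deps maps each project to the projects it depends on.
// A changed project affects itself and every transitive dependent.
function affectedProjects(
  deps: Record<string, string[]>,
  changed: string[],
): Set<string> {
  // Invert the graph: for each project, who depends on it?
  const dependents = new Map<string, string[]>();
  for (const [project, dependencies] of Object.entries(deps)) {
    for (const dep of dependencies) {
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep)!.push(project);
    }
  }

  // Walk the reversed edges from every changed project.
  const affected = new Set<string>();
  const stack = [...changed];
  while (stack.length > 0) {
    const project = stack.pop()!;
    if (affected.has(project)) continue;
    affected.add(project);
    stack.push(...(dependents.get(project) ?? []));
  }
  return affected;
}
```

Note the granularity: the walk operates on whole projects, which is exactly why a one-line change in a large project marks the entire project — and everything downstream of it — as affected.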
Less Bike-Shedding
An underrated benefit: fewer decisions to make. When the toolchain has opinions — about where config files live, how targets are named, how dependencies are declared — teams spend less time discussing those things and more time building features. The Nx conventions became the team conventions by default.
What Nx Gets Wrong (for This Use Case)
Build Times Got Worse Before They Got Better
The headline number — ~6 minutes vs ~12 minutes — is real, but it obscures a regression that surprised the team. The homegrown build scripts operated at the file level: change detection could identify exactly which source files had changed and run only the tests that depended on them. Nx's affected logic operates at the project level: if any file in a project changes, the entire project is considered affected.
In practice this meant that a one-line change to a comment in a utility file would, under the homegrown system, affect perhaps 3 test files. Under Nx it would mark the entire packages/utils project affected, triggering a full rebuild and re-test of utils and every project that depends on it.
For the first weeks after migration, before the remote cache warmed up and before engineers adjusted their commit patterns, raw build times on cache misses were noticeably higher than before. The regression was temporary — cache warm-up and the dynamic queue more than compensated — but it was a real surprise for a team that had been told "Nx is faster."
Nx's affected computation is at the project granularity, not the file granularity. If your projects are large and your changes are small, expect more cache misses than you might predict from file-level analysis alone. Keeping projects small and focused is both a performance optimisation and a good architectural practice.
The Single Root package.json Problem
Nx's recommended monorepo setup uses a single package.json at the root. All projects share the same set of dependencies — one version of React, one version of React Router, one version of any given library across the entire repo.
At the beginning this felt like a strength. Dependency management was simple. Security patches applied to every project simultaneously. There was no version skew between packages.
The problem surfaced when the platform needed to upgrade React from 16 to 17. React 17 was not a breaking change in the classic sense, but some dependencies — particularly libraries that relied on React internals or global state — behaved differently. In a fully isolated architecture (iframes, separate deployments) you could upgrade one app at a time, validate it, and roll forward incrementally. In the shared-package.json monorepo, upgrading React meant upgrading every app simultaneously.
A single package.json is only a strength if every app in the repo can always move together. In practice, they can't.
The team had ~12 React apps at the point of the React 17 upgrade. Getting all twelve to pass tests on React 17 simultaneously took three sprints. Two apps had subtle breakages that only manifested in production. The upgrade became a coordinated freeze — no feature work until the upgrade landed — which is exactly the kind of org-level coordination overhead the monorepo was supposed to reduce.
A further consequence: upgrades must land in a single large PR. Because all apps share one version of every dependency, you cannot upgrade one app, merge it, validate it in production, and then roll forward to the next app. The entire upgrade — all 12 apps, all affected tests, all config changes — must land atomically. Large PRs are harder to review, harder to revert, and concentrate risk. The organisational discipline required to manage them safely is non-trivial.
The root cause is that Module Federation and single-spa do not offer the isolation that iframes do. In an iframe-based architecture, each app runs in a completely separate browsing context with its own globals, its own module registry, its own React instance. A dependency version bump in one app is invisible to all others.
With Module Federation, apps share a runtime scope. When two federated apps negotiate which copy of React to use, they resolve to a single shared instance — which is efficient but means they must agree on the version. The single package.json makes this sharing explicit and uniform, which is correct as long as the whole fleet can move together. When it can't, the sharing becomes a coupling.
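That negotiation can be modelled in a few lines. This is an illustrative model only — webpack's actual share-scope runtime is considerably more involved, and the function names here are invented:

```typescript
// Model of a Module Federation share scope with singleton semantics:
// the first provider wins for the whole page, and a consumer with a
// mismatched requirement can only warn -- it cannot load a second copy.
type SharedEntry = { version: string; get: () => unknown };

const sharedScope = new Map<string, SharedEntry>();

function provideShared(name: string, entry: SharedEntry): void {
  // With singleton: true, only the first registration takes effect.
  if (!sharedScope.has(name)) sharedScope.set(name, entry);
}

function consumeShared(name: string, requiredMajor: number): unknown {
  const entry = sharedScope.get(name);
  if (entry === undefined) {
    throw new Error(`shared module ${name} not provided`);
  }
  const actualMajor = Number(entry.version.split('.')[0]);
  if (actualMajor !== requiredMajor) {
    // In webpack this surfaces as a version-mismatch warning; every
    // consumer still receives the one shared instance.
    console.warn(`${name}: wanted ^${requiredMajor}.0.0, got ${entry.version}`);
  }
  return entry.get();
}
```

The key property: an app that wants React 17 in a page where React 16 was provided first does not get React 17 — it gets a warning and the existing instance. That is the coupling in miniature.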
Some packages make this especially painful because they rely on or mutate global state. Libraries that register themselves on window, use module-level singletons, or assume they are the only instance running will misbehave — or fail silently — when two versions coexist in the same browsing context. In an iframe each app has its own window; in a Module Federation setup they share one. You cannot simply pin two apps to different versions of such a library and call it done.
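The failure mode is easy to reproduce. A hypothetical library (the name and shape are invented for illustration) that keeps its registry on a global behaves like this when a second copy loads in the same context:

```typescript
// Hypothetical library that assumes it is the only copy on the page:
// it stores its registry on a global, so two versions in one browsing
// context silently clobber each other's state.
type Registry = Map<string, string>;

function makeLibraryCopy(version: string) {
  const g = globalThis as unknown as { __widgetRegistry?: Registry };
  // Each copy "initialises" the global registry on load, wiping
  // whatever a previously loaded copy had registered.
  g.__widgetRegistry = new Map();
  return {
    register(name: string): void {
      g.__widgetRegistry!.set(name, version);
    },
    lookup(name: string): string | undefined {
      return g.__widgetRegistry!.get(name);
    },
  };
}
```

In an iframe each copy would get its own globalThis and the clobbering could not happen; in a shared browsing context, the last copy to load wins, and the first copy's registrations vanish without any error being thrown.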
```js
// This tells Module Federation to share React across all federated apps.
// Both apps must use a compatible version — which the root package.json enforces
// by making it impossible to have different versions in the first place.
shared: {
  react: { singleton: true, requiredVersion: deps['react'] },
  'react-dom': { singleton: true, requiredVersion: deps['react-dom'] },
}
```

The singleton: true flag is correct — you absolutely do not want two copies of React on the same page. But it couples every app to the same React version at build time, not just at runtime.
The Practical Escape Hatch
The team's eventual solution was a hybrid: keep the single root package.json for the majority of dependencies, but introduce per-app package.json overrides for the specific packages where version skew was necessary during upgrades. Nx supports this via its implicitDependencies and namedInputs config, though it requires care to avoid the module registry conflicts that singleton: true was designed to prevent.
It worked, but it was a sign that the "Nx way" — designed for organisations where every app truly moves together — was being stretched to fit a reality where different teams had different upgrade timelines and different risk tolerances.
Conclusions
Nx is a strong default for new monorepos. The CLI generators, migration tooling, dynamic task queue, and remote cache are genuinely excellent. For an organisation starting fresh, the opinionated setup reduces decision overhead and produces a well-maintained baseline.
For existing monorepos with heterogeneous apps, adopt Nx incrementally. The single package.json constraint is a trap if your apps cannot actually move together. Consider whether your micro-frontend architecture provides enough runtime isolation before committing to shared dependency management.
The single package.json is not a Nx limitation — it's a micro-frontend architecture limitation. Nx would work fine if every app ran in a fully isolated context (iframes, separate deployments). The problem is that Module Federation's shared scope model and the single-package.json convention make the same assumption: that all apps are part of one coherent release train. When that assumption holds, both are excellent. When it doesn't, both create coordination costs.
Shared globals make version co-existence impossible without isolation. Libraries that write to window, use module-level singletons, or assume a single instance per page cannot safely run as two different versions in the same browsing context. Without true isolation — a separate window per app — you are forced to keep all apps on the same version of any such library. This effectively means the most conservative app sets the upgrade pace for the entire fleet.
The hidden cost of big-bang upgrades. Because all apps share one version of every dependency, dependency upgrades cannot be staged: app-by-app, validate in production, roll forward. The entire fleet must upgrade in a single coordinated PR. Large coordinated PRs concentrate risk, are harder to review meaningfully, and are harder to revert if something goes wrong in production. For minor upgrades this is manageable. For upgrades with subtle behaviour changes, it is a compounding liability.
The Nx way is only "the right way" if you have isolation. Without it, a single package.json is a shared constraint masquerading as a convenience — and big dependency upgrades become big-bang PRs.
Toolchain convergence is worth the transition cost — eventually. The migration to Nx required re-training engineers on new conventions, updating CI pipelines, and absorbing a temporary performance regression while the cache warmed up. None of that was free. But the reduction in bespoke tooling maintenance, the improvement in onboarding new projects, and the CI time gains compounded over time. Looking back from two years out, the migration was net positive. The regret is not having done it earlier — and not having invested in proper runtime isolation before adopting a shared dependency model.
Related Articles
From Legacy to Monorepo: Migrating a White-Label Credit Card Platform (2021–2024)
A four-year case study of migrating a white-labeled credit card platform from a legacy Angular/C# stack to a React monorepo with single-spa — covering GraphQL BFFs, micro-frontends, build parallelization, and the people problems that matter more than the tech ones.
Taming 200k Lines: Refactoring a Redux & RxJS Codebase
How static and dynamic code analysis helped delete over 200,000 lines of dead code from a large-scale train ticketing app built with Redux, RxJS, and normalizr — before React Hooks existed.
Optimizing CI Builds with Docker Layer Caching on TeamCity
How we cut build times from 45 minutes down to 3–10 minutes on an on-premise TeamCity cluster using multistage Docker builds, content-addressed cache keys, and shared filesystem volumes — plus what modern BuildKit unlocks today.