From Legacy to Monorepo: Migrating a White-Label Credit Card Platform (2021–2024)

A four-year case study of migrating a white-labeled credit card platform from a legacy Angular/C# stack to a React monorepo with single-spa — covering GraphQL BFFs, micro-frontends, build parallelization, and the people problems that matter more than the tech ones.


Some migrations are greenfield rewrites. Most aren't. This is the story of a four-year, still-ongoing migration of a white-labeled credit card platform — one product, 26 brands, hundreds of engineers — from a legacy Angular/C# stack into a React monorepo with shared infrastructure.

The technical decisions were hard. The organisational ones were harder.

The Client

A white-label credit card company offering a single core product under 26 different brand identities. From the end-user's perspective each brand is a distinct experience — different colours, different copy, sometimes different feature sets. Under the hood it is one platform.

The engineering organisation was split into independent backend and frontend teams, each with their own roadmap and release cadence. At the start of 2021 the codebase looked like this:

  • A large, aging Angular single-page application
  • A collection of C# microservices with data scattered across many databases
  • No shared API contract between frontend and backend
  • Backend teams and frontend teams operating on entirely separate roadmaps

The consequence: feature delivery required coordinating across teams who had no formal mechanism for doing so. A backend change would land weeks before the frontend was ready to use it, or vice versa. Estimates were fiction because every estimate had a hidden dependency buried inside it.

Introducing a GraphQL BFF

The first major architectural change was not a rewrite — it was a seam.

A GraphQL Backend-for-Frontend (BFF) layer was introduced between the frontend and the C# services. Crucially, the BFF was owned by the frontend teams. This had several immediate effects:

Simpler client code. Instead of aggregating data from four or five REST endpoints and stitching it together in the browser, a single GraphQL query could fetch exactly the shape the UI needed. Components became straightforward mappings from query result to rendered output.

Protection from upstream churn. When a backend team renamed a field or restructured a response, the BFF absorbed the change. Frontend engineers could mock the BFF layer entirely during development, decoupling their sprint from backend availability.

Frontend-driven development. GraphQL's declarative model inverted the usual flow. Rather than consuming whatever shape the backend chose to expose, each component declared its own data requirements as a fragment. Those fragments composed upward into a single query per page — the BFF resolved only what was asked for, aggregating from whichever upstream services held the data. Frontend engineers could design the ideal API for their UI first, then implement the BFF resolvers to satisfy it, without ever touching the legacy C# services directly.

This was a meaningful expansion of scope. Frontend engineers who had previously been blocked whenever they needed data from a new service could now write the BFF resolver themselves — calling the existing C# endpoint, transforming the response, and exposing a clean field to the UI. The undocumented, legacy services became an implementation detail of the BFF rather than a direct dependency of every frontend feature.

account-summary.graphql
query AccountSummary($customerId: ID!) {
  customer(id: $customerId) {
    name
    creditLimit
    availableCredit
    recentTransactions(limit: 5) {
      date
      merchant
      amount
      status
    }
    upcomingPayment {
      dueDate
      minimumDue
      statementBalance
    }
  }
}

The BFF also gave the team a unified customer view for the first time. Data that previously lived in three separate services and required three separate API calls could now be queried as a single graph. This became the foundation for everything that followed.
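What a frontend-owned resolver looked like can be sketched as follows. The legacy field names (FullName, CreditLimitAmount), the endpoint path, and the base URL are all invented for illustration — not the real C# contract:

```typescript
// Hypothetical legacy response shape — the real C# services differed
interface LegacyCustomer {
  FullName: string
  CreditLimitAmount: number
  CurrentBalance: number
}

interface CustomerSummary {
  name: string
  creditLimit: number
  availableCredit: number
}

// Pure translation from the legacy vocabulary to the graph's vocabulary
function toCustomerSummary(raw: LegacyCustomer): CustomerSummary {
  return {
    name: raw.FullName,
    creditLimit: raw.CreditLimitAmount,
    availableCredit: raw.CreditLimitAmount - raw.CurrentBalance,
  }
}

// Illustrative base URL — in practice this came from BFF configuration
const CUSTOMER_API_BASE = process.env.CUSTOMER_API_BASE ?? 'http://localhost:4001'

// Resolver map: the BFF calls the existing REST endpoint and reshapes the result
const resolvers = {
  Query: {
    customer: async (_parent: unknown, args: { id: string }) => {
      const res = await fetch(`${CUSTOMER_API_BASE}/customers/${args.id}`)
      if (!res.ok) throw new Error(`Upstream returned ${res.status}`)
      return toCustomerSummary((await res.json()) as LegacyCustomer)
    },
  },
}
```

The important property is that the reshaping lives in the BFF: the UI never sees the legacy vocabulary, and a backend rename is a one-line resolver change.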

Embedding React Islands in Angular

With the BFF providing a stable data layer, product teams wanted to start shipping new experiences in React — new designs, new interactions — without waiting for a full rewrite of the Angular shell.

The solution was surgical embedding: keep the Angular application as the outer shell, carve out specific verticals, and rewrite them in React as independently-deployed bundles that mount into the existing page.

v0: The Div Mount Pattern

The initial approach was deliberately low-tech. Angular rendered a placeholder div with a known id. A separately built React bundle was loaded onto the page and mounted itself into that div.

angular-template.html
<!-- Angular shell renders a stable mount point -->
<div id="react-payments-root"></div>
 
<!-- Angular loads the React bundle via a script tag it controls -->
payments-app/src/index.tsx
import React from 'react'
import { createRoot } from 'react-dom/client'
import { App } from './App'
 
const container = document.getElementById('react-payments-root')
if (container) {
  createRoot(container).render(<App />)
}

The React bundle was built independently, deployed to a CDN, and Angular loaded it by injecting a <script> tag pointing to the CDN URL. Angular and React shared nothing — no build pipeline, no state, no routing. Communication between the two happened through custom DOM events.

Three teams adopted this pattern within the first six months. It worked well enough that it became the de facto approach for new verticals. The isolation was also its weakness: each React island was its own island, and the islands were multiplying.

Bugs in the Wild

The simplicity of the pattern masked some sharp edges that only surfaced under real conditions.

Module hash collisions. Webpack derives chunk filenames from module and chunk IDs, and those IDs are not stable across independent builds unless configured to be. Two independently built React apps could therefore produce chunks with the same filename but different content, and the CDN or browser cache would serve the wrong bundle for a given app. The fix was to enable deterministic module and chunk IDs in each app's Webpack config:

webpack.config.js
module.exports = {
  optimization: {
    moduleIds: 'deterministic',
    chunkIds: 'deterministic',
  },
}

Timers kept alive after unmount. The React apps set up setInterval and setTimeout calls for polling, animations, and session timeouts. When Angular navigated away from the page hosting a React island, Angular destroyed the DOM node — but nothing cancelled the timers. They continued firing against a detached tree, causing memory leaks and occasional errors. The fix was two-part: code hygiene (always return cleanup functions from useEffect) and a formal unmount signal. Angular sent a postMessage to the React app before tearing down its container, and the React entry point listened for it:

payments-app/src/index.tsx
const container = document.getElementById('react-payments-root')
if (container) {
  const root = createRoot(container)
  root.render(<App />)
 
  // Angular signals navigation away via postMessage
  window.addEventListener('message', (event) => {
    if (event.data?.type === 'UNMOUNT_REACT_APP') {
      root.unmount()
    }
  })
}
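The cleanup-hygiene half of the fix can be complemented by a belt-and-braces sweep at teardown — sketched below as a hypothetical TimerRegistry the entry point flushes before calling root.unmount(). This is an illustration of the pattern, not the production code:

```typescript
// Illustrative sketch: track every timer the app creates so that any interval
// which slipped past a missing useEffect cleanup is still cancelled at teardown.
class TimerRegistry {
  private ids = new Set<ReturnType<typeof setInterval>>()

  setInterval(fn: () => void, ms: number): ReturnType<typeof setInterval> {
    const id = setInterval(fn, ms)
    this.ids.add(id)
    return id
  }

  get size(): number {
    return this.ids.size
  }

  // Called once, right before unmounting the React root
  clearAll(): void {
    for (const id of this.ids) clearInterval(id)
    this.ids.clear()
  }
}
```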

Style bleed. Each React app loaded its own CSS bundle. Styles leaked in both directions: React component styles would override Angular host styles, and Angular global styles would creep into React-rendered elements. The candidate solutions were Shadow DOM encapsulation, iframes, or refactoring the CSS. Shadow DOM and iframes were ruled out due to accessibility and integration complexity. CSS refactoring became the chosen path — enforcing strict BEM-style class namespacing per app and auditing for global selectors — and was eventually consolidated as part of the broader design-system migration.

v2: Single-SPA with Module Federation

The div-mount pattern scaled to three teams but not to twelve. Each React island was still an independent deployment with its own bootstrap, its own React copy, and a bespoke integration contract with the Angular host. As the number of islands grew, so did the surface area for the class of bugs described above.

The natural next step was a proper micro-frontend orchestration layer. The choice was single-spa combined with Webpack Module Federation, which allowed individual apps to share a single copy of React, React Router, and the design system at runtime rather than bundling their own.
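The sharing mechanism can be sketched in a remote's webpack config. The remote name, exposed path, and version ranges below are illustrative — marking react as a singleton means it is resolved once, from the shell's share scope, no matter how many remotes request it:

```javascript
// webpack.config.js for one remote (illustrative names and versions)
const { ModuleFederationPlugin } = require('webpack').container

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'transactions',
      filename: 'remoteEntry.js',
      exposes: {
        // What the shell imports as 'transactions/App'
        './App': './src/App',
      },
      shared: {
        // singleton: one copy on the page, provided by the shell
        react: { singleton: true, requiredVersion: '^18.0.0' },
        'react-dom': { singleton: true, requiredVersion: '^18.0.0' },
        'react-router-dom': { singleton: true },
        '@platform/design-system': { singleton: true },
      },
    }),
  ],
}
```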

An iframe-based architecture — for example, SAP's Luigi Framework — was considered. Iframes offer hard isolation (no style bleed, no JS collision), but come with significant trade-offs: cross-origin communication is verbose, deep-linking and back-button behaviour require explicit coordination, and shared UI elements like navigation bars can't span iframe boundaries. The single-spa approach was chosen for its tighter integration story.

The key architectural shift was inverting the host: instead of Angular owning the shell and React mounting into it, a top-level React app became the shell. Angular was wrapped as a single-spa application and treated as just another route target — the default fallback for everything not yet migrated:

src/App.tsx
import { BrowserRouter, Routes, Route } from 'react-router-dom'
import { lazy, Suspense } from 'react'
 
const FAQApp = lazy(() => import('faq/App'))
const TransactionApp = lazy(() => import('transactions/App'))
const AngularApp = lazy(() => import('./AngularApp'))
 
export function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={null}>
        <Routes>
          <Route path="/faq/*" element={<FAQApp />} />
          <Route path="/transactions/*" element={<TransactionApp />} />
          <Route path="/*" element={<AngularApp />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  )
}

FAQApp and TransactionApp are Module Federation remotes — loaded on demand, sharing React from the shell's scope. AngularApp is the entire legacy application, componentised as a React component that mounts and unmounts the Angular runtime.
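One detail worth noting: TypeScript cannot resolve federation specifiers like 'faq/App' at compile time, so the shell needs ambient module declarations for them. A minimal sketch — the default-export shape is what React.lazy requires:

```typescript
// remotes.d.ts — ambient declarations for the Module Federation remotes
declare module 'faq/App' {
  import type { ComponentType } from 'react'
  const App: ComponentType
  export default App
}

declare module 'transactions/App' {
  import type { ComponentType } from 'react'
  const App: ComponentType
  export default App
}
```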

Wrapping Angular as a single-spa Application

Angular's router owns the browser history. When React Router renders <AngularApp /> as the fallback, Angular's router and React Router both want control of window.history. Left unchecked, they fight.

The solution was to monkey-patch history.pushState and history.replaceState inside the Angular wrapper — intercepting navigation events and forwarding them to single-spa's activity function logic so the correct application stays active:

src/AngularApp.tsx
import { useEffect, useRef } from 'react'
import { start, registerApplication, getAppStatus } from 'single-spa'
 
export function AngularApp() {
  const containerRef = useRef<HTMLDivElement>(null)
 
  useEffect(() => {
    // Intercept Angular's history mutations so single-spa stays in sync
    const originalPushState = history.pushState.bind(history)
    const originalReplaceState = history.replaceState.bind(history)
 
    history.pushState = (...args) => {
      originalPushState(...args)
      window.dispatchEvent(new PopStateEvent('popstate', { state: args[0] }))
    }
    history.replaceState = (...args) => {
      originalReplaceState(...args)
      window.dispatchEvent(new PopStateEvent('popstate', { state: args[0] }))
    }
 
    // Register and start the Angular app if not already registered
    if (!getAppStatus('@platform/angular-shell')) {
      registerApplication({
        name: '@platform/angular-shell',
        app: () => import('@platform/angular-shell'),
        activeWhen: () => true, // always active when this component is mounted
        customProps: { domElement: containerRef.current },
      })
      start()
    }
 
    return () => {
      // Restore original history methods on unmount
      history.pushState = originalPushState
      history.replaceState = originalReplaceState
    }
  }, [])
 
  return <div ref={containerRef} id="angular-shell-root" />
}

The single-spa-angular adapter handles the Angular lifecycle (bootstrap → mount → unmount) and integrates with Angular's own router zone. By setting activeWhen: () => true on the registered Angular app, Angular's router retains full control over URL changes within the fallback subtree, while React Router controls which subtree is active.

angular-shell/src/main.single-spa.ts
import { NgZone } from '@angular/core'
import { Router } from '@angular/router'
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'
import { singleSpaAngular } from 'single-spa-angular'
import { AppModule } from './app/app.module'

const lifecycles = singleSpaAngular({
  bootstrapFunction: () =>
    platformBrowserDynamic().bootstrapModule(AppModule),
  template: '<app-root />',
  Router,
  NgZone,
})

export const { bootstrap, mount, unmount } = lifecycles
💡 The monkey-patch approach is a known pattern in single-spa deployments and is documented in its migration guides. The key invariant to maintain: history.pushState must always fire a popstate event after the state change, because single-spa's activity functions re-evaluate on popstate. Without this, navigating within Angular would not trigger React Router to check whether a different micro-frontend should become active.

With v2 in place, new verticals could be written entirely in React and wired into the top-level router. The Angular app shrank gradually as verticals migrated out. The routing table became a living record of the migration's progress.

Emergence: Shared Infrastructure

As more React islands came online, two things became unavoidable.

A shared design system. Each team had been building its own buttons, its own form inputs, its own typography. When the second team shipped a slightly different shade of blue, the conversation about a shared component library was finally unavoidable. The library started small — tokens, a handful of primitives — and grew into the backbone of all visual consistency across the platform.
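A token package's first cut might look like the sketch below — the names and values are invented for illustration, not the platform's real palette:

```typescript
// Illustrative design tokens — one canonical blue, one spacing scale,
// so "a slightly different shade of blue" can't recur per team
const tokens = {
  color: {
    primary: '#3B52C4',
    surface: '#FFFFFF',
    textPrimary: '#1A1A2E',
  },
  space: { xs: 4, sm: 8, md: 16, lg: 24 },
  radius: { sm: 4, md: 8 },
} as const

// Components type against the tokens, so a removed token is a compile error
type Tokens = typeof tokens
type SpaceKey = keyof Tokens['space']

function spacing(key: SpaceKey): string {
  return `${tokens.space[key]}px`
}
```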

Shared tooling and a monorepo. Independent build pipelines for each island meant independent node_modules, independent CI configs, independent versioning. Updating a shared dependency (say, a security patch in a utility library) required touching every repo individually. The path forward was a monorepo — a single repository containing all React apps and packages, managed with Yarn workspaces.

package.json (root)
{
  "private": true,
  "workspaces": [
    "packages/*",
    "apps/*"
  ],
  "scripts": {
    "build": "yarn workspaces run build",
    "test": "yarn workspaces run test"
  }
}

With Yarn workspaces, packages could depend on each other using ordinary package.json dependencies and the workspace protocol ("@platform/design-system": "workspace:*"). TypeScript path aliases unified the import story across the whole project:

tsconfig.base.json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@platform/design-system": ["packages/design-system/src/index.ts"],
      "@platform/design-system/*": ["packages/design-system/src/*"],
      "@platform/graphql-client": ["packages/graphql-client/src/index.ts"],
      "@platform/utils": ["packages/utils/src/index.ts"]
    }
  }
}

Each app-level tsconfig.json extended this base, getting the aliases for free. The same aliases needed to work in Jest — TypeScript's path resolution doesn't apply at runtime, so Jest needs its own mapping:

jest.config.base.js
/** @type {import('jest').Config} */
module.exports = {
  moduleNameMapper: {
    // Mirror tsconfig paths so Jest resolves workspace packages
    '^@platform/design-system/(.*)$':
      '<rootDir>/../../packages/design-system/src/$1',
    '^@platform/design-system$':
      '<rootDir>/../../packages/design-system/src/index.ts',
    '^@platform/graphql-client$':
      '<rootDir>/../../packages/graphql-client/src/index.ts',
    '^@platform/utils$':
      '<rootDir>/../../packages/utils/src/index.ts',
  },
}

With this in place, Jest could construct the full dependency graph across the workspace. When a PR touched packages/design-system, the CI pipeline knew exactly which apps and packages imported it — and could run only the affected test suites.

Growing Pains

The monorepo solved the coordination problem between packages. It created a new one between people.

Team Dependencies Become Visible

In the old world, teams were isolated. A team that broke something in their own repo could fix it on their own timeline. In the monorepo, a shared package is a shared contract. When a team modified the design-system component API without a deprecation cycle, every consumer's CI pipeline went red simultaneously, and the fix had to happen in consumer code — owned by a different team.

This wasn't a technical failure. It was an organisational one that the monorepo made undeniable.

The team dependencies had always existed. The CI now surfaced them.

The response was a shift in ways of working:

  • Scrum of Scrums — a weekly cross-team sync where dependency blockers were explicitly tracked and unblocked
  • Shared roadmapping — a quarterly planning session where all teams mapped their work against a shared timeline, making cross-team dependencies visible before they became merge conflicts
  • 15% allocation for foundational work — each team committed 15% of every sprint to cross-team or infrastructure work, creating a predictable budget for the maintenance that a shared codebase demands

The 15% figure was not arbitrary — it was a negotiated truce. The stronger argument was for a dedicated platform team or a ring-fenced mission pot: a budget of engineering capacity specifically for foundational work, independent of any feature team's roadmap. That model would have provided more predictable throughput on infrastructure and removed the per-sprint negotiation entirely. Stakeholder buy-in wasn't achievable at the time, so 15% per team became the fallback. It was the minimum that kept the shared infrastructure healthy without starving feature delivery. Below it, the shared packages degraded. Above it, teams felt they were doing maintenance at the expense of their own roadmaps.

Building Everything, Every Time

The second crisis was speed. In the early monorepo every commit triggered a build of every package and app, regardless of what had changed. With a dozen apps and growing, CI times ballooned to 45 minutes. The pipeline had become a bottleneck.

The fix had two parts.

Change detection. Because TypeScript path aliases resolved at the source level and Jest's moduleNameMapper mirrored them, the CI scripts could statically trace the dependency graph and determine which packages were affected by a given diff. Only affected packages were built and tested.
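The traversal itself is simple once the workspace graph is in hand. A sketch, with an invented graph shape (package name → internal dependencies) standing in for what the CI scripts extracted from the workspace package.json files:

```typescript
type DepGraph = Record<string, string[]> // package -> internal dependencies

// A changed package affects itself plus every transitive dependent
function affectedPackages(graph: DepGraph, changed: string[]): Set<string> {
  // Invert the graph: package -> packages that depend on it
  const dependents = new Map<string, string[]>()
  for (const [pkg, deps] of Object.entries(graph)) {
    for (const dep of deps) {
      const list = dependents.get(dep) ?? []
      list.push(pkg)
      dependents.set(dep, list)
    }
  }

  // Walk upward from every changed package
  const affected = new Set(changed)
  const queue = [...changed]
  while (queue.length > 0) {
    const current = queue.pop()!
    for (const dependent of dependents.get(current) ?? []) {
      if (!affected.has(dependent)) {
        affected.add(dependent)
        queue.push(dependent)
      }
    }
  }
  return affected
}
```

A diff touching only a leaf app affects just that app; a diff touching the design system fans out to every consumer — exactly the asymmetry the next section's parallelization had to absorb.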

Build parallelization. Even with change detection, some PRs — touching a foundational package — would trigger large portions of the graph. Those builds needed to run in parallel. GitHub Actions matrix builds offered a static mechanism for this: define N workers upfront and assign packages to each worker.

The assignment problem — distribute N packages across K workers to minimise total wall-clock time — is a variant of the multiprocessor scheduling (bin-packing) problem. Finding the exact optimum is NP-hard, but the Longest Processing Time (LPT) greedy heuristic guarantees a makespan within a factor of 4/3 of optimal (more precisely, 4/3 − 1/(3K) for K workers):

scripts/pack-builds.js
/**
 * Distribute packages across `workerCount` workers using the LPT heuristic
 * to minimise the makespan (total wall-clock time of the slowest worker).
 *
 * @param {{ name: string; estimatedMs: number }[]} packages
 * @param {number} workerCount
 * @returns {string[][]}  One array of package names per worker
 */
function packBuilds(packages, workerCount) {
  // Sort descending by estimated build time (LPT heuristic)
  const sorted = [...packages].sort((a, b) => b.estimatedMs - a.estimatedMs)
 
  const workers = Array.from({ length: workerCount }, () => ({
    load: 0,
    packages: [],
  }))
 
  for (const pkg of sorted) {
    // Always assign to the currently least-loaded worker
    const worker = workers.reduce((min, w) => (w.load < min.load ? w : min))
    worker.packages.push(pkg.name)
    worker.load += pkg.estimatedMs
  }
 
  return workers.map((w) => w.packages)
}
 
// Build times are sampled from recent CI runs and committed alongside the script
const buildTimes = require('./build-times.json')
const chunks = packBuilds(buildTimes, Number(process.env.WORKER_COUNT ?? 4))
 
// Write the assignment for the current worker index
const workerIndex = Number(process.argv[2])
process.stdout.write(chunks[workerIndex].join('\n'))

.github/workflows/ci.yml
jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        worker: [0, 1, 2, 3]
    runs-on: ubuntu-latest
    env:
      WORKER_COUNT: 4
    steps:
      - uses: actions/checkout@v4
 
      - name: Determine packages for this worker
        id: packages
        run: |
          pkgs=$(node scripts/pack-builds.js ${{ matrix.worker }})
          {
            echo "list<<EOF"
            echo "$pkgs"
            echo "EOF"
          } >> "$GITHUB_OUTPUT"
 
      - name: Build and test
        run: |
          echo "${{ steps.packages.outputs.list }}" | xargs -I{} yarn workspace {} run build
          echo "${{ steps.packages.outputs.list }}" | xargs -I{} yarn workspace {} run test

The matrix-based approach has a fundamental limitation: the number of workers must be declared statically in the workflow file. You cannot dynamically allocate more workers for a large PR without editing the YAML. This is an inherent constraint of how GitHub Actions resolves matrix values at workflow parse time — not at job execution time.

The LPT assignment was also static: build times were sampled from recent CI history and committed to the repo. If a package's build time changed substantially (new tests, a larger bundle) the assignment drifted out of date. Keeping the build-times.json accurate became a minor but real maintenance cost.

The right solution is a job queue: a shared queue of build tasks that any number of workers can pull from dynamically. This eliminates both the static worker count and the stale timing data. No worker sits idle; no worker is overloaded. Tools like Nx implement exactly this — distributing tasks across a fleet of agents backed by a remote cache.
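The pull-based model is easy to see in miniature. The in-process sketch below (names invented) has a fixed number of workers drain a shared queue, so a slow task delays only the worker running it — no upfront assignment, no timing data to keep fresh:

```typescript
// In-process sketch of pull-based scheduling: workers pull the next task
// the moment they finish the previous one.
async function runQueue<T>(
  tasks: T[],
  workerCount: number,
  run: (task: T) => Promise<void>,
): Promise<void> {
  const queue = [...tasks]
  const worker = async () => {
    for (let task = queue.shift(); task !== undefined; task = queue.shift()) {
      await run(task)
    }
  }
  // All workers run concurrently against the same queue
  await Promise.all(Array.from({ length: workerCount }, worker))
}
```

In CI the "queue" is a service rather than an array and the workers are separate agents, but the scheduling property is the same one Nx's distributed task execution provides.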

The LPT/matrix solution was in place for four months. It reduced CI times from 45 minutes to ~12 minutes. When Nx was adopted it brought that down further to ~6 minutes and removed the maintenance overhead entirely.

The Component Library Split

The third friction point was design. Multiple product designers were working across the 26 brands simultaneously, and they were making different decisions: slightly different button radii, different spacing scales, different component compositions for what was functionally the same UI pattern.

When these decisions landed in the shared component library, they collided. Which variant was canonical? Whose design was right?

The pragmatic answer, after several tense design reviews, was: agree to disagree — for now.

The component library was extended to support multiple explicit variants, named after the product context they originated in. Both variants coexisted in the codebase and both were shippable. This was not ideal from a design systems perspective, but it unblocked feature delivery while a longer conversation about design convergence played out in parallel.

packages/design-system/src/Button/index.tsx (simplified)
type ButtonVariant = 'default' | 'compact'
 
interface ButtonProps {
  variant?: ButtonVariant
  label: string
  onClick?: () => void
}
 
export function Button({ variant = 'default', label, onClick }: ButtonProps) {
  const styles: Record<ButtonVariant, string> = {
    default: 'px-6 py-3 rounded-lg text-base font-semibold',
    compact: 'px-4 py-2 rounded-md text-sm font-medium',
  }
  return (
    <button className={`bg-indigo-600 text-white ${styles[variant]}`} onClick={onClick}>
      {label}
    </button>
  )
}

The deliberate acceptance of temporary divergence bought enough goodwill to hold the shared library together. Six months later the design teams ran a convergence sprint, reviewed every variant pair, and collapsed most of them into a single canonical form — with production usage data as the receipts to back each decision.

Lessons

The BFF is load-bearing. Without the GraphQL layer absorbing backend churn, the frontend migration would have been far slower. The ability to mock the BFF in development and protect consumers from upstream changes was worth the cost of the extra layer.

Visibility accelerates alignment. The monorepo did not create the team dependencies — it surfaced ones that already existed. Making them visible in CI was uncomfortable but productive. Every organisation doing this work should expect a period of friction as the true dependency graph becomes legible.

Org changes are a prerequisite, not a follow-up. Scrum of scrums and the 15% allocation were not responses to the monorepo's success — they were conditions for it. Technical unification without organisational alignment produces a monorepo full of teams still acting as silos, which is worse than separate repos.

Static solutions buy time; dynamic ones scale. The LPT/matrix build distribution was correct to ship quickly and replace deliberately. It solved the immediate problem and was simple enough to reason about. Recognising when a temporary solution has served its purpose and committing to replace it is a skill that most teams undervalue.

Design systems are also team agreements. The component library split was a design failure that was resolved by a team process, not a technical one. The code to support variants was trivial; the willingness to name the disagreement and set a timeline for resolving it was not.
