React vs SolidJS: Dynamic Graphs, Static Graphs, and the Price of Flexibility

React and SolidJS share JSX syntax but are built on fundamentally different computational models. Understanding why illuminates a much older tension in functional programming — and an eerily similar split that once divided PyTorch from TensorFlow.


React and SolidJS look almost identical on the surface. Both use JSX. Both compose UIs from functions. Both have a story about reactivity. Yet the mental model underneath each one is so different that the same intuitions that make you productive in React will actively mislead you in SolidJS — and vice versa.

The split is not accidental. It maps onto a fundamental tension in how we structure computation, one that surfaces in functional algebra, in deep-learning frameworks, and now in the front-end.


The algebraic analogy: monoids vs applicatives

Before touching a single component, it helps to see the same tension in a purer setting.

A monoid is a type with a combining operation and a neutral element. Arrays in JavaScript are the canonical example:

[1, 2].concat([3, 4]) // [1, 2, 3, 4]
[].concat(xs)         // equal to xs — the empty array is the identity

You can concatenate any two arrays, in any order of evaluation, and the result is always consistent. The structure is flat and closed — you can analyse it, optimise it, even parallelise it, without knowing anything about what will happen at runtime. A static analyser or bundler can reason about it completely.
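That freedom of grouping is exactly what the monoid laws buy. Because concatenation is associative and `[]` is neutral, a reducer can combine chunks left-to-right or pairwise in a tree shape, which is the grouping a parallel runtime would pick, and the result is identical:

```javascript
// Associativity in action: any grouping of concat yields the same array.
const chunks = [[1, 2], [3], [], [4, 5]];

// Sequential left-to-right fold, starting from the identity element:
const leftFold = chunks.reduce((acc, xs) => acc.concat(xs), []);

// Tree-shaped, pairwise combination (the grouping a parallel reduce could use):
const treeFold = chunks[0].concat(chunks[1])
  .concat(chunks[2].concat(chunks[3]));

console.log(leftFold); // [1, 2, 3, 4, 5]
console.log(treeFold); // [1, 2, 3, 4, 5]
```

No runtime values had to be inspected to pick the grouping; that is what "flat and closed" means in practice.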

An applicative computation is something richer. Think of Promise.all:

const result = await Promise.all([fetchUser(id), fetchPosts(id)]);

The shape — which promises to wait for — is declared upfront. Both fetches are independent; neither's existence depends on the other's value. A scheduler can look at the whole list and optimise freely.

A monadic computation, by contrast, is like chained .then():

const user = await fetchUser(id);
const posts = await fetchPosts(user.preferredFeedType); // shape depends on value

Now the second operation cannot even be known until the first one resolves. The structure of the computation depends on its runtime values. A static analyser cannot determine what fetchPosts will be called with — or even whether it will be called — without executing the code.

This is the same axis that separates React from SolidJS.


PyTorch vs TensorFlow: the same split, one decade earlier

The deep-learning world lived through a very public version of this debate.

TensorFlow 1.x was a define-then-run framework. You first constructed a static computation graph — nodes for matrix multiplications, activations, loss functions — and only then fed data through it. That graph was an explicit data structure. TensorFlow could inspect it, optimise it (fusing ops, pruning dead branches), serialise it, and deploy it to production without a Python runtime. The price was ergonomics: debugging required special tooling, control flow had to be expressed in graph primitives (tf.while_loop, tf.cond), and the feedback loop during research was painful.

PyTorch chose define-by-run. The graph is not declared upfront; it is traced dynamically as tensors flow through Python code. if and for are just Python if and for. Debugging is printing. The graph only exists as a consequence of execution, not as a prior artefact. PyTorch became the dominant research framework almost immediately. TensorFlow eventually added eager mode (and later tf.function as a bridge), largely conceding the ergonomics argument.

The tradeoff:

                    TF 1.x (static)         PyTorch (dynamic)
Debug experience    Painful                 Natural
Control flow        Graph primitives        Host language
Optimisation        Aggressive              Limited (without tracing)
Deployment          Self-contained graph    Needs runtime

React: the dynamic graph

React's rendering model is fundamentally dynamic. The component tree is rebuilt from scratch on every render — conceptually, at least. useState and useReducer hold values; when they change, React re-calls the component function, diffs the resulting element tree against the previous one, and commits the delta.

This means the shape of the UI can change arbitrarily between renders:

function Feed({ user }) {
  if (!user) return <LoginPrompt />;
  return user.isPremium ? <PremiumFeed /> : <BasicFeed />;
}

React does not know, before calling Feed, which branch will be taken. The tree is the output of execution, not its input. Like PyTorch, the graph is a consequence of running the code, not a prior constraint on it.

Hooks are monadic

The hooks API makes this explicit. useState initialises a cell; the value it returns could influence which hooks are called next — except React forbids this (the Rules of Hooks) precisely because it would make the dependency graph unanalysable:

// ❌ Forbidden — hook call order must not depend on runtime values
const [flag] = useState(false);
if (flag) {
  const [x] = useState(0); // illegal
}

React needs hooks to be called in the same order on every render so it can correlate them with the underlying Fiber slots by position. The sequencing (useState, then useEffect, then useMemo, ...) is a fixed chain whose shape React controls — but whose values can feed into one another like a chain of .then() calls, not a static Promise.all.
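The positional correlation can be shown with a toy dispatcher. This is a deliberately minimal sketch, not React's implementation: state lives in a slot array, and each useState call just advances a cursor, which is why the call order must never change between renders:

```javascript
// Toy hook dispatcher: state persists in positional slots across renders.
const slots = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;                      // this hook's position is its identity
  if (slots.length <= i) slots[i] = initial;
  const setState = (v) => { slots[i] = v; };
  return [slots[i], setState];
}

function render(component) {
  cursor = 0;                              // every render replays hooks in order
  return component();
}

function Counter() {
  const [name] = useState('a');            // slot 0
  const [count, setCount] = useState(0);   // slot 1
  return { name, count, setCount };
}

let ui = render(Counter);
ui.setCount(5);
ui = render(Counter);
console.log(ui.count); // 5 — slot 1 still lines up with the second hook call
```

If a hook call were skipped conditionally, every later hook would read the wrong slot, which is exactly what the Rules of Hooks forbid.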

The dynamic model buys React extraordinary flexibility:

  • Components can conditionally include or exclude entire sub-trees.
  • Lazy loading, Suspense boundaries, and concurrent features all rely on the ability to suspend, discard, or replay renders without a static graph to contradict them.
  • Third-party libraries can wrap anything: context, portals, refs, error boundaries all compose freely.

The cost is performance. Re-rendering the whole subtree on every state change is expensive. React's answer is an elaborate system of escape hatches — useMemo, useCallback, React.memo — that let you manually annotate what should not re-run. This is not a bug; it is the inherent tax of operating without a static dependency graph.
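The shape of that escape hatch is worth seeing concretely. Here is a minimal sketch of the idea behind useMemo, using a hypothetical createMemoCell helper rather than React's actual machinery: recompute only when the dependency array changes, otherwise return the cached value:

```javascript
// Hypothetical helper sketching useMemo's caching contract (not React's code).
function createMemoCell() {
  let lastDeps = null;
  let lastValue;
  return (compute, deps) => {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = compute();   // dependencies changed: pay for the computation
      lastDeps = deps;
    }
    return lastValue;          // otherwise reuse the cached result
  };
}

const memo = createMemoCell();
let computations = 0;
const expensive = (n) => { computations++; return n * n; };

memo(() => expensive(4), [4]);                 // computes
memo(() => expensive(4), [4]);                 // deps unchanged: skipped
const latest = memo(() => expensive(5), [5]);  // deps changed: recomputes

console.log(computations, latest); // 2 25
```

The point is that the programmer, not the framework, declares the dependency list — the annotation work a static graph would have made unnecessary.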


SolidJS: the static reactive graph

SolidJS makes the opposite bet. Its compilation model is closer to TensorFlow 1.x: the reactive graph is built once, at initialisation time, and never torn down.

function Counter() {
  const [count, setCount] = createSignal(0);
  return <button onClick={() => setCount(c => c + 1)}>{count()}</button>;
}

The JSX here compiles to direct DOM operations. count() inside the template is wrapped in a createEffect (or a fine-grained derived computation) that subscribes precisely to the signal it reads. When count changes, only that specific DOM text node is updated — no diffing, no re-render of the component function, no reconciliation.

The component function runs exactly once. After that, it is the reactive graph — a set of signals, effects, and memos connected by tracked dependencies — that drives all updates.
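The whole mechanism fits in a few lines of plain JavaScript. This is a minimal sketch of fine-grained reactivity, illustrating the shape of the idea rather than Solid's actual internals: reading a signal inside an effect subscribes that effect, and writing the signal notifies exactly those subscribers:

```javascript
// Minimal signal/effect core (a sketch, not solid-js itself).
let currentObserver = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    if (currentObserver) subscribers.add(currentObserver); // track the reader
    return value;
  };
  const write = (next) => {
    value = next;
    for (const fn of [...subscribers]) fn(); // re-run precisely these observers
  };
  return [read, write];
}

function createEffect(fn) {
  const run = () => {
    currentObserver = run;  // any signal read inside fn subscribes this effect
    fn();
    currentObserver = null;
  };
  run();                    // run once to establish the subscriptions
}

// The "component" body runs once; afterwards only the effect re-executes.
const [count, setCount] = createSignal(0);
let runs = 0;
createEffect(() => { runs++; count(); });

setCount(1);
setCount(2);
console.log(runs); // 3: one initial run plus one per write
```

Note what never happens here: no function is re-called wholesale, no trees are diffed. The graph built during the first run is the update mechanism.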

Static dependencies, not dynamic chains

The dependency tracking in SolidJS is closer to the Promise.all model: "this DOM node depends on these signals, and the set of signals is known at setup time." The topology of the graph is fixed — like a Promise.all whose list of promises is declared once and never changes. You cannot do this in SolidJS:

// ❌ This does not work as you'd expect in SolidJS
function Broken({ flag }) {
  if (flag()) {
    const [x, setX] = createSignal(0); // only created during the initial run
  }
}

createSignal is not a hook that runs per-render — it creates a node in the reactive graph. Conditionally creating nodes after the component has initialised is not part of the model. The structure must be derivable from the first execution.


Dynamism vs static graph: the compensation

React's dynamic model is expressive but requires the programmer to manage performance manually. SolidJS's static model is fast by default but trades away some of React's compositional freedom.

SolidJS compensates with a considerably larger primitive API:

Concern                  React                            SolidJS
State                    useState                         createSignal
Derived state            useMemo                          createMemo
Side effects             useEffect                        createEffect, onMount, onCleanup
Conditional rendering    JSX && / ternary                 <Show> component
List rendering           Array.map                        <For>, <Index>
Async / loading states   Suspense + use                   createResource + <Suspense>
Context                  useContext                       useContext (same)
Error boundaries         class with componentDidCatch     <ErrorBoundary> component
Dynamic component        JSX expression                   <Dynamic>
Portal                   ReactDOM.createPortal            <Portal>

In React, Array.map is idiomatic for lists because every render re-runs anyway — the mapping is just JavaScript. In SolidJS, you must use <For> or <Index> because the component function does not re-run; SolidJS needs a primitive it controls in order to diff the list reactively and update only the changed items.
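The reuse that <For> performs can be sketched in plain JavaScript. The createKeyedMapper helper below is hypothetical, just an illustration of keyed diffing: items seen before keep their previously created node, and only new items pay the mapping cost:

```javascript
// Hypothetical sketch of keyed list reuse (the idea behind <For>, not its code).
function createKeyedMapper(mapFn) {
  let cache = new Map(); // item -> node created for it last time
  return (items) => {
    const next = new Map();
    const result = items.map((item) => {
      // Reuse the node from the previous pass when the item is unchanged.
      const node = cache.has(item) ? cache.get(item) : mapFn(item);
      next.set(item, node);
      return node;
    });
    cache = next; // nodes for removed items are dropped here
    return result;
  };
}

let created = 0;
const mapItems = createKeyedMapper((item) => { created++; return { item }; });

const first = mapItems(['x', 'y']);
const second = mapItems(['x', 'y', 'z']); // only 'z' creates a new node

console.log(created);              // 3, not 5
console.log(first[0] === second[0]); // true — the 'x' node was reused
```

React gets an equivalent effect from keys during reconciliation; SolidJS needs it as an explicit primitive because nothing re-runs the mapping for it.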

Similarly, <Show> exists because an if inside JSX would run once and never update — you need a reactive conditional that re-evaluates when its condition signal changes.

Every place React uses JavaScript's native control flow and relies on re-rendering to update it, SolidJS must provide a dedicated reactive primitive. This is not a design flaw; it is the direct consequence of the static-graph model.

💡 If you find yourself reaching for <Dynamic component={...}>, you are at the boundary where SolidJS's static graph requires an explicit escape hatch for something React handles with a plain variable.


Which model fits which problem?

The choice mirrors the PyTorch/TensorFlow split in practice:

Reach for React when:

  • The component tree is highly dynamic — deep conditional branches, runtime composition of component types, plugin architectures.
  • You need the full power of the ecosystem: Server Components, Suspense for data, concurrent rendering, the enormous library tail.
  • Your team is already fluent in React's mental model; the memoisation overhead is manageable.

Reach for SolidJS when:

  • Raw rendering performance matters and you want the framework to handle it, not you.
  • The reactive graph is naturally static — dashboards, data-driven UIs where structure is stable and only values change.
  • You are comfortable learning a larger set of primitives in exchange for fewer footguns at the performance boundary.

The deeper pattern

The tension between dynamic and static computation graphs is not a front-end quirk. It is a recurring trade-off across the entire stack: eager vs lazy evaluation, interpreted vs compiled, chained .then() vs Promise.all, define-by-run vs define-then-run.

React chose the dynamic end because it optimises for the thing that is hardest to recover once you have given it up: expressive freedom. SolidJS chose the static end because it optimises for what the static end always buys: analysability and performance.

Neither is wrong. They are different answers to the same question that functional programmers, database query planners, and deep-learning researchers have been arguing about for decades — and probably will be for decades more.
