The Fourier Transform: From Sines to Signals

A visual and mathematical journey through one of the most beautiful ideas in all of mathematics — decomposing any signal into its frequency components.


The Fourier transform is everywhere — in your phone's WiFi, in JPEG compression, in quantum mechanics. At its core it answers a deceptively simple question: what frequencies make up this signal?

The Core Idea

Any periodic signal f(t) can be written as a sum of sines and cosines:

f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\left(\frac{2\pi n t}{T}\right) + b_n \sin\left(\frac{2\pi n t}{T}\right) \right]

The coefficients a_n and b_n tell us how much of each frequency is present; computing them is the heart of Fourier analysis.
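To make the coefficient formulas concrete, here is a quick numerical sketch (the test signal and helper names are my own): for f(t) = 3 cos(2πt) + 2 sin(4πt) with period T = 1, averaging against each basis function should recover a_1 = 3 and b_2 = 2, with every other coefficient vanishing.

```python
import numpy as np

# One period (T = 1) of a signal with known coefficients: a_1 = 3, b_2 = 2
T = 1.0
t = np.linspace(0, T, 10000, endpoint=False)
f = 3 * np.cos(2 * np.pi * t) + 2 * np.sin(4 * np.pi * t)

def a_n(n):
    # Projection onto cos(2*pi*n*t/T): twice the mean over one full period
    return 2 * np.mean(f * np.cos(2 * np.pi * n * t / T))

def b_n(n):
    # Same projection, onto the sine basis function
    return 2 * np.mean(f * np.sin(2 * np.pi * n * t / T))

print(a_n(1), b_n(2))  # ≈ 3.0 and 2.0; all other coefficients ≈ 0
```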

The complex exponential form is more elegant:

f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{2\pi i n t / T}, \qquad c_n = \frac{1}{T} \int_0^T f(t)\, e^{-2\pi i n t / T}\, dt

Interactive: Fourier Series Approximation

The pink wave below is a square wave — famously impossible to represent with a finite number of sines. Watch what happens as we add more harmonics:

Square wave via Fourier series (harmonics 1, 3, 5)

The ripples near the discontinuities are called the Gibbs phenomenon. Adding more harmonics never removes them: the overshoot approaches about 9% of the jump height and simply squeezes closer to the discontinuity.
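The overshoot is easy to check numerically. A small sketch (my own helper, using the standard odd-harmonic series for a ±1 square wave): even with hundreds of terms, the partial-sum peak settles near 1.18, roughly 9% of the jump of 2 above the true value of 1.

```python
import numpy as np

def square_partial_sum(t, n_terms):
    """Partial Fourier series of a ±1 square wave: odd harmonics k, weight 4/(pi*k)."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_terms, 2):
        s += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)
    return s

t = np.linspace(0, 1, 20001)
peak = square_partial_sum(t, 200).max()   # ≈ 1.18: the overshoot never dies out
```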

The Continuous Fourier Transform

Moving from discrete harmonics to a continuous spectrum, the Fourier transform of f is:

\hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \xi t}\, dt

And the inverse recovers f from its spectrum:

f(t) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i \xi t}\, d\xi

This is profound: you can freely move between the time domain and frequency domain.
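Numerically, the same round trip holds for the discrete transform. A quick sketch with NumPy's FFT pair recovers the original samples to floating-point precision:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # arbitrary "time-domain" samples

spectrum = np.fft.fft(x)              # forward transform
x_back = np.fft.ifft(spectrum)       # inverse transform

assert np.allclose(x_back.real, x)    # recovered up to rounding error
assert np.allclose(x_back.imag, 0.0)  # imaginary residue is numerical noise
```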

Computing it in Python

The standard algorithm is the Fast Fourier Transform (FFT), which runs in O(n log n) instead of the naive O(n^2).
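For contrast, the naive O(n^2) approach just evaluates the DFT definition as a matrix product (a sketch; `naive_dft` is an illustrative name, not a library routine):

```python
import numpy as np

def naive_dft(x):
    """Direct O(n^2) evaluation of X_k = sum_j x_j * exp(-2*pi*i*j*k/n)."""
    n = len(x)
    j = np.arange(n)
    M = np.exp(-2j * np.pi * np.outer(j, j) / n)   # n x n DFT matrix
    return M @ x

x = np.random.randn(64)
# Same answer as the FFT, just far slower for large n
assert np.allclose(naive_dft(x), np.fft.fft(x))
```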

fourier.py
import numpy as np
import matplotlib.pyplot as plt
 
# Generate a noisy signal
t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
signal += 0.2 * np.random.randn(len(t))   # add noise
 
# Compute the FFT
freqs = np.fft.fftfreq(len(t), d=t[1] - t[0])
spectrum = np.abs(np.fft.fft(signal))
 
# Plot only positive frequencies
pos = freqs > 0
plt.plot(freqs[pos], spectrum[pos])
plt.xlabel("Frequency (Hz)")
plt.ylabel("|X(f)|")
plt.title("Frequency Spectrum")
plt.show()

The two peaks at 50 Hz and 120 Hz pop out clearly, even though the signal looked like noise in the time domain.
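The peaks can be pulled out programmatically too. A sketch on the same two-tone signal (noise omitted so the comparison is exact), using the real-input `rfft` variant, which returns only the non-negative frequencies:

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])     # 0..500 Hz for real input
spectrum = np.abs(np.fft.rfft(signal))

peaks = np.sort(freqs[np.argsort(spectrum)[-2:]])  # two strongest bins
assert np.allclose(peaks, [50.0, 120.0])
```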

The Convolution Theorem

One of the Fourier transform's killer applications is convolution. In the time domain:

(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau

This is expensive — O(n^2). But in the frequency domain it becomes pointwise multiplication:

\widehat{f * g} = \hat{f} \cdot \hat{g}

This is why image blurring, audio filtering, and polynomial multiplication can all be done in O(n log n).

convolution.py
import numpy as np

def fast_convolve(f, g):
    """Convolve two signals using FFT — O(n log n)."""
    n = len(f) + len(g) - 1          # pad to full output length to avoid circular wraparound
    F = np.fft.rfft(f, n=n)          # rfft: real-input FFT, half the work
    G = np.fft.rfft(g, n=n)
    return np.fft.irfft(F * G, n=n)
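A quick sanity check (self-contained, inlining the same FFT recipe): the result should match NumPy's direct `np.convolve`.

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])

# Zero-pad both signals to the full output length, multiply spectra, invert
n = len(f) + len(g) - 1
via_fft = np.fft.irfft(np.fft.rfft(f, n=n) * np.fft.rfft(g, n=n), n=n)

assert np.allclose(via_fft, np.convolve(f, g))   # [0, 1, 2.5, 4, 1.5]
```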
💡 Try experimenting with the sine wave below — change the frequency and amplitude to see how individual components look before they're summed together.

Single frequency component

Further Reading

  • Stein & Shakarchi, Fourier Analysis: An Introduction
  • 3Blue1Brown's visual explanation on YouTube
  • Cooley & Tukey's 1965 paper introducing the FFT algorithm
