Spectral Graph Theory: Laplacians, Eigenvalues & the Cheeger Inequality
The graph Laplacian encodes connectivity through its eigenvalues and eigenvectors; the Cheeger inequality connects the second smallest eigenvalue to how well-connected the graph is. These spectral properties power clustering, random walk analysis, and the theoretical foundations of graph neural networks.
Concepts
Graph Laplacian $L = D - A$. The Fiedler value $\lambda_2$ (second smallest eigenvalue) measures algebraic connectivity: $\lambda_2 = 0$ iff the graph is disconnected.
The adjacency matrix tells you which vertices are connected. The Laplacian tells you how information flows through the graph — and its eigenvalues tell you how fast it flows and where the bottlenecks are. Every spectral graph method, from clustering to GCN to random walk analysis, is reading properties of this single matrix.
The Graph Laplacian
For an undirected graph with adjacency matrix $A$ and degree matrix $D = \mathrm{diag}(d_1, \dots, d_n)$, the combinatorial Laplacian is:

$$L = D - A$$
$L$ is symmetric and positive semidefinite. For any vector $x \in \mathbb{R}^n$:

$$x^\top L x = \sum_{(i,j) \in E} (x_i - x_j)^2$$
This quadratic form measures how much $x$ varies across edges. The constant vector $\mathbf{1}$ satisfies $L\mathbf{1} = 0$ (since row sums of $L$ are zero), so $0$ is always an eigenvalue with eigenvector $\mathbf{1}$.
The Laplacian had to take this form. If you want an operator on graph signals that is zero on constant functions (no variation across a constant signal) and measures variation by summing over edges (so each edge contributes locally), then $x^\top L x = \sum_{(i,j) \in E} (x_i - x_j)^2$ is the unique symmetric bilinear form with these properties, and it factors as $L = B^\top B$, where $B$ is the edge-vertex incidence matrix.
The full spectrum $0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n$ encodes graph structure. The normalized Laplacian $\mathcal{L} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}$ is preferable when degrees vary widely; its eigenvalues lie in $[0, 2]$ and it removes degree bias.
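These properties are easy to check numerically. A minimal numpy sketch (the 4-cycle-plus-chord graph is an arbitrary toy example, not from the text above):

```python
import numpy as np

# Small undirected graph (a 4-cycle plus one chord), given as an edge list.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

D = np.diag(A.sum(axis=1))
L = D - A                                   # combinatorial Laplacian L = D - A

# Quadratic form: x^T L x equals the sum of squared differences over edges.
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
quad = x @ L @ x
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in edges)
assert np.isclose(quad, edge_sum)

# Constant vector is in the kernel: L @ 1 = 0, so 0 is always an eigenvalue.
assert np.allclose(L @ np.ones(n), 0.0)

# Normalized Laplacian: its eigenvalues lie in [0, 2].
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_norm = D_inv_sqrt @ L @ D_inv_sqrt
eigs = np.linalg.eigvalsh(L_norm)
assert eigs.min() > -1e-10 and eigs.max() < 2 + 1e-10
```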
Algebraic Connectivity and the Fiedler Value
The second eigenvalue $\lambda_2$ of $L$ is the algebraic connectivity or Fiedler value. By the min-max (Courant-Fischer) theorem:

$$\lambda_2 = \min_{x \perp \mathbf{1},\, x \neq 0} \frac{x^\top L x}{x^\top x}$$
Two key facts follow: (1) $\lambda_2 > 0$ if and only if $G$ is connected (a disconnected graph has the indicator vector $\mathbf{1}_C$ as a zero eigenvector for each connected component $C$); (2) $\lambda_2$ is large when the graph has no bottleneck: many edges cross any partition.
The eigenvector corresponding to $\lambda_2$, the Fiedler vector, provides a natural vertex ordering: vertices with similar Fiedler vector values tend to be well-connected to each other. Cutting the graph at the median of the Fiedler vector is a principled bipartition.
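The median cut can be sketched on a toy barbell graph (two triangles joined by one bridge edge; the graph and variable names here are illustrative):

```python
import numpy as np

# Two triangles joined by a single bridge edge: an obvious bottleneck.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

vals, vecs = np.linalg.eigh(L)
assert vals[1] > 1e-10            # lambda_2 > 0: the graph is connected
fiedler = vecs[:, 1]

# Bipartition at the median of the Fiedler vector: the triangles separate.
side = fiedler > np.median(fiedler)
assert side[0] == side[1] == side[2]
assert side[3] == side[4] == side[5]
assert side[0] != side[3]
```

The assertions hold regardless of the eigensolver's arbitrary sign choice, since only the partition matters.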
The Cheeger Inequality
The conductance (or Cheeger constant) of a graph measures the minimum bottleneck ratio over all cuts. For $S \subseteq V$:

$$\phi(S) = \frac{|\partial S|}{\min(\mathrm{vol}(S),\, \mathrm{vol}(V \setminus S))}, \qquad \phi(G) = \min_{\emptyset \neq S \subset V} \phi(S)$$

where $\mathrm{vol}(S) = \sum_{i \in S} d_i$ and $|\partial S|$ counts edges crossing the cut. The Cheeger inequality connects $\phi(G)$ to $\lambda_2$ of $\mathcal{L}$:

$$\frac{\lambda_2}{2} \le \phi(G) \le \sqrt{2 \lambda_2}$$
The lower bound is tight for expander graphs; the upper bound is constructive: a sweep over the Fiedler vector yields a cut $S$ with $\phi(S) \le \sqrt{2\lambda_2}$. This is the discrete analogue of the Cheeger inequality in Riemannian geometry, which bounds the spectral gap of the Laplace-Beltrami operator below by the square of the Cheeger constant of the manifold.
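Both bounds, and the sweep cut itself, can be verified numerically. A sketch on a small path graph, using the normalized Laplacian throughout (the graph size is an arbitrary choice):

```python
import numpy as np

# Sweep cut on a path graph: order vertices by the Fiedler vector of the
# normalized Laplacian, take the best prefix cut, and check that its
# conductance phi obeys lambda_2/2 <= phi <= sqrt(2*lambda_2).
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_norm = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt
vals, vecs = np.linalg.eigh(L_norm)
lam2 = vals[1]

def conductance(S, A, d):
    mask = np.zeros(len(d), dtype=bool)
    mask[np.asarray(S)] = True
    cut = A[mask][:, ~mask].sum()            # edges crossing the cut
    vol = d[mask].sum()
    return cut / min(vol, d.sum() - vol)

order = np.argsort(D_inv_sqrt @ vecs[:, 1])  # sweep order along the Fiedler vector
phi = min(conductance(order[:k], A, d) for k in range(1, n))
assert lam2 / 2 <= phi + 1e-9
assert phi <= np.sqrt(2 * lam2) + 1e-9
```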
Spectral Clustering
Spectral clustering embeds vertices into $\mathbb{R}^k$ using the first $k$ eigenvectors of $L$ (or $\mathcal{L}$), then applies $k$-means in this embedding space. The algorithm is:
- Compute the $k$ smallest eigenvectors $u_1, \dots, u_k$ of $L$ or $\mathcal{L}$.
- Form the matrix $U \in \mathbb{R}^{n \times k}$ with these as columns; row-normalize if using $\mathcal{L}$.
- Cluster the rows of $U$ with $k$-means.
This works because the low eigenvectors of $L$ are smooth with respect to graph structure: the Fiedler vector changes slowly across connected subgraphs. The embedding maps the combinatorial clustering problem into a geometric one where Euclidean $k$-means is effective.
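The three steps above fit in a short sketch, assuming a toy graph of three triangles and deterministic centroid seeding (rows 0, 3, 6) so the $k$-means step is reproducible:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Three triangles joined by two bridge edges; spectral clustering with
# k = 3 should recover the triangles.
blocks = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
edges = [(i, j) for b in blocks for i in b for j in b if i < j]
edges += [(2, 3), (5, 6)]                      # weak bridges between clusters
n = 9
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_norm = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

# Steps 1-2: k smallest eigenvectors as columns, row-normalized.
vals, vecs = np.linalg.eigh(L_norm)
U = vecs[:, :3]
U = U / np.linalg.norm(U, axis=1, keepdims=True)

# Step 3: k-means on the rows (initial centroids given explicitly,
# one row from each intended cluster, to keep the demo deterministic).
_, labels = kmeans2(U, U[[0, 3, 6]], minit='matrix')
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[6] == labels[7] == labels[8]
```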
Random Walks and Mixing Time
The lazy random walk on $G$ stays put with probability $1/2$ and moves to a uniform random neighbor with probability $1/2$. Its transition matrix is:

$$W = \tfrac{1}{2}\left(I + D^{-1} A\right)$$
If $0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n$ are the eigenvalues of $\mathcal{L}$, then the eigenvalues of $W$ are $1 - \lambda_i/2$. The distribution after $t$ steps converges to the stationary distribution $\pi_i = d_i / \sum_j d_j$ (proportional to degree) at rate proportional to $(1 - \lambda_2/2)^t$. The mixing time, the number of steps until the walk is close to stationary, is therefore $O(\lambda_2^{-1} \log n)$. Large $\lambda_2$ (high conductance) means fast mixing.
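Both claims, the eigenvalue relation and degree-proportional convergence, can be checked directly. A numpy sketch on a toy barbell graph:

```python
import numpy as np

# Lazy random walk W = (I + D^{-1} A)/2 on a small graph: check that its
# eigenvalues are 1 - lambda_i/2 (lambda_i from the normalized Laplacian)
# and that powers of W converge to the degree-proportional stationary law.
n = 6
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
W = 0.5 * (np.eye(n) + A / d[:, None])       # row-stochastic lazy walk

D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_norm = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt
lam = np.linalg.eigvalsh(L_norm)

w_eigs = np.sort(np.linalg.eigvals(W).real)
assert np.allclose(w_eigs, np.sort(1 - lam / 2))

# Convergence: start at vertex 0, iterate, compare to pi_i = d_i / sum(d).
p = np.zeros(n)
p[0] = 1.0
for _ in range(200):
    p = p @ W
pi = d / d.sum()
assert np.allclose(p, pi, atol=1e-6)
```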
Worked Example
Computing $\lambda_2$ and finding the Fiedler vector. Take a 5-node path graph: $V = \{1, 2, 3, 4, 5\}$, edges $\{(1,2), (2,3), (3,4), (4,5)\}$. The degree matrix is $D = \mathrm{diag}(1, 2, 2, 2, 1)$ and the adjacency matrix $A$ has 1s at positions $(i, i+1)$ and $(i+1, i)$. The Laplacian $L = D - A$ has diagonal $(1, 2, 2, 2, 1)$ and off-diagonal $-1$s at adjacent pairs.
The eigenvalues of the path Laplacian are $2 - 2\cos(k\pi/5)$ for $k = 0, 1, \dots, 4$. For $k = 1$: $\lambda_2 = 2 - 2\cos(\pi/5) \approx 0.382$. Since $\lambda_2 > 0$, the graph is connected. The Fiedler vector has components proportional to $\cos\big((i - \tfrac{1}{2})\pi/5\big)$ for $i = 1, \dots, 5$, giving values approximately $(0.95, 0.59, 0, -0.59, -0.95)$ after appropriate sign convention: monotone along the path, so the optimal bipartition cuts between vertices 2 and 3 or 3 and 4 (the middle), as expected.
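These closed-form values can be confirmed numerically (vertices are 0-indexed in code):

```python
import numpy as np

# Verify the worked example: 5-node path, eigenvalues 2 - 2cos(k*pi/5),
# and a monotone Fiedler vector.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

vals, vecs = np.linalg.eigh(L)
expected = np.sort([2 - 2 * np.cos(k * np.pi / n) for k in range(n)])
assert np.allclose(vals, expected)
assert np.isclose(vals[1], 0.382, atol=1e-3)   # lambda_2 = 2 - 2cos(pi/5)

fiedler = vecs[:, 1]
fiedler *= np.sign(fiedler[0])                 # fix the arbitrary sign
assert np.all(np.diff(fiedler) < 0)            # strictly monotone along path
```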
Random walk convergence. On this path graph the walk's eigenvalues come from the normalized Laplacian, whose second eigenvalue is $\lambda_2(\mathcal{L}) = 1 - \cos(\pi/4) \approx 0.293$, so the spectral gap of $W$ is $\lambda_2/2 \approx 0.146$. After $t$ steps, the total variation distance to stationarity is bounded by $C\,(1 - \lambda_2/2)^t \approx C\,(0.854)^t$. To reach distance $\epsilon = 0.01$ (ignoring the constant): $t \approx \ln(1/\epsilon)/\ln(1/0.854) \approx 29$ steps. The path graph mixes slowly because it has low conductance (removing one middle edge cuts the volume roughly in half), consistent with small $\lambda_2$.
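A direct simulation confirms the order of magnitude (the starting vertex and the 0.01 threshold are illustrative choices):

```python
import numpy as np

# Mixing on the 5-node path: iterate the lazy walk from vertex 0 and find
# the first step where the total-variation distance to the stationary
# distribution drops below 0.01.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
d = A.sum(axis=1)
W = 0.5 * (np.eye(n) + A / d[:, None])
pi = d / d.sum()

p = np.zeros(n)
p[0] = 1.0
t = 0
while 0.5 * np.abs(p - pi).sum() >= 0.01:
    p = p @ W
    t += 1
print(t)   # a few dozen steps, the same order as the spectral estimate
assert 5 < t < 100
```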
Where Your Intuition Breaks
The Fiedler vector gives the best 2-partition in the sense of minimizing conductance (the Cheeger constant), but conductance and minimum edge cut are different objectives. A graph with one high-degree hub connected to two dense subgraphs will be bisected by the Fiedler vector to separate the hub from one subgraph (because including the hub in either side inflates the volume term in the denominator of $\phi$), not to cut between the two dense subgraphs (the intuitively "correct" clustering). Spectral partitioning is sensitive to vertex degree in exactly this way: the combinatorial Laplacian treats all edges equally, so high-degree vertices dominate. The normalized Laplacian corrects for this by weighting by degree; always use $\mathcal{L}$, not $L$, for clustering on graphs with heterogeneous degree distributions.
Connections
The Cheeger inequality is the bridge between geometry and combinatorics. In Riemannian geometry, Cheeger's theorem bounds the first nonzero eigenvalue of the Laplace-Beltrami operator below by the square of the Cheeger constant of the manifold: $\lambda_1 \ge h(M)^2/4$. The discrete graph version is structurally identical: $\lambda_2 \ge \phi(G)^2/2$. This is not a coincidence: graphs are discretizations of manifolds, and the Laplacian is the natural differential operator on both. This connection inspired the design of graph convolutional networks, which perform graph Fourier analysis using eigenvectors of $\mathcal{L}$ just as classical Fourier analysis uses eigenfunctions of the continuous Laplacian.
Spectral clustering is the origin point for graph neural network design. The graph Fourier transform represents a signal $x$ (one value per vertex) in the eigenbasis of $L$: $\hat{x} = U^\top x$, where $U$ contains eigenvectors as columns. Spectral graph convolution filters this signal: $y = U\, g(\Lambda)\, U^\top x$, where $g$ is a filter function applied to the eigenvalues $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. ChebConv approximates $g$ with Chebyshev polynomials; GCN uses a first-order approximation. Every GNN layer is therefore performing localized spectral filtering on the graph signal; the algebraic connectivity $\lambda_2$ controls the spatial scale of the filter.
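A minimal sketch of spectral filtering, assuming a hypothetical low-pass filter $g(\lambda) = e^{-5\lambda}$ applied to a noisy toy signal:

```python
import numpy as np

# Graph Fourier transform plus a low-pass spectral filter: a noisy signal
# on two loosely joined triangles is smoothed toward being
# piecewise-constant on the clusters.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)               # eigenbasis: graph Fourier modes

rng = np.random.default_rng(1)
x = np.array([1., 1., 1., -1., -1., -1.]) + 0.3 * rng.standard_normal(n)
x_hat = U.T @ x                          # forward graph Fourier transform
g = np.exp(-5.0 * lam)                   # hypothetical low-pass filter g(lambda)
y = U @ (g * x_hat)                      # filtered signal, back in vertex domain

# Filtering removes high-frequency variation: the Dirichlet energy drops.
assert y @ L @ y < x @ L @ x
```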
Full spectral methods require eigendecomposition of $L$, which costs $O(n^3)$ time and is completely impractical for million-node graphs (citation networks, social networks, protein interaction networks). For large graphs, two approximations are standard: Lanczos iteration computes the $k$ smallest eigenvectors in roughly $O(k \cdot |E|)$ time with restarts; Chebyshev polynomial approximation of spectral filters avoids eigendecomposition entirely by computing matrix-vector products directly, each costing $O(|E|)$ time. These scalable approximations are exactly what make GCN and related architectures tractable on large graphs.
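The Lanczos route is what scipy's ARPACK wrapper `eigsh` provides. A sketch on a sparse circulant graph (the graph, the size, and the shift-invert value are arbitrary choices for illustration):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Lanczos iteration (ARPACK, via eigsh) computes a few extreme eigenpairs
# of a sparse Laplacian without the dense O(n^3) decomposition.
# Test graph: a ring on n vertices plus chords to the 7th neighbor.
n, k = 2000, 4
i = np.arange(n)
rows = np.concatenate([i, i])
cols = np.concatenate([(i + 1) % n, (i + 7) % n])
A = sp.coo_matrix((np.ones(2 * n), (rows, cols)), shape=(n, n))
A = A + A.T                              # symmetrize: every vertex has degree 4
d = np.asarray(A.sum(axis=1)).ravel()
L = (sp.diags(d) - A).tocsc()

# Shift-invert around a small negative sigma targets the smallest eigenvalues.
vals, vecs = eigsh(L, k=k, sigma=-0.01, which='LM')
vals = np.sort(vals)
assert abs(vals[0]) < 1e-6               # lambda_1 = 0 for a connected graph
assert vals[1] > 1e-6                    # lambda_2 > 0: algebraic connectivity
```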