Bridge: Kernel Methods, Attention as Operators & Neural Operators (FNO)
The tools of functional analysis — Hilbert spaces, operator theory, and RKHS — are not abstract curiosities; they are the mathematical substrate of kernel SVMs, Gaussian processes, self-attention, and infinite-width neural network theory. This lesson maps each machine learning construct to its functional-analytic foundation, showing why the abstractions are load-bearing and not merely decorative.
Concepts
Orthogonal Projection (interactive figure: drag the tip of a vector v and rotate the subspace W to see the projection P(v))
The residual v − P(v) is always perpendicular to W (right-angle box). This is the Best Approximation Theorem: P(v) is the closest point in W to v.
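The two facts in the figure can be checked numerically. A minimal NumPy sketch (the 5-dimensional ambient space and the random 2D subspace W are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
v = rng.standard_normal(5)             # the vector being projected
A = rng.standard_normal((5, 2))        # columns span a 2D subspace W

# Orthogonal projection onto W:  P(v) = A (A^T A)^{-1} A^T v
p = A @ np.linalg.solve(A.T @ A, A.T @ v)

# Residual v - P(v) is orthogonal to W (the right-angle box in the figure)
print(A.T @ (v - p))                   # ~[0, 0]

# Best Approximation Theorem: no other point of W is closer to v
w = A @ rng.standard_normal(2)         # an arbitrary element of W
print(np.linalg.norm(v - w) >= np.linalg.norm(v - p))   # True
```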
SVMs, Gaussian processes, self-attention, and infinite-width neural networks look like unrelated models — but each is computing projections in a function space defined by a kernel, and the mathematical structure they share is the RKHS. The functional-analytic lens unifies these methods: training a kernel SVM, conditioning a GP posterior, and running gradient descent on a wide network all reduce to finding the element of a Hilbert space closest to the data, with the kernel determining what "close" means.
RKHS in Kernel SVMs and Gaussian Processes
A kernel SVM solves the dual problem

$$\max_{\alpha} \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, k(x_i, x_j)$$

— entirely in terms of kernel evaluations. The primal solution $f(x) = \sum_i \alpha_i y_i\, k(x_i, x)$ lives in the RKHS by the representer theorem. The margin being maximized is exactly the RKHS norm $\|f\|_{\mathcal{H}}$ being controlled.
In a Gaussian process (GP), the covariance function is the kernel, and the prior places a Gaussian measure on function space with covariance given by that kernel. The posterior mean — the MAP estimate after observing data $(x_1, y_1), \dots, (x_n, y_n)$ — is exactly the representer theorem solution for squared loss:

$$m(x) = k(x)^\top (K + \sigma^2 I)^{-1} y,$$

where $k(x) = (k(x, x_1), \dots, k(x, x_n))$ is the row vector of kernel evaluations and $K_{ij} = k(x_i, x_j)$. Mercer's theorem connects the GP covariance to the eigenfunction expansion $k(x, x') = \sum_i \lambda_i\, \phi_i(x)\, \phi_i(x')$: the GP prior concentrates on functions with large projections onto eigenfunctions $\phi_i$ with large eigenvalues $\lambda_i$.
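The posterior mean formula can be implemented directly as kernel ridge regression. A minimal NumPy sketch (the RBF kernel, sine data, and noise level are illustrative choices, not from the lesson):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """k(x, x') = exp(-(x - x')^2 / (2 l^2)) for 1D inputs."""
    return np.exp(-(X1[:, None] - X2[None, :]) ** 2 / (2 * lengthscale**2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)                       # training inputs
y = np.sin(X) + 0.1 * rng.standard_normal(20)    # noisy observations
sigma2 = 0.01                                     # noise variance

K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + sigma2 * np.eye(20), y)   # representer coefficients

def posterior_mean(Xstar):
    """m(x) = k(x)^T (K + sigma^2 I)^{-1} y  -- the representer-theorem solution."""
    return rbf_kernel(Xstar, X) @ alpha

Xstar = np.linspace(-3, 3, 5)
print(posterior_mean(Xstar))          # smooth fit to the noisy sine data
```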
Neural Tangent Kernel Theory
Consider a network $f(x; \theta)$ with parameters $\theta \in \mathbb{R}^p$. The neural tangent kernel is

$$\Theta(x, x') = \big\langle \nabla_\theta f(x; \theta),\, \nabla_\theta f(x'; \theta) \big\rangle.$$
In the limit of infinite width, two things happen simultaneously: (1) the NTK converges to a deterministic kernel that depends on the architecture but not on the specific weight initialization, and (2) the NTK stays constant during training (the "lazy training" or "kernel regime"). Under these conditions, gradient descent on the squared loss gives a trajectory $f_t$ that satisfies the linear ODE:

$$\frac{d}{dt} f_t(x) = -\int \Theta(x, x')\, \big(f_t(x') - y(x')\big)\, d\rho(x'),$$

where $\rho$ is the empirical measure on the training data. This is gradient flow in function space — an infinite-dimensional ODE whose solution is computable in terms of the NTK eigendecomposition.
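Restricted to the $n$ training points, this ODE becomes the finite linear system $\dot{f}_t = -\Theta\,(f_t - y)$ with closed-form solution $f_t = y + e^{-\Theta t}(f_0 - y)$. A small sketch comparing forward-Euler integration against the closed form (the random PSD Gram matrix is an illustrative stand-in for a frozen NTK):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
G = rng.standard_normal((n, n))
Theta = G @ G.T / n                  # random symmetric PSD stand-in for a frozen NTK
y = rng.standard_normal(n)           # training targets
f0 = np.zeros(n)                     # network outputs at initialization

# Closed form of df/dt = -Theta (f - y):  f_t = y + exp(-Theta t)(f0 - y)
t = 2.0
w, V = np.linalg.eigh(Theta)
f_closed = y + V @ (np.exp(-w * t) * (V.T @ (f0 - y)))

# Forward-Euler integration of the same ODE
f, dt = f0.copy(), 1e-3
for _ in range(int(t / dt)):
    f = f - dt * Theta @ (f - y)

print(np.max(np.abs(f - f_closed)))  # small discretization error
```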
Spectral bias / frequency principle. Decompose the training error in the basis of NTK eigenfunctions $\phi_i$ with eigenvalues $\lambda_i$. The component along $\phi_i$ decays as $e^{-\lambda_i t}$. Modes with large $\lambda_i$ (low-frequency, smooth components) decay fastest — networks learn low-frequency functions first. This explains empirical observations of generalization before memorization and connects to the implicit regularization of early stopping.
The spectral bias is not a property of gradient descent in weight space — it is a property of gradient descent in function space. The NTK eigendecomposition shows that gradient descent implicitly projects the target onto the RKHS defined by $\Theta$, weighting each eigenfunction by its eigenvalue. The kernel had to be defined as $\Theta(x, x') = \langle \nabla_\theta f(x),\, \nabla_\theta f(x') \rangle$ because this inner product is exactly what a gradient step computes: the change in $f(x)$ from a step at training point $x'$ is proportional to how correlated the parameter gradients at $x$ and $x'$ are — which is $\Theta(x, x')$.
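The mode-by-mode decay can be read off directly from the eigendecomposition: the residual satisfies $r_t = e^{-\Theta t} r_0$, so its coefficient along $\phi_i$ shrinks by exactly $e^{-\lambda_i t}$. A sketch using an RBF Gram matrix as an illustrative stand-in NTK, whose large-eigenvalue modes are the smooth ones:

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 50, 0.5
X = np.linspace(0, 1, n)
# RBF Gram matrix as a stand-in NTK: smooth modes get the large eigenvalues
Theta = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * 0.2**2))
lam, V = np.linalg.eigh(Theta)              # eigenvalues in ascending order

r0 = -rng.standard_normal(n)                # initial residual f_0 - y (with f_0 = 0)
c0 = V.T @ r0                               # residual in the eigenbasis
ct = np.exp(-lam * t) * c0                  # gradient flow: mode i decays as e^{-lam_i t}

decay = np.abs(ct) / np.abs(c0)
print(decay[-1], decay[0])  # top (smooth) mode nearly gone; bottom mode untouched
```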
Gradient Flows in Function Space
The NTK describes gradient descent as a flow in function space. More generally, given a divergence or energy functional $F[\rho]$ over probability distributions $\rho$, the Wasserstein gradient flow is the flow that decreases $F$ most rapidly in the geometry of the Wasserstein-2 metric on probability measures:

$$\partial_t \rho_t = \nabla \cdot \Big( \rho_t\, \nabla \frac{\delta F}{\delta \rho}[\rho_t] \Big).$$
Training a generative model by minimizing KL divergence or the Wasserstein distance is a gradient flow in the space of probability measures $\mathcal{P}$. The functional derivative $\frac{\delta F}{\delta \rho}$ is the infinite-dimensional analogue of a gradient — evaluated at a point $x$, it gives the direction of steepest descent for the "particle" at $x$.
Stein Variational Gradient Descent (SVGD). Rather than flowing a continuous density, SVGD transports a set of particles $\{x_i\}_{i=1}^n$ that approximate $\rho$. The update is:

$$x_i \leftarrow x_i + \epsilon\, \hat{\phi}(x_i), \qquad \hat{\phi}(x) = \frac{1}{n} \sum_{j=1}^{n} \Big[ k(x_j, x)\, \nabla_{x_j} \log p(x_j) + \nabla_{x_j} k(x_j, x) \Big],$$

where $k$ is a kernel and $\hat{\phi}$ is the optimal update direction obtained from the Stein operator applied to the empirical measure. The first term pushes particles toward high-probability regions; the second term is a repulsion force (via the kernel gradient) that prevents collapse. SVGD is a deterministic particle transport method derived from minimizing the Kernelized Stein Discrepancy (an RKHS-based divergence).
Worked Example
NTK for a Single-Layer Network in the Lazy Training Regime
Consider $f(x; \theta) = \frac{1}{\sqrt{m}} \sum_{j=1}^{m} a_j\, \sigma(w_j^\top x)$, where only the hidden weights $w_1, \dots, w_m$ are trained (the output weights $a_j$ are fixed at initialization). The gradient with respect to $w_j$ is $\frac{1}{\sqrt{m}}\, a_j\, \sigma'(w_j^\top x)\, x$. The NTK is:

$$\Theta_m(x, x') = \frac{1}{m} \sum_{j=1}^{m} a_j^2\, \sigma'(w_j^\top x)\, \sigma'(w_j^\top x')\; x^\top x'.$$
As $m \to \infty$, by the law of large numbers this converges to the deterministic limit (for $w \sim \mathcal{N}(0, I)$ and output weights normalized so that $\mathbb{E}[a^2] = 1$):

$$\Theta_\infty(x, x') = \mathbb{E}_{w}\big[\sigma'(w^\top x)\, \sigma'(w^\top x')\big]\; x^\top x'.$$
Crucially, this kernel is data-dependent (through $x^\top x'$) but not weight-dependent at initialization — it is determined entirely by the architecture. In the lazy training regime, the kernel stays approximately constant during training, so gradient descent reduces to kernel regression with $\Theta_\infty$.
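For ReLU activations the expectation has a closed form: $\mathbb{E}_w[\mathbf{1}\{w^\top x > 0\}\,\mathbf{1}\{w^\top x' > 0\}] = \frac{\pi - \angle(x, x')}{2\pi}$ for $w \sim \mathcal{N}(0, I)$. A Monte Carlo sketch checking the finite-width NTK against this limit (the width, dimension, and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d, m = 5, 400_000

x1 = rng.standard_normal(d)
x1 /= np.linalg.norm(x1)
x2 = rng.standard_normal(d)
x2 /= np.linalg.norm(x2)

W = rng.standard_normal((m, d))      # hidden weights w_j ~ N(0, I)
a = rng.standard_normal(m)           # output weights fixed at init, E[a^2] = 1

# Empirical NTK with only the w_j trained:
# (1/m) sum_j a_j^2 relu'(w_j.x) relu'(w_j.x') x.x'
s1 = (W @ x1 > 0).astype(float)      # relu'(w.x) = 1{w.x > 0}
s2 = (W @ x2 > 0).astype(float)
ntk_emp = np.mean(a**2 * s1 * s2) * (x1 @ x2)

# Deterministic limit via the arc-cosine formula
angle = np.arccos(np.clip(x1 @ x2, -1.0, 1.0))
ntk_limit = (np.pi - angle) / (2 * np.pi) * (x1 @ x2)

print(ntk_emp, ntk_limit)            # agree to Monte Carlo accuracy
```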
Connecting GP Posterior to Operator Projection
For a GP with kernel $k$ and observations $y_i = f(x_i) + \varepsilon_i$, $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$, the posterior mean is

$$\bar{f}(x) = k(x)^\top (K + \sigma^2 I)^{-1} y,$$

where $K_{ij} = k(x_i, x_j)$. In Mercer's basis $\{\phi_i\}$, this is a projection: each mode is retained with weight $\frac{\lambda_i}{\lambda_i + \sigma^2}$ — a Wiener filter. Modes with $\lambda_i \gg \sigma^2$ are passed through; modes with $\lambda_i \ll \sigma^2$ are shrunk toward zero.
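On the training inputs, the Wiener-filter view is an exact matrix identity: $K(K + \sigma^2 I)^{-1} y = V \,\mathrm{diag}\!\big(\tfrac{\lambda_i}{\lambda_i + \sigma^2}\big)\, V^\top y$ for the eigendecomposition $K = V \Lambda V^\top$. A sketch verifying this (the kernel and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, sigma2 = 30, 0.5
X = np.sort(rng.uniform(-2, 2, n))
K = np.exp(-(X[:, None] - X[None, :]) ** 2 / 2)     # RBF Gram matrix
y = np.sin(2 * X) + np.sqrt(sigma2) * rng.standard_normal(n)

# Direct posterior mean on the training inputs
fit_direct = K @ np.linalg.solve(K + sigma2 * np.eye(n), y)

# Same fit in the eigenbasis: mode i shrunk by lambda_i / (lambda_i + sigma^2)
lam, V = np.linalg.eigh(K)
fit_spectral = V @ ((lam / (lam + sigma2)) * (V.T @ y))

print(np.max(np.abs(fit_direct - fit_spectral)))    # ~0: same operator
```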
Connections
Where Your Intuition Breaks
The NTK analysis shows that infinitely wide networks are equivalent to kernel regression — which might suggest it gives a complete theory of deep learning. It does not. Real networks of practical width operate in the "feature learning" regime: their NTK changes throughout training as the network learns useful internal representations, violating the frozen-kernel assumption. The kernel regime and the feature learning regime make opposite predictions about the role of architecture: in the kernel regime, only the initial NTK determines generalization; in the feature learning regime, the representation learned during training matters far more. Empirically, networks operating in the feature learning regime generalize substantially better than kernel regression with the corresponding initial NTK — the theoretical clean room and the practical regime are different worlds, and intuition from one does not transfer to the other.
The NTK regime is theoretically clean because it linearizes training dynamics, turning a nonlinear optimization into a linear ODE whose solution is governed by eigenvalue decay. The spectral bias follows directly: modes with large NTK eigenvalues (smooth, low-frequency functions) converge exponentially faster than high-frequency modes. This is why gradient descent on overparameterized networks first fits the smooth signal before fitting noise — it is a consequence of the functional-analytic structure, not a property specific to any particular architecture.
The Wasserstein gradient flow connects deep learning optimization to optimal transport theory. Training a neural network by minimizing a divergence between the model distribution and the data distribution (as in variational inference or GANs) is a gradient flow in the space of probability measures $\mathcal{P}$. The Wasserstein metric gives this space a Riemannian structure, and the flow follows the Riemannian gradient — a functional derivative that generalizes the Euclidean gradient to the space of distributions.
The NTK regime is theoretically tractable but practically limited. Real networks of finite width operate in the "feature learning" regime: the NTK changes substantially during training as the network learns internal representations. NTK theory predicts that overparameterized networks can interpolate training data (correct) but also that they generalize no better than kernel regression with the initial NTK (often incorrect for large networks). The gap between NTK theory and empirical behavior — particularly for networks that learn useful features from data — is an active area of research.
SVGD requires computing all pairwise kernel evaluations and their gradients at each iteration, giving $O(n^2)$ cost per step for $n$ particles. This limits practical SVGD to particle counts of hundreds to a few thousand. For high-dimensional distributions (e.g., posterior inference over neural network weights), naive SVGD is computationally infeasible. Approximations include random feature expansions of the kernel (reducing the cost to $O(nm)$ for $m$ random features) and structured kernels that exploit problem geometry — active areas in scalable Bayesian inference.