Circular Law

This issue we take a brief foray into Random Matrix Theory and examine the Circular Law, a central result that for many years was only a conjecture, until 2010, when it was finally proved in full generality by Terence Tao and Van Vu (Yale!).

First a quick linear algebra review. An eigenvalue λ of a matrix M is a number for which there exists a nonzero vector x solving the matrix equation: $$\mathrm{M} \mathbf{x} = \lambda \mathbf{x}$$ When this equation is solved over the complex numbers $\mathbb{C}$, the fundamental theorem of algebra tells us that every n-by-n (n×n, square) matrix has n (not necessarily distinct) eigenvalues.

Consider an n×n matrix with real entries that are independent and identically distributed random variables, all with mean 0 and variance 1/n. We find the n eigenvalues of this matrix and plot them in the complex plane. The Circular Law states that as n gets large, the distribution of eigenvalues converges almost surely to the uniform distribution on the unit disk. This is a very general (and powerful) statement, and as a result it can be difficult to grasp in the abstract. We visualize some simulated examples below.
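The setup above is easy to simulate. Here is a minimal sketch with NumPy (the function name is my own): draw iid Normal entries scaled to variance 1/n, compute the spectrum, and check that it fills out the unit disk roughly uniformly.

```python
import numpy as np

def iid_eigenvalues(n, seed=0):
    """Eigenvalues of an n x n matrix of iid entries with mean 0, variance 1/n."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n, n)) / np.sqrt(n)  # each entry has variance 1/n
    return np.linalg.eigvals(M)

eigs = iid_eigenvalues(500)
radii = np.abs(eigs)
# Circular Law: nearly all eigenvalues lie in (or very near) the unit disk,
# and the share inside radius r should approach r^2 (the disk's area fraction).
print(f"max |lambda|: {radii.max():.3f}")
print(f"share with |lambda| <= 1.05: {np.mean(radii <= 1.05):.3f}")
print(f"share with |lambda| <= 0.5: {np.mean(radii <= 0.5):.3f}")
```

Plotting `eigs.real` against `eigs.imag` reproduces the disk pictures shown in this post.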

The Circular Law in Action

The first random matrix we consider is one with standard Normal entries (i.e. the Normal distribution we are already familiar with, having mean 0 and variance 1). It comes pre-standardized: we only need to divide by $\sqrt{n}$ to bring the entry variance down to 1/n. Observe below that it clearly fulfills the Circular Law, even at the relatively low dimension n = 50.

Another interesting thing to note is that the real and imaginary parts of the eigenvalues each follow the Wigner semicircle distribution, represented by the dashed black line; this is exactly what we should expect, since projecting the uniform distribution on the disk onto either axis yields the semicircle density. And as the Circular Law's general conditions promise, the result holds not only for square matrices of Normal random variables, but also for (properly centered and standardized) uniform random variables:

And exponential random variables:

They all look pretty much the same, but you're going to have to trust me on this one: I really did use different distributions for each of these!

Infallible?

It should be noted that when we demand the variance be standardized to 1/n, we are implicitly assuming that it is possible to standardize the variance at all. In other words, the Circular Law does not hold for random variables that lack a finite second moment (variance), such as the Cauchy distribution:

Here we see that as the dimension of the matrix increases, the (sample) variance of the Cauchy entries spikes, so the eigenvalues implode towards the origin and are clearly not uniformly distributed over the disk.
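To see this failure numerically, we can "standardize" Cauchy entries using the sample standard deviation, since no true variance exists (this is my assumption about how such a plot would be produced; the details are a sketch, not the post's exact code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
C = rng.standard_cauchy((n, n))
# No finite variance exists, so fall back on the (exploding) sample std.
M = C / (C.std() * np.sqrt(n))
eigs = np.linalg.eigvals(M)
# Under the uniform law on the disk the median radius would be
# sqrt(1/2) ~ 0.707; here the bulk collapses toward the origin instead.
print(f"median |lambda|: {np.median(np.abs(eigs)):.3f}")
```

The sample standard deviation is dominated by a few enormous entries, so dividing by it crushes the rest of the matrix, and with it the bulk of the spectrum.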

