Monday, September 21, 2015

The Configuration Space of All Lines In the Plane



Time for another classic post from the old site, while I'm at it… As we have seen in a bunch of previous posts, the notion of configuration space has held a prominent place in my mathematical explorations, because, I can't emphasize enough, it's not just the geometry of things you can directly see that is important; it's the use of geometric methods to model many things that is.

Consider all possible straight lines in the plane (by straight line I mean those that are infinitely long). This collection is a configuration space—a catalogue of these lines. What does that mean? Of course, in short words, it's a manifold, and we've talked about many of those before. But it always helps to examine examples in detail to help develop familiarity. Being a manifold means we can find local parametrizations of our collection by $n$-tuples of real numbers. How do we do this for lines in the plane?

Finding Some Coordinates: The Slope-Intercept Equation

First off, almost anyone reading this, regardless of mathematical background, probably was drilled at one time or another on the following famous equation, called the slope-intercept equation:
\[
y = mx + b.
\] The variable $m$, if you recall, is the slope of the line, and $b$ is the $y$-intercept, the place where the line crosses the $y$-axis. However, few people realize that what they're really doing is indexing every non-vertical line in the plane by a point in another plane (call it the $mb$-plane, rather than the old traditional $xy$-plane). That is, we have a correspondence between points in an abstract $mb$-plane and lines in the $xy$-plane, where $m$ is the slope and $b$ the $y$-intercept. It doesn't get all lines, though: vertical lines have infinite slope, and you can't have a coordinate on any plane with an infinite value. In other words, we have charted out the space of all possible non-vertical lines with points in a different plane.

How, then, would we catch the vertical lines? We chart them out using different coordinates. One possibility is to use the inverse slope and the $x$-intercept, which merely means we write the equation with $x$ as a function of $y$. In other words, all non-horizontal lines are given by:
\[
x=ky + c.
\] This is essentially obtained by reflecting everything about the diagonal line $x = y$ and finding the usual slope and intercept. In other words, we can chart out all non-horizontal lines on this new $kc$-plane. So we can represent the collection of all lines in the plane with two charts: an atlas, or catalogue, of all lines, with these two sheets, the $mb$-plane and the $kc$-plane.

However, note that most lines in the plane (those that are neither vertical nor horizontal) can be represented in both forms. That is, they correspond to points on both sheets. So we would say, in our usual terminology, that these two sheets, or charts, determine a manifold of all possible lines; we just need to check that the overlap is smooth. But let's stop to think about what that means. We can think of it as trying to glue both sheets together to form a single catalogue. The method of gluing is that we glue together the points that represent the same line in the plane. There is a nice, exact formula for this, very simply determined: $(m,b)$ and $(k,c)$ represent the same line if and only if $y = mx + b$ and $x = ky + c$ are equations of the same line. All we have to do to convert from one to the other is solve for $x$ in terms of $y$, that is, invert the function. So we solve
\[
y = mx + b \iff y-b = mx \iff y/m - b/m = x \iff x = (1/m) y - b/m
\] that is, if $k = 1/m$ and $c = -b/m$, then $(m,b)$ and $(k,c)$ represent the same line in the two sheets. So to "glue" the sheets together, we glue every $(m,b)$ in the $mb$-plane to $(1/m,-b/m)$ in the $kc$-plane. Obviously the sheets need to be made of some very stretchable material, because it is going to be awfully hard to glue points together. Actually it's pretty hard to physically do this, so don't try this at home; just try to imagine it (don't you just love thought experiments?). For example, the points $(1,1)$, $(2,2)$, $(3,3)$, and $(4,4)$ in the $mb$-plane get glued to the corresponding points $(1,-1)$, $(1/2,-1)$, $(1/3,-1)$, and $(1/4,-1)$ in the $kc$-plane. You glue them in a very weird way, but if you suppose for a moment that you allow all sorts of moving, rotating, shrinking, and stretching in this process (topological deformations), but no tearing, creasing, or collapsing, you can preserve the "shape" of this space and yet make it look like something more familiar. This would be our new catalogue of lines. In addition, the catalogue has a nice property: nearby points in the catalogue correspond to similar-looking lines.
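If you'd like to check the gluing rule numerically, here's a minimal sketch in Python (numpy is my own choice here; the post itself contains no code): pick a line, compute its coordinates in both charts, and verify that both equations describe the same set of points.

```python
import numpy as np

# A quick check of the gluing rule (m, b) -> (k, c) = (1/m, -b/m):
# points on the line y = m x + b should also satisfy x = k y + c.
m, b = 2.0, -3.0          # any non-vertical, non-horizontal line
k, c = 1.0 / m, -b / m    # the transition map between the two charts

x = np.linspace(-5, 5, 11)
y = m * x + b             # the line in slope-intercept form

# Same points, recovered from the (k, c) chart:
assert np.allclose(x, k * y + c)
print(f"(m, b) = ({m}, {b})  glues to  (k, c) = ({k}, {c})")
```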

So, What Is It?

One should wonder what kind of overall "shape" our nice spiffy catalogue has, after gluing together the two possible charts we've made for it. As it turns out, its shape is the Möbius strip! That's right, the classic one-sided surface (without, as it turns out, its circle-boundary).

That is to say, if you give me a point on the Möbius strip, it specifies one and only one line in the plane. One would not, initially, be able to see why non-orientability enters the picture, but a little interpretation is in order. First, if we take a particular line and rotate it through 180 degrees, we get the same line back. Everything in between gives every possible (finite) slope. It so happens that, as far as slopes of lines are concerned, $\infty=-\infty$, and if you go "past" this single projective infinity, as they call it, you go to negative slopes. In other words, if you rotate a line through 180 degrees, say from vertical back to vertical, you come back to the same line, except with its orientation reversed (because what started out pointing up now points down).


If you fix an origin and declare that it corresponds to a certain special line in the plane, and then select a "core circle" for the Möbius strip, then as you travel around this circle, the distance traveled represents a rotation angle for this special line. Traveling from the origin along the core circle and making one full loop corresponds to rotating the special line by 180 degrees. If you instead move up or down from the core circle (across the width of the strip), you end up sliding the line in a perpendicular direction, without changing its angle. So moving up and down the strip corresponds to parallel sliding of lines, and moving around the strip along a circle corresponds to rotating a line.

The Derivation

The specific formula we use is \[
F(m,b) = \left(\cot^{-1} m, \frac{b}{\sqrt{m^2 + 1}}\right),
\] which sends the line to the angle it makes with the $y$-axis, and its signed perpendicular distance to the origin (the sign is determined by $b$). For the other chart, \[ F(k,c) = \begin{cases}\left( \cot^{-1} \left( \frac{1}{k}\right), \dfrac{c}{\operatorname{sgn}(-k)\sqrt{k^2+1}}\right) \quad & \text{ if } k \neq 0 \\
(0,-c) & \quad \text{ if } k = 0
\end{cases}
\] We'll show how we got this in an update to this post, or perhaps a "Part 2." It can be readily checked by simply substituting the transition map between $(m,b)$ and $(k,c)$. The extra case for $k=0$ here is simply gotten by taking the limit as $k$ goes to zero from above in the other case. What proves that it is a Möbius strip is that, if we take the limit as $k$ goes to zero from below, it approaches $(\pi,c)$ instead of $(0,-c)$. This would make it discontinuous, unless we decide to identify $(\pi,c)$ with $(0,-c)$: the $c$ going to $-c$ means we take the strip at $\pi$ and flip it around to glue it to the strip at $0$ (see this post for another example of defining a Möbius strip this way). Technically, we need an infinitely wide Möbius strip for this, but we can always scrunch it down into a finite-width strip without its boundary circle (using something like arctangent). It's just that the closer you get to the edge, the quicker things go off to infinity.
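To make the overlap condition concrete, here is a minimal Python sketch (numpy assumed; the small `arccot` helper is mine, since numpy has no built-in inverse cotangent with range $(0,\pi)$) that computes the (angle, signed distance) coordinates from both charts and checks that they agree on a line with nonzero, finite slope.

```python
import numpy as np

def arccot(x):
    # inverse cotangent with range (0, pi)
    return np.pi / 2 - np.arctan(x)

def coords_from_mb(m, b):
    # angle with the y-axis, and signed perpendicular distance to the origin
    return arccot(m), b / np.sqrt(m**2 + 1)

def coords_from_kc(k, c):
    if k == 0:
        return 0.0, -c
    return arccot(1.0 / k), c / (np.sign(-k) * np.sqrt(k**2 + 1))

# For a line with nonzero, finite slope, both charts give the same point:
m, b = -1.5, 2.0
k, c = 1.0 / m, -b / m     # same line in the other chart
print(coords_from_mb(m, b))
print(coords_from_kc(k, c))  # should agree with the line above
```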

The animation is an example "path" through "line space." The blue dot travels around the white circle, and the line in the plane that corresponds to it is the blue line. The red line is a reference line perpendicular to the blue one, always passing through the origin. The distance of the blue dot from the core circle (in turquoise) indicates how far from the origin the blue and red lines intersect. Because of the "scrunching down," though, the closer to the edge of the strip we get, the more dramatically the blue line's distance from the origin changes. Here it is again in the plane, with the Möbius strip as a picture-in-picture reference:




The more general object we are describing here is closely related to the Grassmannian manifolds, which catalogue all $k$-dimensional subspaces of an $n$-dimensional vector space (the only difference is that Grassmannians consider only subspaces through the origin).

Friday, September 18, 2015

Two Classic Clifford Tori Animations

After much rummaging around my hard drive, I finally found some Clifford tori animations from my old site that give a much clearer sense of how the (stereographically projected) tori change as the angle $\varphi$ changes from $0$ to $\pi/2$ (in the notation of the last 2 posts on this subject). Here, we've drawn the Clifford circles as well, so we can see the effect on them too.


It starts off with the first degenerate case of one single unit circle, and expands from there. We see that it eventually comes very close to the other degenerate case, that of the straight line, the $z$-axis.

Our next video requires more explanation. This time, we take one particular torus, namely the product of two circles of identical radius $\frac{1}{\sqrt{2}}$ (in the video, these two circles are highlighted red and blue). Now, a $3$-sphere, like any sphere, can be rotated (by a matrix, or a whole path of matrices, in $SO(4)$). Such a rotation can, of course, always be realized as a rigid motion of the ambient $4$-space containing this $3$-sphere. It is possible to rotate continuously so that the torus within has beginning and ending configurations that look the same, except that the red and blue circles have been swapped. If we restrict ourselves to $3$-space, such a rigid motion is impossible, but if we allow the torus to pass through itself, then it, too, can be done. However, visualizing the $3$-sphere version in stereographic projection, with a $4$-space rotation, we effectively allow ourselves to distort distances (actually, the $4$-space distance is not distorted; the distortion we see is an artifact of the stereographic projection) and to add a "point at infinity," so a continuous rotation is allowed to take things through that point. The rotation of the ambient $3$-sphere does not preserve our usual set of nested tori, as can be seen by letting a matrix in $SO(4)$ act directly on the coordinates of our parametrization: it jumbles up all the components. So, of course, the torus undergoes a completely different kind of motion than in our previous "expander" video.


What happens is we inflate our inner tube, so a part of it gets puffed up to infinity, and wraps back around, turning the torus inside-out. In fact, after wrapping back around, we're "inflating" the outside of the torus. Or equivalently, getting back to donuts with frosting, the dough gets bigger and bigger, and when wrapping back around, almost all of space (plus a point at infinity) is dough, and the frosting bounds an inner-tube-shaped pocket of air.

Anyway, the full turning inside-out (which also swaps the red and blue circles, as promised) occurs exactly halfway through the movie (the rotation continues to restore the torus to its original state in the second half). Notice how the stripes on the torus that started out horizontal are now vertical, and what used to be the "apple core" shape surrounding the donut hole has become a "donut segment." Plus it just looks totally awesome!
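For those who want to experiment, here is a rough Python sketch (numpy and matplotlib assumed) of one way to produce such a motion. The particular rotation path below, a simultaneous rotation in the $(x_1,x_3)$- and $(x_2,x_4)$-planes, is my own illustrative choice rather than necessarily the one used in the video; it uses the parametrization $F$ and stereographic projection $P$ from the September 1 post.

```python
import numpy as np
import matplotlib.pyplot as plt

# A sketch of one rotation in SO(4) that exchanges the two circle factors of
# the torus with both radii 1/sqrt(2) (phi = pi/4).  Illustrative choice only.
alpha, beta = np.meshgrid(np.linspace(0, 2 * np.pi, 60),
                          np.linspace(0, 2 * np.pi, 60))
phi = np.pi / 4
X = np.stack([np.cos(alpha) * np.cos(phi),
              np.sin(alpha) * np.cos(phi),
              np.cos(beta) * np.sin(phi),
              np.sin(beta) * np.sin(phi)])        # the torus, as points of R^4

def rotation(t):
    # simultaneous rotation by angle t in the (x1,x3)- and (x2,x4)-planes;
    # at t = pi/2 it exchanges the two pairs of coordinates (up to sign)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, -s, 0],
                     [0, c, 0, -s],
                     [s, 0, c, 0],
                     [0, s, 0, c]])

fig = plt.figure(figsize=(12, 4))
for i, t in enumerate(np.linspace(0, np.pi / 2, 3)):
    Y = np.tensordot(rotation(t), X, axes=1)      # rotate inside S^3
    denom = 1 - Y[3]                              # stereographic projection
    ax = fig.add_subplot(1, 3, i + 1, projection='3d')
    ax.plot_surface(Y[0] / denom, Y[1] / denom, Y[2] / denom, alpha=0.6)
    for setlim in (ax.set_xlim, ax.set_ylim, ax.set_zlim):
        setlim(-4, 4)   # part of the torus passes through infinity mid-rotation
    ax.set_title(f"t = {t:.2f}")
plt.show()
```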

Friday, September 11, 2015

Clifford Circles

30 Clifford circles with $\varphi = \pi/8$ and $\theta_0$ ranging from $0$ to $2\pi$
A discussion about Clifford tori would never be complete without a corresponding discussion about Clifford circles. These were featured as the logo of the UCSD math site for many years (not the case anymore, though, but I saved a screenshot!):


Just as the Clifford tori foliate $S^3$, the Clifford circles foliate each of the Clifford tori. As part of ongoing efforts to revisit algebraic topology, this example is one of the best to explore. We begin with a classical mapping, called the Hopf fibration, which maps $S^3$ to $S^2$ by
\[
p\begin{pmatrix}x_1\\x_2\\x_3\\ x_4\end{pmatrix} =\begin{pmatrix}2(x_1 x_3 + x_2 x_4)\\ 2(x_2 x_3 -x_1x_4)\\ x_1^2 + x_2^2 -x_3^2 -x_4^2\end{pmatrix}.
\]
In fancy-schmancy homotopy-theory speak, $p$ is a generator of $\pi_3(S^2)$. Considering our previous parametrization $F$ from last time:
\[
F\begin{pmatrix}\varphi \\ \alpha \\ \beta\end{pmatrix} = \begin{pmatrix} \cos \alpha \cos \varphi \\ \sin\alpha\cos\varphi \\ \cos\beta\sin\varphi \\ \sin\beta\sin\varphi \end{pmatrix},
\]
we recall that the last coordinate of $p$ is simply $A^2 - B^2$, or $\cos^2(\varphi) - \sin^2(\varphi)$. Although I tell my students not to bother memorizing trigonometric identities, we derived that from $\sin^2 \alpha + \cos^2 \alpha = 1$ and $\sin^2 \beta + \cos^2 \beta = 1$. We can further simplify that last coordinate to $\cos(2\varphi)$.

The key property that we want to demonstrate is that each point of $S^2$ corresponds to a whole circle in $S^3$ (for any $\boldsymbol \xi$ in $S^2$, the circle in question is the inverse image $p^{-1}(\boldsymbol\xi)$), and that the circles corresponding to distinct $\boldsymbol \xi$, though disjoint, are nevertheless linked together.

To do this, we visit the first two coordinates:
\[
2(x_1x_3 + x_2x_4) = 2(\cos\alpha \cos\varphi \cos\beta \sin\varphi+ \sin\alpha\cos\varphi \sin\beta\sin\varphi)
\] \[= 2\sin\varphi\cos\varphi(\cos\alpha\cos\beta +\sin\alpha\sin\beta) = \sin(2\varphi) \cos(\alpha-\beta),\] where the last equation is gotten either by trolling the back of a calculus book for some trig identities, or using complex numbers. Similarly,
\[
2(x_2x_3 - x_1x_4) = 2(\sin\alpha \cos\varphi \cos\beta \sin\varphi- \cos\alpha\cos\varphi \sin\beta\sin\varphi)
\] \[= 2\sin\varphi\cos\varphi(\sin\alpha\cos\beta -\cos\alpha\sin\beta) = \sin(2\varphi) \sin(\alpha-\beta).\]
All together, we have \[
p \circ F\begin{pmatrix}\varphi \\ \alpha\\ \beta\end{pmatrix} = \begin{pmatrix}\sin(2\varphi)\cos(\alpha-\beta) \\ \sin(2\varphi)\sin(\alpha-\beta) \\ \cos(2\varphi)\end{pmatrix}.
\]
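Before going on, here's a quick numerical sanity check of this formula (a minimal sketch, assuming numpy): pick random angles, apply the Hopf map to $F$, and compare against the right-hand side above.

```python
import numpy as np

# Numerical spot-check of the formula for p(F(phi, alpha, beta)).
rng = np.random.default_rng(0)
phi = rng.uniform(0, np.pi / 2)
alpha, beta = rng.uniform(0, 2 * np.pi, size=2)

x1, x2 = np.cos(alpha) * np.cos(phi), np.sin(alpha) * np.cos(phi)
x3, x4 = np.cos(beta) * np.sin(phi), np.sin(beta) * np.sin(phi)

hopf = np.array([2 * (x1 * x3 + x2 * x4),
                 2 * (x2 * x3 - x1 * x4),
                 x1**2 + x2**2 - x3**2 - x4**2])

formula = np.array([np.sin(2 * phi) * np.cos(alpha - beta),
                    np.sin(2 * phi) * np.sin(alpha - beta),
                    np.cos(2 * phi)])

assert np.allclose(hopf, formula)
assert np.isclose(np.linalg.norm(hopf), 1.0)  # the image really lies on S^2
print("p composed with F checks out:", hopf)
```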
Letting $\theta = \alpha-\beta$ in the formula above, we see that it looks almost like our standard parametrization of a sphere (here, the $2$-sphere $S^2$), except that the polar angle $\varphi$ is off by a factor of $2$. No matter, it's still a sphere; we just have to take care to remember that the range of this $\varphi$ is $[0,\pi/2]$ rather than $[0,\pi]$. This confirms, incidentally, that $p$ really maps onto the sphere, rather than merely mapping into some amorphous blob in $\mathbb{R}^3$, which is all one can assume at first just because the destination of the map $p$ has $3$ coordinates. The important thing to realize is that given any $\boldsymbol \xi$ in $S^2$, there is a unique $\varphi_0$ in $[0,\pi/2]$ and $\theta_0$ in $[0,2\pi)$ that correspond, under this parametrization, to $\boldsymbol \xi$. So, to calculate $p^{-1}(\boldsymbol \xi)$, we have to see how much we can vary $\alpha$, $\beta$, and $\varphi$ while still landing on $(\varphi_0,\theta_0)$. By the computations above, this is easy: $\alpha$ and $\beta$ must satisfy $\alpha - \beta = \theta_0$, and $\varphi = \varphi_0$ is already completely determined. So the only constraint is on the difference $\alpha-\beta$: given some point in the fiber $p^{-1}(\boldsymbol \xi)$, if we add the same thing to both $\alpha$ and $\beta$, we stay in the fiber. This means the fiber has a parametrization that looks like
\[
\beta \mapsto F\begin{pmatrix}\varphi_0 \\ \theta_0 + \beta \\ \beta\end{pmatrix} =  \begin{pmatrix} \cos (\theta_0 + \beta) \cos \varphi_0 \\ \sin(\theta_0 + \beta)\cos\varphi_0 \\ \cos\beta\sin\varphi_0 \\ \sin\beta\sin\varphi_0 \end{pmatrix}.
\]
We finally finish things off by composing with the stereographic projection $P$ as before:
\[
\beta \mapsto \frac{1}{1-\sin\beta\sin\varphi_0 } \begin{pmatrix}\cos (\theta_0 + \beta) \cos \varphi_0 \\ \sin(\theta_0 + \beta)\cos\varphi_0 \\ \cos\beta\sin\varphi_0 \end{pmatrix}.
\]
Plotting this gives us the nice circles shown at the start of the post. We'll continue to explore the properties of $p$ and its visualizations as we move along in algebraic topology.
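Here is a minimal Python sketch of that plot (numpy and matplotlib are my assumptions), reproducing the 30 fibers described in the caption at the top of the post:

```python
import numpy as np
import matplotlib.pyplot as plt

# 30 Clifford circles (Hopf fibers) with phi_0 = pi/8 and theta_0 ranging
# over [0, 2*pi), drawn via the stereographically projected parametrization.
phi0 = np.pi / 8
beta = np.linspace(0, 2 * np.pi, 400)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for theta0 in np.linspace(0, 2 * np.pi, 30, endpoint=False):
    denom = 1 - np.sin(beta) * np.sin(phi0)      # stereographic projection
    x = np.cos(theta0 + beta) * np.cos(phi0) / denom
    y = np.sin(theta0 + beta) * np.cos(phi0) / denom
    z = np.cos(beta) * np.sin(phi0) / denom
    ax.plot(x, y, z)
plt.show()
```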

Wednesday, September 9, 2015

Some Cool Views of Projective Space

While finally revisiting one of the geometry books on my shelf, Glen Bredon's Topology and Geometry, I encountered an exercise about showing that the projective space is homeomorphic to the mapping cone of a map that doubles a circle on itself (the complex squaring map $z \mapsto z^2$). The mapping cone has a nice visualization, first as a mapping cylinder, which takes a space $X$ and crosses it with the interval $I$ to form $X \times I$ (thus forming a "cylinder"), and then glues the bottom of it to another space $Y$ using a given continuous map $f : X \to Y$. Finally, to make the cone, it collapses the top to a single point. Of course, this can be visualized as deforming the bottom part of $X \times I$ through whatever contortion $f$ does, which might include self-intersection (and of course, it could be more gradual). So I used a good old friend, parametrizations, to help set up an explicit example. Take a look!

An immersion of projective space into $\mathbb R^3$. Shown as a mesh to make the self-intersecting portion visible. Looks a little like a molar, although one would hope I take good enough care of my teeth to not have that many holes in it…
A cutaway view, now as a more solid surface, basically illustrating it now as a mapping cylinder (it is homeomorphic to projective space minus a disk, which is a Möbius strip). Anyone know a good glassblower so we can make vases that look like this?
A view from the open top, allowing us to see the self-intersecting part of the surface from above
The equation, in "cylindrical coordinates", is $r = (2z+\cos\theta) \sqrt{1-z^2}$ for $0\leq z \leq 1$ (for the closed surface), $0 \leq z \leq 0.97$ (for the cutaway), and $0 \leq \theta \leq$ (what else?) $2\pi$. I say it in scare quotes because technically, it allows negative values of $r$. For fun, though (and to make it totally legit, even if you have qualms about negative radii), we rewrite it as a (Cartesian) parametrization (by substituting for $r$):
\[
\begin{pmatrix} x \\ y \\ z\end{pmatrix} = \begin{pmatrix} (2u + \cos v)\sqrt{1-u^2} \cos v\\ (2u + \cos v) \sqrt{1-u^2}\sin v \\ u\end{pmatrix}
\]
with $0\leq u \leq 1$ (or $0.97$ for the cutaway), and $0 \leq v \leq 2\pi$.
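Here is a minimal Python sketch of this parametrization (numpy and matplotlib assumed), drawn as a mesh as in the first picture:

```python
import numpy as np
import matplotlib.pyplot as plt

# The immersed surface from the parametrization above
# (set u_max = 0.97 for the cutaway view instead of the closed surface).
u_max = 1.0
u, v = np.meshgrid(np.linspace(0, u_max, 80), np.linspace(0, 2 * np.pi, 160))

r = (2 * u + np.cos(v)) * np.sqrt(1 - u**2)   # the "cylindrical" radius
x, y, z = r * np.cos(v), r * np.sin(v), u

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z, rstride=4, cstride=4)  # mesh shows the self-intersection
plt.show()
```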

The motivation for this is that the curves $r = a + \cos\theta$, as $a$ varies, go from a loop traversed once to a loop that folds over itself in exactly a $2:1$ manner:


but now visualized stacked on top of one another. Then, to collapse the top, we shrink the diameter by a factor that vanishes at $1$, approaching $0$ with infinite slope so as to make the surface smooth (topologically speaking, it would also be valid to let it collapse to a corner point). Flowers, however, are not included (you'll have to fiddle with things like $r = \cos(k\theta)$ if you want those…)

Saturday, September 5, 2015

Distributing Points on Spheres and other Manifolds

One of the classic problems of random-number generation, and of representing probability distributions generally, is the uniform distribution of points on a ($2$-)sphere (we, of course, clarify the dimension, having gone on too many extradimensional journeys lately, but we'll quickly see that these methods are good for those cases too!). Namely, how does one pick points at random on a sphere such that every spot on it is equally likely? Naïvely, one tries to jump to latitude and longitude. But this is because we are trained to think of the sphere in terms of parametrizations (yes, we like those, obviously, but they are only a means of representation, not the be-all and end-all of geometric objects), and not in terms of quantities defined directly on the sphere itself. Uniformity for points directly on the sphere need not correspond to uniformity with respect to the real parameters defining those points. After all, the whole point of parametrization is to distort very simple domains into something more complicated. What happens, for example, with latitude and longitude? If one distributes points uniformly in latitude, i.e., over the interval $[-\pi/2,\pi/2]$, and also uniformly in longitude, over the interval $[-\pi,\pi]$, then, after mapping to the corresponding points on the sphere, one gets points concentrated near the poles:

Sphere with a distribution of 2000 points, uniform in latitude and longitude, generated by MATLAB. Notice how the points cluster at the poles. (Here the north pole of the sphere is tipped slightly out of the plane of the page so we can see the points getting denser there. Click to enlarge.)

This occurs essentially because, at different latitudes, a degree of longitude can be a very different physical distance (about 70 miles at the equator, shrinking to zero at the poles): the numerical correspondence does not match up with other, more relevant physical measures. In more fancy-schmancy speak, uniformity in latitude and longitude (or the equivalent spherical $\varphi$ and $\theta$) means uniformity on some imaginary rectangle that we deform into a sphere. Actually, without much work, we can already intuitively see what we have to do to achieve uniformity on the sphere in terms of those coordinates: make the distribution thin out when the latitude indicates we're near the poles, i.e., we want the distribution of points in the parameter rectangle to look like this:

Nonuniform distribution of points on the parameter rectangle $(\theta,\varphi)$. Notice the points get more sparse at the top and bottom, corresponding to $\varphi$ at its extreme values.
How do we express this in formulas? We need to take a look at the area element. The naïve uniform distribution gives $\frac{1}{2\pi^2}d\varphi\; d\theta$, namely $\frac{1}{2\pi^2}$ times the area element of the rectangle $[0,\pi]\times[0,2\pi]$, where the $1/2\pi^2$ is there to make the integral come out to $1$, as required for all probability densities. But anyone who has spent a minute in a multivariable calculus class probably got it drilled into their heads that the area element of the sphere, namely that which measures the area of a piece of a sphere of radius $r$, is $r^2 \sin \varphi\; d\varphi\; d\theta$ (or $r^2\sin \theta\; d\theta\; d\phi$ if you're using the physicists' convention… that's a whole 'nother headache right there). We'll take $r = 1$ here, and of course, we now have to multiply by $\frac{1}{4\pi}$ to make the density integrate to $1$. It is reasonable that actual physical area would correspond to an actual uniform distribution, because the area of some plot of land on a sphere is the same no matter how the sphere is rotated: to distribute one sample point per square inch everywhere on a sphere would indeed truly give a uniform distribution.

So this gives us the solution: we can still uniformly distribute in "longitude" $\theta \in [0,2\pi)$ (which accounts for the factor of $\frac{1}{2\pi} d\theta$), since the area element has no functional dependence on $\theta$ other than $d\theta$. For $\varphi$, however, we need to weight the density in such a manner that, taking $d\varphi$ for the moment in its classical interpretation as an infinitesimal (or just a small increment in $\varphi$), the proportion of points landing between $\varphi$ and $\varphi+ d\varphi$ is approximately $\frac{1}{2}\sin\varphi \;d\varphi$ (the $\frac{1}{2}$ comes from removing the previous factor of $1/2\pi$ from the total $1/4\pi$). But that we recognize as $\frac{1}{2}d(-\cos \varphi)$, so if we define a new variable $u = -\cos \varphi$, then our area form is $\frac{1}{2} du\; \frac{1}{2\pi} d\theta$. Thus if we uniformly distribute $u$ in $[-1,1]$ (an interval of length $2$, justifying the $\frac{1}{2}$), and only then calculate $\varphi = \cos^{-1}(-u)$ (and uniformly distribute $\theta$ in $[0,2\pi]$ as before), we do in fact get a uniform distribution on the sphere.

Sphere with a uniform distribution of 2000 points (click to enlarge), generated by MATLAB
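In code, the fix is a one-liner. Here is a minimal Python sketch comparing the naive sampling with the corrected one (the figures in the post were generated with MATLAB; numpy and matplotlib are my own substitution here):

```python
import numpy as np
import matplotlib.pyplot as plt

# Naive (uniform in the polar angle) vs. correct sampling on the sphere.
n = 2000
rng = np.random.default_rng(0)

theta = rng.uniform(0, 2 * np.pi, n)     # longitude: uniform in both cases

phi_naive = rng.uniform(0, np.pi, n)     # uniform in the polar angle: clusters
u = rng.uniform(-1, 1, n)
phi_correct = np.arccos(-u)              # phi = arccos(-u), u uniform in [-1, 1]

fig = plt.figure(figsize=(10, 5))
for i, (phi, title) in enumerate([(phi_naive, "naive: clusters at poles"),
                                  (phi_correct, "correct: uniform on sphere")]):
    x = np.sin(phi) * np.cos(theta)
    y = np.sin(phi) * np.sin(theta)
    z = np.cos(phi)
    ax = fig.add_subplot(1, 2, i + 1, projection='3d')
    ax.scatter(x, y, z, s=2)
    ax.set_title(title)
plt.show()
```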

So What's the More General Thing Here?

As veteran readers might have expected, this situation is not as specific as it may seem. Transforming probability density functions (pdfs), even just on a line, sometimes seems very mysterious, because it doesn't work the same way as other coordinate changes for functions one often encounters. It is not enough to simply evaluate the pdf at a new point (corresponding to the composition of the pdf with the coordinate change). Instead, what is reported in most books, and sometimes proved using the Change of Variables theorem, is some funny formula involving a lot of Jacobians. But here is the real reason: the natural object associated to a probability density is a top-dimensional differential form (actually, a differential pseudoform, which I have spent some time advocating, for the simple reason that modeling things in a geometrically correct manner really clarifies them; just think, for example, of how one may become confused by a right-hand rule). A way to view such objects, besides as a function times a standardized "volume" form, is as a swarm, which is precisely one of these distributions of points (they don't have to be random, but generating them this way is an excellent way to get a handle on them).

If we have some probability distribution on our space, and near some point it is specified (parametrized) by variables $x = (x_1, x_2,\dots,x_n)$, then the pdf should (locally) look like $\rho(x_1,x_2,\dots,x_n) dx_1 dx_2\dots dx_n$. Most probability texts, of course, just consider the $\rho$ part without the differential $n$-form part $dx_1\dots dx_n$. When transforming coordinates to $y$ such that $x = f(y)$, the standard sources simply state that $\rho$ in the new coordinates is $\rho(f(y)) \left|\det \left(\frac{\partial f}{\partial y}\right)\right|$. But really, it's because the function part $\rho(x)$ becomes $\rho(f(y))$ as a function usually would, and the $dx_1 \dots dx_n$ becomes $\left|\det \left(\frac{\partial f}{\partial y}\right) \right| dy_1\dots dy_n$. In total, we have the transformation law
\[
\rho(x_1,\dots,x_n) dx_1 \dots dx_n=\rho(f(y_1,\dots,y_n)) \left|\det \left(\frac{\partial f}{\partial y}\right) \right| dy_1\dots dy_n.
\]
The usual interpretation of the $n$-form $\rho(x) dx_1 \dots dx_n$ is simply that, when integrated over some region of space, it gives the total quantity of whatever such a form is trying to measure (volume, mass, electric charge, or here, probability).
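Here is a small numerical sketch of why the Jacobian factor is really there (Python with numpy/matplotlib, my own choice; the map $f(y)=y^3$ is just a hypothetical example): transform normal samples through a simple change of variables and compare the histogram against the formula with and without the determinant factor.

```python
import numpy as np
import matplotlib.pyplot as plt

# Checking the 1D transformation law.  Take x standard normal and set
# x = f(y) = y^3, i.e. y = x^(1/3).  The density of y should then be
# rho(f(y)) * |f'(y)| = rho(y^3) * 3y^2, not just rho(y^3).
rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
y = np.cbrt(x)                                    # transformed samples

rho = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)   # standard normal pdf
grid = np.linspace(-2, 2, 400)

plt.hist(y, bins=200, density=True, alpha=0.4, label="histogram of y")
plt.plot(grid, rho(grid**3) * 3 * grid**2, label=r"$\rho(y^3)\,|3y^2|$")
plt.plot(grid, rho(grid**3), "--", label=r"$\rho(y^3)$ alone (wrong)")
plt.legend()
plt.show()
```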

Another Curious Example

One perennially interesting example is to take two independent, normally distributed random variables (each with mean $0$ and variance $\sigma^2$) and consider the distribution of their polar coordinates. For one variable, the normal distribution is, using our fancy-schmancy form notation,
\[
\rho(x)\; dx = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-x^2/2\sigma^2} dx
\]
From the definition of independence, two independent normally distributed variables have a joint density obtained by multiplying the individual densities together:
\[ \omega = \rho(x)\rho(y) \; dx\;dy = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2} dx\;dy.\]
So if we want the distribution in polar coordinates, where $x = r \cos \theta$ and $y = r \sin \theta$, this means we transform the form part as the usual $dx\;dy = r\; dr\; d\theta$, and write the density part as $\frac{1}{2\pi\sigma^2}e^{-r^2/2\sigma^2}$. In total,
\[
\omega = \frac{1}{2\pi\sigma^2} r e^{-r^2/2\sigma^2} dr \; d\theta
\]
This factors back out into two independent pdfs (i.e., $r$ and $\theta$ are independent random variables) as
\[
\omega = \left(\frac{1}{\sigma^2} r e^{-r^2/2\sigma^2} dr\right) \left( \frac{1}{2\pi} d\theta\right)
\]
which means we can take $\theta$ and uniformly distribute it in $[0,2\pi]$ as before. To get $r$, we can proceed in two ways. First, we could imitate what we did earlier with the sphere: use integration by substitution (or the Chain Rule) to deduce that $\frac{1}{\sigma^2} r e^{-r^2/2\sigma^2} dr= d\left(- e^{-r^2/2\sigma^2}\right)$, and thus, making the substitution $v = - e^{-r^2/2\sigma^2}$, we now have $\omega = \frac{1}{2\pi} dv\; d\theta$. The range of $v$ is $[-1,0)$, since $e^{-r^2/2\sigma^2}$ goes to zero as $r \to \infty$ and takes its maximum value $1$ at $r = 0$. With this, we uniformly distribute $v \in [-1,0)$ and invert the function:
\[
r=\sqrt{-2\sigma^2 \log (-v)}.
\]
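Before moving on to the alternative, here is a minimal Python sketch of this first method (numpy assumed; this is essentially the classic Box–Muller trick). As a sanity check, the resulting $x$ and $y$ should behave like independent normals:

```python
import numpy as np

# Uniform v in [-1, 0), uniform theta, then r = sqrt(-2 sigma^2 log(-v)).
rng = np.random.default_rng(0)
sigma, n = 1.0, 100_000

v = rng.uniform(-1.0, 0.0, n)                  # note: -v is uniform in (0, 1]
theta = rng.uniform(0.0, 2 * np.pi, n)
r = np.sqrt(-2 * sigma**2 * np.log(-v))

x, y = r * np.cos(theta), r * np.sin(theta)    # should be independent normals
print("sample means:", x.mean(), y.mean())       # both ~ 0
print("sample stds: ", x.std(), y.std())         # both ~ sigma
print("correlation: ", np.corrcoef(x, y)[0, 1])  # ~ 0
```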

Alternatively, we can substitute $u = r^2/2\sigma^2$ (with inverse $r = \sqrt{2\sigma^2 u}$). Then $du = (r/\sigma^2) dr$, and $\frac{1}{\sigma^2} re^{-r^2/2\sigma^2} dr = e^{-u} du$, with $u \geq 0$. This means $u$ is distributed as an exponential random variable with parameter $\lambda = 1$. However, exponential variables are often generated from uniform $[0,1]$ variables as well, for which we end up using something similar to the above method anyway. Still, it gives another way of understanding the distributions: it certainly is very interesting that two normally distributed random variables become, via as ordinary a transformation as polar coordinates, (the square root of) an exponentially distributed random variable and a uniform random variable.

What about three independent normally distributed variables (with mean zero and all the same variance)? It turns out the $r$ coordinate is not so easy (its square follows a gamma distribution), but the $\varphi$ and $\theta$ variables yield a uniform distribution on the sphere! This shouldn't be too surprising: given three independent such variables, there should be no bias in their directionality in $3$-space. We can also turn this around to give ourselves an alternate method of uniformly distributing points on a sphere (since normal random variables are very easy to generate): take three such variables, consider them as a point in $\mathbb R^3$, and divide by its magnitude (I thank Donald Knuth's Art of Computer Programming, Volume 2 for this trick; it has an excellent discussion on random number generation).
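In code, that trick is about as short as it gets (a sketch, numpy and matplotlib assumed); it also generalizes to spheres of any dimension:

```python
import numpy as np
import matplotlib.pyplot as plt

# Knuth's trick: three independent normals, normalized to unit length,
# give a uniformly distributed point on the 2-sphere.
rng = np.random.default_rng(0)
pts = rng.standard_normal((2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=2)
plt.show()
```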

Tuesday, September 1, 2015

Clifford Tori


As part of the ongoing celebration that is the relaunch, here, we finally give the long-awaited in-depth look at the site's namesake hinted at in the opening post. Perhaps surprisingly for a post that ostensibly is about tori, we begin our story with a different well-known (and hopefully loved) family of spaces: the spheres. The $1$- and $2$-dimensional spheres hardly need introduction, being undoubtedly the first curved geometrical objects that one studies: the ordinary circle and sphere (which, for mathematicians, refers only to the boundary surface of a ball, not its interior, and so is $2$-dimensional). These shapes of course appear all the time in nature, since they satisfy many optimality properties. Much, of course, has been written about their "perfect form."

Higher-dimensional spheres are easy to define: the set of all points a given distance from a given point, where, of course, "all points" means all points of some higher-dimensional space, say $\mathbb{R}^n$. High-dimensional spheres have interesting applications, such as in statistical mechanics, where the dimension of the state space in question is on the order of $10^{24}$ (every particle gets its own set of 3 dimensions! Again, this is why state space is awesome). Spheres occur in this context because the total energy of the system remains constant, so the sum of the squares of all the momenta has to be constant; namely, the momenta all lie a certain "distance" from the origin. As crazy as it may sound, it is, at its root, a description (using Hamiltonian mechanics) of many more than the two particles that we spent a whole 5-part series talking about! But of course, it'd take us a bit far afield to explore this (at least in this post; we should eventually feature some statistical-mechanical calculations here, because they illustrate how we can deal with overwhelmingly large-dimensional systems and, rather amazingly, still extract useful information from them!).

So let's get back to nearly familiar territory: $n = 3$, the $3$-sphere $S^3$ (the boundary of a ball in 4-dimensional space $\mathbb{R}^4$). Being $3$-dimensional, one would think we can visualize it or experience it viscerally somehow. As noted by Bill Thurston, the key to visualization of $3$-manifolds is to imagine living inside a universe that is shaped like one (this is also elaborated upon, working up from $2$-dimensional examples, by Jeffrey Weeks, with interactive demos). This isn't so straightforward to visualize (literally), because it is a curved 3-dimensional space. There are a number of ways of doing this; they give the essence of $S^3$ by understanding it in terms of some more familiar objects from lower dimensions, such as slicing by hyperplanes, and of course, by tori.

The Clifford Tori as a Foliation of $S^3$

So where do tori (nested or not) come in? Let's get back to the defining formula of $S^3$,
\[
x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1.
\] If we consider the terms in groups of two, $x_1^2 + x_2^2 = A^2$ and $x_3^2 + x_4^2 = B^2$, we have $A^2 + B^2 = 1$. But for each fixed $A$ and $B$ satisfying that relation, we get two circles: one for the coordinates $(x_1,x_2)$ and another for the coordinates $(x_3,x_4)$. We can use this information to help parametrize $S^3$. Let's first ask: what are some familiar things satisfying $A^2+B^2 = 1$? If we let $A = \cos \varphi$ and $B = \sin \varphi$, then $A$ and $B$ will always satisfy this relation for any $\varphi \in \mathbb{R}$: so $\varphi$ is one possible parameter of this system, and the full possible range of $A^2$ and $B^2$ is assumed by letting $\varphi$ vary from $0$ to $\frac \pi 2$. This gives us: \[
x_1^2 + x_2^2 =\cos^2 \varphi
\] and \[
x_3^2 + x_4^2 = \sin^2 \varphi.
\]
In other words, as $\varphi$ varies, the two sets of coordinates represent one circle that grows in radius and another that shrinks, in such a way that the sum of the squares of the two radii is always $1$. This realizes the $3$-sphere as a collection of sets of the form $S^1(A) \times S^1(B) = S^1(\cos\varphi) \times S^1(\sin\varphi)$, the Cartesian product of circles of radius $A$ and $B$. But what is the Cartesian product of two circles? A torus. These tori fill up all of $S^3$ (except for two degenerate cases: two circles, each corresponding to the Cartesian product of a unit circle and a single point, that is, a circle of radius $0$). The technical term for this is that they foliate $S^3$ (from the Latin folium for "leaf"). They are called Clifford tori. Hard as it may be to believe, their intrinsic geometry, as inherited from $\mathbb{R}^4$, is flat (although this is not true of their extrinsic geometry). It means that if we had a sheet of paper, we could lay it flat on a Clifford torus in $\mathbb{R}^4$ (you can't do that in $\mathbb{R}^3$ with your garden-variety donut-shaped torus). However, a full study of the geometry of these tori will have to wait for another time. Here, we will be content to visualize them (which, unfortunately, will not preserve that flat geometry we are claiming they have). For the visualization, we use another tool, the stereographic projection. Before we get to that, we finally note that, for each given $\varphi$, each of the circles $S^1(\cos \varphi)$ and $S^1(\sin\varphi)$ can be further parametrized with other angles; we take
\[
(x_1,x_2) = (\cos \alpha \cos \varphi, \sin \alpha \cos \varphi)
\] and \[
(x_3,x_4) = (\cos \beta \sin \varphi, \sin \beta \sin \varphi),
\]
where $0\leq \alpha,\beta \leq 2\pi$. So this means we have parametrized $S^3$ by 3 variables, $(\varphi,\alpha,\beta)$ varying over $[0,\pi/2]\times [0,2\pi]\times [0,2\pi]$.

The Stereographic Projection

There's a way to project (almost) the whole $3$-sphere into ordinary $3$-space $\mathbb{R}^3$. To understand how we can project the $3$-sphere (minus a point) into $3$-space, let's first look at the analogous problem for the $2$-sphere. It so happens that the stereographic projection is extremely useful in that case as well, and is the source of many arguments involving "the point at infinity" in complex analysis. The way it works is to imagine screwing in a light bulb at the top of a sphere resting on a plane; given a point on the sphere, the shadow it casts on the plane under this light is the corresponding point in the plane (shown in the figure below for a circle projecting to a line: the dots connected by the blue rays correspond).

Some corresponding points for the stereographic projection of a circle (here of radius $\frac 1 2$) to a line.
Notice that the closer a point gets to the top, the farther out its ray goes (and thus the farther out the corresponding point). The ray through the top point itself is horizontal, so it never intersects the plane of projection: that point is said to be sent to the "point at infinity." But the thing is, that extra point at infinity is, at least for the plane as we've defined it, truly extra, so the stereographic projection really just maps the sphere minus one single point to the plane. In formulas, for a sphere of radius $\frac{1}{2}$ centered at $(0,0,\frac{1}{2})$, this is
\[
\left(\frac{x}{1-z},\frac{y}{1-z}\right).
\]
Of course, if we want to map the unit (radius 1 and origin-centered) sphere, we have to do a little finessing with an extra transformation, namely, $(x',y',z') = (2x, 2y, 2z-1)$. It so happens that we get the exact same formula back, just a different domain:
\[
\left(\frac{x}{1-z},\frac{y}{1-z}\right) = \left(\frac{\frac{1}{2}x'}{1-\frac{1}{2}z'-\frac{1}{2}},\frac{\frac{1}{2}y'}{1-\frac{1}{2}z'-\frac{1}{2}}\right)  =\left(\frac{x'}{1-z'},\frac{y'}{1-z'}\right).
\]
For this reason, many people start off with this formula for the unit sphere instead. The picture associated to this is actually nice: it projects rays of light from the top, through the sphere, and onto a plane that slices the sphere exactly in half at its equator. The consequence is that, for a point in the lower hemisphere, the light hits the corresponding point in the plane before reaching the sphere (but, mathematically, we keep the mapping as always from the sphere to the plane, regardless of which one the light beam would hit first). It should also be noted that the inverse mapping, of course, will be different (we won't be needing it here, but it is also useful in other contexts, and can be derived by elementary, if tedious, means, via messy algebra).

Stereographic projection of the unit, origin-centered sphere to the plane containing its equator. The projection associates the point $P$ to the point $Q$.

So we generalize this formula: for the unit 3-sphere, given as $x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1$, we have a stereographic projection $P$ of all of its points except the point $(0,0,0,1)$, to $\mathbb{R}^3$, as follows:
\[
P  \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4\end{pmatrix}  =\frac 1 {1-x_4}\begin{pmatrix} x_1\\ x_2 \\ x_3\end{pmatrix}.
\]
In order to visualize our Clifford Tori, then, we recall the parametrization derived above:
\[
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4\end{pmatrix} = F\begin{pmatrix}\varphi \\ \alpha \\ \beta\end{pmatrix} = \begin{pmatrix} \cos \alpha \cos \varphi \\ \sin\alpha\cos\varphi \\ \cos\beta\sin\varphi \\ \sin\beta\sin\varphi \end{pmatrix}.
\]
The two circles that we want arise as $\alpha$ and $\beta$ vary, so if we select a few values of $\varphi$ and draw the surface given by the parametrization in the remaining variables, we will get a different torus for each value of $\varphi$. But, of course, this still gives the torus as sitting in $\mathbb R^4$. So we compose with the stereographic projection: for fixed $\varphi$, and letting $\alpha, \beta$ vary, we consider the parametrizations
\[
\Phi \begin{pmatrix} \varphi \\ \alpha \\ \beta \end{pmatrix} = P \circ F\begin{pmatrix} \varphi \\ \alpha \\ \beta \end{pmatrix} = \frac{1}{1-\sin\beta\sin\varphi} \begin{pmatrix} \cos \alpha \cos \varphi \\ \sin\alpha\cos\varphi \\ \cos\beta\sin\varphi  \end{pmatrix}.
\]


These are the tori given in the opening post (the tori are deliberately shown incomplete, with $\beta$ not going full circle, so that you can see how they change with different $\varphi$ and that they are nested). Note that this transformation does not preserve sizes, so even though we said that "one circle gets bigger as the other gets smaller," this is not what happens in the projection. Think of a sphere with circles of latitude: they grow from one pole to the equator and shrink back down from the equator to the other pole, but their stereographic projections to the plane just keep growing. The other view of this, and the site's logo, is simply a cutaway view of the stereographic projections by one vertical plane. As a note of thanks, bridging years, one of the sources of inspiration for learning about Clifford tori and their visualizations has been Ivars Peterson's The Mathematical Tourist. (He maintains a blog as well.)
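For completeness, here is a minimal Python sketch (numpy and matplotlib assumed, with a few arbitrary sample values of $\varphi$) of how such a nested-tori picture can be drawn from $\Phi$:

```python
import numpy as np
import matplotlib.pyplot as plt

# A few nested Clifford tori under stereographic projection, with beta
# deliberately not going full circle so the nesting is visible.
alpha, beta = np.meshgrid(np.linspace(0, 2 * np.pi, 80),
                          np.linspace(0, 1.5 * np.pi, 60))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for phi in [np.pi / 8, np.pi / 5, np.pi / 3.2]:
    denom = 1 - np.sin(beta) * np.sin(phi)       # the 1 - x4 of the projection
    ax.plot_surface(np.cos(alpha) * np.cos(phi) / denom,
                    np.sin(alpha) * np.cos(phi) / denom,
                    np.cos(beta) * np.sin(phi) / denom,
                    alpha=0.7)
plt.show()
```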

We should finally note that this is not the same parametrization as our usual torus of revolution. Parametrizations are not unique. The Clifford tori have many interesting, topologically important properties that will certainly be fodder for future posts.