Saturday, October 31, 2020

Happy Halloween from Nested Tori!


Spooky greetings. A quick post with more complex analysis for you. It's maybe not as visually interesting as fractals, but it gets back to some basics and nuance regarding complex functions. For one thing, I never had a chance to really fit in a good study of Riemann surfaces (interest in them was partially stirred up by all that recent cubic hackery). Also, branch cuts and branch points are gone over quickly in standard complex analysis and never really explored in depth, other than to say "we take the principal branch of the logarithm/root/power function to be..."; you don't get into the nuances of how different branches of a function actually differ. This post explores the mapping $z \mapsto \sqrt{z^2 + 1}$. This is already interesting from the Riemann surface point of view, because neither the function nor its inverse is definable in its most interesting form by mapping (subsets of) the complex plane alone. Formally, to work on $\mathbb{C}$, we have to define things by taking a continuous choice of square root on $\mathbb{C}$ minus some ray from $0$ to $\infty$. The choice between a square root and its opposite (which is also a square root of the same complex number!) is not natural in the complex plane (unlike in the real numbers, where you can always take the positive square root of a positive number; this is, in some sense, a defining characteristic of the real numbers, cementing why they form the unique complete ordered field), and being forced to choose forces a discontinuity somewhere. Riemann surface theory is, in some sense, about never having to make the choice. To quote Sir Roger Penrose (congrats on that Nobel Prize!):

In particular, the domain of the logarithm function would be ‘cut’ in some arbitrary way, by a line out from the origin to infinity. To my way of thinking, this was a brutal mutilation of a sublime mathematical structure. (The Road to Reality, p. 136)

A couple of references (e.g. Penrose's student Tristan Needham, in his otherwise fantastic Visual Complex Analysis) suggest that the most common branch cut for this function is two horizontal rays going to the left of $\mathsf{i}$ and $-\mathsf{i}$. I wrangled with this a bit, but I believe a subtlety is left out. To study $\sqrt{z^2 + 1}$ as truly the composition of a square root function with the mapping $z^2 + 1$, your branch cut has to be the inverse image, under $z^2 + 1$, of the branch cut of the square root. Using the negative real axis, this inverse image is the ray above $\mathsf{i}$ together with the ray below $-\mathsf{i}$ on the imaginary axis. Two rays to the left of $\pm \mathsf{i}$ cannot be realized as such an inverse image, for any ray to infinity (the image of such a ray under $z^2 + 1$ is part of a whole parabola to infinity, not a ray). Of course, the essential definition in complex analysis is that you define the function on this set by taking paths from an origin point and analytically continuing, entrusting your result to algebraic topology. But this is not very effective, computationally. It basically says that for some selections of branch cuts, the function composition $\sqrt{z^2 + 1}$ is a lie: you cannot get it by assembling square root, squaring, and plus one. What turns out to work, in terms of assembling functions rather than continuing along curves, is not $\sqrt{z^2 + 1}$ but $\sqrt{z+\mathsf{i}} \sqrt{z-\mathsf{i}}$: if you take the branch of square root cut along the negative real axis, add and subtract $\mathsf{i}$, take the square root of each, and then multiply, you do get something that reproduces what you theoretically get by paths. One might wonder what the difference is between $\sqrt{z^2 + 1} = \sqrt{(z+\mathsf{i})(z-\mathsf{i})}$ and $\sqrt{z+\mathsf{i}}\sqrt{z- \mathsf{i}}$; these are not the same because $\sqrt{ab} \neq \sqrt{a}\sqrt{b}$ in the complex plane, unless you regard it as an equality of multivalued sets.
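As a quick numerical sanity check of this (a minimal numpy sketch of my own, not anything from the original post; the probe points are arbitrary), compare the two formulas on either side of each candidate cut, using numpy's principal square root (cut along the negative real axis):

```python
import numpy as np

eps = 1e-9

def f_comp(z):
    # the composition: principal square root applied to z^2 + 1
    return np.sqrt(z**2 + 1)

def f_prod(z):
    # the product of principal square roots of z - i and z + i
    return np.sqrt(z - 1j) * np.sqrt(z + 1j)

# crossing the vertical ray above i, at z = 2i:
print(f_comp(2j + eps), f_comp(2j - eps))  # sign jump: the composition's cut
print(f_prod(2j + eps), f_prod(2j - eps))  # continuous

# crossing the horizontal ray left of i, at z = -1 + i:
print(f_comp(-1 + (1 + eps) * 1j), f_comp(-1 + (1 - eps) * 1j))  # continuous
print(f_prod(-1 + (1 + eps) * 1j), f_prod(-1 + (1 - eps) * 1j))  # sign jump: the product's cut
```

The composition jumps precisely where $z^2 + 1$ lands on the negative real axis (the vertical rays), while the product jumps where one of its factors does (the horizontal rays), consistent with the discussion above.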

The best rendering of this function, I've discovered, is obtained by taking the branch cut from $-\mathsf{i}$ to $\mathsf{i}$ along the imaginary axis: something that doesn't even go out to infinity. To his credit, Needham does mention that connecting the two branch points in a finite part of the plane is possible; it just has to be done in a manner that prevents you from completing a turn around just one of the two branch points $\pm \mathsf{i}$ (a loop around both is fine, since the two sign flips cancel). But there's no explicit computation that develops a visceral understanding of this fact (and so I made this post).

The title picture is the image of a large disk (parametrized by the usual polar coordinates) under the mapping $\sqrt{z-\mathsf{i}} \sqrt{z+ \mathsf{i}}$. The cut (deleted out of the disk) is taken on the interval from $-\mathsf{i}$ to $\mathsf{i}$ ($y=-1$ to $y=1$ in the picture). Computationally, this means the square roots were taken with angles in the interval $-\pi/2$ to $3\pi/2$, which required me to define the square root using some hackery with the arctan function. I leave you with the conformal version of this (where the concentric ring radii increase exponentially rather than linearly; the linear increase looks more like a cobweb, but this version better preserves the proportions of the grid rectangles formed by the intersecting lines).
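Here is one way such a branch might be implemented (a sketch under my own conventions, shifting np.angle instead of the arctan hackery mentioned above, so the details surely differ from the actual code behind the pictures). Each factor's cut sits on the negative imaginary axis; below $-\mathsf{i}$ the two sign flips cancel, leaving only the segment from $-\mathsf{i}$ to $\mathsf{i}$:

```python
import numpy as np
import matplotlib.pyplot as plt

def sqrt_branch(z):
    # square root with arguments taken in [-pi/2, 3pi/2),
    # i.e. branch cut along the negative imaginary axis
    theta = np.mod(np.angle(z) + np.pi / 2, 2 * np.pi) - np.pi / 2
    return np.sqrt(np.abs(z)) * np.exp(0.5j * theta)

def f(z):
    # below -i both factors flip sign across the cut, so the flips cancel;
    # the only remaining discontinuity is the segment from -i to i
    return sqrt_branch(z - 1j) * sqrt_branch(z + 1j)

# polar grid on a large disk; radii start above 1 so the grid misses
# the deleted segment, and increase exponentially (the "conformal" version)
r = np.exp(np.linspace(np.log(1.05), np.log(30), 40))
t = np.linspace(0, 2 * np.pi, 600)
W = f(r[None, :] * np.exp(1j * t[:, None]))

plt.plot(W.real, W.imag, 'b', lw=0.3)      # images of the concentric rings
plt.plot(W.T.real, W.T.imag, 'b', lw=0.3)  # images of the radial lines
plt.gca().set_aspect('equal')
plt.show()
```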


Or, take a look at it in Desmos:


Sunday, October 11, 2020

Chaos Still Reigns!!

A quick follow-on to last week's post. It would be blasphemous to talk about chaos without at least mentioning the most iconic fractal since fractals became a thing via The Fractal Geometry of Nature: the set of complex numbers named after its author, Benoit Mandelbrot. I've spent a lot of time exploring the Mandelbrot set for as long as I've been able to use computers (indeed, it is this, more than anything else, that started me on the visualization track), so you'd think I'd have plenty to say about it. And I do, but not this week. Instead, I'll leave you with the real reason why the thing is so damn captivating in the first place, with, what else, a visualization.

The Mandelbrot set with $-1.01 \leq\operatorname{Re}(c) \leq -1.006$ and $0.309 \leq \operatorname{Im}(c) \leq 0.313$.
 
The Mandelbrot set is in fact relevant to the strongest theme of this blog, that of parameter spaces. Namely, it is the set of all parameters $c$ such that the iteration $z \mapsto z^2 + c$, starting at $z=0$, remains bounded. It is in fact a catalogue of another bunch of fractals (just like having a catalogue of all lines in the plane): the Julia sets, which describe, for each fixed $c$, which starting values of the iteration $z \mapsto z^2 + c$ stay bounded. In other words, Julia sets describe a collection involving the $z$ values of the iteration, and the Mandelbrot set describes a collection involving the $c$ values. The main dark blobby part of the image is the actual set; the fancy colors are just colorings according to how long it takes for a point there to escape outside a certain disk in the plane ($|z|\leq 2$). Here the colors are assigned from the Viridis palette, and the iteration is done up to 1000 times (at 1000, the point is just considered to be in the set). The coloring cycles through the palette by the number of iterations modulo 50 (every 50 iterations, the color repeats), which is why you see interesting discontinuous jumps in color. Enjoy. (The next post will probably include some examples of Julia sets, showing precisely how the Mandelbrot set is a catalogue of them.) I leave you with my all-time favorite, the "Jewel Box," as coined by Rollo Silver in the fractal newsletter Amygdala back in the '90s. Unfortunately, I don't know where he is now or what the state of the newsletter is!

Centered at $-0.74542846 + 0.11300888\mathsf{i}$ and going about $10^{-5}$ on either side in the $x$-direction
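For the curious, here is roughly how such an escape-time picture can be generated (a minimal numpy/matplotlib sketch of the scheme described above, not the program actually used; the grid size is arbitrary), using the Jewel Box window just quoted:

```python
import numpy as np
import matplotlib.pyplot as plt

def escape_counts(cx, cy, half, n=800, max_iter=1000):
    # iterate z -> z^2 + c from z = 0 on an n x n grid of c values,
    # recording how many steps each point takes to leave the disk |z| <= 2
    xs = np.linspace(cx - half, cx + half, n)
    ys = np.linspace(cy - half, cy + half, n)
    C = xs[None, :] + 1j * ys[:, None]
    Z = np.zeros_like(C)
    counts = np.full(C.shape, max_iter)
    alive = np.ones(C.shape, dtype=bool)
    for k in range(max_iter):
        Z[alive] = Z[alive] ** 2 + C[alive]
        escaped = alive & (np.abs(Z) > 2)
        counts[escaped] = k
        alive &= ~escaped
    return counts

counts = escape_counts(-0.74542846, 0.11300888, 1e-5, max_iter=1000)
img = np.where(counts == 1000, 0, counts % 50)  # cycle the palette every 50 iterations;
                                                # survivors get the darkest color
plt.imshow(img, cmap='viridis', origin='lower')
plt.axis('off')
plt.show()
```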


Saturday, October 3, 2020

Chaos Reigns!!!

 

Chaos and fractals were all the rage when I was growing up, and were probably one of the major factors in getting me interested in math, programming, and differential equations, since much of this is ultimately about how systems evolve over time. Oh, and getting me into TOTALLY AWESOME visualizations. This is a graph of the various possible final states of the quadratic iterator $x_{n+1} = ax_n (1-x_n)$. What this means is: for a given parameter $a$ and some start value in the range $(0, 1)$, we consider iterating the map $F(a, x) = ax(1-x)$, namely, the sequence $F(a, x)$, $F(a, F(a, x))$, $F(a, F(a, F(a, x)))\dots$. For some ridiculous history here, one of my favorite things to do as a child was to repeatedly press buttons on a calculator over and over again to see what happens. Little did I know that this is basically, in math, the only thing that actually exists: everything is built on the foundation of recursion. Of course, I don't actually believe recursion is the only thing that exists; rather, there is interesting conceptual structure that is built up from fundamental components, and it is damn amazing how something so simple can ultimately create something so complex. And here is one example in which this is true.

Anyway, to get back to it: if we keep iterating $F$ in the variable $x$, we eventually fall into some stable pattern. For small $a$ (more precisely, $1 < a < 3$), iterating $F$ in the variable $x$ eventually gives convergence to a fixed point, namely $1-\frac{1}{a}$. How did we find this? If we have a fixed point $x = F(a, x)$, then $x = ax(1-x)$, or $-ax^2 + (a-1)x = 0$. This gives two solutions: $x = 0$, and $-ax + (a-1) = 0$, i.e., $x = (a-1)/a$.
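This, and the period-2 values discussed next, can be checked with a computer algebra system; here is a small sympy sketch of my own (nothing beyond the formulas in this post is assumed):

```python
import sympy as sp

a, x = sp.symbols('a x')
F = lambda t: a * t * (1 - t)

# fixed points: x = F(x) has solutions x = 0 and x = (a - 1)/a = 1 - 1/a
print(sp.solve(sp.Eq(F(x), x), x))

# period-2 points: F(F(x)) - x is a quartic in x; the factors x and
# (a*x - a + 1) carry the fixed points, and the remaining quadratic
# carries the 2-cycle
print(sp.factor(F(F(x)) - x))

# the quadratic's discriminant changes sign at a = 3,
# exactly where the 2-cycle becomes real
quad = a**2 * x**2 - a * (a + 1) * x + (a + 1)
print(sp.factor(sp.discriminant(quad, x)))  # a**2 * (a + 1) * (a - 3)
```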

Now, for each such $a$ (which we take to be the horizontal axis), we plot this fixed point (on the vertical axis). As you move up to $a=3$, all is peaceful, as all iterations eventually settle on one and only one point. But at $a = 3$, something strange happens. You stop getting a single fixed point; instead, you start bouncing between two values. We can actually solve for what these two values are using none other than ... the cubic equation (that's the small extent of continuity here with the previous posts!). The way to see this: if there's a period-2 sequence, then $x = F(a, y)$ and $y = F(a, x)$ for the two values, which means $x = F(a, F(a, x))$ and therefore $x = aF(a, x)(1-F(a, x)) =  a(ax(1-x))(1- ax(1-x))$. This is a priori a quartic equation, since the $x$'s multiply out to degree $4$, but once you realize there's a trivial solution $x = 0$ (because a factor of $x$ multiplies both sides), it's a cubic equation: $a^2(1-x)(1-ax(1-x)) - 1 = 0$. I'm not going to try solving this, despite having had much fun wrangling with Cardano's formula in the past few weeks, because that distracts from the ultimate insight: as fun as that formula is (along with Galois theory, the beautiful theory that eventually grew out of trying to solve polynomial equations), you quickly enter territory in which you stop being able to solve for things in terms of roots. To visualize, we instead proceed numerically, directly iterating from a start value and waiting for things to stabilize around a set of values. One sees that as $a$ grows higher and higher, the number of final points doubles and doubles, faster and faster (in fact, the parameter intervals between doublings shrink by a factor of the Feigenbaum constant), until all chaos breaks loose and the iterator never settles on any finite set of values. It's very weird how, just by varying the parameter $a$, one can go from very nice fixed-point behavior to wild chaos.
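Here is a minimal sketch of that numerical procedure (my own stab at reproducing the title figure; the burn-in and sample counts are arbitrary choices): iterate from a fixed start value, throw away the transient, and plot whatever the orbit settles on, one column per parameter value:

```python
import numpy as np
import matplotlib.pyplot as plt

a = np.linspace(2.5, 4.0, 2000)  # one column of the diagram per value of a
x = np.full_like(a, 0.5)         # arbitrary start value in (0, 1)

for _ in range(500):             # discard the transient behavior
    x = a * x * (1 - x)

for _ in range(200):             # then record the settled values
    x = a * x * (1 - x)
    plt.plot(a, x, ',k', alpha=0.25)

plt.xlabel('a')
plt.ylabel('final states')
plt.show()
```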

Ok, you can't go very far analytically, but we can still try anyway:


To explain it, we'll have to back up a little. Note that I said the $2$-periodic solution involves the solution to a cubic equation, which has three roots, while the cycle itself only uses two values. Where is the third one? (Recall that, as previously mentioned, the equation is really quartic, and $x=0$ accounts for one of the four solutions.) Is the cubic always one that has a zero discriminant? Probably not. We mentioned that these "fixed" points come about from iterating the mappings, i.e., there is dynamics involved. Solving the cubic, you will actually get $1 - 1/a$, the original fixed point, as one of the solutions (in fact, here is again where numerical analysis helps: you can get around using Cardano's formula if you realize from the picture that the original fixed point continues; then you guess the solution $1-1/a$, factor it out with synthetic division, and solve the remaining quadratic equation; this is exactly how I taught college algebra students to solve cubics). Of course, $1 - 1/a$ has to satisfy $x = F(a, F(a, x)) = G(a, x)$, since a fixed point of $F$ is automatically a fixed point of $G$; the other two solutions are fixed points of $G$ but not of $F$, so evaluating $F$ at one gives the other, and the iteration alternates between the two values. What goes wrong here is that the point $1 - 1/a$ (as well as the point $x=0$) is unstable: if you start iterating from some point even slightly away from it, the iteration runs away from $1 - 1/a$. What is significant about the value $a = 3$ is that for $a < 3$ the discriminant of the cubic forces two of the roots to be complex conjugates, so there is only one real solution; the sign change of the discriminant at $a = 3$ also marks the switchover from stability at $1 - 1/a$ to stability on the new pair. A fixed point $x$ is stable whenever $|\partial G/\partial x| < 1$ at that point. This can be roughly seen by linearizing about the fixed point, a standard technique:

\[G(a, x+h) = G(a, x) + \frac{\partial G}{\partial x}(a, x)\, h + O(h^2) = x + \frac{\partial G}{\partial x}\, h + O(h^2).\]

In other words, when $|\partial G/\partial x| < 1$, $G(a, x+h)$ is closer to $x$ than $x + h$ is, because $h$ is multiplied by something of magnitude less than $1$. So iterating brings points close to the fixed point even closer. It can be seen that, at least for a little while after $a = 3$, the central solution $1-1/a$ continues, but the other two solutions are the stable fixed points. So if you plot the curves $x = G(a, x)$, you'll get something that divides into $3$ branches at $a = 3$ (referred to as a pitchfork bifurcation). Similarly, if you plot $x = F(a, F(a, F(a, F(a, x)))) = H(a, x)$, for the 4th-order iterate, it will divide into pitchforks twice. This is given in the above plot. However, to figure out which fixed points will actually appear if you iterate from a random starting point, you can determine where $|\partial H/\partial x| < 1$. This gives the regions in the plane where solutions are stable. You can alternatively plot the complement, where the magnitude is $\geq 1$, which can be made to cover up the parts of the pitchforks that are not stable solutions. The weird gloopy visualization of these regions is just cool to look at, even where there aren't any stable points. The curves you see inside are still the correct initial ones, before everything branches out into a complete mess. It of course quickly becomes ferociously hard to compute for higher degrees, so a numerical study is still the way to go. But it is conceptually fun!
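A sketch of that plotting strategy (again a reconstruction of mine with arbitrary grid choices, not the original code): draw the zero contour of $H(a, x) - x$ for the pitchfork curves, and shade where $|\partial H/\partial x| \geq 1$ to cover the unstable branches:

```python
import numpy as np
import matplotlib.pyplot as plt

def F(a, x): return a * x * (1 - x)
def G(a, x): return F(a, F(a, x))  # second iterate
def H(a, x): return G(a, G(a, x))  # fourth iterate

avals = np.linspace(2.5, 3.6, 1000)
xvals = np.linspace(0.0, 1.0, 1000)
A, X = np.meshgrid(avals, xvals)

# numerical d/dx of the fourth iterate (axis 0 runs along x)
dHdx = np.gradient(H(A, X), xvals, axis=0)

# gray out the unstable region |dH/dx| >= 1
plt.contourf(A, X, (np.abs(dHdx) >= 1).astype(float),
             levels=[0.5, 1.5], colors=['lightgray'])

# fixed points of the fourth iterate: the zero contour of H(a, x) - x
plt.contour(A, X, H(A, X) - X, levels=[0], colors='black', linewidths=0.5)

plt.xlabel('a')
plt.ylabel('x')
plt.show()
```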

I leave you with a similar implicit function graph, but now color-coded:


The red and blue correspond to $|\partial G/\partial x| \geq 1$ and the greens are where it is $< 1$.

Another version, executed much faster using TensorFlow (yes, it can be used for non-machine-learning applications!):
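I'll close with a guess at the shape of such a computation (a hedged sketch, not the author's actual script): TensorFlow's automatic differentiation produces $\partial G/\partial x$ without writing the derivative out by hand, and the whole grid is evaluated in one vectorized, hardware-accelerated pass.

```python
import tensorflow as tf

def F(a, x):
    return a * x * (1.0 - x)

@tf.function
def stability(a, x, iterates=2):
    # |d/dx| of the iterated map, via automatic differentiation
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = x
        for _ in range(iterates):
            y = F(a, y)
    return tf.abs(tape.gradient(y, x))

A, X = tf.meshgrid(tf.linspace(2.5, 3.6, 1000), tf.linspace(0.0, 1.0, 1000))
mag = stability(A, X)  # iterates=2 gives G; iterates=4 gives H
stable = mag < 1.0     # the green regions; red/blue where the magnitude is >= 1
```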