tag:blogger.com,1999:blog-59500747944658318982018-09-12T17:30:07.430-07:00Nested ToriA blog dedicated to finding AWESOME visualizations in mathematics and other scientific disciplines. Even if you're not a mathematician or scientist, enjoy the view!Chris Tieenoreply@blogger.comBlogger36125tag:blogger.com,1999:blog-5950074794465831898.post-43810661061072188232017-02-07T13:12:00.000-08:002017-02-07T15:44:56.119-08:00Finite Element Methods on SurfacesIn this post, we return to one of the roots of Nested Tori, namely, visualization of problems solved using finite element methods. There are quite a few projects concerning curved domains that I worked on for that PhD, whose scattered results have not managed to make it on here until today. For some topics we <i>have </i>covered, see <a href="http://www.nestedtori.com/search/label/finite%20element%20methods">posts with the tag <i>finite element methods</i></a>. These include, e.g., <a href="http://www.nestedtori.com/2012/10/fun-with-fenics.html">basic diffusion in planar domains</a>, <a href="http://www.nestedtori.com/2012/11/hyperbolic-equations-and-symplectic.html">wave equations in planar domains using symplectic methods</a>, and <a href="http://www.nestedtori.com/2012/11/sneak-peek-vector-finite-elements.html">electromagnetic wave simulation using vector finite elements</a>. These posts will be a useful preview of today's post.<br /><br />Our first two examples concern heat diffusion on evolving surfaces with two interesting topologies: the sphere and the torus. The evolution is fairly simple: rotation about an axis. 
The heat source remains fixed in the surrounding 3D space, and focuses on a fixed (in the ambient space) spot on the sphere; the rotation carries the heat away and it spreads throughout the sphere (or torus).<br /><br /><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/yI1QdupBkPc/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/yI1QdupBkPc?feature=player_embedded" width="320"></iframe></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/TQ2Rc-rYi4c/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/TQ2Rc-rYi4c?feature=player_embedded" width="320"></iframe></div><br /><br />Next, we consider a case where the solution can't be effected by an obvious or simple change of coordinates (i.e., in the terminology of fluids and continuum mechanics, switching from Eulerian to Lagrangian coordinates): heat on a bouncing sphere that stretches and shrinks vertically. 
Here the heat source builds with time and cuts off midway to just let the heat spread:<br /><br /><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/P8ua2oRaDeE/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/P8ua2oRaDeE?feature=player_embedded" width="320"></iframe></div><br />The implementation of all of these cases is via surface finite elements in the Eulerian description: the surface is considered a material that moves through space, but the spatial coordinates are fixed.<br /><br />Our last example is a wave equation on a sphere, using surface finite elements and <a href="http://www.nestedtori.com/2012/11/hyperbolic-equations-and-symplectic.html">symplectic methods</a>. The oscillating quantity $u$ is depicted as displacement along the normal direction of a sphere (so that, for example, if $u$ has some small positive value at one point and decreases rapidly to 0 as you move away, it looks like a small lump over that point). The initial condition is a ruffly structure (with speed 0), and we just <a href="https://www.youtube.com/watch?v=moSFlvxnbgk">let it go</a>.<br /><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/4kPT-P_K5dY/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/4kPT-P_K5dY?feature=player_embedded" width="320"></iframe></div><div class="separator" style="clear: both; text-align: left;">Enjoy!</div><br />Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-41455994402292648472016-08-27T11:55:00.000-07:002016-09-02T03:21:09.210-07:00The Best CakeOptimization is one of the most useful tools in all of math. 
Many of the visualizations here are based on finding the optimum solution to something (and indeed, it drives many of the solution methods of differential equations). Here's an optimization <a href="http://fivethirtyeight.com/features/can-you-bake-the-optimal-cake/">puzzler</a> brought to you by <a href="http://fivethirtyeight.com/">FiveThirtyEight</a>. What are the optimal thicknesses of a 3-layer cake (each layer is cylindrical) that fits into a given (right, circular) cone, so as to maximize the volume? My <a href="https://ccom.ucsd.edu/~ctiee/538puzzler-0826.pdf">solution</a> is complicated, but here's a picture of the result... (it's red velvet)<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-_snFdugjICk/V8HhamOK2QI/AAAAAAAAD0U/lu_y5w08_vgnU-elYQgC6L1Jl2g7BqsPACLcB/s1600/Screen%2BShot%2B2016-08-27%2Bat%2B11.46.15%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="484" src="https://1.bp.blogspot.com/-_snFdugjICk/V8HhamOK2QI/AAAAAAAAD0U/lu_y5w08_vgnU-elYQgC6L1Jl2g7BqsPACLcB/s640/Screen%2BShot%2B2016-08-27%2Bat%2B11.46.15%2BAM.png" width="640" /></a></div><br />Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-77146309295529261762016-05-18T13:01:00.000-07:002016-05-19T02:48:29.819-07:00A Calculation in Statistical MechanicsWe've been harping about phase and state space for a long while now, and have mentioned at more than one point that statistical mechanics is a subject dealing with, in some sense, the ultimate state space, of many gazillions of dimensions. It is amazing that we can even calculate anything about such spaces. So we take a break from tori, and return to the (music of the) spheres, today, to give an example. We consider $N$-dimensional spheres, for $N = 0, 1, 2, 3, \dots$ and their "volumes". 
Note, of course, by $N$-sphere, we always mean the curved surface of points in $(N+1)$-dimensional space with unit distance from the origin, and not the whole ball. [The $3$-sphere, as we know, is foliated by nested tori. We aren't going TOO far afield].<br /><br />To talk about this right, we need a little physics. Most of you probably have heard of the principle of conservation of energy... If you haven't, or even if you have, make sure to read about how <a href="http://www.nestedtori.com/2015/08/hamiltonian-mechanics-is-awesome-double.html">Hamiltonian Mechanics is Awesome</a>. Anyway, so if you imagine little billiard balls of uniform mass $m$ freely zipping about, banging into stuff, each particle having position $\mathbf{q}_i$ and momentum $\mathbf{p}_i$, then since we know <a href="http://www.nestedtori.com/2015/08/hamiltonian-mechanics-is-awesome-double.html">Hamiltonian Mechanics is Awesome</a>, this says that the two quantities evolve in such a way that the energy of the $i$th particle \[<br />E_i = \frac{|\mathbf{p}_i|^2}{2m}<br />\]<br />stays constant. But if you imagine having a very, very large number of such billiard balls (several billiards of such (1 billiard = $10^{15}$ in most dialects of English, or such equivalent cognates) HAR HAR!), we can compute the <i>total energy</i> by summing them all up:\[<br />H = \sum_i E_i = \sum_i \frac{|\mathbf{p}_i|^2}{2m}.<br />\] To say that the <i>total</i> energy is constant, then, is to say \[<br />\sum_i \frac{|\mathbf{p}_i|^2}{2m} = E<br />\] for some fixed number $E$. But what does this say? If we write out each \[<br />|\mathbf{p}_i|^2 = p_{ix}^2 + p_{iy}^2 + p_{iz}^2,<br />\] then \[<br />2mE=\sum_i p_{ix}^2 + p_{iy}^2 + p_{iz}^2,<br />\] meaning that if we sum over all the particles, say $N$ of them, where $N \approx 10^{24}$ (a milliard billiard of billiard balls), <i>we're summing the squares of a $3N$-dimensional vector and equating it to a constant</i>. 
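This constraint is easy to see numerically. The following sketch (my own illustration, not part of the original post; all names are made up) draws arbitrary momenta for $N$ particles and rescales them so the total kinetic energy is a chosen $E$, which is exactly projecting the $3N$-vector of components onto the sphere of radius $\sqrt{2mE}$:

```python
import math
import random

def rescale_to_energy(momenta, m, E):
    """Rescale a flat list of 3N momentum components so that the total
    kinetic energy sum(p_k^2)/(2m) equals E -- i.e., project the
    3N-vector onto the sphere of radius sqrt(2mE)."""
    norm = math.sqrt(sum(p * p for p in momenta))
    radius = math.sqrt(2.0 * m * E)
    return [p * radius / norm for p in momenta]

random.seed(0)
N, m, E = 1000, 1.0, 5.0
p = [random.gauss(0.0, 1.0) for _ in range(3 * N)]  # arbitrary momenta, 3N components
p = rescale_to_energy(p, m, E)

# The state now lies on the energy sphere: total energy is E.
total_energy = sum(pk * pk for pk in p) / (2.0 * m)
print(abs(total_energy - E) < 1e-9)  # True
```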
So if we ask ourselves:<br /><br /><i>What is the collection of all possible momenta of $N$ particles that have one constant energy level $E$,</i><br /><br />this is the <i>same</i> as saying,<br /><i><br /></i><i>What are all the points on the $(3N-1)$-dimensional sphere of radius $\sqrt{2mE}$?</i><br /><i><br /></i>(the $-1$ comes from the fact that you're expressing the surface as the solution set of one equation: the boundary of the $k$-ball is the $(k-1)$-sphere)<i> Interesting</i>, you might say, but is there any occasion which actually requires you to <i>truly</i> do calculations treating it precisely as a geometrical $(3N-1)$-dimensional object?<br /><br />Well, what's one of the first things you think about when talking geometry? Areas and volumes, of course. There are situations in which you need to <i>integrate</i> over such an energy "surface", or over the $3N$-ball it bounds, for $N \sim 10^{24}$. Basically, to find quantities like entropy, one needs to "count some number of cells" inside the $3N$-ball. This yields the total "number" of energy states with level <i>less than or equal to</i> $E$. "Number" in quotations because classically, of course, the actual number of states is the number of points on a sphere, which is of course uncountably infinite. We instead mean the number per unit $3N$-dimensional volume, which actually leads to the <i>density</i> of states. 
The distinction is a little blurry, because, with a little help from quantum mechanics, we can think of each state as a cell of volume $(2\pi \hbar)^{3N}$, and each distinct state of momentum, which must be quantized, ultimately arises from boundary conditions in solving Schrödinger's equation, or eigenstuff.<br /><br />Suffice to say, however, let's talk as if calculating the volume of the $3N$-ball does indeed count the total number of momentum states with energy less than or equal to $E$, i.e., let's calculate \[<br />\frac{1}{(2\pi \hbar)^{3N}}\int_{|\mathbf{p}|^2 \leq 2mE} d\mathbf{q}d\mathbf{p} =\frac{V^N}{(2\pi \hbar)^{3N}}\int_{|\mathbf{p}|^2 \leq 2mE} d\mathbf{p},<br />\]<br />where $V$ is the ordinary 3D volume of the spatial region where all our particles live. For the momentum integral, we need the formula (whose derivation is fodder for another post!) for the <a href="https://en.wikipedia.org/wiki/N-sphere">volume of a $K$-ball</a> \[<br />V_K(R) = \frac{\pi^{K/2}}{\Gamma\left(1+\frac{K}{2}\right)}R^K<br />\]<br />where $\Gamma(1+ K/2)$ is $(K/2)!$, the factorial of $K/2$, if $K$ is even, and $\pi^{1/2} K!!/2^{(K+1)/2}$ where $K!!$ is the double factorial $1\cdot 3 \cdot 5 \cdots K$, for $K$ odd.<br /><br />Plugging this in with $R = \sqrt{2mE}$ and $K=3N$, we have the total volume of a $3N$-ball in momentum space: \[<br />V_{3N}(\sqrt{2mE}) = \frac{\pi^{3N/2}}{\Gamma\left(1+\frac{3N}{2}\right)} (2mE)^{3N/2}<br />\]<br />To make headway here, we can use <a href="https://en.wikipedia.org/wiki/Stirling%27s_approximation">Stirling's Approximation</a> for the Gamma function:\[<br />\Gamma\left(1+\frac{3N}{2}\right) \approx \sqrt{3N\pi} \left(\frac{3N}{2e}\right)^{3N/2}<br />\] This gives \[<br />V_{3N}(\sqrt{2mE}) \approx \frac{1}{\sqrt{3\pi N}}\left(\frac{4\pi meE}{3N} \right)^{3N/2}<br />\] or, the total number of states is \[<br />\#\text{states} = \frac{V^N V_{3N}(\sqrt{2mE})}{(2\pi \hbar)^{3N}} \approx \frac{1}{\sqrt{3\pi N}}\left(\frac{4\pi meEV^{2/3}}{3N 
(2\pi \hbar)^2} \right)^{3N/2}<br />\]<br />Considering the smallness of $\hbar$ ($10^{-34}$ in everyday units) and the exponent $3N/2$ being on the order of $10^{24}$, this could be a staggeringly huge number (but that's what you expect: lots of particles really can occupy vastly more states. For example, the number of states for two particles behaving independently is the <i>square</i> of the number of states for one particle). Thus, to measure entropy, we take the natural log of all of this (and multiply by the Boltzmann constant $k$, which is on the order of $10^{-23}$ in ordinary units) to get \[<br />k\ln(\#\text{states}) = \frac{3Nk}{2} \left( \ln \left(\frac{4\pi mEV^{2/3}}{3N(2\pi \hbar)^2}\right) +1\right) -\frac{1}{2}\ln(3\pi N)k.<br />\]<br />Since that last term is small potatoes (not being multiplied by $N$), we usually just drop it to arrive at the <b>entropy</b> of our system of particles as<br />\[<br />S = \frac{3Nk}{2} \left( \ln \left(\frac{4\pi mEV^{2/3}}{3N(2\pi \hbar)^2}\right)\right)+\frac{3Nk}{2}.<br />\] And for you chemists, $Nk = nR$, where $n$ is the number of moles and $R$ is the gas constant:<br />\[<br />S = \frac{3nR}{2}\left( \ln \left(\frac{4\pi mEV^{2/3}}{3N(2\pi \hbar)^2}\right)\right)+\frac{3nR}{2}.<br />\]<br />That's really nasty. How about we try to plug numbers into this and see what it is? We'll do that more concretely next time. 
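As a quick numerical sanity check on the Stirling step (a sketch of mine, not part of the original post; the function names are made up), we can compare the exact $\Gamma$-based $K$-ball volume with its Stirling approximation and watch the relative error shrink as $K$ grows:

```python
import math

def ball_volume_exact(K, R):
    """Exact volume of a K-ball: pi^(K/2) / Gamma(1 + K/2) * R^K."""
    return math.pi ** (K / 2) / math.gamma(1 + K / 2) * R ** K

def ball_volume_stirling(K, R):
    """Same volume, but with Gamma(1 + K/2) replaced by Stirling's
    approximation sqrt(pi*K) * (K/(2e))^(K/2)."""
    gamma_approx = math.sqrt(math.pi * K) * (K / (2 * math.e)) ** (K / 2)
    return math.pi ** (K / 2) / gamma_approx * R ** K

# Sanity check: K = 3 recovers the familiar (4/3) pi R^3.
assert abs(ball_volume_exact(3, 1.0) - 4 * math.pi / 3) < 1e-12

# The relative error of the Stirling version decreases as K grows.
for K in (6, 60, 300):
    exact, approx = ball_volume_exact(K, 1.0), ball_volume_stirling(K, 1.0)
    print(K, abs(approx - exact) / exact)
```

(Keeping $K \lesssim 340$ avoids overflowing a double in `math.gamma`; for $K = 3N \sim 10^{24}$ one works with logarithms, exactly as in the entropy formula above.)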
But suffice to say, let's get back to what's interesting: we calculated the volume of some high-dimensional stuff!Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-37193832194499826782016-05-13T10:48:00.000-07:002016-05-13T10:48:16.659-07:00The Volume of Intersecting Cylinders<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-ILqovXN5arQ/VyIw7EQhbxI/AAAAAAAADts/8vfTEZC_c8caJiEaRa_kQgzhpmRcyT70wCLcB/s1600/classic-intersecting-cyls-larger-view.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="466" src="https://4.bp.blogspot.com/-ILqovXN5arQ/VyIw7EQhbxI/AAAAAAAADts/8vfTEZC_c8caJiEaRa_kQgzhpmRcyT70wCLcB/s640/classic-intersecting-cyls-larger-view.png" width="640" /></a></div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;">Now for another classic, although not as "kewel" as the spiral example. This time, it was <i>I</i> who asked for the help: once upon a time in a calculus class, we were asked to compute the volume of the solid that is the intersection of two cylinders at right angles. This is a surprisingly subtle calculation, and one of the more difficult-to-visualize things. Fortunately, I've managed to reproduce it above, with some transparency so the intersection is a little bit more visible. 
Now, here is the full intersection itself without the extra cylinder parts (called the <a href="http://mathworld.wolfram.com/SteinmetzSolid.html">Steinmetz Solid</a>):</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-Ki-CRYQxxiA/VyHFzc6GHCI/AAAAAAAADtI/Ko_-Y5hhTLQENxJYgRZ8T_OUvLpYvbxlgCLcB/s1600/classic-intersecting-cylinders.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="554" src="https://4.bp.blogspot.com/-Ki-CRYQxxiA/VyHFzc6GHCI/AAAAAAAADtI/Ko_-Y5hhTLQENxJYgRZ8T_OUvLpYvbxlgCLcB/s640/classic-intersecting-cylinders.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">It is said that the Chinese mathematician <span style="background-color: #f9f9f9; font-family: sans-serif; font-size: 16px; line-height: 18.48px;">祖沖之</span> (Zu Chongzhi) computed the volume of this solid in the 400s ... I don't know how he did it without calculus, which is the way we're going to do it today. To do this, we write the equations of the two cylinders, say of radius $R$, which we will assume to have their axes in the $xy$-plane: </div>\[ x^2 + z^2 = R^2 \] \[ y^2 + z^2 = R^2. \]<br />Let's see what the actual intersection is. We set the two equations equal to each other:<br />\[<br />x^2 + z^2 = y^2 +z^2<br />\]<br />which means $x^2 = y^2$. This in turn means $y = \pm x$. In the $xy$-plane, this looks like two intersecting lines in an "X" shape, which you can see by looking at the solid above from the very tippy top. 
Of course, this is not exactly what the intersection actually looks like: we still have that there is a $z$-coordinate, given by $z = \pm \sqrt{R^2-x^2} =\pm \sqrt{R^2-y^2}$, so that the complete parametrization of the curves (four of them corresponding to each choice of $+$ or $-$) are given by, for example<br />\[<br />x = t<br />\]\[<br />y = \pm t<br />\]\[<br />z = \pm\sqrt{R^2-t^2}<br />\]<br />for $t$ varying from $-R$ to $R$ (the maximum allowable range given that it has to lie on a cylinder). This gives the "corners" of the solid given above. The solid itself is given by stacking squares with edges running from one curve to another (you can also see that in the solid above). To calculate the volume, we integrate the area of cross-sections perpendicular to the $z$-axis:<br />\[<br />V=\int_{-R}^R A(z)\,dz = \int_{-R}^R s^2\,dz.<br />\]<br />Therefore, we must find the side length $s$, which requires us to solve things in terms of $z$: $z =\pm\sqrt{R^2-x^2}$ means $z^2 = R^2 - x^2$, or $x^2 = R^2 - z^2$, so finally $x = \pm\sqrt{R^2 - z^2}$. Since the sides of the square are the same on either side, we can choose the positive square root. Then, since one side of the square goes between the two different values of $y$ for a given $x$, we find $s = \sqrt{R^2 - z^2}-(-\sqrt{R^2 - z^2}) = 2\sqrt{R^2 - z^2}$. Finally, we can find the integral:<br />\[<br />V = \int_{-R}^R s^2\, dz = \int_{-R}^{R} 4(R^2-z^2)\,dz = \left.4R^2 z - \frac{4}{3}z^3\right|_{-R}^R = \left(8-\frac{8}{3}\right)R^3 = \frac{16}{3}R^3.<br />\]Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-49691584676816312832016-05-11T13:53:00.002-07:002016-05-12T13:30:17.102-07:00Yitz..'s Kewel SpiralEvery once in a while, I get an "assignment" to calculate some interesting parametrization (because parametrizations are awesome like that). 
For example, the <a href="http://www.nestedtori.com/2016/04/pasta-parametrization-high-resolution.html">bowtie pasta</a> was originally a request from a student. Today's goes way back, to a question someone asked on a math forum. He needed a spiral that started out in a plane and twisted its way through space as it got larger. In essence, a spiral that <i>doesn't</i> stay in the same plane. Obviously, polar curves are out of the question, since they are, well, in the plane. But what's one dimension up from polar coordinates? <i>Spherical</i> coordinates $(r,\varphi,\theta)$, of course (or if you're a physicist, $(r,\theta,\phi)$).<sup><a href="#footnote_1">1 </a></sup>We fall back on a good ol' tool, <a href="http://www.nestedtori.com/2015/08/parametric-pasta-relaunch-of-nested-tori.html">parametrization</a>.<br /><br />To discover what its equation might be, we first consider an Archimedean spiral ($r = a\varphi$) in the $r\varphi$-plane (or $yz$-plane), or possibly a logarithmic one, like $e^{\varphi/3}$, which grows as it goes around the "pole" (like some spirals <a href="http://www.nestedtori.com/2016/04/more-polar-classics-spirals.html">last time</a>), which is now the $x$-axis. This ends up growing as you move clockwise, rather than counterclockwise, since $\varphi$ increases as it goes from north toward east.<br /><br />This is most efficiently represented in a parametric manner: first, in the $yz$-plane:\[<br />x=0,<br />\] \[<br />y = at \sin t,<br />\] \[<br />z = at \cos t.<br />\]<br />gives the parametric equation of the Archimedean spiral in the $yz$-plane going clockwise. 
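These parametric equations are easy to sample numerically; here is a minimal sketch (my own, with made-up names; $a = 1/20$ matches the post's figure) that also confirms the planar spiral satisfies the polar relation $r = a\varphi$:

```python
import math

def archimedean_yz(a, t):
    """Point on the clockwise Archimedean spiral in the yz-plane:
    x = 0, y = a t sin t, z = a t cos t."""
    return (0.0, a * t * math.sin(t), a * t * math.cos(t))

a = 1 / 20  # growth rate, as in the figure
for t in (0.5, 3.0, 10.0):
    x, y, z = archimedean_yz(a, t)
    r = math.hypot(y, z)           # distance from the pole (the x-axis)
    assert abs(r - a * t) < 1e-12  # recovers r = a * phi with phi = t
```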
This gives $\varphi = t$ and $r = at$, which recovers the original polar equation $r = a\varphi$.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-m-NcQsSuRoQ/VzOaPxpbR-I/AAAAAAAADuw/7EmSQyJWa8YuPz2txZz_7v7SFGr2Lj88ACLcB/s1600/archim-to-yitz.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="https://1.bp.blogspot.com/-m-NcQsSuRoQ/VzOaPxpbR-I/AAAAAAAADuw/7EmSQyJWa8YuPz2txZz_7v7SFGr2Lj88ACLcB/s640/archim-to-yitz.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Archimedean Spiral in the $yz$-plane, with $a = 1/20$.</td></tr></tbody></table><br /><br />How do we give it a twist? We reason that, after each revolution, we want it twisted out of the plane by some small amount (say, rotated about the $z$-axis by $b$ times a full revolution): we apply the rotation to the $x$ and $y$, giving:\[<br />x=at\sin t \cos (bt),<br />\] \[<br />y = at \sin t \sin(bt),<br />\] \[<br />z = at \cos t.<br />\]<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-1k252_DuEMc/VzObfgatL3I/AAAAAAAADu8/nin_kVt0EhgvCZO2e-6nrDF-KnhB0HyDwCLcB/s1600/Screen%2BShot%2B2016-05-11%2Bat%2B1.51.57%2BPM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="https://2.bp.blogspot.com/-1k252_DuEMc/VzObfgatL3I/AAAAAAAADu8/nin_kVt0EhgvCZO2e-6nrDF-KnhB0HyDwCLcB/s640/Screen%2BShot%2B2016-05-11%2Bat%2B1.51.57%2BPM.png" width="604" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Twisting out of the plane with parameter $b = 1/20$, also.</td></tr></tbody></table><br />Here's the 
original one I delivered to Yitz.. (it is actually exponential):<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-JGFEBVQkTpM/VyHDmG2wbDI/AAAAAAAADs4/otSrfMItorIZXF1yWv7jK-gbo5ATdw7NwCLcB/s1600/yitz-kewel-spiral.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="https://4.bp.blogspot.com/-JGFEBVQkTpM/VyHDmG2wbDI/AAAAAAAADs4/otSrfMItorIZXF1yWv7jK-gbo5ATdw7NwCLcB/s640/yitz-kewel-spiral.png" width="532" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="text-align: start;">Pretty </span><i style="text-align: start;">kewel</i><span style="text-align: start;">, right? (that was his exact word to describe it).</span></div><br /><sup id="footnote_1">1<span style="font-size: small;"> </span></sup><span style="font-size: x-small;">I always use the convention that if I indeed need to refer to a physics text, I do the calculations in its conventions to be consistent... but I always write my $\phi$'s as non-curly in that context... I haven't yet figured out how to draw my $\theta$s differently, though. Yes, I know there is $\vartheta$, but it hasn't caught on in my notation. It just doesn't feel enough like a $\theta$.</span>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-86980686316109223802016-04-30T16:43:00.000-07:002016-04-30T16:43:15.797-07:00More Polar Classics: SpiralsSome more from the classic polar curves collection... today we specialize in <i>spirals</i>. The basic concept of what a spiral should be, in polar coordinates, is any plot of $r$ varying with $\theta$, $r=f(\theta)$, where $f$ is always increasing, and $\theta$ is allowed to go beyond the usual range of $0\leq \theta < 2\pi$ (or $-\pi < \theta \leq \pi$). 
The most fundamental one, of course, is $r = \theta$, which says the distance from the origin is proportional to the amount you've wound around the origin:<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-mTYFTBWaRLQ/VyU-4TT400I/AAAAAAAADt8/ZtOkLyfHxpU-it1vMjNHFLqdwyUwVOJZACLcB/s1600/archimedes-spiral.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="608" src="https://4.bp.blogspot.com/-mTYFTBWaRLQ/VyU-4TT400I/AAAAAAAADt8/ZtOkLyfHxpU-it1vMjNHFLqdwyUwVOJZACLcB/s640/archimedes-spiral.png" width="640" /></a></div><br />(here the angle is in radians and the equation is $r = \theta$). Of course, some people thought this was a bit boring, so Fermat spiced it up and thought what if we make the spirals wind tighter as you go around? Basically, make it a concave-down, increasing function, $r = \sqrt{\theta}$. Actually, it's more fun to also consider $r = -\sqrt{\theta}$:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-E5Yj1MQyg9A/VyU_6M_LwjI/AAAAAAAADuI/Zv4Q9K6NSLcERdiAxPCoYcWTxgkXMJc0wCLcB/s1600/fermat-spiral.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="622" src="https://3.bp.blogspot.com/-E5Yj1MQyg9A/VyU_6M_LwjI/AAAAAAAADuI/Zv4Q9K6NSLcERdiAxPCoYcWTxgkXMJc0wCLcB/s640/fermat-spiral.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;">($r=\sqrt{\theta}$ is in blue, and $r = -\sqrt{\theta}$ is in yellow)</div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">This of course could more succinctly be written $\theta = r^2$. 
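In Cartesian terms, each polar plot $r = f(\theta)$ is just the point set $(f(\theta)\cos\theta, f(\theta)\sin\theta)$; here is a small sketch (my own illustration, with made-up names) verifying the $\theta = r^2$ form on both branches of Fermat's spiral:

```python
import math

def polar_point(f, theta):
    """Cartesian coordinates of the point on the polar plot r = f(theta)."""
    r = f(theta)
    return (r * math.cos(theta), r * math.sin(theta))

# Both branches of Fermat's spiral, r = +sqrt(theta) and r = -sqrt(theta).
for theta in (0.25, 2.0, 9.0):
    for branch in (1.0, -1.0):
        x, y = polar_point(lambda t: branch * math.sqrt(t), theta)
        assert abs(x * x + y * y - theta) < 1e-12  # x^2 + y^2 = r^2 = theta
```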
But perhaps the second-most famous, due to its ubiquity in nature (because stuff tends to grow exponentially) is the logarithmic spiral $r = e^{c \theta}$ or $\theta = \frac{1}{c} \log r$, which magnifies every time you spin it.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-__f-A7PJMuk/VyVBbyl_hwI/AAAAAAAADuU/Y9Sm0F9OORQNnofxYc68L6Qxb_zOvWnBgCLcB/s1600/logspiral.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="628" src="https://1.bp.blogspot.com/-__f-A7PJMuk/VyVBbyl_hwI/AAAAAAAADuU/Y9Sm0F9OORQNnofxYc68L6Qxb_zOvWnBgCLcB/s640/logspiral.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;">($r=e^{\theta/5}$)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">And finally, where would we be in polar graphs if we didn't have the analogue of stuff with asymptotes? Consider $r^{-2} = \theta$ (instead of $r^2$):</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-yW5SDFloqPM/VyVC_yrM4oI/AAAAAAAADug/BQRg-N4k_S8t7d1Ch5J_ldbB2tTPrhMqQCLcB/s1600/lituus-of-cotes.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="212" src="https://2.bp.blogspot.com/-yW5SDFloqPM/VyVC_yrM4oI/AAAAAAAADug/BQRg-N4k_S8t7d1Ch5J_ldbB2tTPrhMqQCLcB/s640/lituus-of-cotes.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">This is called the <i>lituus of Cotes. 
</i>There's all sorts of curves from back in the day with very funny names; I used to look for a lot of them on the <a href="http://www-history.mcs.st-and.ac.uk/">MacTutor History of Math Page</a> (more specifically, its <a href="http://www-history.mcs.st-and.ac.uk/Curves/Curves.html">Curves Index</a>).</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">There's one other spiral that's my favorite, but (1) it isn't polar, and (2) there's a lot of interesting differential geometry going on in it, so I'm saving it for another post.</div>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com2tag:blogger.com,1999:blog-5950074794465831898.post-56928340116156762162016-04-28T10:00:00.000-07:002016-05-10T12:16:35.051-07:00Variance Solids: Volumes and $\mathscr{L}^2$, Part 2<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-UT3x3c-SNZs/VyG6sQ0OrrI/AAAAAAAADsc/iRlu3VLxpCEaCYDc0fgRWOYJWAm3pRXQgCLcB/s1600/var-surf-revolution-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="310" src="https://4.bp.blogspot.com/-UT3x3c-SNZs/VyG6sQ0OrrI/AAAAAAAADsc/iRlu3VLxpCEaCYDc0fgRWOYJWAm3pRXQgCLcB/s640/var-surf-revolution-2.png" width="640" /></a></div><br /><a href="http://www.nestedtori.com/2016/04/volumes-of-solids-and-squared-mathscrl2.html">Last time</a>, we talked about how the classical $\mathscr{L}^2$ norm of a (one-variable) function could be visualized as a solid consisting of squares. 
Of course, there's another quantity defined by almost the same formula, with an extra factor of $\pi$: the volume of the <i>solid of revolution</i> of the function about the $x$-axis: \[<br />V = \int_a^b \pi f(x)^2\,dx<br />\]<br />It looks pretty cool, although not quite as cool as twisted square cross-sections.<br /><br />I wanted to mention another common application of $\mathscr{L}^2$ norms that probably is more familiar than any other: the statistical concept of <i>variance</i>. For functions $f$ defined on a continuous domain, we can define an average value:\[<br />\overline{f} = \frac1{b-a}\int_a^b f(x)\,dx.<br />\] which basically is the height of a rectangle over the same domain $[a,b]$ <i>that has the same area as the area under </i>$f$. For example, for \[<br />f(x) = \sin \left( x \right)\sin \left( t \right)+\frac{1}{3}\sin \left( 5x \right)\sin \left( 5t \right)+\frac{1}{3}\sin \left( 10x \right)\sin \left( 10t \right)+\frac{1}{5}\sin \left( 15x \right)\sin \left( 15t \right)<br />\]<br />(choosing $t=3.030303$), it looks like:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-JD8Iu1hG80s/VyG8zRYmQrI/AAAAAAAADso/jX1iJogYuSEEMmZwnjvlx-yPE95xHWdzgCLcB/s1600/var-surf-generator.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="476" src="https://3.bp.blogspot.com/-JD8Iu1hG80s/VyG8zRYmQrI/AAAAAAAADso/jX1iJogYuSEEMmZwnjvlx-yPE95xHWdzgCLcB/s640/var-surf-generator.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>(the blue line in the middle is the mean value). The variance $\sigma^2$ is then a measure of how far $f$ is from its mean:\[ \sigma^2 = \frac{1}{b-a}\int_a^b (f(x) - \overline{f})^2\,dx. 
\] This of course can be visualized exactly as <a href="http://www.nestedtori.com/2016/04/volumes-of-solids-and-squared-mathscrl2.html">last time</a>, namely, make a bunch of square cross sections about the line $y=\overline{f}$, and possibly do funny things to such squares, such as rotating them. But ... for kicks, let's add a factor of $\pi(b-a)$:\[ \pi(b-a)\sigma^2 = \int_a^b\pi (f(x) - \overline{f})^2\,dx, \] <i>which is precisely the volume of the solid obtained by revolving $f(x)$ around its mean $y=\overline{f}$.</i> The resulting solid is the title picture.Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-50147011693742093822016-04-24T02:38:00.001-07:002016-04-28T00:50:29.474-07:00Volumes of Solids and (squared) $\mathscr{L}^2$ Norms<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-MwFzcldDZPw/VxyOV91DoMI/AAAAAAAADr4/00N4z_eO5V01iXQudfgj4kHHn5gPfUu5gCLcB/s1600/l2norm-swirl.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="478" src="https://4.bp.blogspot.com/-MwFzcldDZPw/VxyOV91DoMI/AAAAAAAADr4/00N4z_eO5V01iXQudfgj4kHHn5gPfUu5gCLcB/s640/l2norm-swirl.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The volume of this solid is the squared $\mathscr{L}^2$ distance between $y=x^2$ and $y=x^3$ on the interval $[0,\frac{3}{2}]$.</td></tr></tbody></table><br />The good ol' $\mathscr{L}^2$ norm has an outsized importance in many applications of math, principally, because Hilbert spaces (spaces with inner product) are useful for, well, just about anything. 
Much as I have been a fan of Hilbert spaces for much of my recent life, it was only while teaching how to find the volume of solids (with a little help from Math Homie William) that it hit me what the best visualization for them is, at least for functions defined on $\mathbb{R}$. (For the longest time, I just said "It's sort of like the area between two curves, but, you know, squared, and stuff.") But we learn in integration theory that we can build a solid by putting squares (or semidisks or triangles, or whatever) in a perpendicular, 3rd dimension, over the area between curves; the volume of the solid is \[<br />V=\int_a^b A(x) \, dx<br />\] where $A(x)$ is the area of the cross section. But of course there's an obvious candidate: if you choose <i>squares</i> with side length equal to the height of your function $f(x)$, then $A(x)$ is just $f(x)^2$. Thus,\[<br />V = \int_a^b f(x)^2 dx.<br />\] But that's none other than the squared $\mathscr{L}^2$ norm of $f$! Similarly, for the volume of a solid consisting of squares between two curves $f(x)$ and $g(x)$, this is just \[<br />V=\int_a^b (f(x) - g(x))^2dx,<br />\] the (squared) $\mathscr{L}^2$ distance between the two functions. Now if we treat each of these squares like a deck of cards, and rotate them, it shouldn't change the volume. So we apply this to the functions $f(x)=x^2$ and $g(x)=x^3$ on $[0,\frac{3}{2}]$:<br /><br /><a href="https://3.bp.blogspot.com/-GsiiAwh_d3M/VxyTS6QA3EI/AAAAAAAADsI/1FqcoZgn9XksrHcjSdZ5X00ZjnbrL476gCLcB/s1600/abetween.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" height="476" src="https://3.bp.blogspot.com/-GsiiAwh_d3M/VxyTS6QA3EI/AAAAAAAADsI/1FqcoZgn9XksrHcjSdZ5X00ZjnbrL476gCLcB/s640/abetween.png" width="640" /></a><br /><br />Now making vertical segments between the curves, and making a square with that side length pop half out of the plane, and half into the plane, gives us a nice solid. 
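As a quick numerical sanity check (my addition here, not something from the original figures), we can approximate this squared $\mathscr{L}^2$ distance with a simple midpoint rule in Python:

```python
# Midpoint-rule approximation of the squared L^2 distance
# V = \int_a^b (f(x) - g(x))^2 dx -- the volume of the solid of
# square cross-sections between the two curves.
import numpy as np

def l2_distance_squared(f, g, a, b, n=100_000):
    """Approximate the integral of (f - g)^2 over [a, b] using n midpoints."""
    h = (b - a) / n
    x = a + h * (np.arange(n) + 0.5)  # midpoint of each subinterval
    return np.sum((f(x) - g(x)) ** 2) * h

V = l2_distance_squared(lambda x: x**2, lambda x: x**3, 0.0, 1.5)
print(V)  # ≈ 0.16272, agreeing with the exact antiderivative value 729/4480
```

Rotating the cross-sectional squares, deck-of-cards style, doesn't change this number, of course.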
Rotating it gives the picture at the top.<br /><br />Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-91919210525230935752016-04-15T21:02:00.000-07:002016-04-28T00:49:23.760-07:00Polar Coordinate ClassicIn some sense, my first "Nested Tori" was a notebook which contained really cool plots and graphs. There's lots of old gems there worth posting and updating. Today, I bring you a nice polar plot. I was really intrigued by plots in polar coordinates, because it represented the first real departure from simply graphing $y=f(x)$. We instead (most commonly) write $r = f(\theta)$ where $r$ is distance from the origin and $\theta$ is counterclockwise angle, and let things get farther or closer as we go around. Without further ado, here is the famed butterfly (it does have a little post-warping afterward, though), which shows you that you don't need fancy-schmancy 3D stuff to still find great things:<br /><div><br /><div>$r = e^{\sin \theta} - 2 \cos(4\theta) +\sin^5\left(\dfrac{\theta-\frac{\pi}{2}}{12}\right), 0 \leq \theta \leq 24\pi$</div><div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-0EWMF1HfAAE/VxG5GSV9uOI/AAAAAAAADqo/KSlmb8S8LUIxA_HAxzTakgJqxv_3D-TcACLcB/s1600/butterfly.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="https://1.bp.blogspot.com/-0EWMF1HfAAE/VxG5GSV9uOI/AAAAAAAADqo/KSlmb8S8LUIxA_HAxzTakgJqxv_3D-TcACLcB/s640/butterfly.png" width="494" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div><br /></div></div></div>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-78036224843839431422016-04-11T13:45:00.001-07:002016-04-11T18:16:11.421-07:00Pasta Parametrization, High 
ResolutionIntegrating (ha ha) Nested Tori back into life (of course I'm doing a lot of integrating, if I'm teaching, anyway). Lots of change was (and is) afoot.<br /><div><br /></div><div>Anyway, instead of continually getting peeved with Apple for <i>not</i> ever fixing any bugs in Grapher for over 10 years (a job that I would <i>gladly</i> take the mantle on, by the way: anyone at Apple listening??), I took matters into my own hands and started experimenting with some open source plotting libraries. In particular, I've taken somewhat of a liking for <a href="http://matplotlib.org/">matplotlib</a>, a famous Python package. The boon is, it can generate pdf files properly (unlike Grapher, which basically puts the bitmap into a pdf file and adds the extension .pdf, but when you zoom in, it's blocky and clearly still a bitmap). Honestly, though, it <i>is</i> in fact a lot more work (which was mitigated somewhat by defining some functions).</div><div><br /></div><div>Here are the results for plotting the <a href="http://www.nestedtori.com/2015/08/parametric-pasta-relaunch-of-nested-tori.html">Pasta Parametrization</a> in matplotlib (specifically, the mplot3d toolkit):</div><div><br /></div><div><a href="http://3.bp.blogspot.com/--AxUj1fQpyQ/VwwHUgSa5RI/AAAAAAAADpo/MHWKKQauWiMhxd0US-0rY0-PrcV9k96Iw/s1600/pasta-hi-res-bitmap.png" imageanchor="1"><img border="0" height="526" src="https://3.bp.blogspot.com/--AxUj1fQpyQ/VwwHUgSa5RI/AAAAAAAADpo/MHWKKQauWiMhxd0US-0rY0-PrcV9k96Iw/s640/pasta-hi-res-bitmap.png" width="640" /></a></div><div><br /></div><div>Of course, I should probably figure out how to control the lighting better: it doesn't look as good as Grapher's (at a scale for which the resolution issue is not a problem). 
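In case you'd like to try mplot3d yourself, here is a minimal sketch (a toy example of my own, not the actual pasta script linked below) that plots a plain torus with <code>plot_surface</code> and saves a genuine vector PDF:

```python
# Minimal mplot3d sketch: a torus via its standard parametrization.
# (A toy example, not the pasta script from this post.)
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 60),
                   np.linspace(0, 2 * np.pi, 60))
R, r = 2.0, 0.7  # major and minor radii
X = (R + r * np.cos(v)) * np.cos(u)
Y = (R + r * np.cos(v)) * np.sin(u)
Z = r * np.sin(v)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="viridis", rstride=1, cstride=1)
fig.savefig("torus.pdf")  # true vector output, not a bitmap wrapped in a pdf
```

The lighting issue mentioned above applies here too: the default shading is flat compared to Grapher's, but at least the output scales.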
And here are the matching formulas (at a larger size than the old version):</div><div><br /></div><div><a href="http://3.bp.blogspot.com/-CI0lPWpxZV4/VwwMmSD38vI/AAAAAAAADp4/qnfMCeDyEpEzo8JKNA6ViY90uSknzoKag/s1600/latex-image-1.png" imageanchor="1"><img border="0" height="506" src="https://3.bp.blogspot.com/-CI0lPWpxZV4/VwwMmSD38vI/AAAAAAAADp4/qnfMCeDyEpEzo8JKNA6ViY90uSknzoKag/s640/latex-image-1.png" width="640" /></a></div><div><br /></div><div>Happy parametrizing! (see <a href="http://www.nestedtori.com/2015/08/parametric-pasta-relaunch-of-nested-tori.html">here</a> for the answer key, and <a href="https://ccom.ucsd.edu/~ctiee/nt/PastaParam.py">here</a> for the Python source)<br />(Astute readers may notice that the above images are actually <i>still</i> bitmaps... but that's an artifact of the blogging software, which does not natively display pdf's ... see <a href="https://ccom.ucsd.edu/~ctiee/nt/pasta-param.pdf">here</a> for the pdf for that)</div>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-25204059094702932302015-09-21T16:03:00.000-07:002015-09-26T00:14:11.914-07:00The Configuration Space of All Lines In the Plane<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-82R6NrD2NlY/UIEbNq7ngaI/AAAAAAAAATc/WGrucl4duiQ/s1600/Mo%25CC%2588bius%2Bstrip.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="522" src="http://1.bp.blogspot.com/-82R6NrD2NlY/UIEbNq7ngaI/AAAAAAAAATc/WGrucl4duiQ/s640/Mo%25CC%2588bius%2Bstrip.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Time for another classic post from the old site, while I'm at it… As we have seen in a bunch of previous posts, the notion of <i>configuration space</i> has held a prominent place in my mathematical explorations, because, I 
can't emphasize enough, it's not just the geometry of things that you can directly <i>see</i> that is important; it's the use of geometric methods to model many other things as well.</div><br />Consider all possible straight lines in the plane (by <em>straight line</em> I mean those that are infinitely long). This collection is a configuration space—a <em>catalogue </em>of these lines. What does that mean? Of course, in short words, it's a manifold, and we've talked about many of those before. But it always helps to examine examples in detail to help develop familiarity. Being a manifold means we can find local <a href="http://www.nestedtori.com/2015/08/parametric-pasta-relaunch-of-nested-tori.html">parametrizations</a> of our collection by $n$-tuples of real numbers. How do we do this for lines in the plane?<br /><br /><h4>Finding Some Coordinates: The Slope-Intercept Equation</h4>First off, almost anyone reading this, regardless of mathematical background, probably was drilled at one time or another on the following famous equation, called the <em>slope-intercept equation</em>:<br />\[<br />y = mx + b.<br />\] The variable <em>m</em>, if you recall, is the slope of a line, and <em>b</em> is the <em>y-intercept</em>, which means where the line crosses the <em>y</em>-axis. However, few people realize that what they're doing, really, is <em>indexing</em> every non-vertical line in the plane with some <em>point</em> in <em>another</em> plane (called, say, the <em>mb</em>-plane rather than the old traditional <em>xy</em>-plane). That is, we have a correspondence between points in an abstract <em>mb</em>-plane and lines in the <em>xy</em>-plane, where <em>m</em> is the slope, and <em>b</em> the <em>y</em>-intercept. It doesn't get all lines, though—vertical lines have infinite slope, and you can't have a coordinate on any plane with an infinite value. 
That is, you have <em>charted out</em> the space of all possible nonvertical lines with points in a <em>different</em> plane.<br /><br />Of course, how do we catch the vertical lines? We chart them out using different coordinates. There's the possibility of using <em>inverse slope</em> and <em>x</em>-intercept, which merely means we write the equation using <em>x</em> as a function of <em>y</em>. In other words, all <em>non-horizontal</em> lines are given by:<br />\[<br />x=ky + c.<br />\] It's essentially obtained by reflecting everything about the diagonal line <em>x</em> = <em>y</em> and finding the usual slope and intercept. In other words we can <em>chart out</em> all non-horizontal lines on this new <em>kc</em>-plane. So you could represent the collection of all lines in the plane by having two charts... an <em>atlas</em> or <em>catalogue</em> of all lines, with these two sheets, the <em>mb</em>-plane and the <em>kc</em>-plane.<br /><br /><em>However</em>, note that <em>most</em> lines in the plane—ones that are neither vertical nor horizontal—can be represented in both forms. That is, they correspond to points on <em>both</em> sheets. So, we would say in our usual terminology that these two sheets, or charts, determine a manifold of all possible lines; we just need to check that the overlap is smooth. But let's stop to think about what that means. We can think of it as trying to <em>glue together</em> both sheets to form just a single catalogue. The method of gluing is that we glue together the points that represent the same line in the plane. There is a nice, exact formula for the gluing, which is very simply determined: $(m,b)$ and $(k,c)$ represent the same line if and only if $y = mx + b$ and $x = ky + c$ are equations of the same line. All we have to do to convert from one to the other is solve for <em>x</em> in terms of <em>y</em>, that is, <em>invert</em> the functions. 
So we solve<br />\[<br />y = mx + b \iff y-b = mx \iff y/m - b/m = x \iff x = (1/m) y - b/m<br />\] that is, if $k = 1/m$ and $c = -b/m$, then $(m,b)$ and $(k,c)$ represent the same line in the two sheets. So to "glue" the sheets together, we glue every $(m,b)$ in the <em>mb</em>-plane to $(1/m,-b/m)$ in the <em>kc</em>-plane. Obviously the sheets need to be made of some very stretchable material, because it is going to be awfully hard to glue points together. Actually it's pretty hard to physically do this so don't try this at home, but just try to imagine it (don't you just love thought experiments?). For example, points in the <em>mb</em>-plane $(1,1)$, $(2,2)$, $(3,3)$, and $(4,4)$ get glued to the corresponding points $(1,-1)$, $(1/2,-1)$, $(1/3,-1)$, and $(1/4,-1)$. You glue them in a very weird way, but if you suppose for a moment that you allow all sorts of moving, rotating, shrinking, stretching in this process (<em>topological</em> deformations), but without tearing, creasing, or collapsing, you can preserve the "shape" of this space, and yet make it look like something more familiar. This would be our new catalogue of lines. In addition, the catalogue has a nice property: nearby points in the catalogue translate to very similar-looking lines.<br /><br /><h4>So, What Is It?</h4>One should wonder what kind of overall "shape" our nice spiffy catalogue has, after gluing together the two possible charts we've made for it. As it turns out, its shape is the <em>Möbius strip! </em>That's right, the classic one-sided surface (without, as it turns out, its circle-boundary). <br /><br />That is to say, if you give me a point on the <em>Möbius strip</em>, it specifies one and only one line in the plane. One would not, initially, be able to see why non-orientability enters the picture. But a little interpretation is in order. First, if we take a particular line and rotate it through 180 degrees, we get the same line back. 
Everything in between gives every possible (finite) slope. It so happens that as far as slopes of lines are concerned, $\infty=-\infty$, and if you go "past" this single <i>projective infinity,</i> as they call it, you go to negative slopes. In other words, if you start on a journey of rotating a line through 180 degrees, from vertical back to vertical, you come back to the same line, except with orientation reversed (because what started out as pointing up now points down).<br /><br /><div style="text-align: center;"><span style="text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dwqkcREsXQG1HjuOGtK61QEfIhFXX-jlCHHj9TIxveyHeTHqgydG0LQzrxLShmFhiG5g04eez0kf6uyw3mnzg' class='b-hbp-video b-uploaded' frameborder='0' /></span></div><br />If you fix an origin and declare that it corresponds to a certain special line in the plane, and then select a "core circle" for the Möbius strip, then as you travel around this circle, the distance traveled represents rotation angle for this special line. Traveling from the origin along the core circle and making one full loop should correspond to rotating the special line by 180 degrees. If you instead move up or down from the core circle, you end up sliding the line along a perpendicular direction, without changing its angle. So moving up and down the strip corresponds to parallel sliding of lines, and moving around the strip along a circle corresponds to rotating a line.<br /><br /><h4>The Derivation</h4>The specific formula we use is \[<br />F(m,b) = \left(\cot^{-1} m, \frac{b}{\sqrt{m^2 + 1}}\right),<br />\] which sends the line to the angle it makes with the $y$-axis, and its signed perpendicular distance to the origin (the sign is determined by $b$). 
For the other chart, \[ F(k,c) = \begin{cases}\left( \cot^{-1} \left( \frac{1}{k}\right), \dfrac{c}{\operatorname{sgn}(-k)\sqrt{k^2+1}}\right) \quad & \text{ if } k \neq 0 \\<br />(0,-c) & \quad \text{ if } k = 0<br />\end{cases}<br />\] We'll show how we got this in an update to this post, or perhaps a "Part 2." This can be readily checked by simply substituting the transition charts for $(m,b)$ and $(k,c)$. However, the extra case for $k=0$ here is simply gotten by taking the limit as $k$ goes to zero <i>from above</i> in the other case. What proves that it is a Möbius strip is that, if we take the limit as $k$ goes to zero <i>from below, </i>it will approach $(\pi,c)$ instead of $(0,-c)$. This would make it discontinuous, unless we decide to <i>identify</i> $(\pi,c)$ with $(0,-c)$: the $c$ going to $-c$ means we take the strip at $\pi$ and flip it around to glue it to the strip at $0$ (see <a href="http://www.nestedtori.com/2012/10/an-unexpectedly-delightful-application.html">this post</a> for another example of defining a Möbius strip this way). Technically, we need an infinitely wide Möbius strip for this, but we can always scrunch it down into a finite-width strip without its boundary circle (using something like arctangent). It's just that the closer you get to the edge, the quicker things go off to infinity.<br /><br />The animation is an example "path" through "line space." The blue dot travels around the white circle, and the line in the plane that corresponds to it is the blue line. The red line is a reference line perpendicular to the blue one, and always passes through the origin. The distance of the blue dot from the core circle (in turquoise) indicates how far from the origin the blue and red lines intersect. Because of the "scrunching down," though, the closer to the edge of the strip we get, the more dramatic the change in the blue line's distance from the origin. 
Here it is again in the plane with the Möbius strip as a picture-in-picture reference:<br /><br /><div style="text-align: center;"><span style="text-align: center;"><span style="text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dxIDx6QBzdnzSufDaWfioI2KijwDYFTEAd_ECS0NfokM1FaITXDlkBLWIno3UTn9OgrB99uLyN2URQe3w3caw' class='b-hbp-video b-uploaded' frameborder='0' /></span></span><br /><span style="text-align: center;"><span style="text-align: center;"><br /></span></span><br /><div style="text-align: left;"><span style="text-align: center;">The more general object we are describing here is closely related to the notion of a <i>Grassmannian manifold</i>, the space of all $k$-dimensional subspaces of an $n$-dimensional vector space (the only difference is that Grassmannians only consider spaces through the origin).</span></div></div><span style="text-align: center;"></span>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-48757998725360116492015-09-18T17:13:00.000-07:002015-09-18T17:21:58.676-07:00Two Classic Clifford Tori Animations<div>After much rummaging around my hard drive, I finally found some Clifford tori animations from my old site that clearly give a much better sense of how the (stereographically projected) tori change as the angle $\varphi$ changes from $0$ to $\pi/2$ (in the notation of the <a href="http://www.nestedtori.com/2015/09/clifford-circles.html">last</a> 2 <a href="http://www.nestedtori.com/2015/09/clifford-tori.html">posts</a> on this subject). 
Here, we've used the Clifford circles to see its effect on them as well.</div><div><br /><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dzbPR9ItTPNioElCjOAxPrIvj2uBrP2EewZZQ-ahMn74dP1hCUaAWvAJNoIOrBfEjihXMfEckeUsv5tJEEJvg' class='b-hbp-video b-uploaded' frameborder='0' /></div><div><br /></div><div>It starts off with the first degenerate case of one single unit circle, and expands from there. We see that it eventually comes very close to the other degenerate case, that of the straight line, the $z$-axis.</div><div><br /></div><div>Our next video requires more explanation. This time, we take one particular torus, namely the one of identical radii $\frac{1}{\sqrt{2}}$ (in the video, these two particular circles are highlighted red and blue). Now, a $3$-sphere, like any sphere, can be <i>rotated</i> (by a matrix, or a whole path of matrices, in $SO(4)$). Thus, of course, such a rotation can always be realized as a rigid motion of the ambient $4$-space containing this $3$-sphere. It is possible to continuously rotate it so that the torus within has beginning and ending configurations that look the same, <i>except that the red and blue circles have been swapped</i>. If we restrict ourselves to $3$-space, such a rigid motion is impossible, but if we allow ourselves to let the torus pass through itself, then it, too, can be done. However, visualizing the $3$-sphere version in stereographic projection, with a $4$-space rotation, we effectively allow ourselves to distort distances (actually the $4$-space distance is <i>not</i> distorted; the distortion we see is an artifact of the stereographic projection), and add a "point at infinity," so a continuous rotation is allowed to take things through that point. 
The rotation of the ambient $3$-sphere does not preserve our usual set of nested tori, as can be seen by letting a matrix in $SO(4)$ act directly on the coordinates of our parametrization: it jumbles up all the components. So, of course, the torus undergoes a completely different kind of motion than in our previous "expander" video.</div><div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dwYzGspnyIlEnrbTPMrtV0mSGP4_uAK07ZFM8qXQEsEdDh2IZ2Q21amCG1gSLJRKkcMF52EIibU-BbwUh1DaQ' class='b-hbp-video b-uploaded' frameborder='0' /></div><div><br /></div><div><div>What happens is we inflate our inner tube, so a part of it gets puffed up to infinity, and wraps back around, turning the torus inside-out. In fact, after wrapping back around, we're "inflating" the <i>outside</i> of the torus. Or equivalently, getting back to donuts with frosting, the dough gets bigger and bigger, and when wrapping back around, almost all of space (plus a point at infinity) is dough, and the frosting bounds an inner-tube-shaped pocket of air.</div><div><br /></div>Anyway, the full turning inside-out (which also swaps the red and blue circles, as promised) occurs exactly halfway through the movie (the rotation continues to restore the torus to its original state in the second half). Notice how the stripes on the torus which started out horizontal now are vertical, and what used to be the "apple core" shape which surrounds the donut hole now has become a "donut segment." 
Plus it just looks totally awesome!</div></div></div>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-40507281791039059642015-09-11T14:11:00.004-07:002015-09-15T20:49:35.922-07:00Clifford Circles<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td><a href="http://1.bp.blogspot.com/-y45VLO722bE/VfNB7h1oP_I/AAAAAAAADK0/xxJRqVg7p1s/s1600/clifford%2Btori%2B--%2Bnew.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="560" src="http://1.bp.blogspot.com/-y45VLO722bE/VfNB7h1oP_I/AAAAAAAADK0/xxJRqVg7p1s/s640/clifford%2Btori%2B--%2Bnew.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="font-size: 12.8px;">30 Clifford circles with $\varphi = \pi/8$ and $\theta_0$ ranging from $0$ to $2\pi$</td></tr></tbody></table>A discussion about <a href="http://nestedtori.blogspot.com/2015/09/clifford-tori.html">Clifford tori</a> would never be complete without a corresponding discussion about <i>Clifford circles</i>. These were featured as the logo of the UCSD math site for many years (not the case anymore, though, but I saved a screenshot!):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-_ArY64CdIho/Vei3Jb-KK2I/AAAAAAAADEQ/j4wTMtKzNnU/s1600/Screen%2Bshot%2B2015-09-03%2Bat%2B2.08.28%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="92" src="http://1.bp.blogspot.com/-_ArY64CdIho/Vei3Jb-KK2I/AAAAAAAADEQ/j4wTMtKzNnU/s320/Screen%2Bshot%2B2015-09-03%2Bat%2B2.08.28%2BPM.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">Just as the Clifford tori foliate $S^3$, the Clifford circles foliate each of the Clifford tori. 
As part of ongoing efforts to revisit algebraic topology, this example is one of the best to explore. We begin with a classical mapping, called the <i><a href="https://en.wikipedia.org/wiki/Hopf_fibration">Hopf fibration</a>, </i>which maps $S^3$ to $S^2$ by</div>\[<br />p\begin{pmatrix}x_1\\x_2\\x_3\\ x_4\end{pmatrix} =\begin{pmatrix}2(x_1 x_3 + x_2 x_4)\\ 2(x_2 x_3 -x_1x_4)\\ x_1^2 + x_2^2 -x_3^2 -x_4^2\end{pmatrix}.<br />\]<br />In fancy-schmancy homotopy-theory speak, $p$ is a generator of $\pi_3(S^2)$. Considering our previous parametrization $F$ from <a href="http://nestedtori.blogspot.com/2015/09/clifford-tori.html">last time</a>:<br />\[<br />F\begin{pmatrix}\varphi \\ \alpha \\ \beta\end{pmatrix} = \begin{pmatrix} \cos \alpha \cos \varphi \\ \sin\alpha\cos\varphi \\ \cos\beta\sin\varphi \\ \sin\beta\sin\varphi \end{pmatrix},<br />\]<br />we recall that the last coordinate of $p$ is simply $A^2 - B^2$, or $\cos^2(\varphi) - \sin^2(\varphi)$. Although I tell my students to not bother memorizing trigonometric identities, we derived that from $\sin^2 \alpha + \cos^2 \alpha = 1$ and $\sin^2 \beta + \cos^2 \beta = 1$. We can further simplify that last coordinate to $\cos(2\varphi)$.<br /><br />The key property that we want to demonstrate is that each point of $S^2$ corresponds to a whole circle in $S^3$ (for any $\boldsymbol \xi$ in $S^2$, the circle in question is the inverse image $p^{-1}(\boldsymbol\xi)$), and that the circles corresponding to distinct $\boldsymbol \xi$, though disjoint, are nevertheless linked together.<br /><br />To do this, we visit the first two coordinates:<br />\[<br />2(x_1x_3 + x_2x_4) = 2(\cos\alpha \cos\varphi \cos\beta \sin\varphi+ \sin\alpha\cos\varphi \sin\beta\sin\varphi)<br />\] \[= 2\sin\varphi\cos\varphi(\cos\alpha\cos\beta +\sin\alpha\sin\beta) = \sin(2\varphi) \cos(\alpha-\beta),\] where the last equation is gotten either by trolling the back of a calculus book for some trig identities, or using complex numbers. 
Similarly,<br />\[<br />2(x_2x_3 - x_1x_4) = 2(\sin\alpha \cos\varphi \cos\beta \sin\varphi- \cos\alpha\cos\varphi \sin\beta\sin\varphi)<br />\] \[= 2\sin\varphi\cos\varphi(\sin\alpha\cos\beta -\cos\alpha\sin\beta) = \sin(2\varphi) \sin(\alpha-\beta).\]<br />All together, we have \[<br />p \circ F\begin{pmatrix}\varphi\\ \alpha\\ \beta\end{pmatrix} = \begin{pmatrix}\sin(2\varphi)\cos(\alpha-\beta) \\ \sin(2\varphi)\sin(\alpha-\beta) \\ \cos(2\varphi)\end{pmatrix}.<br />\]<br />Letting $\theta = \alpha-\beta$, we see that this looks almost like our standard parametrization of a $2$-sphere, except that the polar angle $\varphi$ is off by a factor of $2$. No matter, it's still a sphere; we just have to take care to remember that the range of <i>this</i> $\varphi$ is $[0,\pi/2]$ rather than $[0,\pi]$. This confirms, incidentally, that $p$ really maps onto the sphere, rather than merely mapping into some amorphous blob in $\mathbb{R}^3$, as one can only assume at first, because the destination of the map $p$ has $3$ coordinates. The important thing to realize is that given <i>any </i>$\boldsymbol \xi$ in $S^2$, there is a unique $\varphi_0$ in $[0,\pi/2]$ and $\theta_0$ in $[0,2\pi]$ that correspond, under the usual parametrization, to $\boldsymbol \xi$. So, to calculate $p^{-1}(\boldsymbol \xi)$, we have to see how much we can change $\alpha$, $\beta$, and $\varphi$ in order to always get $(\varphi_0,\theta_0)$. By our above computations, this is easy: $\alpha$ and $\beta$ must satisfy $\alpha - \beta = \theta_0$, and $\varphi = \varphi_0$ is already completely determined. So $\alpha - \beta$ is completely pinned down, and our only remaining degree of freedom is a common shift: given some point in the fiber $p^{-1}(\boldsymbol \xi)$, <i>if we add the same thing to both $\alpha$ and $\beta$,</i> we will stay in the fiber. 
This means the fiber has a parametrization that looks like<br />\[<br />\beta \mapsto F\begin{pmatrix}\varphi_0 \\ \theta_0 + \beta \\ \beta\end{pmatrix} = \begin{pmatrix} \cos (\theta_0 + \beta) \cos \varphi_0 \\ \sin(\theta_0 + \beta)\cos\varphi_0 \\ \cos\beta\sin\varphi_0 \\ \sin\beta\sin\varphi_0 \end{pmatrix}.<br />\]<br />We finally finish things off by composing with the stereographic projection $P$ as before:<br />\[<br />\beta \mapsto \frac{1}{1-\sin\beta\sin\varphi_0 } \begin{pmatrix}\cos (\theta_0 + \beta) \cos \varphi_0 \\ \sin(\theta_0 + \beta)\cos\varphi_0 \\ \cos\beta\sin\varphi_0 \end{pmatrix}.<br />\]<br />Plotting this out gives us the nice circles shown at the start of the post. We'll continue to explore the properties of $p$ and its visualizations as we move along in algebraic topology.Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-40361259202160590582015-09-09T20:31:00.000-07:002015-09-15T20:49:07.713-07:00Some Cool Views of Projective SpaceWhile finally revisiting one of the geometry books on my shelf, Glen Bredon's <i><a href="https://books.google.com/books/about/Topology_and_Geometry.html?id=wuUlBQAAQBAJ&hl=en">Topology and Geometry</a>, </i>I encountered an exercise about showing that the projective space is homeomorphic to the <i>mapping cone</i> of a map that doubles a circle on itself (the complex squaring map $z \mapsto z^2$). The mapping cone has a nice visualization, first as a <i>mapping cylinder</i>, which takes a space $X$ and crosses it with the interval $I$ to form $X \times I$ (thus forming a "cylinder"), and then glues the bottom of it to another space $Y$ using a given continuous map $f : X \to Y$. Finally, to make the cone, it collapses the top to a single point. Of course, this can be visualized as deforming the bottom part of $X \times I$ through whatever contortion $f$ does, which might include self-intersection (and of course, it could be more gradual). 
So I used a good old friend, <a href="http://nestedtori.com/2015/08/parametric-pasta-relaunch-of-nested-tori.html">parametrizations</a>, to help set up an explicit example. Take a look!<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-DvBdH3lMOrg/VfDmEFUD1LI/AAAAAAAADJQ/Bh3HAloj3tU/s1600/molar%2Bprojective%2Bspace-2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="491" src="http://3.bp.blogspot.com/-DvBdH3lMOrg/VfDmEFUD1LI/AAAAAAAADJQ/Bh3HAloj3tU/s640/molar%2Bprojective%2Bspace-2.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">An immersion of projective space into $\mathbb R^3$. Shown as a mesh to make the self-intersecting portion visible. Looks a little like a molar, although one would hope I take good enough care of my teeth to not have that many holes in it…</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-VYWMQLly-Z4/VfDmRjDQGqI/AAAAAAAADJo/5err1LJ01Is/s1600/molar%2Bprojective%2Bspace-3.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="492" src="http://1.bp.blogspot.com/-VYWMQLly-Z4/VfDmRjDQGqI/AAAAAAAADJo/5err1LJ01Is/s640/molar%2Bprojective%2Bspace-3.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A cutaway view, now as a more solid surface, basically illustrating it now as a mapping cylinder (it is homeomorphic to projective space minus a disk, which is a Möbius strip). 
Anyone know a good glassblower so we can make vases that look like this?</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-0Qj01BvOQKU/VfDmNRjFhjI/AAAAAAAADJg/d3p2dm4-c0c/s1600/molar%2Bprojective%2Bspace-cutaway.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="493" src="http://4.bp.blogspot.com/-0Qj01BvOQKU/VfDmNRjFhjI/AAAAAAAADJg/d3p2dm4-c0c/s640/molar%2Bprojective%2Bspace-cutaway.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A view from the open top, allowing us to see the self-intersecting part of the surface from above</td></tr></tbody></table><span id="goog_1536413794"></span>The equation, in "cylindrical coordinates", is $r = (2z+\cos\theta) \sqrt{1-z^2}$ for $0\leq z \leq 1$ (for the closed surface), $0 \leq z \leq 0.97$ (for the cutaway), and $0 \leq \theta \leq$ (what else?) $2\pi$. I say it in scare quotes because technically, it allows negative values of $r$. 
For fun, though (and to make it totally legit, even if you have qualms about negative radii), we rewrite it as a (Cartesian) parametrization (by substituting for $r$):<br />\[<br />\begin{pmatrix} x \\ y \\ z\end{pmatrix} = \begin{pmatrix} (2u + \cos v)\sqrt{1-u^2} \cos v\\ (2u + \cos v) \sqrt{1-u^2}\sin v \\ u\end{pmatrix}<br />\]<br />with $0\leq u \leq 1$ or $0.97$, and $0 \leq v \leq 2\pi$.<br /><br />The motivation for this is that the equation $r = a + \cos\theta$, as $a$ varies, goes from a loop traversed once to a loop that folds over itself exactly in a $2:1$ manner:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-xElaVPxI2Dg/VfD4efC1CpI/AAAAAAAADKU/UIFS1OuYWC4/s1600/g4410.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="148" src="http://3.bp.blogspot.com/-xElaVPxI2Dg/VfD4efC1CpI/AAAAAAAADKU/UIFS1OuYWC4/s640/g4410.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>but now visualized stacked on top of one another. Then to collapse the top, we shrink the diameter by a function that vanishes at $1$, approaching $0$ with infinite slope, to make it smooth (topologically speaking, it is valid to let it collapse to a corner point, though). 
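Incidentally, the parametrization is easy to play with numerically. Here is a minimal Python/NumPy sketch (the blog's own plots are MATLAB-generated; the function name is mine): it evaluates the surface on a $(u,v)$ grid and confirms that the top $u = 1$ really does collapse to the single point $(0,0,1)$.

```python
import numpy as np

def molar_surface(u, v):
    """Immersion of projective space: r = (2u + cos v) * sqrt(1 - u^2),
    in cylindrical coordinates with z = u. (Name is mine, for illustration.)"""
    u, v = np.broadcast_arrays(np.asarray(u, float), np.asarray(v, float))
    r = (2*u + np.cos(v)) * np.sqrt(1 - u**2)
    return np.stack([r * np.cos(v), r * np.sin(v), u])

# Evaluate on a grid (use 0 <= u <= 0.97 instead for the cutaway view).
u, v = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 2*np.pi, 181))
x, y, z = molar_surface(u, v)

# The top (u = 1) collapses to the single point (0, 0, 1):
top = molar_surface(1.0, np.linspace(0, 2*np.pi, 181))
print(np.allclose(top[0], 0), np.allclose(top[1], 0))  # True True

# At u = 0 the cross-section is r = cos v, the 2:1-folded loop
# that gets glued down in the mapping-cone picture.
```

Feeding `x`, `y`, `z` to any surface plotter (MATLAB's `surf`, or matplotlib's `plot_surface`) reproduces the "molar" above.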
Flowers, however, are not included (you'll have to fiddle with things like $r = \cos(k\theta)$ if you want that…)<br /><br />Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-74842018177124216472015-09-05T17:35:00.000-07:002015-09-06T14:19:52.305-07:00Distributing Points on Spheres and other ManifoldsOne of the classic problems of random-number generation and generally representing probability distributions is the problem of <i>uniform distribution of points on a </i>($2$-)<i>sphere</i> (we, of course, clarify the dimension, having gone on too many extradimensional journeys lately—but we'll quickly see that these methods are good for those cases too!). Namely, how does one pick points at random on a sphere such that every spot on it is equally likely? Naïvely, one tries to jump to latitude and longitude. But this is because we are trained to think of the sphere in terms of parametrizations (yes, <a href="http://nestedtori.blogspot.com/2015/08/parametric-pasta-relaunch-of-nested-tori.html">we like those</a>, obviously, but they are only a <i>means</i> of representation but not the be-all and end-all of geometric objects), and not in terms of quantities defined directly on the sphere itself. <i>Uniform</i> for points directly on the sphere need not correspond to uniform with respect to some real parameters defining those points. After all, the whole point of parametrization is to distort very simple domains into something more complicated. 
What happens, for example, with latitude and longitude? If one distributes points uniformly in latitude, i.e., in the interval $[-\pi/2,\pi/2]$, and also uniformly in longitude, the interval $[-\pi,\pi]$, then, after mapping to the corresponding points on a sphere, one gets points concentrated near the poles:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-yoc0T-uqCkw/VetVg_kE3RI/AAAAAAAADIA/EOM53HpcrRE/s1600/sphere-non-unif.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="298" src="http://4.bp.blogspot.com/-yoc0T-uqCkw/VetVg_kE3RI/AAAAAAAADIA/EOM53HpcrRE/s400/sphere-non-unif.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Sphere with a distribution of 2000 points, uniform in latitude and longitude, generated by MATLAB. Notice it clusters at the poles. (Here the north pole of the sphere is tipped slightly out of the plane of the page so we can see the points getting denser there. Click to enlarge.)</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><br /></div>This occurs essentially because, at different latitudes, a degree of longitude can be a very different physical distance (about 70 miles at the equator, shrinking to zero at the poles): the <i>numerical</i> correspondence does not match up with other, more relevant physical measures. In more fancy-schmancy speak, uniformity in latitude and longitude (or the equivalent spherical $\varphi$ and $\theta$) means uniformity in some imaginary rectangle that we deform to a sphere. 
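This clustering is easy to reproduce and quantify. A short Python/NumPy sketch (the figure above was made in MATLAB; this is just an independent illustration): sample latitude and longitude uniformly, then compare the fraction of points landing within $10$ degrees of a pole against the true fractional area of those two polar caps, which is $1 - \cos 10^\circ \approx 0.015$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Naive sampling: uniform in latitude and longitude.
lat = rng.uniform(-np.pi/2, np.pi/2, n)
lon = rng.uniform(-np.pi, np.pi, n)
x = np.cos(lat) * np.cos(lon)
y = np.cos(lat) * np.sin(lon)
z = np.sin(lat)

# Fraction of points within 10 degrees of either pole (|z| > cos 10°):
naive_fraction = (np.abs(z) > np.cos(np.radians(10))).mean()

# True fractional area of those two polar caps:
true_fraction = 1 - np.cos(np.radians(10))

print(naive_fraction, true_fraction)   # roughly 0.111 vs 0.015
```

The naive sampler puts about seven times too many points in the caps, which is exactly the visible clustering in the figure.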
Actually, without much work, we can already intuitively see what we have to do to achieve uniformity on the sphere in terms of those coordinates: make the distribution <i>thin out</i> when the latitude indicates we're near the poles, i.e., we want the distribution of points in the parameter rectangle to look like this:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-a5L-i8NXeT8/VetbcuFnNgI/AAAAAAAADIc/rq5vTsaECbU/s1600/%2B%2B.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="478" src="http://2.bp.blogspot.com/-a5L-i8NXeT8/VetbcuFnNgI/AAAAAAAADIc/rq5vTsaECbU/s640/%2B%2B.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Nonuniform distribution of points on the parameter rectangle $(\theta,\varphi)$. Notice the points get more sparse at the top and bottom, corresponding to $\varphi$ at its extreme values.</td></tr></tbody></table>How do we express this in formulas? We need to take a look at the area element. The naïve uniform distribution gives $\frac{1}{2\pi^2}d\varphi\; d\theta$, $\frac{1}{2\pi^2}$ times the area element of the rectangle $[0,\pi]\times[0,2\pi]$, where the $1/2\pi^2$ is there to make the integral come out to $1$, as required for all probability densities. But anyone who has spent a minute in a multivariable calculus class probably got it drilled into their heads that the area element of the <i>sphere</i>, namely, that which measures the area of a piece of sphere of radius $r$, is $r^2 \sin \varphi\; d\varphi\; d\theta$ (or $r^2\sin \theta\; d\theta\; d\phi$ if you're using the physicists' convention… that's a whole 'nother headache right there). We'll take $r = 1$ here, and of course, we have to now multiply by $\frac{1}{4\pi}$ to make the density integrate to $1$. 
It is reasonable that actual physical area <i>would</i> correspond to an actual uniform distribution, because the area of some plot of land on a sphere will be the same no matter how the sphere is rotated: to distribute one sample point per square inch everywhere on a sphere would indeed truly give a uniform distribution. So this gives us the solution: we can still uniformly distribute in "longitude" $\theta \in [0,2\pi)$ (which accounts for the factor of $\frac{1}{2\pi} d\theta$), since the area element doesn't have any functional dependence on $\theta$ other than $d\theta$. For the $\varphi$, however, we need to weight the density in such a manner that, taking $d\varphi$ for the moment in its classical interpretation as an infinitesimal or just a small increment in $\varphi$, the proportion of points landing between $\varphi$ and $\varphi+ d\varphi$ is approximately $\frac{1}{2}\sin\varphi \;d\varphi$ (the $\frac{1}{2}$ comes from removing the previous factor of $1/2\pi$ from the total $1/4\pi$). But that, we recognize, is $\frac{1}{2}d(-\cos \varphi)$, so if we <i>define</i> a new variable $u = -\cos \varphi$, then our area form is $\frac{1}{2} du\; \frac{1}{2\pi} d\theta$. 
Thus if we <i>uniformly distribute </i>$u$ in $[-1,1]$ (an interval of length $2$, so justifying the $\frac{1}{2}$), and <i>only then</i> calculate $\varphi = \cos^{-1}(-u)$ (and uniformly distribute $\theta$ in $[0,2\pi]$ as before), we do in fact get a uniform distribution on the sphere.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-X8hxs3fahQo/VetTIFQsxJI/AAAAAAAADHo/NjRn-2JtfWA/s1600/sphere-unifdist.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="475" src="http://1.bp.blogspot.com/-X8hxs3fahQo/VetTIFQsxJI/AAAAAAAADHo/NjRn-2JtfWA/s640/sphere-unifdist.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Sphere with a uniform distribution of 2000 points (click to enlarge), generated by MATLAB</td></tr></tbody></table><h4></h4><h4></h4><h4>So What's the More General Thing Here?</h4>As veteran readers might have expected, this situation is not as specific as it may seem. Transforming probability density functions (pdfs), even just on a line, sometimes seems very mysterious, because it doesn't work the same way as other coordinate changes for functions one often encounters. It is <i>not</i> enough to simply evaluate the pdf at a new point (corresponding to the <i>composition</i> of the pdf with the coordinate change). Instead what is reported in most books, and sometimes proved using the Change of Variables theorem, is some funny formula involving a lot of Jacobians. 
But this is the <i>real </i>reason: it's because the natural object associated to probability densities is a <i>top-dimensional differential form</i> (actually, a differential <i>pseudoform</i>, which I have spent some time advocating, for the simple reason that modeling things in a geometrically correct manner really clarifies things—just think, for example, of how one may become confused by a right-hand rule). A way to view such objects, besides as a function times a standardized "volume" form, is a <i>swarm, </i>which is precisely one of these distributions of points (although they don't <i>have</i> to be random; generating them this way is an excellent way to get a handle on them).<br /><br />If we have some probability distribution on our space, and near some point it is specified (parametrized) by variables $x = (x_1, x_2,\dots,x_n)$, then the pdf should (locally) look like $\rho(x_1,x_2,\dots,x_n) dx_1 dx_2\dots dx_n$. Most probability texts, of course, just consider the $\rho$ part without the differential $n$-form part $dx_1\dots dx_n$. When transforming coordinates to $y$ such that $x = f(y)$, the standard sources simply state that $\rho$ in the new coordinates is $\rho(f(y)) \left|\det \left(\frac{\partial f}{\partial y}\right)\right|$. But <i>really</i>, it's because the function part $\rho(x)$ becomes $\rho(f(y))$ as a function usually would, and the $dx_1 \dots dx_n$ becomes $\left|\det \left(\frac{\partial f}{\partial y}\right) \right| dy_1\dots dy_n$. 
In total, we have the transformation law<br />\[<br />\rho(x_1,\dots,x_n) dx_1 \dots dx_n=\rho(f(y_1,\dots,y_n)) \left|\det \left(\frac{\partial f}{\partial y}\right) \right| dy_1\dots dy_n.<br />\]<br />The usual interpretation of the $n$-form $\rho(x) dx_1 \dots dx_n$ is simply that integrating it over some region of space gives the total quantity of whatever such a form is trying to measure (volume, mass, electric charge, or here, probability).<br /><br /><h4>Another Curious Example</h4><div>One perennially interesting example is to take two independent, normally distributed random variables (with mean $0$ and variance $\sigma^2$) and consider the distribution of their <i>polar coordinates</i>. For one variable, the normal distribution is, using our fancy-schmancy form notation,<br />\[<br />\rho(x)\; dx = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-x^2/2\sigma^2} dx.<br />\]<br />From the definition of independence, two normally distributed variables have a density obtained by multiplying them together:</div>\[ \omega = \rho(x)\rho(y) \; dx\;dy = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2} dx\;dy.\]<br />So if we want the distribution in polar coordinates, where $x = r \cos \theta$ and $y = r \sin \theta$, this means we transform the form part as the usual $dx\;dy = r\; dr\; d\theta$, and write the density part as $\frac{1}{2\pi\sigma^2}e^{-r^2/2\sigma^2}$. In total,<br />\[<br />\omega = \frac{1}{2\pi\sigma^2} r e^{-r^2/2\sigma^2} dr \; d\theta.<br />\]<br />This factors back out into two independent pdfs (i.e., $r$ and $\theta$ are independent random variables) as<br />\[<br />\omega = \left(\frac{1}{\sigma^2} r e^{-r^2/2\sigma^2} dr\right) \left( \frac{1}{2\pi} d\theta\right)<br />\]<br />which means we can take $\theta$ and uniformly distribute it in $[0,2\pi]$ as before. To get $r$, we can proceed in two ways. 
First, we could imitate what we did earlier with the sphere: use integration by substitution or the Chain Rule to deduce that $\frac{1}{\sigma^2} r e^{-r^2/2\sigma^2} dr= d\left(- e^{-r^2/2\sigma^2}\right)$, and thus, making the substitution $v = - e^{-r^2/2\sigma^2}$, we now have $\omega = \frac{1}{2\pi} dv\; d\theta$. The range of $v$ has to be in $[-1,0)$, since $e^{-r^2/2\sigma^2}$ goes to zero as $r \to \infty$ and has a maximum value of $1$ at $r = 0$. With this, we uniformly distribute $v \in [-1,0)$ and invert the function:<br />\[<br />r=\sqrt{-2\sigma^2 \log (-v)}.<br />\]<br /><br />Alternatively, we can substitute $u = r^2/2\sigma^2$ (and the inverse, $r = \sqrt{2\sigma^2 u}$). Then $du = (r/\sigma^2) dr$, and $\frac{1}{\sigma^2} re^{-r^2/2\sigma^2} dr = e^{-u} du$, with $u \geq 0$. This means $u$ is distributed as an exponential random variable with parameter $\lambda = 1$. However, this is often computed using uniform $[0,1]$ variables as well, for which we simply end up using something similar to the above method. Still, it gives another way of <i>understanding</i> the distributions: it certainly is very interesting that two <i>normally distributed</i> random variables <i>become</i>, via a transformation as ordinary as polar coordinates, (the square root of) an exponentially distributed random variable and a uniform random variable.<br /><br />What about <i>three</i> independent normally distributed variables (with mean zero and all the same variance)? It turns out the $r$ coordinate is not so easy (it's a gamma distribution), but the $\varphi$ and $\theta$ variables yield a uniform distribution on the sphere! However, it shouldn't be too surprising: given three independent such variables, there should be no bias on their directionality in $3$-space. 
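As an aside (my gloss, not spelled out above): read in reverse, this substitution is essentially the polar form of the classical Box–Muller method for generating normal random variables from uniform ones. A quick Python sketch, with variable names matching the derivation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma = 1.0

# Uniformly distribute v in [-1, 0) and theta in [0, 2*pi), then invert:
v = rng.uniform(-1.0, 0.0, n)            # so -v lies in (0, 1], log is safe
theta = rng.uniform(0.0, 2*np.pi, n)
r = np.sqrt(-2 * sigma**2 * np.log(-v))  # r = sqrt(-2 sigma^2 log(-v))

x = r * np.cos(theta)
y = r * np.sin(theta)

# x and y should now look like independent N(0, sigma^2) samples:
print(x.mean(), x.var())                 # both near 0 and sigma^2 = 1
```

Checking the sample mean, variance, and the correlation between `x` and `y` against $0$, $\sigma^2$, and $0$ respectively is a quick sanity test of the whole derivation.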
We can also turn this around to give ourselves an alternate method of uniformly distributing points on a sphere (since normal random variables are very easy to generate): take three such variables, consider them as the coordinates of a point in $\mathbb R^3$, and divide by its magnitude (I thank Donald Knuth's <i>Art of Computer Programming</i>, Volume 2 for this trick; it has an excellent discussion on random number generation).Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-11416410820009554392015-09-01T14:49:00.001-07:002015-09-03T14:36:29.670-07:00Clifford Tori<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-L4NZCsAX9M4/UI4RJc3QvfI/AAAAAAAAAYY/WDHb34uETwQ/s1600/morenestedtori-green-xxl.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="342" src="http://2.bp.blogspot.com/-L4NZCsAX9M4/UI4RJc3QvfI/AAAAAAAAAYY/WDHb34uETwQ/s400/morenestedtori-green-xxl.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>As part of the ongoing celebration that is the relaunch, here we finally give the long-awaited in-depth look at the site's namesake hinted at in <a href="http://nestedtori.blogspot.com/2012/10/welcome-to-nested-tori.html">the opening post</a>. Perhaps surprisingly in a post that ostensibly is about tori, we begin our story with a different well-known (and hopefully loved) family of spaces: the spheres. The $1$- and $2$-dimensional spheres hardly need introduction, being undoubtedly the first curved geometrical objects that one studies, the ordinary circle and sphere (which, for mathematicians, refers only to the <i>boundary</i> <i>surface</i>, not the interior, of a ball, so is $2$-dimensional). These shapes of course appear all the time in nature, since they satisfy many optimality properties. 
Much, of course, has been written about their "<a href="https://books.google.com/books/about/Perfect_Form.html?id=8uWPG0QK0UIC">perfect form</a>."<br /><div><br /></div><div>Higher-dimensional spheres are easy to define: <i>the set of all points a given distance from a given point</i>, where, of course, "all points" range over some higher-dimensional ambient space, say of dimension <i>n</i>. High-dimensional spheres have interesting applications, such as in statistical mechanics, where the dimension of the state space in question is on the order of $10^{24}$ (every particle gets its own set of 3 dimensions! Again, this is why state space is awesome). Spheres occur in this context because the total energy of the system remains constant, so the sum of the squares of all their momenta has to be constant—namely, the momenta are all a certain "distance" from the origin. As crazy as it may sound, it is, at its root, a description (using <a href="http://nestedtori.blogspot.com/2015/08/hamiltonian-mechanics-is-awesome-double.html">Hamiltonian mechanics</a>) of many more than the two particles that we spent a whole <a href="http://nestedtori.blogspot.com/2015/08/the-configuration-space-of-double.html">5-part series</a> talking about! But of course, it'd take us a bit far afield to explore this (at least in this post; we should eventually feature some statistical-mechanical calculations here, because it is illustrative of how we can deal with overwhelmingly large-dimensional systems and, rather amazingly, be able to extract useful information from them!).</div><div><br /></div><div>So let's get back to nearly familiar territory: $n = 3$, the $3$-sphere $S^3$ (the boundary of a ball in 4-dimensional space $\mathbb{R}^4$). Being $3$-dimensional, one would think we can visualize it or experience it viscerally somehow. 
As noted by <a href="http://press.princeton.edu/titles/6086.html">Bill Thurston</a>, the key to visualization of $3$-manifolds is to imagine living inside a universe that is shaped like one (this is also elaborated upon, working up from $2$-dimensional examples, by <a href="https://books.google.com/books?id=Lurp6nB4LtQC&dq=isbn:0824707095&hl=en&sa=X&ved=0CB4Q6AEwAGoVChMI-bHdifLRxwIVSQOSCh1b8gn9">Jeffrey Weeks</a>, with <a href="http://www.geometrygames.org/CurvedSpaces/index.html">interactive demos</a>). This isn't so straightforward to visualize (literally), because it is a curved 3-dimensional space. There are a number of ways of doing this; they give the essence of $S^3$ by understanding it in terms of some more familiar objects from lower dimensions, such as slicing by hyperplanes, and of course, by tori.<br /><br /></div><div><div><h4>The Clifford Tori as a Foliation of $S^3$</h4></div><div>So where do tori (nested or not) come in? Let's get back to the defining formula of $S^3$,</div></div>\[<br />x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1.<br />\] If we consider groups of $2$ terms, $x_1^2 + x_2^2 = A^2$, and $x_3^2 + x_4^2 = B^2$, we have $A^2 + B^2 = 1$. But for each fixed $A$ and $B$ satisfying that relation, we get two circles: one for the coordinates $(x_1,x_2)$ and another for the coordinates $(x_3,x_4)$. We can use this information to help <i><a href="http://nestedtori.blogspot.com/2015/08/parametric-pasta-relaunch-of-nested-tori.html">parametrize</a></i> $S^3$. Let's first ask: what are some familiar things satisfying $A^2+B^2 = 1$? If we let $A = \cos \varphi$ and $B = \sin \varphi$, then $A$ and $B$ will always satisfy this relation for any $\varphi \in \mathbb{R}$: so $\varphi$ is one possible parameter of this system, and the full possible range of $A^2$ and $B^2$ is assumed by letting $\varphi$ vary from $0$ to $\frac \pi 2$. 
This gives us: \[<br />x_1^2 + x_2^2 =\cos^2 \varphi<br />\] and \[ <br />x_3^2 + x_4^2 = \sin^2 \varphi.<br />\]<br />In other words, as $\varphi$ varies, the two sets of coordinates represent one circle that grows in radius, and another that shrinks, in such a way that the sum of the squares of the two radii are always $1$. This realizes the $3$-sphere as a collection of sets of the form $S^1(A) \times S^1(B) = S^1(\cos\varphi) \times S^1(\sin\varphi)$, the Cartesian product of circles of radius $A$ and $B$. But what is the Cartesian product of two circles? A torus. These tori fill up all of $S^3$ (except for two degenerate cases: two circles, corresponding to the Cartesian product of a unit circle and a single point—a circle of radius $0$). The technical term for this is that they <i>foliate</i> $S^3$ (from the Latin <i>folium</i> for "leaf"). They are called <i>Clifford tori</i>. Hard as it may be to believe, their intrinsic geometry, as inherited from $\mathbb{R}^4$ is <i>flat </i>(although this is not true of their <i>extrinsic</i> geometry). It means that if we have a sheet of paper, we could lay it flat on a Clifford torus in $\mathbb{R}^4$ (you can't do that in $\mathbb{R}^3$ with your garden-variety donut-shaped torus). However, a full study of the geometry of these tori will have to wait for another time. Here, we will be content to <i>visualize</i> them (which, unfortunately, will not preserve that flat geometry we are claiming they have). For the visualization, we use another tool, the <i>stereographic projection.</i> Before we get to that, we finally note that, for each given $\varphi$, each of the circles $S^1(\cos \varphi)$ and $S^1(\sin\varphi)$ can be further parametrized with other angles; we take<br />\[<br />(x_1,x_2) = (\cos \alpha \cos \varphi, \sin \alpha \cos \varphi)<br />\] and \[<br />(x_3,x_4) = (\cos \beta \sin \varphi, \sin \beta \sin \varphi),<br />\]<br />where $0\leq \alpha,\beta \leq 2\pi$. 
So this means we have parametrized $S^3$ by 3 variables, $(\varphi,\alpha,\beta)$ varying over $[0,\pi/2]\times [0,2\pi]\times [0,2\pi]$.<br /><br /><h4>The Stereographic Projection</h4><div>There's a way to project (almost) the whole $3$-sphere into ordinary $3$-space $\mathbb{R}^3$. To understand how we can project the $3$-sphere (minus a point) into $3$-space, let's look at the analogous problem for the $2$-sphere, first. It so happens that the stereographic projection is extremely useful in that case as well, and is the source for many arguments involving "the point at infinity" in complex analysis. The way it works is to imagine screwing in a light bulb at the top of a sphere resting on a plane, and given a point on a sphere, its shadow cast on the plane by this light is the corresponding point (shown in the figure below for a line: the dots connected by the blue rays correspond).<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-SmykDdYg_mA/VeSytC9VMpI/AAAAAAAADC4/hA2MC1ngW1w/s1600/projective-line-diagram.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="184" src="http://2.bp.blogspot.com/-SmykDdYg_mA/VeSytC9VMpI/AAAAAAAADC4/hA2MC1ngW1w/s640/projective-line-diagram.jpg" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Some corresponding points for the stereographic projection of a circle (here of radius $\frac 1 2$) to a line.</td></tr></tbody></table>Notice that the closer it gets to the top, the farther out the rays go (thus, the farther out the corresponding point). And the top point falls on a horizontal line, which will never intersect the corresponding plane of projection: it is said to be sent to the "point at infinity". 
But the thing is, that extra point at infinity is, at least for the plane as we've defined it, truly <i>extra</i>, so the stereographic projection <i>really</i> simply maps the sphere <i>minus one single point</i> to the plane. In formulas, for a sphere of radius $\frac{1}{2}$ centered at $(0,0,\frac{1}{2})$, this is<br />\[<br />\left(\frac{x}{1-z},\frac{y}{1-z}\right).<br />\]<br />Of course, if we want to map the <i>unit</i> (radius 1 and origin-centered) sphere, we have to do a little finessing with an extra transformation, namely, $(x',y',z') = (2x, 2y, 2z-1)$. It <i>so happens</i> that we get the exact same formula back, just a different domain:<br />\[<br />\left(\frac{x}{1-z},\frac{y}{1-z}\right) = \left(\frac{\frac{1}{2}x'}{1-\frac{1}{2}z'-\frac{1}{2}},\frac{\frac{1}{2}y'}{1-\frac{1}{2}z'-\frac{1}{2}}\right) =\left(\frac{x'}{1-z'},\frac{y'}{1-z'}\right).<br />\]<br />For this reason, many people start off with this formula for the unit sphere instead. The <i>picture</i> associated to this actually is nice: it projects rays of light from the top, through the sphere, and onto a plane that <i>slices</i> the sphere exactly in half at its equator. The consequence is that the light hits the corresponding point in the plane first before reaching a point in the lower hemisphere (but, mathematically, we keep the mapping as always from the sphere to the plane, regardless which the light beam would hit first). 
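That little algebraic identity can also be sanity-checked numerically. A tiny Python sketch (helper names are my own), using the concrete unit vector $(2,1,2)/3$:

```python
import numpy as np

def project(p):
    """Stereographic projection (x, y, z) -> (x/(1-z), y/(1-z))."""
    x, y, z = p
    return np.array([x / (1 - z), y / (1 - z)])

w = np.array([2.0, 1.0, 2.0]) / 3.0              # a unit vector

# A point on the radius-1/2 sphere centered at (0, 0, 1/2),
# and its image on the unit, origin-centered sphere under
# (x', y', z') = (2x, 2y, 2z - 1):
p_small = 0.5 * w + np.array([0.0, 0.0, 0.5])
p_unit = 2 * p_small - np.array([0.0, 0.0, 1.0])

# Both project to the same point in the plane, here (2, 1):
print(project(p_small), project(p_unit))  # [2. 1.] [2. 1.]
```

The same `project` formula serving both spheres is exactly the "finessing" identity worked out above.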
It should also be noted that the inverse mapping, of course, will be different (we won't be needing the inverse mapping here, but it is also useful in other contexts, and can be derived using elementary, if tedious, means, via messy algebra).<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-XnoVXOX-Bnc/VeXzhQsBFqI/AAAAAAAADDQ/VTZAiBQ-48E/s1600/Screen%2Bshot%2B2015-08-31%2Bat%2B6.23.10%2BPM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="356" src="http://1.bp.blogspot.com/-XnoVXOX-Bnc/VeXzhQsBFqI/AAAAAAAADDQ/VTZAiBQ-48E/s640/Screen%2Bshot%2B2015-08-31%2Bat%2B6.23.10%2BPM.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Stereographic projection of the unit, origin-centered sphere to the plane containing its equator. The projection associates the point $P$ to the point $Q$.</td></tr></tbody></table><br />So we generalize this formula: for the unit 3-sphere, given as $x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1$, we have a stereographic projection $P$ of all of its points except the point $(0,0,0,1)$, to $\mathbb{R}^3$, as follows:<br />\[<br />P \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4\end{pmatrix} =\frac 1 {1-x_4}\begin{pmatrix} x_1\\ x_2 \\ x_3\end{pmatrix}.<br />\]<br />In order to visualize our Clifford Tori, then, we recall the parametrization derived above:<br />\[<br />\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4\end{pmatrix} = F\begin{pmatrix}\varphi \\ \alpha \\ \beta\end{pmatrix} = \begin{pmatrix} \cos \alpha \cos \varphi \\ \sin\alpha\cos\varphi \\ \cos\beta\sin\varphi \\ \sin\beta\sin\varphi \end{pmatrix}.<br />\]<br />The two circles that we want are when $\alpha$ and $\beta$ vary, so if we select a few values of $\varphi$ and draw the surface by the parametrization with the remaining variables, we 
will get different tori for each value of $\varphi$. <i>But</i>, of course, this still gives the torus as being in $\mathbb R^4$. So we compose this with the stereographic projection: for fixed $\varphi$, and letting $\alpha, \beta$ vary, we consider parametrizations<br />\[<br />\Phi \begin{pmatrix} \varphi \\ \alpha \\ \beta \end{pmatrix} = P \circ F\begin{pmatrix} \varphi \\ \alpha \\ \beta \end{pmatrix} = \frac{1}{1-\sin\beta\sin\varphi} \begin{pmatrix} \cos \alpha \cos \varphi \\ \sin\alpha\cos\varphi \\ \cos\beta\sin\varphi \end{pmatrix}.<br />\]<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-8QyCR4wkr2o/UHxaRDhtywI/AAAAAAAABrM/Ke2Q-NabjpY/s1600/clifford%2Btori.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="248" src="http://3.bp.blogspot.com/-8QyCR4wkr2o/UHxaRDhtywI/AAAAAAAABrM/Ke2Q-NabjpY/s400/clifford%2Btori.jpg" width="400" /></a></div><br /><br />These are the tori given in the <a href="http://nestedtori.blogspot.com/2012/10/welcome-to-nested-tori.html">opening post</a> (the tori are deliberately shown incomplete, with $\beta$ not going full circle, so that you can see how they change with different $\varphi$ and that they are <i>nested</i>). Note that this transformation does not preserve sizes, so even though we said that "one circle gets bigger as the other gets smaller", this is not what happens in the projection. Think of a sphere with circles of latitude; they grow from one pole to the equator and shrink back down from the equator to the other pole, but their stereographic projections to the plane just keep growing. The other view of this, and the site's logo, is simply a cutaway view of the stereographic projections by one vertical plane. 
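For the curious, the whole pipeline $\Phi = P \circ F$ takes only a few lines to evaluate on a grid. Here is a Python/NumPy sketch (the names and grid choices are mine; the original images were not produced with this code), which also double-checks that $F$ really lands on the unit $3$-sphere:

```python
import numpy as np

def F(phi, alpha, beta):
    """Parametrization of the unit 3-sphere by (phi, alpha, beta)."""
    return np.stack([np.cos(alpha)*np.cos(phi), np.sin(alpha)*np.cos(phi),
                     np.cos(beta)*np.sin(phi),  np.sin(beta)*np.sin(phi)])

def P(x):
    """Stereographic projection from (0, 0, 0, 1) to R^3."""
    return x[:3] / (1 - x[3])

# beta deliberately stops short of a full circle, as in the figure,
# so the nesting is visible.
alpha, beta = np.meshgrid(np.linspace(0, 2*np.pi, 60),
                          np.linspace(0, 1.75*np.pi, 60))

for phi in (0.2, 0.5, 0.8, 1.1, 1.4):     # one Clifford torus per phi
    X = F(phi, alpha, beta)
    assert np.allclose((X**2).sum(axis=0), 1.0)   # F lands on S^3
    torus = P(X)    # a 3 x 60 x 60 array of points on one nested torus
```

Each `torus` array can be handed to a surface plotter; stacking the five of them reproduces the nested picture above.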
As a note of thanks bridging the years, one of the sources of inspiration for learning about Clifford tori and their visualizations has come from Ivars Peterson's <i><a href="https://books.google.com/books?id=BhkrN0vL9aYC&source=gbs_book_other_versions">The Mathematical Tourist</a></i>.<br /><br />We should finally note that this is <i>not</i> the same parametrization as our usual <a href="http://nestedtori.blogspot.com/2012/10/fun-equations-of-torus-fenics-part-2.html">torus of revolution</a>. Parametrizations are not unique. The Clifford tori have many interesting, topologically important properties that will certainly be fodder for future posts.<br /><h4></h4></div>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-87312901210873740932015-08-29T10:00:00.000-07:002015-08-29T10:00:02.725-07:00Double Pendulum Concluded (Part 5): Discretization<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-cpWrB5wE_8A/VeDgcfCmOtI/AAAAAAAADAM/dZpABeV1sJU/s1600/final-frame.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="266" src="http://4.bp.blogspot.com/-cpWrB5wE_8A/VeDgcfCmOtI/AAAAAAAADAM/dZpABeV1sJU/s400/final-frame.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Path traced out in state space by a double pendulum</td></tr></tbody></table>In this final part, we discretize the equations for the double pendulum, covered in Parts <a href="http://nestedtori.blogspot.com/2015/08/the-configuration-space-of-double.html">1</a>, <a href="http://nestedtori.blogspot.com/2015/08/lagrangian-mechanics-is-awesome-double.html">2</a>, <a href="http://nestedtori.blogspot.com/2015/08/the-euler-lagrange-equations-for-double.html">3</a>, and <a 
href="http://nestedtori.blogspot.com/2015/08/hamiltonian-mechanics-is-awesome-double.html">4</a>. As frequently noted, for this problem, we are in favor of <a href="http://nestedtori.blogspot.com/2012/11/hyperbolic-equations-and-symplectic.html">symplectic methods</a>, because qualitatively, the algorithms have very good long-time energy behavior, and you don't get nonsense like the pendulum speeding up uncontrollably. And in order to do this, we rephrased in terms of Hamiltonian mechanics in <a href="http://nestedtori.blogspot.com/2015/08/hamiltonian-mechanics-is-awesome-double.html">Part 4</a>, from which we recall (setting $\mathbf{q} = (\theta_1,\theta_2)$):<br />\[<br />\dot{\mathbf{q}} = I^{-1} \mathbf{p}<br />\]\[<br />\dot{\mathbf{p}} = \frac{1}{2} \dot{\mathbf{q}}^t \frac{\partial I}{\partial \mathbf{q}}\dot{\mathbf{q}} - \frac{\partial V}{\partial\mathbf q}<br />\]<br />which, for this system, is<br />\[<br />\dot{p}_1 = \frac{1}{2}\dot{\mathbf{q}}^t \begin{pmatrix} 0 & m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) \\ m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) &0\end{pmatrix} \dot{\mathbf{q}} - (m_1 + m_2) g\ell_1 \sin \theta_1<br />\]\[<br />\dot{p}_2 = \frac{1}{2}\dot{\mathbf{q}}^t \begin{pmatrix} 0 & -m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) \\ -m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) &0\end{pmatrix} \dot{\mathbf{q}} - m_2 g\ell_2 \sin \theta_2,<br />\]<br />with the symmetric moment-of-inertia matrix<br />\[<br />I = \begin{pmatrix} (m_1 +m_2) \ell_1^2 & m_2\ell_1\ell_2\cos(\theta_2-\theta_1) \\ m_2 \ell_1 \ell_2 \cos(\theta_2 -\theta_1) & m_2 \ell_2^2\end{pmatrix}.<br />\]<br />We use a one-step, first-order method to simulate (with time step $h$). 
The usual thing to do is to consider the derivatives $\dot{\mathbf{q}}$ and $\dot{\mathbf{p}}$ as difference quotients, and solve for the later time step (denoted with the superscript $n+1$) in terms of the earlier (superscript $n$):<br />\[<br />\frac{\mathbf{q}^{n+1} - \mathbf{q}^n}{h} = I^{-1} \mathbf{p}<br />\]and\[<br />\frac{\mathbf{p}^{n+1} - \mathbf{p}^n}{h} = \frac{1}{2} \mathbf{p}^t I^{-1} \frac{\partial I}{\partial \mathbf{q}} I^{-1} \mathbf{p} - \frac{\partial V}{\partial \mathbf{q}}.<br />\]<br />Now of course, the subtlety is how to evaluate the quantities on the right-hand side; since now we don't assume that we have direct access to $\dot{\mathbf q}$, we have re-expressed it in terms of $\mathbf{p}$ in the second equation. Also, since $\partial I/\partial{\mathbf{q}}$ is some higher-order tensor, the notation becomes very confusing, very fast. If we write<br />\[<br />B = I^{-1}\frac{\partial I}{\partial \theta_1}I^{-1},<br />\] then the corresponding expression for $\theta_2$ is $-B$, since, as we saw, $I$ only depends on $\theta_2 -\theta_1$, so $\partial I/\partial \theta_2 = -\partial I/\partial \theta_1$. 
Therefore, we rewrite it as<br />\[<br />\frac{\mathbf{p}^{n+1} - \mathbf{p}^n}{h} = \frac{1}{2}\begin{pmatrix} \mathbf{p}^t B \mathbf{p} \\ -\mathbf{p}^t B \mathbf{p} \end{pmatrix}- \frac{\partial V}{\partial \mathbf{q}}.<br />\]<br />The Euler-B method uses $\mathbf{p}^{n+1}$ and $\mathbf{q}^n$ on the right-hand side (we choose it instead of Euler-A, which would use $\mathbf{q}^{n+1}$ and $\mathbf{p}^n$, because the moment-of-inertia tensor $I$ depends nonlinearly on $\mathbf{q}$ and makes the resultant equations harder to solve—we'll see where that comes in, in a little bit).<br /><br />Once we solve for $\mathbf{p}^{n+1}$, then it is clear how to solve for $\mathbf{q}^{n+1}$ (noting we evaluate $I$ at $\mathbf{q}^n$):<br />\[<br />\mathbf{q}^{n+1} = \mathbf{q}^n + h I^{-1} \mathbf{p}^{n+1}.<br />\]<br />Now $\mathbf{p}^{n+1}$ is harder:<br />\[<br />\mathbf{p}^{n+1} = \mathbf{p}^n + h \left( \frac{1}{2} \begin{pmatrix} (\mathbf{p}^{n+1})^t B \mathbf{p}^{n+1} \\-(\mathbf{p}^{n+1})^t B \mathbf{p}^{n+1}\end{pmatrix}- \frac{\partial V}{\partial \mathbf{q}}\right)<br />\]<br />where, again, the $B$ and $\partial V/\partial{\mathbf{q}}$ are evaluated at $\mathbf{q}^n$. The trouble is, of course, that $\mathbf{p}^{n+1}$ appears on both sides, and it's a nonlinear function of $\mathbf{p}^{n+1}$ (only a quadratic nonlinearity, but nonlinear nonetheless). How do we solve this? By...<br /><br /><h4>Using Newton's Method.</h4>To clear the clutter, we let $\mathbf{x}$ be the unknown, namely $\mathbf{p}^{n+1}$, and simply write $\mathbf{p} = \mathbf{p}^n$ as known given data. 
Thus, bringing all of that junk to one side of the equation, we want to solve $F(\mathbf{x}) = 0$ where $F$ is:<br />\[<br />F(\mathbf{x}) = \mathbf{x} - \mathbf{p} - h\left( \frac{1}{2} \mathbf{x}^t B \mathbf{x}\begin{pmatrix} 1 \\ - 1\end{pmatrix}- \frac{\partial V}{\partial \mathbf{q}}\right).<br />\]<br /><br />Newton's method is to consider the iteration $\mathbf{x}_{m+1} = \mathbf{x}_m - F'(\mathbf{x}_m)^{-1}F(\mathbf{x}_m)$. Now<br />\[<br />F'(\mathbf{x}) = \mathrm{Id} + h \begin{pmatrix} -\mathbf{x}^t B \\ \mathbf{x}^t B \end{pmatrix}<br />\]<br />where $\mathrm{Id}$ is the identity matrix. This, for small enough $h$, is certainly invertible. In fact, the Newton iterations tend to be stable simply because the presence of an identity matrix with small $h$ makes the problem well-conditioned. And as a starting point for Newton's method, we can use $\mathbf{x}_0 = \mathbf{p} = \mathbf{p}^n$, the value of $\mathbf{p}$ at the current timestep; it should be reasonably close, because it is the smooth solution to a differential equation defined by a smooth vector field. Finally, we should stop the Newton iterations when the residual $F(\mathbf{x}_m)$ falls within a small tolerance, say, $10^{-7}$; we then simply accept $\mathbf{x}_m$ as the new value $\mathbf{p}^{n+1}$. 
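Putting the whole update together (the implicit Newton solve for $\mathbf{p}^{n+1}$, then the explicit $\mathbf{q}$ update), here is a self-contained Python sketch of one Euler-B step; this is my own translation with made-up sample parameters, and the MATLAB from the original post follows below.

```python
import math

def solve2(M, b):
    # Solve the 2x2 linear system M x = b by Cramer's rule.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def euler_b_step(q, p, h, m1, m2, r1, r2, g=9.81, tol=1e-10, itmax=20):
    # One Euler-B step: Newton-solve for p^{n+1} with I, B, and dV/dq
    # all frozen at q^n, then update q explicitly.
    mm = m1 + m2
    c, s = math.cos(q[0] - q[1]), math.sin(q[0] - q[1])
    A = [[mm * r1 ** 2, m2 * r1 * r2 * c],
         [m2 * r1 * r2 * c, m2 * r2 ** 2]]            # the matrix I
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    C = [[0.0, -m2 * r1 * r2 * s],
         [-m2 * r1 * r2 * s, 0.0]]                    # dI/dtheta_1
    # B = -I^{-1} (dI/dtheta_1) I^{-1}, the derivative of the inverse matrix
    CA = [[sum(C[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    B = [[-sum(Ainv[i][k] * CA[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    dVdq = [mm * g * r1 * math.sin(q[0]), m2 * g * r2 * math.sin(q[1])]
    x = list(p)                                       # start Newton at p^n
    for _ in range(itmax):
        xBx = sum(x[i] * B[i][j] * x[j] for i in range(2) for j in range(2))
        F = [x[0] - p[0] + h * (0.5 * xBx + dVdq[0]),
             x[1] - p[1] + h * (-0.5 * xBx + dVdq[1])]
        if max(abs(F[0]), abs(F[1])) < tol:
            break
        xB = [x[0] * B[0][0] + x[1] * B[1][0],        # row vector x^T B
              x[0] * B[0][1] + x[1] * B[1][1]]
        DF = [[1.0 + h * xB[0], h * xB[1]],           # Jacobian F'(x)
              [-h * xB[0], 1.0 - h * xB[1]]]
        dx = solve2(DF, F)
        x = [x[0] - dx[0], x[1] - dx[1]]
    qdot = [Ainv[0][0] * x[0] + Ainv[0][1] * x[1],
            Ainv[1][0] * x[0] + Ainv[1][1] * x[1]]
    return [q[0] + h * qdot[0], q[1] + h * qdot[1]], x

def energy(q, p, m1, m2, r1, r2, g=9.81):
    # Hamiltonian H = (1/2) p^T I^{-1} p + V, for monitoring long runs.
    mm = m1 + m2
    c = math.cos(q[0] - q[1])
    det = mm * r1 ** 2 * m2 * r2 ** 2 - (m2 * r1 * r2 * c) ** 2
    T = 0.5 * (m2 * r2 ** 2 * p[0] ** 2
               - 2.0 * m2 * r1 * r2 * c * p[0] * p[1]
               + mm * r1 ** 2 * p[1] ** 2) / det
    return T - mm * g * r1 * math.cos(q[0]) - m2 * g * r2 * math.cos(q[1])
```

Because the scheme is symplectic, the energy computed along the discrete trajectory oscillates but does not drift, which is exactly the qualitative behavior we were after.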
Here's how to say that in MATLAB (here <code>mm</code> stands for $m_1+m_2$, <code>r1</code> and <code>r2</code> are the rod lengths $\ell_1$ and $\ell_2$, and <code>itmax</code> caps the number of Newton iterations):<br /><br /><code> A = [ mm*r1^2, m2*r1*r2*cos(q(1)-q(2));</code><br /><code> m2*r1*r2*cos(q(1)-q(2)), m2*r2^2]; % matrix I</code><br /><code> C = [0 , -m2*r1*r2*sin(q(1)-q(2)) ;</code><br /><code> -m2*r1*r2*sin(q(1)-q(2)), 0]; % matrix dI/dq</code><br /><code> B = -A\(C/A); % derivative of inverse matrix</code><br /><code> % -I^-1 dI/dq I^-1</code><br /><code> dVdq = [mm*g*r1*sin(q(1)); m2*g*r2*sin(q(2))];</code><br /><code><br /></code><code> x = p; % initialize Newton iteration</code><br /><code> F = [1.0; 1.0]; % set initially to something that'll give a big norm</code><br /><code> iter = 0;</code><br /><code> while (norm(F) > 1.0e-7 && iter <= itmax)</code><br /><code> F = x - p + h *(1/2* [x'*B*x; -x'*B*x] + dVdq); % current F</code><br /><code> DF = eye(2) + h*([x'*B; -x'*B]); % Jacobian</code><br /><code> x = x - DF\F; % update iteration</code><br /><code> iter = iter+1;</code><br /><code> end</code><br /><code> p = x; % update p</code><br /><code> q = q + h* (A\p); % update q, using updated p</code><br /><br />Of course, we should save up the values of $q$ as a history if we want to see where it's been in state space. Here's a video of just that: we save the values of $q$ and plot them as a trail following the current point. 
But this time we show the angles as being on the abstract phase space, a torus, rather than as the actual angular displacements of a pendulum (as shown in <a href="http://nestedtori.blogspot.com/2015/08/the-configuration-space-of-double.html">Part 1)</a>:<br /><br /><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dxWv_xFVdsIAeOpTNf-7nb1K4n-eASB-rkIqTPHCeO9uo512RQPhiRZAzXVEj9amADBozptyQ_ff35s5cMO1w' class='b-hbp-video b-uploaded' frameborder='0' /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">The picture that opened this post is simply the final frame of the movie, showing everywhere in state space where the pendulum has been. It is this abstract representation of a physical system that leads to intriguing visualizations! The idea is, hopefully, in some other physical system, the state space of the variables might more clearly point to interesting features of the system. We shall frequently keep this philosophy in mind in our posts!</div>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-10521839165215190222015-08-26T16:20:00.000-07:002016-05-19T02:49:32.132-07:00Hamiltonian Mechanics is Awesome (Double Pendulum, Part 4)In this post, we work toward discretizing the equations for (and simulating) the double pendulum, considered in Parts <a href="http://nestedtori.blogspot.com/2015/08/the-configuration-space-of-double.html">1</a>, <a href="http://nestedtori.blogspot.com/2015/08/lagrangian-mechanics-is-awesome-double.html">2</a>, and <a href="http://nestedtori.blogspot.com/2015/08/the-euler-lagrange-equations-for-double.html">3</a> of this series. 
It turns out that one very important qualitative behavior that we want is for the total energy to be conserved (one simulation I ran of this, directly discretizing the equations for $\ddot{\boldsymbol\theta}$, eventually ran amok over time, getting faster and faster, clearly something that would not occur in a real system). Thus, one place to look is symplectic methods, which we've <a href="http://nestedtori.blogspot.com/2012/11/hyperbolic-equations-and-symplectic.html">remarked on before</a>. However, at least with the symplectic methods I've learned, they seem more conceptually geared toward solving differential equations involving angular position and <i>momentum</i>, rather than angular position and <i>velocity</i>. This is not strictly true, as I <i>have </i>seen symplectic methods at work with position and velocity. However, this might be an artifact of the relation between velocity and momentum often being very trivial (e.g., just multiplying by some mass). In any case, the position-and-momentum viewpoint is more than important enough to be worth diving into now. This is referred to as <i>Hamiltonian mechanics.</i> We look at $H = T+V$ instead of $L = T-V$ and consider our variables as generalized position and momentum instead of generalized position and velocity (more generally, Hamiltonian mechanics can be derived from Lagrangian mechanics via the <i>Legendre transform</i>).<br /><br />In fact, most basic physics courses tend to focus on the sum of kinetic and potential energies, saying energy is "converted" between these two forms. They just never used the word "Hamiltonian." It's not clear, at first, what the difference between the two approaches is, especially with simple examples where it's pretty trivial to go from one to the other. In mathematical or veteran Nested Tori reader terms, the difference is that of being equations on the tangent bundle vs. the cotangent bundle (called <i>phase space</i>). 
Momentum takes a life of its own in more complicated systems (and in fact, in quantum mechanics, it's not clear what something can even be the velocity <i>of, </i>so something called "momentum" crops up as its own fundamental quantity without reference to the velocity of anything).<br /><br />So, what is this mystical momentum for our system? We've already calculated something which we called "angular momentum" for the double pendulum: $\mathbf{p} = \partial L/\partial \dot{\boldsymbol \theta}$, the partial "gradient" vector with respect to the angular velocities. This is what it is generally. (As a check, we note for a single particle of mass $m$ and kinetic energy $\frac 1 2 m v^2$, $\partial L/\partial \dot{\mathbf{x}}=\partial L/\partial {\mathbf{v}}$ is indeed the usual $m\mathbf{v}$.) For our example, we derived<br />\[<br />\mathbf{p} = \frac{\partial L}{\partial \dot{\boldsymbol{ \theta}}} = \begin{pmatrix} (m_1 +m_2) \ell_1^2 & m_2\ell_1\ell_2\cos(\theta_2-\theta_1) \\ m_2 \ell_1 \ell_2 \cos(\theta_2 -\theta_1) & m_2 \ell_2^2\end{pmatrix} \begin{pmatrix}\dot \theta_1 \\ \dot\theta_2 \end{pmatrix} = I\dot{\boldsymbol{\theta}}<br />\]<br />That's <i>almost</i> something like the case $\mathbf{p} = m\mathbf{v}$ for a single particle, with the rotational analogues of mass and velocity, though the "mass" here is much more complicated (depending on the positions). In particular, as a vector, it doesn't have to point in the same direction as the velocity, unlike in the particle case (that misalignment is also the cause of washing machines thumping with an unbalanced load). 
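As a numerical sanity check that $\mathbf{p} = \partial L/\partial\dot{\boldsymbol\theta}$ really is $I\dot{\boldsymbol\theta}$, we can finite-difference the Lagrangian from Part 2 directly (a Python sketch; the masses and lengths are made-up sample values, not from the original post):

```python
import math

M1, M2, L1, L2, G = 1.0, 1.5, 0.8, 1.2, 9.81   # sample parameters

def lagrangian(t1, t2, w1, w2):
    # L = T - V for the double pendulum, with w_i standing for theta_i-dot.
    T = (0.5 * (M1 + M2) * L1 ** 2 * w1 ** 2
         + 0.5 * M2 * (2 * w1 * w2 * L1 * L2 * math.cos(t2 - t1)
                       + L2 ** 2 * w2 ** 2))
    V = -(M1 + M2) * G * L1 * math.cos(t1) - M2 * G * L2 * math.cos(t2)
    return T - V

def momentum(t1, t2, w1, w2):
    # p = I(theta) theta-dot, the closed form displayed above.
    c = M2 * L1 * L2 * math.cos(t2 - t1)
    return [(M1 + M2) * L1 ** 2 * w1 + c * w2,
            M2 * L2 ** 2 * w2 + c * w1]
```

Central differences in $\dot\theta_1$ and $\dot\theta_2$ of `lagrangian` should reproduce the two components of `momentum` to machine precision, since $L$ is quadratic in the velocities.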
With this in mind, we rewrite the kinetic energy in terms of $\mathbf{p}$ as follows: assuming the matrix $I$ is invertible, we write $\dot{\boldsymbol\theta} = I^{-1} \mathbf{p}$, substitute in $T = \frac 1 2 \dot{\boldsymbol{\theta}}^t I \dot{\boldsymbol{\theta}}$, and write $\mathbf{q} = \boldsymbol{\theta}$ (the usual notation), to get<br />\[<br />H(\mathbf{q},\mathbf{p}) = T+V= \frac{1}{2} (\mathbf{p}^t I^{-1}) I (I^{-1} \mathbf{p}) + V(\mathbf q) =\frac{1}{2} \mathbf{p}^t I^{-1} \mathbf{p} + V(\mathbf q).<br />\]<br /><br /><h4>Hamilton's Equations</h4><div>Now that we have derived $H$, the <i>Hamiltonian</i>, in order to relate it to our actual equations of motion, we have <i>Hamilton's equations</i>:<br /> \[<br /> \dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}}<br /> \] \[<br />\dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}<br />\]<br />which are very nice: $\mathbf{p}$ and $\mathbf{q}$ are almost "dual" to each other in some sense. There's just that extra minus sign; this can be remembered by recalling that $H$ includes potential energy and thus differentiating with respect to $\mathbf{q}$ gives the gradient. But force is conventionally given as the <i>negative</i> of the gradient of the potential in physics. We thus find, plugging in the above:<br />\[<br />\dot{\mathbf{q}} = I^{-1} \mathbf{p}<br />\]<br />and<br />\[<br />\dot{\mathbf{p}} = -\frac 1 2 \mathbf{p}^t \frac{\partial I^{-1}}{\partial \mathbf{q}} \mathbf{p} -\frac{\partial V}{\partial \mathbf{q}}.<br />\]<br />The part that really complicates things and makes this different from the simpler examples is that the moment of inertia $I$ is dependent on $\mathbf{q}$, so we can't ignore that term when differentiating with respect to $\mathbf{q}$ (it is often just the $-\partial V/\partial\mathbf{q}$ term by itself). How do we calculate that derivative of $I^{-1}$? 
We step back down to components:<br />\[<br />\dot{p}_i = -\frac 1 2 \mathbf{p}^t \frac{\partial I^{-1}}{\partial \theta_i} \mathbf{p} -\frac{\partial V}{\partial \theta_i}= +\frac 1 2 \mathbf{p}^t I^{-1}\frac{\partial I}{\partial \theta_i}I^{-1} \mathbf{p} -\frac{\partial V}{\partial \theta_i}<br />\] \[<br />= \frac{1}{2} \dot{\mathbf q}^t \frac{\partial I}{\partial \theta_i}\dot{\mathbf q}-\frac{\partial V}{\partial \theta_i}<br />\]<br />where we have employed the identity that the derivative of the inverse of a matrix is always multiplication, by the inverse on both sides, of the derivative of the matrix (and a sign reversal). Let's write out what all of this is. We note that $\partial I/\partial \theta_1 = - \partial I/\partial \theta_2$, because the only way the angles appear in $I$ is through the difference $\theta_2 - \theta_1$. Thus, we have<br />\[<br />\dot {\mathbf q} = I^{-1} \mathbf{p},<br />\]<br />and using that as an intermediate variable (because to write it all out explicitly is a real mess, and we will need to use it when discretizing it, anyway):<br />\[<br />\dot{p}_1 = \frac{1}{2}\dot{\mathbf{q}}^t \begin{pmatrix} 0 & m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) \\ m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) &0\end{pmatrix} \dot{\mathbf{q}} - (m_1 + m_2) g\ell_1 \sin \theta_1<br />\] \[<br />\dot{p}_2 = \frac{1}{2}\dot{\mathbf{q}}^t \begin{pmatrix} 0 & -m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) \\ -m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) &0\end{pmatrix} \dot{\mathbf{q}} - m_2 g\ell_2 \sin \theta_2.<br />\]<br />In the final part, we discretize these equations. 
That'll have to wait for next time.</div>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-77103014896205121002015-08-24T11:30:00.000-07:002016-05-19T02:49:16.476-07:00The Euler-Lagrange equations for the Double Pendulum (Config Spaces, Part 3)In this post, continuing the explorations of the double pendulum (see <a href="http://nestedtori.blogspot.com/2015/08/the-configuration-space-of-double.html">Part 1</a> and <a href="http://nestedtori.blogspot.com/2015/08/lagrangian-mechanics-is-awesome-double.html">Part 2</a>) we concentrate on deriving its equation of motion (the <i>Euler-Lagrange equation</i>). These differential equations are the heart of Lagrangian mechanics, and indeed really what one tries to get to when applying the methods (it's essentially a way of getting Newton's 2nd Law for complicated systems), and a big part of why it is awesome. This is, like the previous part, going to be more technical than visual, but we'll get back to that in Part 4.<br /><br />Recall from <a href="http://nestedtori.blogspot.com/2015/08/lagrangian-mechanics-is-awesome-double.html">Part 2</a> that for the double pendulum,<br />\[<br />L(\theta,\dot \theta) = T - V = \frac 1 2 (m_1+m_2) \ell_1^2 \dot{\theta}_1^2 +\frac 1 2 m_2\left( 2 \dot{\theta}_1 \dot{\theta}_2\ell_1 \ell_2 \cos(\theta_2 - \theta_1) + \ell_2^2 \dot{\theta}_2^2\right)\] \[+ (m_1+m_2) g \ell_1 \cos \theta_1 + m_2 g \ell_2 \cos \theta_2,<br />\]<br />where $\theta_1$, $\theta_2$ are the angles from vertical of the two pendulums, $\dot \theta_1$, $\dot \theta_2$ are their angular speeds, $\ell_1$, $\ell_2$ are the lengths of the rods, and $m_1$, $m_2$ are their masses. This looks very imposing, so let's remark a little on the structure of that equation. 
In simpler situations, one often sees different squared speed terms, but here we have the <i>interaction</i> of the two parts of the system, in terms with products like $\dot \theta_1 \dot \theta_2$, as well as the $\cos(\theta_2 - \theta_1)$. This distinguishes the system from two completely isolated (uncoupled) pendulums; the Lagrangian already captures the aspect that energy gets transferred back and forth between the two pendulums.<br /><br />The equations of motion are derived from considering an <i>optimization problem</i>, namely, what <i>paths $\gamma$ through state space</i> connecting two states optimize the <i>action integral</i><br />\[<br />\int_a^b L(\gamma(t),\dot\gamma(t)) dt.<br />\]<br />In a process quite similar to what's done in first-year calculus, we "take a derivative and set it to zero", which derives the equations<br />\[<br />\frac{\partial L}{\partial \theta_i} - \frac d {dt} \frac{\partial L}{\partial \dot \theta_i} = 0<br />\]<br />where $\partial L/\partial \dot \theta_i$ means the partial derivative with respect to $\dot \theta_i$ <i>as if it were</i> an independent variable. 
In fact, one way is to start off with a formula involving two independent variables $\theta$ and $\omega$; only after solving for the extremal path do we actually find $\omega = \dot \theta$ (this is a generalization called the Hamilton-Pontryagin principle).<br /><br /><h4>$\partial L/\partial \dot {\boldsymbol \theta}$, angular momentum, and the moment of inertia</h4>Let's now do the computation $\frac{\partial L}{\partial \dot \theta_1}$ (this quantity actually is the <i>angular momentum</i> of the first pendulum, $p_1$)—treating everything except $\dot \theta_1$ as a constant:<br />\[<br />p_1 = \frac{\partial L}{\partial \dot \theta_1} = (m_1 + m_2) \ell_1^2 \dot \theta_1 + m_2 \ell_1\ell_2<br /> \cos(\theta_2-\theta_1)\dot \theta_2 .<br />\]<br />Note that unlike the more basic examples of angular momentum, this isn't simply dependent only on the (angular) velocity $\dot \theta_1$, but also on $\dot \theta_2$ and nonlinearly on $\theta_1$ and $\theta_2$. Similarly,<br />\[<br />p_2 = \frac{\partial L}{\partial \dot \theta_2} = m_2 \ell_2^2 \dot \theta_2 + m_2\ell_1\ell_2 \cos(\theta_2 - \theta_1) \dot \theta_1.<br />\]<br />This is more compactly expressed as<br />\[<br />\frac{\partial L}{\partial \dot{\boldsymbol{\theta}}}=\begin{pmatrix}p_1\\p_2\end{pmatrix} = \begin{pmatrix} (m_1 +m_2) \ell_1^2 & m_2\ell_1\ell_2\cos(\theta_2-\theta_1) \\ m_2 \ell_1 \ell_2 \cos(\theta_2 -\theta_1) & m_2 \ell_2^2\end{pmatrix} \begin{pmatrix}\dot \theta_1 \\ \dot\theta_2 \end{pmatrix}.<br />\]<br />This helps keep the unruliness under one umbrella. The square matrix in the above will be important, so let's call it $I$ (it is something like a <i>moment of inertia</i> for our system—but it's more complicated, principally because there is an interaction between its two component parts... It's not even a rigid body!). 
It should also be noted that the kinetic energy is $T = \frac 1 2 \dot{\boldsymbol{\theta}}^t I \dot{\boldsymbol \theta}$, so it still is quadratic in $\dot {\boldsymbol \theta}$, just like good ol' $\frac{1}{2} mv^2$.<br /><br /><h4>The forces, $\partial L/\partial \boldsymbol{\theta}$</h4><div>Due to the nonlinear dependence of the Lagrangian on the angular variables, the derivatives are trickier. But we forge on, because sticking to just the simplest examples won't help us gain as much insight into how things are used in practice. We have<br />\[<br />\frac{\partial L}{\partial \theta_1} = m_2 \ell_1 \ell_2 \dot \theta_1 \dot \theta_2 \sin(\theta_2-\theta_1) - (m_1 + m_2) g \ell_1 \sin \theta_1<br />\]</div><div>and<br />\[<br />\frac{\partial L}{\partial \theta_2} = -m_2 \ell_1 \ell_2 \dot \theta_1 \dot \theta_2 \sin(\theta_2-\theta_1) - m_2 g \ell_2 \sin \theta_2.<br />\]</div><br /><h4>Putting it together</h4><div>Now we take the time derivatives of the momenta we derived above and subtract and set to zero to derive the Euler-Lagrange equations. 
That's straightforward calculus, though different from simpler examples in that, of course, the chain rule must be applied to the $\cos(\theta_2 - \theta_1)$, so there will be an extra $\dot \theta_1$ or $\dot \theta_2$ term:</div><div>\[<br />\frac{d}{dt} \frac{\partial L}{\partial \dot{\boldsymbol{\theta}}} = \begin{pmatrix} 0 & -m_2\ell_1\ell_2\sin(\theta_2-\theta_1)(\dot \theta_2 - \dot \theta_1) \\ -m_2 \ell_1 \ell_2 \sin(\theta_2 -\theta_1)(\dot \theta_2 - \dot \theta_1) & 0\end{pmatrix} \begin{pmatrix}\dot \theta_1 \\ \dot\theta_2 \end{pmatrix}\]<br />\[+\begin{pmatrix} (m_1 +m_2) \ell_1^2 & m_2\ell_1\ell_2\cos(\theta_2-\theta_1) \\ m_2 \ell_1 \ell_2 \cos(\theta_2 -\theta_1) & m_2 \ell_2^2\end{pmatrix} \begin{pmatrix}\ddot \theta_1 \\ \ddot\theta_2 \end{pmatrix}.<br />\]<br />The nice thing about it is that when it is subtracted from $\partial L/\partial \boldsymbol \theta$, the cross terms with the $\dot \theta_1 \dot\theta_2$ cancel. Also, the matrix multiplying the $\ddot{\theta}_i$'s is none other than our moment-of-inertia matrix $I$. This finally gives us<br /><br />\[<br />0=\frac{\partial L}{\partial \boldsymbol\theta} - \frac{d}{dt} \frac{\partial L}{\partial \dot{\boldsymbol\theta}} =\begin{pmatrix}-(m_1+m_2) g \ell_1 \sin \theta_1 + m_2 \ell_1\ell_2 \sin(\theta_2-\theta_1)\dot{\theta}_2^2 \\ -m_2 g \ell_2 \sin \theta_2 -m_2 \ell_1\ell_2 \sin(\theta_2-\theta_1)\dot{\theta}_1^2 \end{pmatrix} -I \begin{pmatrix} \ddot \theta_1 \\ \ddot \theta_2 \end{pmatrix}<br />\]<br />as the Euler-Lagrange equations. 
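The sign bookkeeping in these cancellations is easy to fumble, so it is worth a numerical spot-check (a Python sketch with made-up sample parameters; not from the original post). For <em>any</em> smooth curve $\boldsymbol\theta(t)$, solution or not, the combination $\partial L/\partial\boldsymbol\theta - \frac{d}{dt}\partial L/\partial\dot{\boldsymbol\theta}$ must agree, as a pure calculus identity, with the quadratic-plus-gravity vector minus $I\ddot{\boldsymbol\theta}$; we can verify that by finite differences along an arbitrary test curve.

```python
import math

M1, M2, L1, L2, G = 1.0, 1.5, 0.8, 1.2, 9.81   # sample parameters

def lagrangian(t1, t2, w1, w2):
    # L = T - V, with w_i standing for theta_i-dot.
    T = (0.5 * (M1 + M2) * L1 ** 2 * w1 ** 2
         + 0.5 * M2 * (2 * w1 * w2 * L1 * L2 * math.cos(t2 - t1)
                       + L2 ** 2 * w2 ** 2))
    V = -(M1 + M2) * G * L1 * math.cos(t1) - M2 * G * L2 * math.cos(t2)
    return T - V

def momentum(t1, t2, w1, w2):
    # p = dL/d(theta-dot) = I(theta) theta-dot.
    c = M2 * L1 * L2 * math.cos(t2 - t1)
    return [(M1 + M2) * L1 ** 2 * w1 + c * w2,
            M2 * L2 ** 2 * w2 + c * w1]

def inertia(t1, t2):
    c = M2 * L1 * L2 * math.cos(t2 - t1)
    return [[(M1 + M2) * L1 ** 2, c], [c, M2 * L2 ** 2]]

def el_rhs(t1, t2, w1, w2):
    # Gravity terms plus the quadratic terms, i.e. the vector that
    # should equal I theta-ddot on solutions.
    s = M2 * L1 * L2 * math.sin(t2 - t1)
    return [-(M1 + M2) * G * L1 * math.sin(t1) + s * w2 ** 2,
            -M2 * G * L2 * math.sin(t2) - s * w1 ** 2]
```

The check below uses $\theta_1(t)=\sin t$, $\theta_2(t)=\tfrac12\cos t$, whose derivatives are known exactly, and differentiates `lagrangian` and `momentum` numerically.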
As the final equation of motion, we bring the $\ddot {\boldsymbol \theta}$ term to the other side:<br />\[<br />I \begin{pmatrix} \ddot \theta_1 \\ \ddot \theta_2 \end{pmatrix} = \begin{pmatrix}-(m_1+m_2) g \ell_1 \sin \theta_1 + m_2 \ell_1\ell_2 \sin(\theta_2-\theta_1)\dot{\theta}_2^2 \\ -m_2 g \ell_2 \sin \theta_2 -m_2 \ell_1\ell_2 \sin(\theta_2-\theta_1)\dot{\theta}_1^2 \end{pmatrix}<br />\]<br />(keep in mind that $I$ varies with $\theta_1$ and $\theta_2$, however). We discretize and solve this next time. For now, simply take note of the interesting nonlinear interactions: in the positions as the sine of the difference, and quadratic nonlinearity in the derivatives (note the reversal: the $\ddot\theta_1$ gets a $\dot \theta_2^2$ and vice versa).</div>Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-23438408109405574522015-08-21T12:12:00.000-07:002015-08-29T10:10:33.160-07:00Lagrangian Mechanics is Awesome (Double Pendulum, Part 2)In this post, we give some calculational details of the double pendulum (introduced in <a href="http://nestedtori.blogspot.com/2015/08/the-configuration-space-of-double.html">Part 1</a>). This derivation is available in several physics books at the undergraduate (upper division) level, usually fleshed out using Lagrangian mechanics, another relevant subject that is highly useful to conceptualizing these types of problems by putting them in a setting using notions of state spaces. More cool-looking demos (and also a streamlined derivation) are available in <a href="http://www.geometrictools.com/">David Eberly's excellent book, <i>Game Physics</i></a>. As a warning, this will be considerably more technical than visual; I post this because I feel that some insight into the workings of these things helps us see where geometry can be useful in contexts that are not just literally pictures. 
Nevertheless, here's the picture for the upcoming analysis:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-RWpp5rw7XNs/VdbbjjTl3MI/AAAAAAAAC4Y/ALbji-7kAik/s1600/Screen%2BShot%2B2015-08-21%2Bat%2B1.02.58%2BAM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-RWpp5rw7XNs/VdbbjjTl3MI/AAAAAAAAC4Y/ALbji-7kAik/s400/Screen%2BShot%2B2015-08-21%2Bat%2B1.02.58%2BAM.png" width="233" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">$\ell_i$ are the length of the pendulum rods, $m_i$ are the masses, and $\theta_i$ are their angular displacements from vertical.</td></tr></tbody></table><div style="text-align: left;">The use of Lagrangian mechanics spurs on the development of thinking in terms of state spaces by considering different kinds of coordinates that may be more convenient for the problem at hand. It is easier to think about the angular positions $\theta_1$ and $\theta_2$ of a pendulum than it is to derive it using <i>x</i>- and <i>y</i>-coordinates directly (and for our problem, physical constraints in the <i>x-</i> and <i>y-</i>coordinates, here made by assuming that the rods are rigid, make it somewhat less straightforward to deal with). Of course, we might <i>use</i> the <i>x</i>- and <i>y</i>-coordinates in parts of our derivation, but it's not so easy to <i>solve</i> for them.</div><br />Lagrangian mechanics also has us think of things in terms of <i>energy</i>, a quantity whose properties crop up in math a lot. Our goal in this post is to derive the Lagrangian <i>L</i> of the system. <i>L </i>is the difference between total kinetic energy <i>T </i>and total potential energy <i>V</i>. 
The total kinetic energy of our system should be the sum of that of the two particles,<br />\[<br />T = \frac 1 2 m_1 v_1^2 + \frac 1 2 m_2 v_2^2,<br />\]<br />and the potential energy (assuming that our pendulums are subject to downward acceleration <i>g</i>) depends only on the height of the two pendulum bobs:<br />\[<br />V = m_1 g y_1 + m_2 g y_2.<br />\]<br />However, if we want to express this in terms of the angular coordinates, we find, via the usual trigonometric arguments, that $y_1 = -\ell_1\cos \theta_1$ and since $y_2$ includes the height of the first pendulum, $y_2 = -\ell_1 \cos\theta_1 - \ell_2 \cos \theta_2$. In total, this means<br />\[<br />V = -(m_1+m_2) g \ell_1 \cos \theta_1 - m_2 g \ell_2 \cos \theta_2.<br />\]<br />Now the kinetic energy is a little trickier. Here, we assume that the first pendulum bob has linear velocity $\mathbf{v}_1$, and speed $|\mathbf{v}_1| = v_1$. Now, if we let the second pendulum bob have linear velocity $\mathbf{v}_{1,2}$ <i>relative</i> to the first, then the <i>total</i> velocity of the second bob is $\mathbf{v}_2 = \mathbf{v}_1 + \mathbf{v}_{1,2}$. The reason that $\mathbf{v}_{1,2}$ is useful is that it is calculated in exactly the same way as the first pendulum (because the second pendulum is, well, a pendulum).<br /><br />So the total (squared) speed of the second pendulum bob is $v_2^2= v_1^2 + 2 \mathbf{v}_1 \cdot \mathbf{v}_{1,2} + v_{1,2}^2$.<br /><br />Now, to figure out $\mathbf{v}_1 \cdot \mathbf{v}_{1,2}$, we use the <i>geometric</i> (length and angle) formulation of the dot product:<br />\[<br />\mathbf{v}_1 \cdot \mathbf{v}_{1,2} = v_1 v_{1,2} \cos \varphi<br />\]<br />where $\varphi$ is the angle between $\mathbf{v}_1$ and $\mathbf{v}_{1,2}$. But the angle between $\mathbf{v}_1$ and $\mathbf{v}_{1,2}$, since both of them are perpendicular to their respective rods, must be the same as the angle between the rods: $\varphi=\theta_2 - \theta_1$. 
Finally, to express the magnitudes $v_1$ and $v_{1,2}$, we use the relations $v_1 = \dot{\theta}_1 \ell_1$ and $v_{1,2} = \dot{\theta}_2 \ell_2$. This gives<br />\[<br />v_2^2 = \ell_1^2 \dot{\theta}_1^2 + 2 \dot{\theta}_1 \dot{\theta}_2\ell_1 \ell_2 \cos(\theta_2 - \theta_1) + \ell_2^2 \dot{\theta}_2^2<br />\]<br /><br />This finally gives<br />\[<br />T =\frac 1 2 (m_1+m_2) \ell_1^2 \dot{\theta}_1^2 +\frac 1 2 m_2\left( 2 \dot{\theta}_1 \dot{\theta}_2\ell_1 \ell_2 \cos(\theta_2 - \theta_1) + \ell_2^2 \dot{\theta}_2^2\right),<br />\]<br />or,<br />\[<br />L = T-V = \frac 1 2 (m_1+m_2) \ell_1^2 \dot{\theta}_1^2 +\frac 1 2 m_2\left( 2 \dot{\theta}_1 \dot{\theta}_2\ell_1 \ell_2 \cos(\theta_2 - \theta_1) + \ell_2^2 \dot{\theta}_2^2\right)\]\[+ (m_1+m_2) g \ell_1 \cos \theta_1 + m_2 g \ell_2 \cos \theta_2.<br />\]<br />(If you're reading this on a mobile device, please turn it to Landscape mode to see the full equations.) Actually <i>deriving and solving</i> the equations of motion for $\theta_1$ and $\theta_2$ will have to wait for another entry!<br /><br />As a final note, kudos to <a href="https://www.mathjax.org/">MathJax</a> which allows rendering of math formulas using $\LaTeX$!Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-48053954990253252752015-08-17T16:45:00.002-07:002015-08-29T10:10:58.988-07:00The Configuration Space of a Double Pendulum<div class="separator" style="clear: both; text-align: left;">One of the classic ways of understanding the torus is through anything that involves two circles, such as a system with two independent angular variables—roughly speaking, it is because the torus has two families of circles, one surrounding the tube, and another surrounding the hole. 
The classic example of such a system is a <i>double pendulum</i>, which is one pendulum hung off the pendulum bob of another.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dxBw-2jgDbtKUBByXJ7Nk7vC8s1hAUxj401s9ByD6Ngu2qguOPO7zKqg5ROVsGlVrLWrs81eb1PpN99r8orgg' class='b-hbp-video b-uploaded' frameborder='0' /></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: x-small;">Double pendulum: the path is traced by the second bob and fills a path within the annulus. If we imagine that the second circle goes into and out of the page instead, we can see it as a path on a torus.</span></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Of course, it is not immediately apparent why one would find it advantageous to consider a whole torus, as opposed to two distinct circles, other than for cool visualization purposes, so we should explain this. First, many interesting possibilities arise when the two angular variables are <i>not</i> independent: they bear some complicated relationship imposed by, say, physics, in this case. Not all the possibilities are realized, at least in a short amount of time; the system merely traces some path through this grand torus. The dynamics of the <i>interaction</i> is something that makes the system transcend merely "having two angles".</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">This notion of "where all the variables live together" is historically what led to the development of the theory of manifolds. 
Think back to <a href="http://nestedtori.blogspot.com/2015/08/parametric-pasta-relaunch-of-nested-tori.html">parametrizations</a> and <a href="http://nestedtori.blogspot.com/2012/10/fun-equations-of-torus-fenics-part-2.html">equations</a>, where variables that, initially, we know only to lie in some Euclidean space either get warped into more complicated spaces or, when constrained, are confined to funny subsets where, and only where, the constraint is satisfied. These <i>state spaces</i> (or <i>configuration spaces</i>) are everywhere, at least implicitly, in any kind of modeling of real-world objects and their dynamics.</div><div class="separator" style="clear: both; text-align: left;"><br /></div>This visualization was coded in MATLAB, using a <a href="http://nestedtori.blogspot.com/2012/11/hyperbolic-equations-and-symplectic.html">symplectic integrator</a> (in order to more accurately reflect the energy conservation). We will delve into the calculational specifics of this example in a future post.Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-76170206722886257632015-08-15T16:37:00.000-07:002016-04-11T13:46:36.750-07:00Parametric Pasta: Relaunch of Nested ToriToday we relaunch Nested Tori, and celebrate with a bit of history. Visualization has always run strong in my mathematical quests since declaring mathematics as my major at UCLA, and is why I really chose to take up the field of differential geometry, and go to graduate school at UCSD. Of course, that subject is far more abstract than simply what can be pictured, so in an effort to stay grounded in the motivations and keep it real, I took up some numerical analysis. That was a refreshing viewpoint, and in fact, what eventually enabled me to <a href="http://ccom.ucsd.edu/~ctiee/ctiee-thesis-extended.pdf" target="_blank">get that PhD finished</a>. Well, that, and other things. 
Suffice it to say, my journey has brought me to a <a href="http://ccom.ucsd.edu/~ctiee" target="_blank">very interesting intersection</a> of geometry, topology, real analysis, and numerical analysis (and despite a joke from a colleague, that intersection is not empty!).<br /><br />During that time, of course, I had to teach, and enjoyed showing several visualizations to, in essence, <i>breathe a soul</i> into some of the calculations we had our students doing. Not that it would be soulless without such pictures, but it was evident that such a soul became more "accessible" after some form of visualization. And one of the favorites, and in fact, apparently what I'm most famous for (according to Google), is my <i>Pasta Parametrization Quiz</i>: Match the following pasta (click to enlarge) [UPDATE: See <a href="http://www.nestedtori.com/2016/04/pasta-parametrization-high-resolution.html">here</a> for higher-res photos and more readable formulas]<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-J_c1UFs4tuM/Vc-8IqR-hLI/AAAAAAAAC1s/-6yqFWxS_Mw/s1600/pasta.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="253" src="https://2.bp.blogspot.com/-J_c1UFs4tuM/Vc-8IqR-hLI/AAAAAAAAC1s/-6yqFWxS_Mw/s320/pasta.png" width="320" /></a></div><br />to the following parametrizations ("equations", also click to enlarge):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-9TRweG6llsY/Vc-8dGAPGvI/AAAAAAAAC10/Mn6JawhRuJQ/s1600/pasta_param.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="206" src="https://1.bp.blogspot.com/-9TRweG6llsY/Vc-8dGAPGvI/AAAAAAAAC10/Mn6JawhRuJQ/s320/pasta_param.png" width="320" /></a></div><br />(the numbers are given by <a href="http://www.dececcousa.com/Pasta/" target="_blank">De Cecco</a>). 
Despite the <a href="http://www.nytimes.com/2012/01/10/science/pasta-inspires-scientists-to-use-their-noodle.html?_r=0">media</a> <a href="http://www.ams.org/news/math-in-the-media/02-2012-media" target="_blank">coverage</a>, no, I didn't <i>actually </i>administer it as a "pop quiz" ... handling student rebellions isn't exactly my favorite task. But I bring this back not just to provide a follow-up, or as a bid to get more attention after my supposed 15 minutes have expired (but it should be pretty scandalous that it hasn't been featured here on Nested Tori, right? Right??), but also to provide an answer key and talk about some of the mathematical ideas surrounding it. If you haven't tried the quiz, please try it before the spoiler.<br /><br />Really, I want to explain a bit about the philosophy behind such a thing, because it motivates much of my work. Parametrizations are one of the building blocks of visualization, and indeed, <i>a lot </i>of other science, principally, anything that has to do with generating numerical data. This is because the numerical data often satisfies certain constraints: it lies on some high-dimensional surface, possibly in some higher-dimensional space. The power of geometry is not (just) in its <i>literal</i> descriptions, but rather in some form of <i>metaphorical</i> description. Parametrizations are a way of translating numbers roaming around in easy, planar (flat) sets into more complicated warped versions. In each of the above parametrizations, one sees that the domains of the parameters are things like rectangles, or in one case, the union of three distinct rectangles. The smooth formulas then contort those rectangles. More complicated objects could be made by simply assembling several such pieces, with adjustments such as rotation and translation.<br /><br />The other common way to describe such sets is to start with a high-dimensional space and add in constraints by requiring the coordinates to satisfy certain equations. 
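(To see what "contorting a rectangle with smooth formulas" looks like in code, here is a sketch of a corkscrew-tube surface in the spirit of the shapes below. The function name and the particular radii and pitch values are my own inventions, not the quiz's.)

```python
import math

def corkscrew_tube(s, t, alpha=0.3, beta=1.0, h=0.4):
    """Warp the parameter rectangle (s, t) into a helical tube:
    a torus-style circular cross-section whose center circle also
    climbs in z, so the tube spirals out of its own way."""
    x = (beta + alpha * math.cos(s)) * math.cos(t)
    y = (beta + alpha * math.cos(s)) * math.sin(t)
    z = alpha * math.sin(s) + h * t   # the h*t term supplies the climb
    return x, y, z

# Each fixed-t cross-section is a circle of radius alpha centered
# on the helix (beta*cos(t), beta*sin(t), h*t):
radii = []
for k in range(50):
    s = 2.0 * math.pi * k / 50
    x, y, z = corkscrew_tube(s, 1.7)
    radii.append(math.hypot(math.hypot(x, y) - 1.0, z - 0.4 * 1.7))
# every entry of radii is (up to rounding) alpha = 0.3
```

Sampling s and t over the whole rectangle and plotting the resulting points is all a surface plot of such a shape amounts to.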
<a href="http://nestedtori.blogspot.com/2012/10/fun-equations-of-torus-fenics-part-2.html" target="_blank">We've covered an example of this</a>, actually (a way to generate nested tori).<br /><br /><h4>Now on to the answer key (Spoiler Alert!):</h4><br /><i>Cavatappi: Equation (0.3)</i><br />I thought of this parametrization by first considering a torus (of course), which differs from this example only in how the <i>z</i>-coordinate is treated: if we imagine a torus wrapping around itself multiple times (hence the greater range of the parameter <i>t</i>) while the height steadily changes at the same time, then it no longer wraps back into itself, but rather moves "out of its own way", vertically. The "β + α cos(<i>s</i>)" part is the horizontal component of the cross-sectional circle, and "α sin(<i>s</i>)" is the vertical component.<br /><br /><i>Fusilli: Equation (0.5)</i><br />This was conceived of as a variation of a <a href="https://upload.wikimedia.org/wikipedia/commons/d/d2/Helicoid_JD.png" target="_blank">helicoid</a>, which in turn is like parametrizing a flat disk by polar coordinates—think of a rotating ray sweeping out a disk uniformly in time. Except, now, we add in a variation in the same way as we did for Cavatappi: move the entire ray vertically as it sweeps. The <i>s</i>²/2 term adds some additional curviness to the pasta surface as we move out. Finally, <i>n</i>=0,1,2 means that we take three of these separate helicoids, rotate them so that they're spaced evenly, and put all three of them together (this increases the density of the spirals).<br /><br /><i>Conchiglie Rigate: Equation (0.2)</i><br />This graphs a parabola in the radial direction, varying with respect to height. In addition, variation in the angle is used to moderate how sharp the parabola should be. 
The "rigate" part simply comes from the coloring scheme, which uses stripes.<br /><br /><i>Penne Rigate: Equation (0.1)</i><br />The simplest parametrization: simply a skewed cylinder, which says that the radial distance from the <i>z</i>-axis is constant, while angle and height are free to vary. The "rigate" part is as in the previous example.<br /><br /><i>Farfalle: Equation (0.4)</i><br />My students provided this to me as a challenge! "But what about bow-tie?" Each one of the three coordinates has a different tweak to make things work. The easiest is the <i>x</i>-coordinate, which yields the end ruffles. For the <i>y</i>-coordinate, we wanted to model the "pinch" as an inverted bell-like curve, giving <i>y</i> as a function of <i>t</i> (no, not the Gaussian curve; this one has only quadratic instead of hyperexponential decay). As the parameter <i>s </i>varies, this rescales the curve until it is flat (<i>s</i> = 0), and then inverts it on the other side (<i>s < </i>0). Finally, the <i>z-</i>coordinate should be wavy and most pronounced in the center. 
This is the perfect job for that bugbear of beginning calculus students everywhere: sin(<i>θ</i>)/<i>θ.</i><br /><br />All in all, this shows that putting together a lot of simple concepts can generate some cool-looking results, even if they are not exact (and indeed, dealing with objects that are harder to model exactly is why I got into numerical analysis, which makes approximation possible even without a precise formula).Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-44017694763382229602015-02-13T22:45:00.002-08:002015-08-16T16:32:11.918-07:00Origami Dodecahedron<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-lNxwrV7bxIg/VN7ul8d-ipI/AAAAAAAACDI/z5l8n3UiV_U/s1600/FullSizeRender.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-lNxwrV7bxIg/VN7ul8d-ipI/AAAAAAAACDI/z5l8n3UiV_U/s1600/FullSizeRender.jpg" height="640" width="481" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div>Physically made this one. 60 folded pieces. Took a couple of months.Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-55488711687539767072013-11-19T15:08:00.000-08:002015-08-16T16:34:09.271-07:00Caustic Echoes<div class="separator" style="clear: both; text-align: left;">Greetings. I know it's been a long while since I last posted. Sorry about that. But rest assured, this has not been abandoned! I come blogging from MSRI in Berkeley, at a conference on General Relativity. 
Here is the coolest visualization I have seen in the conference today: caustic echoes of waves in a Schwarzschild spacetime (the standard example of a non-rotating black hole and the surrounding space).</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/Pe8sRjqtldQ?feature=player_embedded' frameborder='0' /></div><br /><br />Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0tag:blogger.com,1999:blog-5950074794465831898.post-26083021006959685732012-12-10T20:59:00.002-08:002015-08-18T14:03:33.322-07:00A non-cubical example!<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dw59p5xG9fFmA8C_KK7fn3akfZaH_juKyBoi6d56PlYWOCbOT-sFpN0APtNMy_di2AzwzPr3qdQoV5BNgnjDg' class='b-hbp-video b-uploaded' frameborder='0' /></div><br />Ahh, finally, an example with the appropriate symmetry... Learned how to create some interesting 3D meshes. This example is essentially the same as the <a href="http://nestedtori.blogspot.com/2012/12/more-vector-finite-element-goodness.html" target="_blank">previous</a>, but with a different global mesh. I also changed the numerical method to something <a href="http://nestedtori.blogspot.com/2012/11/hyperbolic-equations-and-symplectic.html" target="_blank">symplectic</a>, and of higher order: Störmer-Verlet timestepping. The energy that is preserved here (or, only close to being preserved, actually—symplectic integrators do not exactly preserve energy, but do a lot better than more generic methods) is the field energy, 1/2(E, D) + 1/2 (B, H). 
Or, the energy that <i>would be</i> (close to) preserved, if it weren't for this oscillation being a driven one, caused by currents in the wire, rather than natural oscillations given by some initial, non-equilibrium configuration (as in the <a href="http://nestedtori.blogspot.com/2012/11/hyperbolic-equations-and-symplectic.html" target="_blank">vibrating drum post</a>). It's time for a different example, perhaps the analogue of a cavity resonator.Chris Tieehttps://plus.google.com/101200599508754408406noreply@blogger.com0
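(A footnote on the Störmer-Verlet timestepping mentioned above: its near-conservation of energy is easy to demonstrate on a toy problem. The sketch below integrates a plain harmonic oscillator, not anything from the electromagnetic simulation itself, and watches the energy oscillate without drifting.)

```python
import math

def stormer_verlet(q, p, force, dt, steps):
    """Stormer-Verlet (leapfrog) in kick-drift-kick form: a
    second-order symplectic method, so the energy oscillates
    around the true value but does not drift over long times."""
    traj = [(q, p)]
    for _ in range(steps):
        p += 0.5 * dt * force(q)   # half kick
        q += dt * p                # drift (unit mass)
        p += 0.5 * dt * force(q)   # half kick
        traj.append((q, p))
    return traj

# Harmonic oscillator: force(q) = -q, energy = (p^2 + q^2) / 2.
traj = stormer_verlet(1.0, 0.0, lambda q: -q, 0.05, 4000)
energies = [(p * p + q * q) / 2.0 for q, p in traj]
drift = max(abs(e - energies[0]) for e in energies)  # stays small
```

Over 4000 steps the worst-case energy error stays bounded, which is exactly the "close to preserved" behavior described above; a generic non-symplectic method of the same order would typically show a steady energy drift instead.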