Saturday, August 29, 2015

Double Pendulum Concluded (Part 5): Discretization

Path traced out in state space by a double pendulum
In this final part, we discretize the equations for the double pendulum, covered in Parts 1, 2, 3, and 4. As frequently noted, we favor symplectic methods for this problem because, qualitatively, they have very good long-time energy behavior: you don't get nonsense like the pendulum speeding up uncontrollably. To set this up, we rephrased the problem in terms of Hamiltonian mechanics in Part 4, from which we recall (setting $\mathbf{q} = (\theta_1,\theta_2)$):
\[
\dot{\mathbf{q}} = I^{-1} \mathbf{p}
\]\[
\dot{\mathbf{p}} = \frac{1}{2} \dot{\mathbf{q}}^t \frac{\partial I}{\partial \mathbf{q}}\dot{\mathbf{q}} - \frac{\partial V}{\partial\mathbf q}
\]
which, for this system, is
\[
\dot{p}_1 = \frac{1}{2}\dot{\mathbf{q}}^t \begin{pmatrix} 0 & m_2\ell_1\ell_2 \sin(\theta_2-\theta_1)  \\ m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) &0\end{pmatrix} \dot{\mathbf{q}} - (m_1 + m_2) g\ell_1 \sin \theta_1
\]\[
\dot{p}_2 = \frac{1}{2}\dot{\mathbf{q}}^t \begin{pmatrix} 0 & -m_2\ell_1\ell_2 \sin(\theta_2-\theta_1)  \\ -m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) &0\end{pmatrix} \dot{\mathbf{q}} -  m_2 g\ell_2 \sin \theta_2,
\]
with the symmetric moment of inertia
\[
I = \begin{pmatrix} (m_1 +m_2) \ell_1^2 & m_2\ell_1\ell_2\cos(\theta_2-\theta_1) \\ m_2 \ell_1 \ell_2 \cos(\theta_2 -\theta_1) & m_2 \ell_2^2\end{pmatrix}.
\]
We use a one-step, first-order method to simulate (with time step $h$). The usual thing to do is to treat the derivatives $\dot{\mathbf{q}}$ and $\dot{\mathbf{p}}$ as difference quotients, and solve for the later time step (denoted with the superscript $n+1$) in terms of the earlier one (superscript $n$):
\[
\frac{\mathbf{q}^{n+1} - \mathbf{q}^n}{h} = I^{-1} \mathbf{p}
\]and\[
\frac{\mathbf{p}^{n+1} - \mathbf{p}^n}{h} = \frac{1}{2} \mathbf{p}^t I^{-1} \frac{\partial I}{\partial \mathbf{q}} I^{-1} \mathbf{p} - \frac{\partial V}{\partial \mathbf{q}}.
\]
Now of course, the subtlety is how to evaluate the quantities on the right-hand side; since we no longer assume direct access to $\dot{\mathbf q}$, we have re-expressed it in terms of $\mathbf{p}$ in the second equation. Also, since $\partial I/\partial{\mathbf{q}}$ is some higher-order tensor, the notation becomes very confusing, very fast. If we write
\[
B = I^{-1}\frac{\partial I}{\partial \theta_1}I^{-1}
\] then we find $I^{-1}\frac{\partial I}{\partial \theta_2}I^{-1} = -B$, since, as we saw, $I$ only depends on $\theta_2 -\theta_1$ (note also that $B$ is symmetric, because $I$ and $\partial I/\partial\theta_1$ are). Therefore, we rewrite the momentum equation as
\[
\frac{\mathbf{p}^{n+1} - \mathbf{p}^n}{h} = \frac{1}{2}\begin{pmatrix} \mathbf{p}^t B \mathbf{p} \\ -\mathbf{p}^t B \mathbf{p} \end{pmatrix}- \frac{\partial V}{\partial \mathbf{q}}.
\]
The Euler-B method uses $\mathbf{p}^{n+1}$ and $\mathbf{q}^n$ on the right-hand side (we choose it instead of Euler-A, which would use $\mathbf{q}^{n+1}$ and $\mathbf{p}^n$, because the moment-of-inertia tensor $I$ depends nonlinearly on $\mathbf{q}$ and makes the resultant equations harder to solve—we'll see where that comes in, in a little bit).

Once we have solved for $\mathbf{p}^{n+1}$, it is clear how to obtain $\mathbf{q}^{n+1}$ (noting we evaluate $I$ at $\mathbf{q}^n$):
\[
\mathbf{q}^{n+1} = \mathbf{q}^n + h I^{-1} \mathbf{p}^{n+1}.
\]
Now $\mathbf{p}^{n+1}$ is harder:
\[
\mathbf{p}^{n+1} = \mathbf{p} ^n + h \left( \frac{1}{2} \begin{pmatrix} (\mathbf{p}^{n+1})^t B \mathbf{p}^{n+1} \\-(\mathbf{p}^{n+1})^t B \mathbf{p}^{n+1}\end{pmatrix}- \frac{\partial V}{\partial \mathbf{q}}\right)
\]
where, again, $B$ and $\partial V/\partial{\mathbf{q}}$ are evaluated at $\mathbf{q}^n$. The trouble, of course, is that $\mathbf{p}^{n+1}$ appears on both sides, and the equation is nonlinear in $\mathbf{p}^{n+1}$ (only quadratically nonlinear, but nonlinear nonetheless). How do we solve this? By...

Using Newton's Method.

To clear the clutter, we let $\mathbf{x}$ be the unknown, namely $\mathbf{p}^{n+1}$, and simply write $\mathbf{p} = \mathbf{p}^n$, treated as known data. Thus, bringing all of that junk to one side of the equation, we want to solve $F(\mathbf{x}) = 0$ where $F$ is:
\[
F(\mathbf{x}) = \mathbf{x} - \mathbf{p} - h\left( \frac{1}{2} \mathbf{x}^t B \mathbf{x}\begin{pmatrix} 1 \\ - 1\end{pmatrix}- \frac{\partial V}{\partial \mathbf{q}}\right).
\]

Newton's method is to consider the iteration $\mathbf{x}_{m+1} = \mathbf{x}_m - F'(\mathbf{x}_m)^{-1}F(\mathbf{x}_m)$. Now
\[
F'(\mathbf{x}) = \mathrm{Id} + h \begin{pmatrix} -\mathbf{x}^t B \\ \mathbf{x}^t B \end{pmatrix}
\]
where $\mathrm{Id}$ is the identity matrix (to differentiate the quadratic term, we used the symmetry of $B$: the gradient of $\frac 1 2 \mathbf{x}^t B \mathbf{x}$ is $\mathbf{x}^t B$). This, for small enough $h$, is certainly invertible. In fact, the Newton iterations tend to be stable simply because an identity matrix perturbed by a term of size $h$ keeps the problem well-conditioned. As a starting point for Newton's method, we can use $\mathbf{x}_0 = \mathbf{p} = \mathbf{p}^n$, the value of $\mathbf{p}$ at the current timestep; it should be reasonably close, because the exact trajectory is the smooth solution to a differential equation defined by a smooth vector field. Finally, we should stop the Newton iterations when the norm of the residual $F(\mathbf{x}_m)$ falls below a small tolerance, say, $10^{-7}$; we then simply accept $\mathbf{x}_m$ as the new value $\mathbf{p}^{n+1}$. Here's how to say that in MATLAB:

   A = [ mm*r1^2, m2*r1*r2*cos(q(1)-q(2));
        m2*r1*r2*cos(q(1)-q(2)), m2*r2^2]; % moment of inertia I at q^n
                                           % (mm = m1 + m2; r1, r2 = rod lengths)
   C = [0 , -m2*r1*r2*sin(q(1)-q(2)) ;
         -m2*r1*r2*sin(q(1)-q(2)), 0];     % dI/dtheta_1 at q^n
   B = -A\(C/A);                           % derivative of the inverse matrix:
                                           % dI^-1/dtheta_1 = -I^-1 (dI/dtheta_1) I^-1,
                                           % i.e. the negative of B in the text;
                                           % the sign flip is absorbed into F below
   dVdq = [mm*g*r1*sin(q(1)); m2*g*r2*sin(q(2))]; % dV/dq at q^n

   x = p;          % initialize Newton iteration at p^n
   F = [1.0; 1.0]; % set initially to something that'll give a big norm
   iter = 0;
   while (norm(F) > 1.0e-7 && iter <= itmax)
      F = x - p + h *(1/2* [x'*B*x; -x'*B*x] + dVdq); % current residual F(x)
      DF = eye(2) + h*([x'*B; -x'*B]);                % Jacobian F'(x)
      x = x - DF\F;                                   % Newton update
      iter = iter+1;
   end
   p = x;            % accept x as p^{n+1}
   q = q + h* (A\p); % update q, using updated p
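
For completeness, here is a minimal sketch of the surrounding driver: it defines the constants the snippet above refers to and runs the time loop, saving the history of $q$. The specific parameter values, initial condition, and step count are illustrative assumptions, not necessarily what produced the video below.

   % Illustrative driver for the Euler-B step above; these particular
   % values are assumptions for the sketch.
   m1 = 1.0; m2 = 1.0; mm = m1 + m2; % masses (mm = m1 + m2)
   r1 = 1.0; r2 = 1.0;               % rod lengths ell_1, ell_2
   g = 9.81;                         % gravitational acceleration
   h = 0.01;                         % time step
   itmax = 20;                       % max Newton iterations per step
   N = 5000;                         % number of time steps
   q = [2.0; 2.0];                   % initial angles theta_1, theta_2
   p = [0.0; 0.0];                   % initial momenta
   qhist = zeros(2, N+1); qhist(:,1) = q;
   for n = 1:N
      % ... the Euler-B step above goes here, updating p and q ...
      qhist(:, n+1) = q;             % save the trajectory in state space
   end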

Of course, we should save the values of $q$ as a history if we want to see where it's been in state space (the qhist array in the sketch above does exactly this). Here's a video of just that: we save the values of $q$ and plot them as a trail following the current point. But this time we show the angles on the abstract state space, a torus, rather than as the actual angular displacements of a pendulum (as shown in Part 1):


The picture that opened this post is simply the final frame of the movie, showing everywhere in state space that the pendulum has been. It is this abstract representation of a physical system that leads to intriguing visualizations! The hope is that, in some other physical system, the state space of the variables might more clearly point to interesting features of the system. We shall frequently keep this philosophy in mind in our posts!

Wednesday, August 26, 2015

Hamiltonian Mechanics is Awesome (Double Pendulum, Part 4)

In this post, we work toward discretizing the equations for (and simulating) the double pendulum, considered in Parts 1, 2, and 3 of this series. One very important qualitative behavior we want is for the total energy to be conserved: one simulation I ran, directly discretizing the equations for $\ddot{\boldsymbol\theta}$, eventually ran amok over time, getting faster and faster, clearly something that would not occur in a real system. Thus, one place to look is symplectic methods, which we've remarked on before (a small illustration of the contrast appears below). However, at least as I've learned them, symplectic methods seem more conceptually geared toward solving differential equations involving angular position and momentum, rather than angular position and velocity. This is not strictly true (I have seen symplectic methods at work with position and velocity), but that may be an artifact of the relation between velocity and momentum often being very trivial (e.g., just multiplying by some mass). In any case, the position-and-momentum viewpoint is more than important enough to be worth diving into now. This is referred to as Hamiltonian mechanics: we look at $H = T+V$ instead of $L = T-V$ and consider our variables as generalized position and momentum instead of generalized position and velocity (more generally, Hamiltonian mechanics can be derived from Lagrangian mechanics via the Legendre transform).
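
To see the contrast concretely, here is a quick illustration of mine on the simplest possible example, a single pendulum (not our double pendulum), with made-up parameter values: explicit Euler pumps energy into the system, while the symplectic Euler-B method we'll use later keeps the energy error bounded.

   % Energy behavior: explicit Euler vs. symplectic Euler-B, for a single
   % pendulum with H = p^2/(2 m ell^2) - m g ell cos(theta).
   % All values here are illustrative assumptions.
   g = 9.81; ell = 1; m = 1; h = 0.01; N = 10000;
   E = @(th, p) p^2/(2*m*ell^2) - m*g*ell*cos(th);  % total energy
   thE = 1; pE = 0;   % explicit Euler state
   thB = 1; pB = 0;   % Euler-B state (same initial condition)
   for n = 1:N
      % explicit Euler: both updates use the old state
      thnew = thE + h * pE/(m*ell^2);
      pE    = pE  - h * m*g*ell*sin(thE);
      thE   = thnew;
      % Euler-B: update p first, then use the *new* p to update theta
      pB  = pB  - h * m*g*ell*sin(thB);
      thB = thB + h * pB/(m*ell^2);
   end
   fprintf('explicit Euler energy drift: %g\n', E(thE,pE) - E(1,0));
   fprintf('Euler-B energy drift:        %g\n', E(thB,pB) - E(1,0));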

In fact, most basic physics courses tend to focus on the sum of kinetic and potential energies, saying energy is "converted" between these two forms; they just never use the word "Hamiltonian." It's not clear, at first, what the difference between the two approaches is, especially in simple examples where it's pretty trivial to go from one to the other. In mathematical (or veteran Nested Tori reader) terms, the difference is that of equations on the tangent bundle vs. the cotangent bundle (called phase space). Momentum takes on a life of its own in more complicated systems (and in fact, in quantum mechanics, it's not clear what anything could even be the velocity of, so something called "momentum" crops up as its own fundamental quantity, without reference to the velocity of anything).

So, what is this mystical momentum for our system? We've already calculated something we called "angular momentum" for the double pendulum: $\mathbf{p} = \partial L/\partial \dot{\boldsymbol \theta}$, the partial "gradient" vector with respect to the angular velocities. This is the general definition. (As a check, we note that for a single particle of mass $m$ and kinetic energy $\frac 1 2 m v^2$, $\partial L/\partial \dot{\mathbf{x}}=\partial L/\partial {\mathbf{v}}$ is indeed the usual $m\mathbf{v}$.) For our example, we derived
\[
\mathbf{p} =  \frac{\partial L}{\partial \dot{\boldsymbol{ \theta}}} = \begin{pmatrix} (m_1 +m_2) \ell_1^2 & m_2\ell_1\ell_2\cos(\theta_2-\theta_1) \\ m_2 \ell_1 \ell_2 \cos(\theta_2 -\theta_1) & m_2 \ell_2^2\end{pmatrix} \begin{pmatrix}\dot \theta_1 \\ \dot\theta_2 \end{pmatrix} = I\dot{\boldsymbol{\theta}}
\]
That's almost something like the case $\mathbf{p} = m\mathbf{v}$ for a single particle, with the rotational analogues of mass and velocity, though the "mass" here is much more complicated (depending on the positions). In particular, as a vector, it doesn't have to point in the same direction as the velocity, unlike in the particle case (that misalignment is also the cause of washing machines thumping with an unbalanced load). With this in mind, we rewrite the kinetic energy in terms of $\mathbf{p}$ as follows: assuming the matrix $I$ is invertible, we write $\dot{\boldsymbol\theta} = I^{-1} \mathbf{p}$, substitute in $T = \frac 1 2 \dot{\boldsymbol{\theta}}^t I \dot{\boldsymbol{\theta}}$, and write $\mathbf{q} = \boldsymbol{\theta}$ (the usual notation), to get
\[
H(\mathbf{q},\mathbf{p}) = T+V= \frac{1}{2} (\mathbf{p}^t I^{-1}) I (I^{-1} \mathbf{p}) + V(\mathbf q) =\frac{1}{2} \mathbf{p}^t I^{-1}  \mathbf{p} + V(\mathbf q).
\]
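Since (approximately) conserving this quantity is the whole point of the symplectic approach, it is worth having $H$ available as a numerical diagnostic. Here is a minimal sketch, as a function in its own file; the parameter names (mm = $m_1+m_2$, r1, r2, g) match the MATLAB snippet in Part 5, and this function is an assumption of the sketch rather than code from the original simulation.

   % hamiltonian.m -- total energy H(q,p) = (1/2) p' I^-1 p + V(q),
   % a diagnostic to monitor during a simulation.
   function H = hamiltonian(q, p, mm, m2, r1, r2, g)
      I = [ mm*r1^2,                 m2*r1*r2*cos(q(1)-q(2));
            m2*r1*r2*cos(q(1)-q(2)), m2*r2^2 ];        % moment of inertia
      V = -mm*g*r1*cos(q(1)) - m2*g*r2*cos(q(2));      % potential energy
      H = 0.5 * (p' * (I\p)) + V;                      % kinetic + potential
   end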

Hamilton's Equations

Now that we have derived the Hamiltonian $H$, we relate it to the actual equations of motion via Hamilton's equations:
 \[
 \dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}}
 \] \[
\dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}
\]
which are very nice: $\mathbf{p}$ and $\mathbf{q}$ are almost "dual" to each other in some sense. There's just that extra minus sign; it can be remembered by recalling that $H$ includes the potential energy, so differentiating with respect to $\mathbf{q}$ gives the gradient of the potential, and force is conventionally the negative of that gradient in physics. We thus find, plugging in the above:
\[
\dot{\mathbf{q}} = I^{-1} \mathbf{p}
\]
and
\[
\dot{\mathbf{p}} = -\frac 1 2 \mathbf{p}^t \frac{\partial I^{-1}}{\partial \mathbf{q}} \mathbf{p} -\frac{\partial V}{\partial \mathbf{q}}.
\]
The part that really complicates things and makes this different from the simpler examples is that the moment of inertia $I$ is dependent on $\mathbf{q}$, so we can't ignore that term when differentiating with respect to $\mathbf{q}$ (it is often just the $-\partial V/\partial\mathbf{q}$ term by itself). How do we calculate that derivative of $I^{-1}$? We step back down to components:
\[
\dot{p}_i = -\frac 1 2 \mathbf{p}^t \frac{\partial I^{-1}}{\partial \theta_i} \mathbf{p} -\frac{\partial V}{\partial \theta_i}= +\frac 1 2 \mathbf{p}^t I^{-1}\frac{\partial I}{\partial \theta_i}I^{-1} \mathbf{p} -\frac{\partial V}{\partial \theta_i}
\] \[
= \frac{1}{2} \dot{\mathbf q}^t \frac{\partial I}{\partial \theta_i}\dot{\mathbf q}-\frac{\partial V}{\partial \theta_i}
\]
where we have employed the identity that the derivative of the inverse of a matrix is the derivative of the matrix, multiplied by the inverse on both sides, with a sign reversal (a quick numerical check of this identity appears below). Let's write out what all of this is. We note that $\partial I/\partial \theta_1 = - \partial I/\partial \theta_2$, because the only way the angles appear in $I$ is through the difference $\theta_2 - \theta_1$. Thus, we have
\[
\dot {\mathbf q} = I^{-1} \mathbf{p},
\]
and, using $\dot{\mathbf q}$ as an intermediate variable (writing it all out explicitly is a real mess, and we will need it in this form when discretizing, anyway):
\[
\dot{p}_1 = \frac{1}{2}\dot{\mathbf{q}}^t \begin{pmatrix} 0 & m_2\ell_1\ell_2 \sin(\theta_2-\theta_1)  \\ m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) &0\end{pmatrix} \dot{\mathbf{q}} - (m_1 + m_2) g\ell_1 \sin \theta_1
\] \[
\dot{p}_2 = \frac{1}{2}\dot{\mathbf{q}}^t \begin{pmatrix} 0 & -m_2\ell_1\ell_2 \sin(\theta_2-\theta_1)  \\ -m_2\ell_1\ell_2 \sin(\theta_2-\theta_1) &0\end{pmatrix} \dot{\mathbf{q}} -  m_2 g\ell_2 \sin \theta_2.
\]
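The derivative-of-the-inverse identity used above is easy to sanity-check numerically, which is a good habit before discretizing. A minimal sketch, with made-up parameter values, comparing a central finite difference of $I^{-1}$ against $-I^{-1}\frac{\partial I}{\partial\theta_1}I^{-1}$:

   % Finite-difference check of d(I^-1)/dtheta_1 = -I^-1 (dI/dtheta_1) I^-1.
   % Parameter values are illustrative assumptions.
   m2 = 1; r1 = 1; r2 = 1; mm = 2;                  % mm = m1 + m2
   Imat = @(t1,t2) [ mm*r1^2,              m2*r1*r2*cos(t2-t1);
                     m2*r1*r2*cos(t2-t1),  m2*r2^2 ];
   t1 = 0.3; t2 = 1.1; del = 1e-6;
   I0 = Imat(t1,t2);
   dIdt1 = [ 0,                   m2*r1*r2*sin(t2-t1);
             m2*r1*r2*sin(t2-t1), 0 ];              % dI/dtheta_1
   fd = (inv(Imat(t1+del,t2)) - inv(Imat(t1-del,t2))) / (2*del);
   id = -I0\(dIdt1/I0);                             % the identity's right side
   disp(norm(fd - id))                              % tiny, up to O(del^2)
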
In the final part, we discretize these equations. That'll have to wait for next time.

Monday, August 24, 2015

The Euler-Lagrange equations for the Double Pendulum (Config Spaces, Part 3)

In this post, continuing the explorations of the double pendulum (see Part 1 and Part 2), we concentrate on deriving its equation of motion (the Euler-Lagrange equation). These differential equations are the heart of Lagrangian mechanics, indeed really what one tries to get to when applying the method (it's essentially a way of getting Newton's 2nd Law for complicated systems), and why it is awesome. This is, like the previous part, going to be more technical than visual, but we'll get back to that in Part 4.

Recall from Part 2 that for the double pendulum,
\[
L(\theta,\dot \theta) = T - V = \frac 1 2 (m_1+m_2) \ell_1^2 \dot{\theta}_1^2 +\frac 1 2 m_2\left( 2 \dot{\theta}_1 \dot{\theta}_2\ell_1 \ell_2 \cos(\theta_2 - \theta_1) + \ell_2^2 \dot{\theta}_2^2\right)\] \[+ (m_1+m_2) g \ell_1 \cos \theta_1 + m_2 g \ell_2 \cos \theta_2,
\]
where $\theta_1$, $\theta_2$ are the angles from vertical of the two pendulums, $\dot \theta_1$, $\dot \theta_2$ are their angular speeds, $\ell_1$, $\ell_2$ are the lengths of the rods, and $m_1$, $m_2$ are their masses. This looks very imposing, so let's remark a little on the structure of the equation. In simpler situations, one often sees only separate squared speed terms, but here we have the interaction of the two parts of the system, in terms with products like $\dot \theta_1 \dot \theta_2$, as well as the $\cos(\theta_2 - \theta_1)$. This distinguishes the system from two completely isolated (uncoupled) pendulums; the Lagrangian already captures the fact that energy gets transferred back and forth between the two pendulums.

The equations of motion are derived by considering an optimization problem: namely, which paths $\gamma$ through state space connecting two given states optimize the action integral
\[
\int_a^b L(\gamma(t),\dot\gamma(t)) dt.
\]
In a process quite similar to what's done in first-year calculus, we "take a derivative and set it to zero", which yields the equations
\[
\frac{\partial L}{\partial \theta_i} - \frac d {dt} \frac{\partial L}{\partial \dot \theta_i} = 0
\]
where $\partial L/\partial \dot \theta_i$ means the partial derivative with respect to $\dot \theta_i$ as if it were an independent variable. In fact, one way is to start off with a formula involving two independent variables $\theta$ and $\omega$; only after solving for the extremal path do we actually find $\omega = \dot \theta$ (this is a generalization called the Hamilton-Pontryagin principle).

$\partial L/\partial \dot {\boldsymbol \theta}$, angular momentum, and the moment of inertia

Let's now compute $\frac{\partial L}{\partial \dot \theta_1}$ (this quantity is actually the angular momentum of the first pendulum, $p_1$), treating everything except $\dot \theta_1$ as a constant:
\[
p_1 = \frac{\partial L}{\partial \dot \theta_1} = (m_1 + m_2) \ell_1^2 \dot \theta_1 + m_2 \ell_1\ell_2 \cos(\theta_2-\theta_1)\dot \theta_2 .
\]
Note that unlike in more basic examples of angular momentum, this doesn't depend only on the (angular) velocity $\dot \theta_1$, but also on $\dot \theta_2$, and nonlinearly on $\theta_1$ and $\theta_2$. Similarly,
\[
p_2 = \frac{\partial L}{\partial \dot \theta_2} = m_2 \ell_2^2 \dot \theta_2 + m_2\ell_1\ell_2 \cos(\theta_2 - \theta_1) \dot \theta_1.
\]
This is more compactly expressed as
\[
\frac{\partial L}{\partial \dot{\boldsymbol{\theta}}}=\begin{pmatrix}p_1\\p_2\end{pmatrix} = \begin{pmatrix} (m_1 +m_2) \ell_1^2 & m_2\ell_1\ell_2\cos(\theta_2-\theta_1) \\ m_2 \ell_1 \ell_2 \cos(\theta_2 -\theta_1) & m_2 \ell_2^2\end{pmatrix} \begin{pmatrix}\dot \theta_1 \\ \dot\theta_2 \end{pmatrix}.
\]
This helps keep the unruliness under one umbrella. The square matrix in the above will be important, so let's call it $I$ (it is something like a moment of inertia for our system—but it's more complicated, principally because there is an interaction between its two component parts... It's not even a rigid body!). It should also be noted that the kinetic energy is $T = \frac 1 2 \dot{\boldsymbol{\theta}}^t I \dot{\boldsymbol \theta}$, so it still is quadratic in $\dot {\boldsymbol \theta}$, just like good ol' $\frac{1}{2} mv^2$.

The forces, $\partial L/\partial \boldsymbol{\theta}$

Due to the nonlinear dependence of the Lagrangian on the angular variables, the derivatives are trickier. But we forge on, because sticking to just the simplest examples won't help us gain as much insight into how things are used in practice. We have
\[
\frac{\partial L}{\partial \theta_1} = m_2 \ell_1 \ell_2 \dot \theta_1 \dot \theta_2 \sin(\theta_2-\theta_1) - (m_1 + m_2) g \ell_1 \sin \theta_1
\]
and
\[
\frac{\partial L}{\partial \theta_2} = -m_2 \ell_1 \ell_2 \dot \theta_1 \dot \theta_2 \sin(\theta_2-\theta_1) - m_2 g \ell_2 \sin \theta_2.
\]

Putting it together

Now we take the time derivatives of the momenta we derived above, subtract, and set the result to zero to derive the Euler-Lagrange equations. That's straightforward calculus, but different from simpler examples in that, of course, the chain rule must be applied to the $\cos(\theta_2 - \theta_1)$, so there will be an extra $\dot \theta_1$ or $\dot \theta_2$ term:
\[
\frac{d}{dt} \frac{\partial L}{\partial \dot{\boldsymbol{\theta}}} =  \begin{pmatrix} 0 & -m_2\ell_1\ell_2\sin(\theta_2-\theta_1)(\dot \theta_2 - \dot \theta_1) \\ -m_2 \ell_1 \ell_2 \sin(\theta_2 -\theta_1)(\dot \theta_2 - \dot \theta_1) & 0\end{pmatrix} \begin{pmatrix}\dot \theta_1 \\ \dot\theta_2 \end{pmatrix}\]
\[+\begin{pmatrix} (m_1 +m_2) \ell_1^2 & m_2\ell_1\ell_2\cos(\theta_2-\theta_1) \\ m_2 \ell_1 \ell_2 \cos(\theta_2 -\theta_1) & m_2 \ell_2^2\end{pmatrix} \begin{pmatrix}\ddot \theta_1 \\ \ddot\theta_2 \end{pmatrix}.
\]
The nice thing about it is that when it is subtracted from $\partial L/\partial \boldsymbol \theta$, the cross terms with the $\dot \theta_1 \dot\theta_2$ cancel. Also, the matrix multiplying the $\ddot{\theta}_i$'s is none other than our moment-of-inertia matrix $I$. This finally gives us

\[
0=\frac{\partial L}{\partial \boldsymbol\theta} - \frac{d}{dt} \frac{\partial L}{\partial \dot{\boldsymbol\theta}} =\begin{pmatrix}-(m_1+m_2) g \ell_1 \sin \theta_1 + m_2 \ell_1\ell_2 \sin(\theta_2-\theta_1)\dot{\theta}_2^2 \\ -m_2 g \ell_2 \sin \theta_2 -m_2 \ell_1\ell_2 \sin(\theta_2-\theta_1)\dot{\theta}_1^2  \end{pmatrix} -I \begin{pmatrix} \ddot \theta_1 \\ \ddot \theta_2 \end{pmatrix}
\]
as the Euler-Lagrange equations. As the final equation of motion, we bring the $\ddot {\boldsymbol \theta}$ term to the other side:
\[
I \begin{pmatrix} \ddot \theta_1 \\ \ddot \theta_2 \end{pmatrix}  = \begin{pmatrix}-(m_1+m_2) g \ell_1 \sin \theta_1 + m_2 \ell_1\ell_2 \sin(\theta_2-\theta_1)\dot{\theta}_2^2 \\ -m_2 g \ell_2 \sin \theta_2 -m_2 \ell_1\ell_2 \sin(\theta_2-\theta_1)\dot{\theta}_1^2  \end{pmatrix}
\]
(keep in mind that $I$ varies with $\theta_1$ and $\theta_2$, however). We discretize and solve this next time. For now, simply take note of the interesting nonlinear interactions: in the positions, through the sine of the difference, and quadratically in the derivatives (note the reversal: the $\ddot\theta_1$ equation gets a $\dot \theta_2^2$ and vice versa).
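
In the meantime, this second-order form can already be handed to a standard (non-symplectic) ODE solver, keeping in mind the energy-drift caveat that motivates the symplectic treatment in the next parts. Here is a minimal sketch of the right-hand side as a first-order system; the function name and parameter names are mine, for illustration.

   % Double pendulum as a first-order system in u = [theta; thetadot], for
   % use with, e.g., ode45(@(t,u) doublependrhs(t,u,m1,m2,r1,r2,g), ...).
   function du = doublependrhs(~, u, m1, m2, r1, r2, g)
      t1 = u(1); t2 = u(2); w1 = u(3); w2 = u(4);
      I = [ (m1+m2)*r1^2,        m2*r1*r2*cos(t2-t1);
            m2*r1*r2*cos(t2-t1), m2*r2^2 ];
      rhs = [ -(m1+m2)*g*r1*sin(t1) + m2*r1*r2*sin(t2-t1)*w2^2;
              -m2*g*r2*sin(t2)      - m2*r1*r2*sin(t2-t1)*w1^2 ];
      du = [ w1; w2; I\rhs ];   % theta' = thetadot, thetadot' = I \ (forces)
   end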

Friday, August 21, 2015

Lagrangian Mechanics is Awesome (Double Pendulum, Part 2)

In this post, we give some calculational details of the double pendulum (introduced in Part 1). This derivation is available in several physics books at the undergraduate (upper-division) level, usually fleshed out using Lagrangian mechanics, a subject highly useful for conceptualizing these types of problems by putting them in a setting that uses notions of state spaces. More cool-looking demos (and also a streamlined derivation) are available in David Eberly's excellent book, Game Physics. As a warning, this will be considerably more technical than visual; I post this because I feel that some insight into the workings of these things helps us see where geometry can be useful in contexts that are not just literally pictures. Nevertheless, here's the picture for the upcoming analysis:

$\ell_i$ are the lengths of the pendulum rods, $m_i$ are the masses, and $\theta_i$ are their angular displacements from vertical.
The use of Lagrangian mechanics spurs on the development of thinking in terms of state spaces by considering different kinds of coordinates that may be more convenient for the problem at hand. It is easier to think about the angular positions $\theta_1$ and $\theta_2$ of a pendulum than it is to work with x- and y-coordinates directly (for our problem, the physical constraints on the x- and y-coordinates, imposed here by assuming that the rods are rigid, make them somewhat less straightforward to deal with). Of course, we might use the x- and y-coordinates in parts of our derivation, but it's not so easy to solve for them.

Lagrangian mechanics also has us think of things in terms of energy, a quantity whose properties crop up in math a lot. Our goal in this post is to derive the Lagrangian $L$ of the system, which is the difference between the total kinetic energy $T$ and the total potential energy $V$. The total kinetic energy of our system should be the sum of that of the two particles,
\[
T = \frac 1 2 m_1 v_1^2 + \frac 1 2 m_2 v_2^2,
\]
and the potential energy (assuming that our pendulums are subject to downward acceleration g) depends only on the height of the two pendulum bobs:
\[
V = m_1 g y_1 + m_2 g y_2.
\]
However, if we want to express this in terms of the angular coordinates, we find, via the usual trigonometric arguments, that $y_1 = -\ell_1\cos \theta_1$ and since $y_2$ includes the height of the first pendulum, $y_2 = -\ell_1 \cos\theta_1 - \ell_2 \cos \theta_2$. In total, this means
\[
V = -(m_1+m_2) g \ell_1 \cos \theta_1 - m_2 g \ell_2 \cos \theta_2.
\]
Now the kinetic energy is a little trickier. Here, we assume that the first pendulum bob has linear velocity $\mathbf{v}_1$, and speed $|\mathbf{v}_1| = v_1$. Now, if we let the second pendulum bob have linear velocity $\mathbf{v}_{1,2}$ relative to the first, then the total velocity of the second bob is $\mathbf{v}_2 = \mathbf{v}_1 + \mathbf{v}_{1,2}$. The reason that $\mathbf{v}_{1,2}$ is useful is that it is calculated in exactly the same way as the first pendulum (because the second pendulum is, well, a pendulum).

So the total (squared) speed of the second pendulum bob is $v_2^2= v_1^2 + 2 \mathbf{v}_1 \cdot \mathbf{v}_{1,2} + v_{1,2}^2$.

Now, to figure out $\mathbf{v}_1 \cdot \mathbf{v}_{1,2}$, we use the geometric (length and angle) formulation of the dot product:
\[
\mathbf{v}_1 \cdot \mathbf{v}_{1,2} = v_1 v_{1,2} \cos \varphi
\]
where $\varphi$ is the angle between $\mathbf{v}_1$ and $\mathbf{v}_{1,2}$. But the angle between $\mathbf{v}_1$ and $\mathbf{v}_{1,2}$, since both of them are perpendicular to their respective rods, must be the same as the angle between the rods: $\varphi=\theta_2 - \theta_1$. Finally, to express the magnitudes $v_1$ and $v_{1,2}$, we use the relations $v_1 = \dot{\theta}_1 \ell_1$ and $v_{1,2} = \dot{\theta}_2 \ell_2$. This gives
\[
v_2^2 = \ell_1^2 \dot{\theta}_1^2 + 2 \dot{\theta}_1 \dot{\theta}_2\ell_1 \ell_2 \cos(\theta_2 - \theta_1) + \ell_2^2 \dot{\theta}_2^2
\]

This finally gives
\[
T =\frac 1 2 (m_1+m_2) \ell_1^2 \dot{\theta}_1^2 +\frac 1 2 m_2\left( 2 \dot{\theta}_1 \dot{\theta}_2\ell_1 \ell_2 \cos(\theta_2 - \theta_1) + \ell_2^2 \dot{\theta}_2^2\right),
\]
or,
\[
L = T-V = \frac 1 2 (m_1+m_2) \ell_1^2 \dot{\theta}_1^2 +\frac 1 2 m_2\left( 2 \dot{\theta}_1 \dot{\theta}_2\ell_1 \ell_2 \cos(\theta_2 - \theta_1) + \ell_2^2 \dot{\theta}_2^2\right)\]\[+ (m_1+m_2) g \ell_1 \cos \theta_1 + m_2 g \ell_2 \cos \theta_2.
\]
(If you're reading this on a mobile device, please turn it to Landscape mode to see the full equations.) Actually deriving and solving the equations of motion for $\theta_1$ and $\theta_2$ will have to wait for another entry!
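
As a quick numerical plausibility check of the $v_2^2$ computation (a sketch with arbitrarily chosen values, not part of the derivation): differentiate the Cartesian positions $x_2 = \ell_1\sin\theta_1 + \ell_2\sin\theta_2$ and $y_2 = -\ell_1\cos\theta_1 - \ell_2\cos\theta_2$ directly and compare.

   % Check v2^2 = l1^2 w1^2 + 2 w1 w2 l1 l2 cos(t2-t1) + l2^2 w2^2
   % against the Cartesian computation; the values here are arbitrary.
   l1 = 1.3; l2 = 0.7; t1 = 0.5; t2 = 2.0; w1 = 0.8; w2 = -1.1;
   vx = l1*cos(t1)*w1 + l2*cos(t2)*w2;   % d/dt of x2
   vy = l1*sin(t1)*w1 + l2*sin(t2)*w2;   % d/dt of y2
   disp( (vx^2 + vy^2) ...
       - (l1^2*w1^2 + 2*w1*w2*l1*l2*cos(t2-t1) + l2^2*w2^2) )  % ~ 0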

As a final note, kudos to MathJax which allows rendering of math formulas using $\LaTeX$!

Monday, August 17, 2015

The Configuration Space of a Double Pendulum

One of the classic ways of understanding the torus is via anything that involves two circles, such as a system with two independent angular variables; roughly speaking, this is because the torus has two families of circles, one surrounding the tube, and another surrounding the hole. The classic example of such a system is a double pendulum, which is one pendulum hung off the pendulum bob of another.

Double pendulum: the path traced by the second bob winds through the annulus. If we imagine that the second circle goes into and out of the page instead, we can see it as a path on a torus.

Of course, it is not immediately apparent why one would find it advantageous to consider a whole torus, as opposed to two distinct circles, other than for cool visualization purposes, so we should explain this. First, many interesting possibilities arise when the two angular variables are not independent: they bear some complicated relationship imposed by, say, physics, in this case. Not all the possibilities are realized, at least in a short amount of time; the system merely traces some path through this grand torus. The dynamics of the interaction is something that makes the system transcend merely "having two angles".

This notion of "where all the variables live together" is historically what led to the development of the theory of manifolds. Think back to parametrizations and equations: variables that, initially, we only know to lie in some Euclidean space either get warped into more complicated spaces or, when constrained, lead to funny subsets consisting of exactly the points where the constraint is satisfied. These state spaces (or configuration spaces) are everywhere, at least implicitly, in any kind of modeling of real-world objects and their dynamics.

This visualization was coded in MATLAB, using a symplectic integrator (in order to more accurately reflect the energy conservation). We delve into the calculational specifics of this example in a future post.

Saturday, August 15, 2015

Parametric Pasta: Relaunch of Nested Tori

Today we relaunch Nested Tori, and celebrate with a bit of history. Visualization has always run strong in my mathematical quests since declaring math as my major at UCLA, and it is why I chose to take up the field of differential geometry and go to graduate school at UCSD. Of course, that subject is far more abstract than simply what can be pictured, so in an effort to stay grounded in the motivations and keep it real, I took up some numerical analysis. That was a refreshing viewpoint, and in fact, what eventually enabled me to get that PhD finished. Well, that, and other things. Suffice it to say, my journey has brought me to a very interesting intersection of geometry, topology, real analysis, and numerical analysis (and despite a joke from a colleague, that intersection is not empty!).

During that time, of course, I had to teach, and I enjoyed showing several visualizations to, in essence, breathe a soul into some of the calculations we had students doing. Not that it would be soulless without such pictures, but it was evident that such a soul became more "accessible" after some form of visualization. And one of the favorites, in fact apparently what I'm most famous for (according to Google), is my Pasta Parametrization Quiz: Match the following pasta (click to enlarge) [UPDATE: See here for higher-res photos and more readable formulas]


to the following parametrizations ("equations", also click to enlarge):


(the numbers are given by De Cecco). Despite the media coverage, no, I didn't actually administer it as a "pop quiz" ... handling student rebellions isn't exactly my favorite task. But I bring this back not just to provide a follow-up, or as a bid to get more attention after my supposed 15 minutes have expired (though it should be pretty scandalous that it hasn't been featured here on Nested Tori, right? Right??), but also to provide an answer key and talk about some of the mathematical ideas surrounding it. If you haven't tried the quiz, please try it before the spoiler.

Really, I want to explain a bit about the philosophy behind such a thing, because it motivates much of my work. Parametrizations are one of the building blocks of visualization, and indeed, of a lot of other science: principally, anything that has to do with generating numerical data. This is because the numerical data often satisfies certain constraints: it lies on some high-dimensional surface, possibly in some higher-dimensional space. The power of geometry is not (just) in its literal descriptions, but rather, in some form of metaphorical description. Parametrizations are a way of translating numbers roaming around in easy, planar (flat) sets into more complicated warped versions. In each of the above parametrizations, one sees that the domain of the parameters is something like a rectangle, or in one case, the union of three distinct rectangles. The smooth formulas then contort those rectangles. More complicated objects can be made by simply taking a bunch of pieces together, with adjustments such as rotations and translations.

The other common way to describe such sets is to start with a high dimensional space and add in constraints by requiring the coordinates to satisfy certain equations. We've covered an example of this, actually (a way to generate nested tori).

Now on to the answer key (Spoiler Alert!):


Cavatappi: Equation (0.3)
I thought of this parametrization by first considering a torus (of course), which differs from this example only in how the z-coordinate is treated: if we imagine a torus continually wrapping around itself multiple times (hence the greater range of the parameter t), but at the same time start changing the height, then it doesn't wrap around itself anymore, but rather moves "out of its own way", vertically. The "β + α cos(s)" part is the horizontal component of the cross-sectional circle, and "α sin(s)" is the vertical component.
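
To make that construction concrete, here is a sketch in its spirit; since the post doesn't reproduce equation (0.3) itself, the values of α, β, and the vertical pitch c below are made up, not De Cecco's numbers.

   % A torus that "moves out of its own way" vertically, in the spirit of
   % the cavatappi construction; alpha, beta, c are illustrative assumptions.
   alpha = 1; beta = 2.5; c = 0.6;
   [s, t] = meshgrid(linspace(0, 2*pi, 60), linspace(0, 6*pi, 240));
   x = (beta + alpha*cos(s)) .* cos(t);  % beta + alpha cos(s): horizontal part
   y = (beta + alpha*cos(s)) .* sin(t);  %   of the cross-sectional circle
   z = alpha*sin(s) + c*t;               % alpha sin(s): vertical part, plus drift
   surf(x, y, z); axis equal; shading interp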

Fusilli: Equation (0.5)
This was conceived of as a variation of a helicoid, which in turn is like parametrizing a flat disk by polar coordinates—think of a rotating ray sweeping out a disk uniformly in time. Except, now, we add in a variation in the same way as we did for Cavatappi: move the entire ray vertically as it sweeps. The s²/2 term adds some additional curviness to the pasta surface as we move out. Finally, n=0,1,2 means that we take three of these separate helicoids, rotate them so that they're spaced evenly, and put all three of them together (this increases the density of the spirals).

Conchiglie Rigate: Equation (0.2)
This graphs a parabola in the radial direction, varying with respect to height. In addition, variation in the angle is used to moderate how sharp the parabola should be. The "rigate" part simply comes from the coloring scheme which uses stripes.

Penne Rigate: Equation (0.1)
The simplest parametrization: simply a skewed cylinder, which says the radial distance from the z-axis is constant, while the angle and height are free to vary. The "rigate" part is as in the previous example.

Farfalle: Equation (0.4)
My students provided this to me as a challenge! "But what about bow-tie?" Each one of the three coordinates has a different tweak to make things work. The easiest is the x-coordinate, which yields the end ruffles. For the y-coordinate, we wanted to model the "pinch" as an inverted bell-like curve, giving y as a function of t (no, not the Gaussian curve; this one has only quadratic instead of hyperexponential decay). As the parameter s varies, this rescales the curve until it is flat (s = 0), and then inverts it on the other side (s < 0). Finally, the z-coordinate should be wavy and most pronounced in the center. This is the perfect job for that bugbear of beginning calculus students everywhere: sin(θ)/θ.

All in all, this shows that putting together a lot of simple concepts can generate some cool-looking results, even if they are not exact (and indeed, dealing with less easily-modeled objects is why I got into numerical analysis, so approximation is possible, even without a precise formula).