Sunday, November 25, 2012

Happy Thanksgiving from Nested Tori!


It looks kind of turkey-ish, right? Some numerical imperfections give it a green tail, anyway. It's supposed to be the magnetic field of a spinning solid ball of charge. In some sense, this problem started my quest to become more numerical: I remember being frustrated that I could not rattle off the magnetic field of this configuration of charge, despite knowing a whole bunch of theory and equations. And attacking the problem analytically was, to use a different species of fowl, a wild goose chase. I did eventually find one possible analytic expression for it, using multipole expansions, but that's still a far cry from being able to come up with interesting visualizations (multipole expansions and spherical harmonics DO have cool visualizations, which we will definitely post here someday). Gobble gobble! The field is actually similar to that of a dipole, and aside from the boundary condition that it be tangent to the big enclosing cube, the vector field is tangent to, you guessed it, nested tori.

Friday, November 16, 2012

Sneak Peek: Vector Finite Elements


Vector-valued differential equations are often felt to be too advanced a topic to be mentioned in elementary differential equations courses. Part of the problem, of course, is that many of the equations actually involve things like differential forms and other vector-like quantities that are best explained in terms of differential geometry, which takes an introductory course too far afield. The following picture shows the magnetic field of a constant current flowing through a solid cube (the current comes approximately out of the plane of the picture), with the field required to be tangent to the boundary. The method of solution involves special kinds of finite elements designed specifically for vector-valued functions (really, differential forms), of the sort discussed in Anil Hirani's talk. I'll be working with these types of elements for a while, so expect to see more posts on this!
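To give a flavor of what this looks like in practice, here is a minimal sketch in (legacy) FEniCS/DOLFIN syntax of a magnetostatics solve with Nédélec edge elements. The mesh resolution, the current density, and the small regularization term are illustrative choices of mine, not necessarily what produced the picture above:

```python
from dolfin import *

# Solve curl(curl A) = J for the vector potential A with Nedelec edge
# elements, then recover the magnetic field B = curl A.
mesh = UnitCubeMesh(16, 16, 16)
V = FunctionSpace(mesh, "N1curl", 1)   # first-order Nedelec (edge) elements

A = TrialFunction(V)
v = TestFunction(V)
J = Constant((0.0, 0.0, 1.0))          # constant current through the cube

# Weak form of curl(curl A) = J; the small mass term removes the gauge
# freedom A -> A + grad(phi), which makes the pure curl-curl system singular.
eps = Constant(1e-6)
a = inner(curl(A), curl(v))*dx + eps*inner(A, v)*dx
L = inner(J, v)*dx

# Edge-element degrees of freedom are tangential components, so a homogeneous
# Dirichlet condition pins the tangential trace of A; this in turn forces
# n . curl A = 0, i.e. B is tangent to the boundary of the cube.
bc = DirichletBC(V, Constant((0.0, 0.0, 0.0)), "on_boundary")

A_sol = Function(V)
solve(a == L, A_sol, bc)

# Recover B and write it out for visualization.
B = project(curl(A_sol), VectorFunctionSpace(mesh, "CG", 1))
File("magnetic_field.pvd") << B
```

The point of these elements is that their degrees of freedom live on edges and encode tangential components, which is exactly the right home for 1-form-like quantities such as the vector potential.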

Monday, November 12, 2012

Hyperbolic Equations and Symplectic Integrators

One well-known analytic fact about the wave equation (hyperbolic equations) vs. the heat equation (parabolic equations) is that the latter always produces smooth solutions, for any time t > 0, even for very nonsmooth initial data, whereas the former has no smoothing properties at all. This is usually summed up as "Wave singularities propagate." Here is the video with the same (very nonsmooth) initial data that was given for the heat equation last time:

Not Honest Abe's hat...

Pretty ridiculously jagged, right? However, it turns out that in order to preserve the jaggedness (reasonably) faithfully, you have to tweak your numerical methods a bit. (We'll give a few words on why one would want to preserve jaggedness at the end.) The most basic method for discretizing time evolution problems is simply to replace time derivatives with their difference quotient approximations, i.e., the derivative without taking the limit: u'(t) ≈ (u(t+∆t) − u(t))/∆t. This enables us to solve for the future time in terms of the past: u(t+∆t) ≈ u(t) + u'(t)∆t. Now, the essence of differential equations is that u' is also given by some other function (vector field) f(t, u). Fixing ∆t to some standard "mesh size" h, and given the initial value, this yields an iteration process (recurrence relation):

u_{n+1} = u_n + h f(t_0 + nh, u_n)

and u(t_0) = u_0 is the given initial condition. This is called Euler's Method. It is well known, however, that this method is sensitive to the size of the time step (it is unstable). It turns out that one can get a stable method simply by considering backward differences, that is, replacing the u_n in the vector field f with u_{n+1} instead. Simple? Well, now you have u_{n+1} on both sides of the equation, and for all but the simplest f's, it can't easily be solved algebraically (so we have to do some more numerical computation). For linear equations, like the (spatially discretized) heat and wave equations, this turns out to amount simply to inverting a different matrix.
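To make the contrast concrete, here is a small Python/NumPy sketch of both update rules on a made-up linear system; the key point is that the backward step is a linear solve rather than an explicit formula:

```python
import numpy as np

# Both Euler variants on the linear system u' = M u, where M describes an
# undamped oscillator (a toy stand-in for a wave-like problem). Backward
# Euler rearranges u_{n+1} = u_n + h*M*u_{n+1} into the linear solve
# (I - h*M) u_{n+1} = u_n, hence "inverting a different matrix".
M = np.array([[0.0, 1.0],
              [-100.0, 0.0]])   # illustrative choice: oscillation frequency 10
h, steps = 0.01, 1000
I = np.eye(2)

u_fwd = np.array([1.0, 0.0])    # forward (explicit) Euler state
u_bwd = np.array([1.0, 0.0])    # backward (implicit) Euler state
for _ in range(steps):
    u_fwd = u_fwd + h * (M @ u_fwd)            # explicit update: amplitude grows
    u_bwd = np.linalg.solve(I - h * M, u_bwd)  # implicit update: amplitude decays

print("forward Euler norm: ", np.linalg.norm(u_fwd))
print("backward Euler norm:", np.linalg.norm(u_bwd))
```

Running this, the forward solution's amplitude grows steadily while the backward solution's decays, which foreshadows the dissipation story below.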

For wave equations, with two time derivatives, one usually does a classic (but, as far as I know, nameless) trick: setting v = u_t gives the first-order system u_t = v, v_t = ∆u, which we then treat as before. In these variables, backward Euler reads u_{n+1} = u_n + h v_{n+1} and v_{n+1} = v_n + h ∆u_{n+1}. It's a bit more complicated than that, because we also have to discretize the spatial part (with finite elements; more fun with FEniCS). The backward Euler discretization gives:


Hey, why is it a lot smoother?! As explained to me by Prof. Melvin Leok in his class on geometric integration, it is essentially because stability theory is based on the model problem of exponential decay, and so in the quest for stability, the methods introduce artificial dissipation (smoothing) terms. This means the solution loses energy. Energy conservation (which preserves the jaggedness) in the wave equation, however, is important for many applications. For example, if you simulate the solar system with such a method, you'll get some wildly off predictions, since energy loss there corresponds to orbits spiraling in. And even if we have a genuinely dissipative system, we probably don't want the wrong amount of dissipation (however, there's one case I can think of in which it might not matter: solving an elliptic problem by running a parabolic problem to equilibrium).

The fix in this particular case (Euler-type discretizations) is interesting and almost seems like a hack at first (but really, there's lots of good theory backing it!): we use forward Euler in one variable (say, u) and backward Euler in the other variable (v). Either way works (one is called Euler-A, the other Euler-B). This is called a symplectic integrator. In some sense, the dissipative behavior of the backward method and the instability (where things exponentially grow) of the forward method cancel out. Not completely, as it still has timestep sensitivity (too large a timestep causes our solution simply to blow up, even if it is smooth like last time). Things can be written down a lot more explicitly for simpler, one-particle systems such as a pendulum, or the simulation of one planet in the solar system, but that is a story for another entry.
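Still, a tiny sketch conveys the idea for the pendulum q' = p, p' = −sin(q) (the step size and initial angle are arbitrary illustrative choices, and whether the variant below is Euler-A or Euler-B depends on convention):

```python
import numpy as np

# Compare plain forward Euler with symplectic Euler for the pendulum
# q' = p, p' = -sin(q), tracking the energy H(q, p) = p^2/2 - cos(q).
def energy(q, p):
    return 0.5 * p**2 - np.cos(q)

h, steps = 0.05, 5000
q0, p0 = 1.0, 0.0
q_f, p_f = q0, p0            # forward Euler state
q_s, p_s = q0, p0            # symplectic Euler state

for _ in range(steps):
    # Forward Euler: both updates use the old values; energy steadily grows.
    q_f, p_f = q_f + h * p_f, p_f - h * np.sin(q_f)
    # Symplectic Euler: p uses the old q, but q then uses the *new* p.
    p_s = p_s - h * np.sin(q_s)
    q_s = q_s + h * p_s

print("forward Euler energy drift: ", energy(q_f, p_f) - energy(q0, p0))
print("symplectic energy drift:    ", energy(q_s, p_s) - energy(q0, p0))
```

The symplectic version's energy merely oscillates near its initial value instead of drifting off, which is exactly the conservation behavior we want for the wave equation.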

For fun, I leave you one last video of the symplectic case, but now showing only the color-coding and not the actual displacement in 3D, which might be more informative. Note how you can really see the waves "reflecting" at the boundary; that's the Dirichlet boundary condition.

This is your brain. On math.

Saturday, November 10, 2012

Fun with FEniCS, Part 3


That's the same heat equation, except now on a circular domain rather than a square. However, more interesting visualization can be done if we add an extra time derivative,

u_{tt} − ∆u = 0,

which is the wave equation. For a circular domain, it models vibrations of a drumhead. Here it is, with an initial condition that looks like the horseshoe condition but smoothed out by the previous heat equation (I saved the data and used it as initial data in the hyperbolic equation):


Generally, it was a lot more computationally intensive to do this instead of the heat equation, as I had to use a smaller timestep. Also, the general time scale on which the dynamics becomes interesting is longer. I wrote to file only once per 10 timesteps (otherwise I would have had 12000 frames, which would have given a video 3 minutes and 20 seconds long!). As for computational speed, using FEniCS was definitely a boon; if I had generated this in MATLAB, it would have easily taken a whole day. The circular equations is a great way to talk about Bessel functions, which will definitely make an appearance here someday. But suffice to say, their use was not necessary here.