## Monday, December 10, 2012

### A non-cubical example!

Ahh, finally, an example with the appropriate symmetry... I learned how to create some interesting 3D meshes. This example is essentially the same as the previous one, but with a different global mesh. I also changed the numerical method to something symplectic, and of higher order: Störmer-Verlet timestepping. The energy that is preserved here (or, really, only close to being preserved: symplectic integrators do not exactly conserve energy, but they do a lot better than more generic methods) is the field energy, ½(E, D) + ½(B, H). Or rather, the energy that would be (close to) preserved if it weren't for this oscillation being a driven one, caused by currents in the wire, rather than a natural oscillation given by some initial, non-equilibrium configuration (as in the vibrating drum post). It's time for a different example, perhaps the analogue of a cavity resonator.
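To see why Störmer-Verlet is such a good fit for this kind of problem, here's a little Python sketch (mine, not the actual FEniCS computation, which works on the full mass/stiffness matrix system): the method applied to the toy oscillator u'' = −u. The discrete energy wobbles slightly but never drifts, which is exactly the behavior described above.

```python
import math

def stormer_verlet(u, v, h, steps):
    """Stormer-Verlet (leapfrog) for u'' = -u, i.e. force f(u) = -u."""
    for _ in range(steps):
        v_half = v - 0.5 * h * u       # half kick: v += (h/2) f(u)
        u = u + h * v_half             # full drift with the midpoint velocity
        v = v_half - 0.5 * h * u       # half kick using the updated u
    return u, v

def energy(u, v):
    # toy analogue of the field energy 1/2(E,D) + 1/2(B,H)
    return 0.5 * (u * u + v * v)

h, steps = 0.05, 2000                  # integrate out to t = 100
E0 = energy(1.0, 0.0)
u, v = stormer_verlet(1.0, 0.0, h, steps)
print(abs(energy(u, v) - E0))          # stays tiny: no secular energy drift
```

Over 2000 steps the energy error stays on the order of h², whereas a dissipative method like backward Euler would have bled most of the energy away by then.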

## Friday, December 7, 2012

### More Vector Finite Element Goodness

Would you like corn on the cob with that?

In case you weren't satisfied with steady-state behavior in the past couple of posts, I finally bring you some dynamics. This is a picture of the magnetic field caused by a central wire with a given alternating current, surrounded by air (although with boundary conditions requiring the field to be tangent to the surrounding cube, so the vector field gets a bit squashed toward the outside as compared to a truly cylindrically symmetric problem). Current in the wire is modeled by simply taking a current density which is nonzero (and time-varying) within a small radius of the z-axis. The mesh is significantly more refined in that region, both to better account for the discontinuity and because the wire is intended to be round. It takes a little while for the field to reach steady state in terms of amplitude, so don't be disappointed if the vectors aren't very visible initially. If you look closely (and admittedly, you will have to either really look, or play the video in slow motion), you can see that there's a slight delay between what happens at the wire and what happens some distance away from it. Here c = 3 (so today, E = 9m), that is, it takes 1/3 of a unit of time (about 2 seconds of the video) for the influence of the wave to traverse 1 cube length. The whole video is 4 units of time long. The original equations as presented in the paper of Bossavit are in fact geared more toward static situations, in which we can neglect the time derivative term for the electric displacement (the one that Maxwell added), but it turns out that adding that term back makes the finite element formulation much more natural; in fact, when reducing it to mass and stiffness matrices and nodal vectors, the time-stepper we use is exactly the same as in the backward Euler discretization described a couple of posts ago. Without that term, I had to use two different mass matrices. Here's a more interesting example:

And a donut too!

Here, we've modified the setup to include a conductor in the shape of, you guessed it, a torus. The significance of it being "conductive" is that an extra term, Ohm's law, is added in. It affects the dynamics because the field causes currents to flow inside the torus, which generate more field, in different directions. This is expressed in the program as a piecewise function which is zero outside the toroidal region and 1 inside it, and the stiffness matrix is modified to account for this extra term. For similar reasons as with the wire above, the mesh is now also significantly more refined in the region of the torus (otherwise it kind of looks like a square block), and the refinement also helps handle the discontinuity, this time in the conductivity. Next up: seeing if there's a way to go about this using symplectic methods!

## Sunday, November 25, 2012

### Happy Thanksgiving from Nested Tori!

It looks kind of turkey-ish, right? Some numerical imperfections give it a green tail, anyway. It's supposed to be the magnetic field of a spinning solid ball of charge. In some sense, this problem started my quest to become more numerical: I remember being frustrated that I could not rattle off what the magnetic field of this configuration of charge was, despite knowing a whole bunch of theory and equations. And attacking the problem analytically was, to use a different species of fowl, a wild goose chase. I did eventually find one possible analytic expression for it, using multipole expansions, but that's still a far cry from being able to come up with interesting visualizations (multipole expansions and spherical harmonics DO have cool visualizations, which we will definitely post here someday). Gobble gobble! The field is actually similar to that of a dipole, and aside from the boundary condition that it has to be tangent to the big enclosing cube, the vector field is tangent to, you guessed it, nested tori.

## Friday, November 16, 2012

### Sneak Peek: Vector Finite Elements

Vector-valued differential equations are often felt to be too advanced to mention in elementary differential equations courses. Part of the problem, of course, is that many of the equations actually involve things like differential forms and other vector-like quantities that are best explained in terms of differential geometry, which would take an introductory course too far afield. The following picture is the magnetic field of current flowing through a solid cube (a constant current coming approximately out of the plane of the picture), with the field required to be tangent to the boundary. The method of solution involves special kinds of finite elements designed specifically for vector-valued functions (really, differential forms), as discussed in Anil Hirani's talk. I'll be working with these types of elements for a while, so expect to see more posts on this!

## Monday, November 12, 2012

### Hyperbolic Equations and Symplectic Integrators

One well-known analytic fact about the wave equation (hyperbolic equations) vs. the heat equation (parabolic equations) is that the latter always results in smooth solutions for any time t > 0, even for very nonsmooth initial data, whereas the former has no smoothing properties at all. This is usually summed up as "Wave singularities propagate." Here is the video with the same (very nonsmooth) initial data that was given for the heat equation last time:

Not Honest Abe's hat...

Pretty ridiculously jagged, right? However, it turns out that in order to actually (reasonably) faithfully preserve the jaggedness, you have to tweak your numerical methods a bit to achieve this. (We'll give a few words on why one would want to preserve jaggedness at the end). The most basic method to discretize time evolution problems is simply to replace time derivatives with their difference quotient approximations—the derivative without a limit: u'(t) ≈ 1/∆t (u(t+∆t) - u(t)). This enables us to solve for the future time in terms of the past: u(t+∆t) ≈ u(t) + u'(t)∆t. Now the essence of differential equations is that the u' is also given by some other function (vector field) f(t,u). Fixing the ∆t to some standard "mesh size" h, and having the initial value, this gives us an iteration process (recurrence relation):

uₙ₊₁ = uₙ + h f(t₀ + nh, uₙ)

and u(t₀) = u₀ is the given initial condition. This is called Euler's Method. It is well-known, however, that this method is sensitive to the size of the time step (it is unstable). It turns out that one can get a stable method by simply considering backward differences, that is, replacing the uₙ in the vector field f with uₙ₊₁ instead. Simple? Well, now you have uₙ₊₁ on both sides of the equation, and for all but the simplest f's, it can't be easily solved algebraically (so we have to do some more numerical computations). For linear ODEs, like the heat and wave equations, this turns out to simply mean inverting a different matrix.
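The stability difference is easy to see on the scalar model problem u' = −λu, where the backward (implicit) equation can be solved algebraically. Here's a minimal Python sketch (mine, just for illustration) with a deliberately oversized timestep:

```python
def forward_euler(lam, u0, h, steps):
    # u_{n+1} = u_n + h * f(u_n), with f(u) = -lam * u
    u = u0
    for _ in range(steps):
        u = u + h * (-lam * u)
    return u

def backward_euler(lam, u0, h, steps):
    # u_{n+1} = u_n + h * f(u_{n+1})  =>  u_{n+1} = u_n / (1 + h*lam)
    u = u0
    for _ in range(steps):
        u = u / (1.0 + h * lam)
    return u

lam, u0, h = 10.0, 1.0, 0.3        # |1 - h*lam| = 2 > 1: too big for forward Euler
print(abs(forward_euler(lam, u0, h, 50)))   # blows up exponentially
print(abs(backward_euler(lam, u0, h, 50)))  # decays toward 0, as the true solution does
```

The forward iterate gets multiplied by (1 − hλ) = −2 each step, while the backward iterate gets multiplied by 1/(1 + hλ) = 1/4, which is why the implicit method stays stable for any positive timestep.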

For wave equations, with two time derivatives, one usually does a classic (but as far as I know, nameless) trick, setting v = uₜ, giving us the system uₜ = v; vₜ = ∆u, and then treating it as first-order. It's a bit more complicated than that, because we also have to discretize the spatial part (with finite elements: more fun with FEniCS). The backward Euler discretization gives:

Hey, why is it a lot smoother?! As explained to me by Prof. Melvin Leok in his class on geometric integration, it is essentially because stability theory is based on exponential decay (the model problem), and so in the goal to achieve stability, the methods actually introduce artificial dissipation (smoothing) terms. This means the solution loses energy. Energy conservation (which preserves the jaggedness) in the wave equation, however, is important for many applications. For example, if you simulate the solar system, you'll get some wildly off predictions, since energy loss there corresponds to orbits spiraling in. And even if we have a dissipative system, we probably don't want the wrong amount (however, there's one case I can think of in which it might not matter: solving an elliptic problem by running a parabolic problem to equilibrium).

The fix in this particular case (Euler-type discretizations) is interesting and almost seems like a hack at first (but really, there's lots of good theory backing this!): we use forward Euler in one variable (say, u) and backward Euler in the other variable (v). Either way works (one is called Euler-A, the other Euler-B). This is called a symplectic integrator. In some sense, the dissipative behavior of the backward method and the instability (where things exponentially grow) of the forward method cancel out. Not completely, as it does have timestep sensitivity (too large a timestep causes our solution to simply blow up, even if it is smooth like last time). Things can be written down a lot more explicitly for simpler, 1-particle systems such as a pendulum, or the simulation of one planet in the solar system, but that is a story for another entry.
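For a concrete taste of the cancellation, here's a toy Python comparison (mine, on the oscillator u'' = −u rather than the full wave equation): Euler-A updates u forward and then v using the new u, while fully backward Euler visibly drains the energy.

```python
def symplectic_euler(u, v, h, steps):
    """Euler-A for u'' = -u: forward step in u, then v uses the *new* u."""
    for _ in range(steps):
        u = u + h * v
        v = v - h * u          # using the updated u is what makes this symplectic
    return u, v

def backward_euler_wave(u, v, h, steps):
    """Fully implicit Euler; the 2x2 linear system is solved exactly."""
    for _ in range(steps):
        u, v = (u + h * v) / (1 + h * h), (v - h * u) / (1 + h * h)
    return u, v

def energy(u, v):
    return 0.5 * (u * u + v * v)

h, steps = 0.05, 2000          # integrate out to t = 100
print(energy(*symplectic_euler(1.0, 0.0, h, steps)))     # hovers near 0.5
print(energy(*backward_euler_wave(1.0, 0.0, h, steps)))  # has decayed toward 0
```

Backward Euler multiplies the energy by 1/(1 + h²) every step, so after 2000 steps almost nothing is left; the symplectic iterates instead exactly conserve the nearby quadratic u² + v² + huv, which keeps the true energy oscillating within O(h) of its initial value forever.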

For fun, I leave you one last video, which is the symplectic case, but now with only the color-coding and not the actual displacement in 3D, which might be more informative. Note how you can really see the waves "reflecting" at the boundary—that's the Dirichlet boundary condition.

This is your brain. On math.

## Saturday, November 10, 2012

### Fun with FEniCS, Part 3

That's the same heat equation, except now on a circular domain rather than a square. However, more interesting visualization can be done if we add an extra time derivative,

uₜₜ − ∆u = 0,

which is the wave equation. For a circular domain, it models vibrations of a drumhead. Here it is, with an initial condition that looks like the horseshoe condition but smoothed out by the previous heat equation (I saved the data and used it as initial data in the hyperbolic equation):

Generally, this was a lot more computationally intensive than the heat equation, as I had to use a smaller timestep. Also, the general time scale on which the dynamics become interesting is longer. I wrote to file only once per 10 timesteps (otherwise I would have had 12000 frames, which would have given a video 3 minutes and 20 seconds long!). As for computational speed, using FEniCS was definitely a boon; if I had generated this in MATLAB, it would have easily taken a whole day. The circular domain is a great excuse to talk about Bessel functions, which will definitely make an appearance here someday. But suffice it to say, their use was not necessary here.

## Sunday, October 28, 2012

### Fun Equations of the Torus (FEniCS part 2)

The story so far: we wished to create a horseshoe-like initial condition on the torus. We needed to (1) come up with the equation for the bounding curve, and (2) distinguish between the inside and outside. One tool is perfect for the job: level sets (another common name for them is isosurfaces, along with more dimension-specific terms like level contours or level surfaces), that is, surfaces (or curves) of the form f(x,y,z) = c (or f(x,y) = c). To do this, we first start with the classic parametrization of the torus as a surface of revolution:

x = (R + r cos(u)) cos(θ)
y = (R + r cos(u)) sin(θ)
z = r sin(u),

with 0 ≤ u ≤ 2π and 0 ≤ θ ≤ 2π. What this is saying is that we take a circle of radius r in a vertical plane (its angle coordinate is u) and bring it out so its center is at radius R from the origin and still lying in the xy-plane (that's why the R only affects the term with r cos(u)). Then we revolve it around the z-axis, which is where the cos(θ) and sin(θ) come in. We need to come up with an equation in terms of x, y, and z.

If we square these three coordinates and add, we get

x² + y² + z² = (R + r cos(u))² + r² sin²(u) = R² + 2Rr cos(u) + r².

However, r cos(u) is simply the extra distance one has to go from the radius R to the coordinates (x,y,0), that is, r cos(u) = ρ − R, where ρ² = x² + y² defines the usual polar coordinate (click to enlarge):

This gives us

x² + y² + z² = (R + r cos(u))² + r² sin²(u) = R² + 2R(ρ − R) + r² = r² + 2Rρ − R².

Moving everything except 2Rρ to the other side, and dividing by 2R, we finally get

(x² + y² + z² + R² − r²)/(2R) = ρ,

or

(x² + y² + z² + R² − r²)²/(4R²) = x² + y², which finally gives us

f(x,y,z) = (x² + y² + z² + R² − r²)²/(4R²) − x² − y² = 0.

This is the Cartesian equation of the torus (really, a Cartesian equation, because there are several). Of course, to be really proper, we should include the dependence on R and r in there too (by including that in Mathematica, I got to see how the surfaces change as I vary each parameter). The torus plus its solid interior (a donut!) is given by replacing the = with a ≤ (note also that we can determine which side is inside and which is outside by computing the differential df, also variously known as the gradient vector: it points in the direction of increasing f). To get nearby shapes, we can consider other level sets f(x,y,z) = c for c near 0. The picture introducing this post is precisely this, for c in the interval [-1,1] with increments of 1/4. Notice that the largest two of these have no hole (look carefully at the upper center part of the picture). So it's almost nested tori.
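The Cartesian equation is easy to sanity-check numerically. Here's a small Python sketch (mine; R = 2, r = 1 are arbitrary choices for the check): points produced by the parametrization land exactly on the zero level set, the center of the tube gives f < 0 (inside), and the origin gives f > 0 (outside, in the hole).

```python
import math

def torus_f(x, y, z, R=2.0, r=1.0):
    # f(x,y,z) = (x^2 + y^2 + z^2 + R^2 - r^2)^2 / (4 R^2) - (x^2 + y^2)
    return (x*x + y*y + z*z + R*R - r*r) ** 2 / (4 * R * R) - (x*x + y*y)

def torus_point(u, theta, R=2.0, r=1.0):
    # classic surface-of-revolution parametrization
    x = (R + r * math.cos(u)) * math.cos(theta)
    y = (R + r * math.cos(u)) * math.sin(theta)
    z = r * math.sin(u)
    return x, y, z

print(torus_f(*torus_point(0.3, 1.2)))  # essentially 0: on the surface
print(torus_f(2.0, 0.0, 0.0))           # negative: center of the tube
print(torus_f(0.0, 0.0, 0.0))           # positive: the hole is "outside"
```

The sign convention agrees with the gradient remark above: f increases as you move away from the core circle of the tube.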

The interesting thing about using a level set description of a family of surfaces is that nothing special happens when some topological transition (a shape with a hole becoming a shape without a hole) occurs. Well, almost nothing special: the transition happens precisely when c is not a regular value of f, that is, when there are one or more points p on the level set at which the gradient df vanishes. Such points p are called critical points (and they are the starting point of Morse theory). Making use of level sets and tracking various geometric dynamics (surfaces, interfaces, etc.) using these concepts in numerical methods is the level set method. The trickiness, however, is finding the right level set function to use, and it can be more of an art than a science (although I don't necessarily think the strict separation of art and science is that great of a thing. But that's a story for another day).

Back to our original problem, however. We wanted to see what the cross section looks like and how it changes as we slice a bagel at different angles. This is quite simple to do in terms of level sets (which is why we went through all the trouble to derive it). We can imagine the slicing plane to be z = mx + b. The slope m is the tangent of the angle α between the slicing plane and the horizontal xy-plane. And, of course, we wish it to lie on the surface of the torus, so that we have a system of equations

z = mx + b,
f(x,y,z) = (x² + y² + z² + R² − r²)²/(4R²) − x² − y² = 0.

The key is, then, to substitute one into the other, getting

g(x,y) = (x² + y² + (mx + b)² + R² − r²)²/(4R²) − x² − y² = 0.

If we consider this as a function of x and y only, that is, a subset of the plane, it gives the xy-projection of the 3D slice. To get the interior of the cross section, we use a ≤ instead of a = as before. The values chosen here are r = 1, R = 1.61803398874 (the Golden Ratio), m = 1/2 and b = 1/3. And that gives us our inside vs. the outside of the domain we used in the heat equation. If we had wanted to make it more circular, we could eliminate the projection by rotating the entire picture by the angle α. This makes the equations much more complicated, however.
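With the post's parameter values, the inside/outside test is one line of Python. A quick sketch (mine, just checking the construction): the point (0, R) projects from a point deep inside the slanted bagel slice, while the origin sits in the hole.

```python
import math

R = (1 + math.sqrt(5)) / 2   # the Golden Ratio, as in the post
r, m, b = 1.0, 0.5, 1.0 / 3.0

def g(x, y):
    # substitute the slicing plane z = m*x + b into the torus level set f
    z = m * x + b
    return (x*x + y*y + z*z + R*R - r*r) ** 2 / (4 * R * R) - (x*x + y*y)

print(g(0.0, R))    # negative: (0, R, b) is within distance b < r of the core circle
print(g(0.0, 0.0))  # positive: the origin projects into the hole
```

Feeding exactly this sign test to the initial-condition function is what produced the horseshoe domain in the heat equation post.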

## Thursday, October 25, 2012

### Fun with FEniCS

Today's entry is also inspired by another talk. We had Anders Logg of Simula Research, one of the developers of The FEniCS Project, come by, all the way from Norway. FEniCS is one of the main pieces of software I am learning to use. Now, I started the day not thinking I would really encounter nested tori (and don't think, of course, that every post is going to be about them!). But believe it or not, they made a completely unexpected side entrance. I was testing out FEniCS's capabilities for time evolution (simply solving the heat equation on a square with zero Dirichlet boundary conditions) and got it running with some trivial initial data. But as another interesting test run, I figured I should try some nontrivial initial data. My basic goal was to see if I could recreate this (from Wikipedia, LOL):

Now, where was I gonna get that horseshoe?

From nested tori, of course. If you slice a torus at an angle (this can easily be tested out with a good chef's knife and a bagel), you may get completely different-looking slices depending on the angle. In the plane of the bagel slice, of course, you get an annulus... the one that you spread cream cheese on. Completely perpendicular, you will get either two circle-like curves (if the knife cuts through the hole), a figure 8 (if the knife just grazes the hole), or a single oval-like curve (if the knife cuts the side). Changing the angle will morph any of these configurations into an annulus. For example, the two circles will "reach out to touch each other" as the angle changes, and then the other ends start wrapping around the hole. Just before those ends join, it becomes something like a letter "C" or a horseshoe (pause the movie about halfway in and look at the red):

Anyway, supplying the initial data as 1 inside the horseshoe and 0 outside (the interpolating part of the software actually enforces continuity, albeit with some very steep slopes), we get this:

Now just how exactly did I determine what was "inside" the horseshoe and "outside" it? I used level sets. That's where the nested tori come in. But I will have to leave my dear readers in suspense, and tell you about it in another entry. But this has been quite an adventure!

## Wednesday, October 24, 2012

### Singular Value Decomposition Delights, Part 2

First off, congratulations to math homie and guest blogger Adam on his advancement to candidacy! Now back to our explorations from last week. The goal of the previous vector bundle problem is to consider a vector bundle over the circle, represented as planes over an interval such that the fibers at one end of the interval are identified with those at the other end using some linear transformation T. In order to visualize this, we adopt the following approach: we put our base space circle in the xy-plane and arrange the fiber planes perpendicular to the circle (planes θ = constant), like so:

and then we choose a reference circle in one of these planes (say, of radius 1). Then as we move around the main circle, we consider the surface swept out as we carry that circle with us and apply the interpolated transformation

A(t) = exp(tξ) Σᵗ exp(−tη)

to that circle as we go. When we come around to the beginning, this circle should be deformed by the original transformation T. We do the experiment with the matrix

⎡ 0.498502   0.214052 ⎤
⎣ 0.0433899  0.60181  ⎦

which is a rotation of −2.62394 radians (−150.3°), then a stretching with singular values 0.69543 and 0.41803, and finally a rotation of 2.24233 radians (128.5°). We start with the identity matrix, and as we move around, rotate by the fraction t of the first angle, stretch by the singular values raised to the power t, and finally rotate by the fraction t of the second angle. So at t = 1, a full revolution about the main circle, we get the full operator T applied to the circle. This is the surface swept out:

And if we kept going, it would spiral inward more and more, which when doing a cutaway view (essentially, starting only with, say, 3/4 or 1/2 of the initial circle), we get the original picture shown in Part 1. But actually, it is more interesting to see what happens if we just unfurl it in a straight line (in the spirit of the original definition as an identification space)... and let the mapping continue past t = 1. The result is the title picture, which shows what happens when we proceed to t = 3 (which corresponds to the transformation T3). Forget donuts, let's do pasta (more comprehensively, here)!
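The rotation-stretch-rotation interpolation is easy to reproduce in plain Python using the closed-form SVD of a 2×2 matrix. A sketch of mine (not the original code; the recovered angles may differ from the post's by an equivalent sign/ordering choice, but the singular values and the interpolation endpoints agree):

```python
import math

def rot(a):
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def svd2x2(T):
    """Closed-form 2x2 SVD: T = R(phi) @ diag(sx, sy) @ R(theta)^T."""
    (a, b), (c, d) = T
    e, f = (a + d) / 2, (a - d) / 2
    g, h = (c + b) / 2, (c - b) / 2
    q, rr = math.hypot(e, h), math.hypot(f, g)
    sx, sy = q + rr, q - rr                    # singular values
    a1, a2 = math.atan2(g, f), math.atan2(h, e)
    phi, theta = (a1 + a2) / 2, (a1 - a2) / 2  # the two rotation angles
    return phi, (sx, sy), theta

def interpolate(T, t):
    """A(t) = R(t*phi) diag(sx**t, sy**t) R(t*theta)^T, so A(0) = I, A(1) = T."""
    phi, (sx, sy), theta = svd2x2(T)
    S = [[sx ** t, 0.0], [0.0, sy ** t]]
    Rth_T = [[math.cos(t * theta), math.sin(t * theta)],
             [-math.sin(t * theta), math.cos(t * theta)]]
    return matmul(matmul(rot(t * phi), S), Rth_T)

T = [[0.498502, 0.214052], [0.0433899, 0.60181]]  # the matrix from the post
print(svd2x2(T)[1])  # singular values, approximately (0.69543, 0.41803)
```

Applying `interpolate(T, t)` to the reference circle for t running from 0 to 1 (and onward to 3 for the pasta picture) is exactly the sweep described above; letting t run negative would unwind it instead.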

Of course, we know that this shows our bundle is trivial; the only way to get a nontrivial bundle this way is to have a transformation with a negative singular value, in which case, since we can't raise negative numbers to real powers and stay in the reals, the above construction fails, as it should (since a nontrivial bundle fails to have a continuous global frame). So what good is this, really, besides making things pretty? As it turns out, such oddly behaving, yet topologically trivial, bundles do have some interesting, different geometry. It is encoded in the notion of a connection: a connection will reveal nontrivialities in the geometry that topology cannot (in fact, Riemannian geometry starts out by considering a connection specifically on the tangent bundle, the Levi-Civita [LAY-vee CHEE-vee-tah] connection). This is an interesting topic for yet another part (I'm not promising it soon, however!).

## Friday, October 19, 2012

### An unexpectedly delightful application of the Singular Value Decomposition

We begin this entry with a bit of commentary on numerical analysis. If there's anything that I think should be taught more in math classes, it is the Singular Value Decomposition of a matrix. Every real n × m matrix A can be factored as A = UΣV*, where U and V are orthogonal and Σ is "diagonal" (but possibly rectangular). It is in fact more useful than the much more often discussed eigenvalues and eigenvectors (which are often computed for symmetric matrices, where the two concepts coincide anyway). It would probably do a great amount of good for people's intuition in linear algebra classes. For some reason, it is completely ignored in a lot of abstract linear algebra books, somehow having acquired a reputation of being "too applied," as if it would sully the purity of their most precious subject! I never heard it talked about in any upper division or graduate math class, until I somewhat belatedly discovered that numerics was really the best place to go to explore my interests (better late than never). But the SVD is absolutely ubiquitous in science. The first place I heard of it was a linguistics class, where the prof talked about some paper on latent semantic analysis. As I understood it, the SVD was essentially used to compute axes of word meanings(!!) and the information used to categorize the words. That was really a long time ago, so forgive me, linguists, if this seems a gross mischaracterization. In any case, it is also intimately connected with least squares (briefly mentioned in the last entry).

At this point, the sufficiently attention-spanned reader (all too rare these days!) is most certainly scratching their head (yes, this blog will use singular they) wondering what that has to do with the very out-sized picture. First of all, the picture generated actually is a trivial case in which the SVD isn't actually needed. But if it wasn't for the SVD, I would have never been inspired to make a picture of my findings, so it deserves the tribute. It is inspired by trying to explore the concept of orientation, and to generalize constructions like the famous Möbius Strip:

We can generalize the Möbius strip by the use of vector bundles and identification spaces. We get the topological twist in a Möbius strip by gluing the ends of the usual strip with a flip. If we allow ourselves some imagination, one way to generalize is to first imagine the Möbius strip as being constructed from a strip that is infinitely tall ([0,1] × ℝ) to get the Möbius vector bundle. Now when we "glue" the ends together, we can still flip it (identify (0,x) with (1,-x)). Or, we can stretch it and identify, say, (0, x) with (1, 2x), the "strained cylinder" (as Roger Penrose calls it in The Road to Reality). If we have to restrict ourselves to a finite picture, what happens here is that the strip goes around in a loop and reconnects to itself twice as high, so it looks discontinuous. But it's not. It turns out to be, in fact, a cylinder, and it is in fact orientable.

Taking a cue from a problem in Spivak's excellent 5-volume series, A Comprehensive Introduction to Differential Geometry, we can imagine that instead of starting out with just a line for the vertical dimension, we could have a whole space ℝⁿ, and identify the initial and final spaces by an invertible linear transformation T : ℝⁿ → ℝⁿ, namely, identifying (0,v) with (1,Tv). If T has positive determinant, then we have not made anything topologically new: it's S¹ × ℝⁿ. But to actually (constructively) prove this is a more difficult task. One way it can be done is by choosing a global frame: n continuous vector fields that form a basis at every point. And one way to do that is to somehow connect the transformation T to the identity. This can be done any number of ways, since, in fancy-schmancy speak, the orientation-preserving elements of the general linear group form a path-connected component containing the identity. But one quickly implementable realization is to use the SVD! Namely, factor the matrix T as UΣV* with U and V orthogonal and Σ the diagonal matrix of singular values, which must all be positive if the matrix has positive determinant. Since it is a diagonal matrix of positive numbers, Σ can be raised to any real power t. And U, V are respectively exponentials of some antisymmetric matrices ξ and η (which are not unique). So given some basis vectors vᵢ at 0, we consider the section

t ⟼ (t, exp(tξ) Σᵗ exp(−tη) vᵢ).

At time t = 0 it is just (0, vᵢ). At time t = 1, it is (1, UΣV*vᵢ) = (1, Tvᵢ), which is identified with (0, vᵢ). (Warning: although we could make things more notationally uniform by writing Σᵗ = exp(tσ), where σ is the diagonal matrix consisting of the logarithms of the original singular values, we must resist the temptation to compute the result as exp(t(ξ+σ−η)), despite it being called an "exponential" map, as that only works when all the matrices commute!!) The path is continuous, and so we have proved that there is indeed a continuous, globally defined frame, and thus our bundle is trivial. Beautiful? Well, where are the pictures? The picture above is given by considering the case n = 2, that of a plane, and the transformation given by scaling by 2. If we take a circular set in the initial plane, we can see how this transformation acts (here U and V are the identity and Σ is just twice the identity), and if we make it go 'round and 'round (without identifying), we get something that is reminiscent of nested tori (but not really, as it is just one torus spiraling into itself!). We're not done with the awesomeness of this example quite yet, though: I've got some other math to get back to! But there are more interesting visualizations from this example, so stay tuned!

## Wednesday, October 17, 2012

### Some Unnested, Sprinkled Donuts

Great talk on Tuesday by Professor Anil Hirani on harmonic differential forms and, more generally, on bringing things like geometry, topology, and algebra back into numerical analysis. Here's a sampling of what was talked about.

Harmonic differential forms are the basic topological obstructions to finding solutions of Poisson's equation (∆u = f). Solutions to Poisson's equation represent steady-state phenomena (the scalar version representing heat, and the vector or differential form versions representing flows). On a closed surface, if we have a source term f, it must balance out (sum to zero over the whole surface), so that as much material is being created as is being destroyed. Otherwise, of course, we could not have a steady state (more and more stuff would just keep getting added!). However, if the domain has some kind of topological nontriviality, such as holes (which could be represented as obstacles of some sort), different kinds of steady-state solutions can exist without sources. This is because stuff can flow "around" the holes, or "out of a cavity" (even in the idealized case that the cavity is a single point and infinitesimally small). But if the holes are not there, then it is often true that zero is the only field that satisfies the (reasonable) demand that the flow be continuous there. Beyond donuts, we also have planar domains with holes (lotus root slices), whose harmonic fields do vanish somewhere.

And here is an example of flow out of a cavity:

We call this example the Koosh [Thanks to fellow math homie Adam for locating the slide with this... I didn't get a pic of it during the talk. He'll be one of our guest bloggers from time to time!]

The topological object that describes these fields more precisely is the de Rham cohomology of the space, equal to the "locally constant fields modulo the trivially constant ones," as described by Bott and Tu in their nice book Differential Forms in Algebraic Topology (Prof. Hirani recommends that if there's any class one should take in graduate school, it's Algebraic Topology. I have my reservations about that, but he also said that we might not know what he is talking about when he says that!). In any case, there is a notion of a closed form: a form which, when integrated over a surface of the appropriate dimension, gives a value unchanged under small perturbations of the surface that hold its boundary fixed. Too large a perturbation may cause the surface to cross over one of these topological nontrivialities, which will in fact cause a jump in the value of the integral. On the other hand, if the form is exact, it has what is known as a potential, and the integral is unchanged regardless of the surface, so long as it has the same boundary, by virtue of Stokes's Theorem.

As abstract as this all seems, it has some concrete interpretations, for example, rankings of sports teams (or the Netflix Problem), where everybody ranks things by comparing two items at a time, and we want to figure out some sort of global, overall ranking. Cohomology tells us the level of inconsistency one may achieve: one can get into situations where A is better than B, which is better than C, which is better than A. This also has applications in economics. In order to take advantage of all of this for applications, we obviously have to have a method of computing these things. Geometry allows us to use a very common, bread-and-butter tool of scientists everywhere: least squares. Finding a harmonic form is equivalent to finding the form of minimal norm within a given cohomology class. It is quite beautiful how all these techniques work together!
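The ranking example can be made concrete on the smallest possible cycle. Here's a toy Python sketch (my own construction, not from the talk): on a triangle of pairwise comparisons, least squares finds the best-fitting "potential" ranking, and the leftover residual along the cycle is exactly the harmonic (inconsistent) part.

```python
def hodge_rank_triangle(y, iters=2000, lr=0.1):
    """Least-squares fit of node potentials s to edge margins y on the
    3-cycle 0->1, 1->2, 2->0, via gradient descent on sum (s_j - s_i - y_ij)^2.
    Returns the potentials and the cyclic (inconsistent) residual."""
    edges = [(0, 1), (1, 2), (2, 0)]
    s = [0.0, 0.0, 0.0]
    for _ in range(iters):
        grad = [0.0, 0.0, 0.0]
        for (i, j), yij in zip(edges, y):
            res = s[j] - s[i] - yij     # misfit on this edge
            grad[j] += 2 * res
            grad[i] -= 2 * res
        s = [sk - lr * gk for sk, gk in zip(s, grad)]
    cyclic = [yij - (s[j] - s[i]) for (i, j), yij in zip(edges, y)]
    return s, cyclic

# margins around the cycle sum to 2 + 1 - 1 = 2, not 0, so no global
# ranking fits exactly; the residual spreads that 2 evenly: 2/3 per edge
s, cyclic = hodge_rank_triangle([2.0, 1.0, -1.0])
print(cyclic)  # each entry is approximately 2/3
```

The cycle sum is the discrete analogue of integrating a form around a hole: a nonzero value means the data has a harmonic component that no gradient (no potential-based ranking) can explain.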

## Monday, October 15, 2012

### Welcome to Nested Tori!

The goal of this blog is to collect some AWESOME visualizations in mathematics and other scientific disciplines, and to show how it is possible to get them. Even if you aren't the biggest math fan, stick around anyway and enjoy the view!

Clifford tori: a foliation of $S^3$, stereographically projected to $\mathbb{R}^3$

Another view, rendered in Mathematica, this time sliced with a vertical plane rather than by restricting parameter values directly

More details here!