Symmetry is an old topic of fascination for mathematicians. I know I've recommended Weyl's book, Symmetry, before; let me now do so again. It's a short book, written for a lay audience, and it describes types of symmetries in art, nature, and mechanics. While Weyl writes very clearly, the book does reflect a very deep sort of knowledge; I have another book by Weyl on my shelf giving a more mathematical treatment of the classical groups, and his treatment of symmetry groups in quantum mechanics is considered classic.
The notion of a perturbation is similarly old, mostly because the real world tends to be full of problems that are Really Hard, and the only way anybody knows to tackle them is to pretend they are Tractable (or perhaps Trivial on a good day). This usually means dropping small terms that make the problem hard, and then analyzing the effect of the missing bit. Sometimes it's possible to correct the answer to account (at least partially) for the effect of the missing term; and sometimes the best one can hope for is to figure out roughly how bad the mistake was. The business of getting rid of the hard parts of a problem by estimating or bounding them is at the heart of mathematical analysis, together with the notion of a limiting process (which sometimes allows estimates to be parlayed back into equalities).
Elementary courses on differential equations tend to emphasize a small set of equations which can be solved by hand. While this seems sensible to me -- after all, we choose our models rather than accept them as gifts from on high -- it does have the unfortunate side effect that many otherwise well-educated people fail to realize how fundamentally hard it is to get exact analytical solutions. Differential equations with solutions in terms of elementary functions are exceedingly rare; and equations for which such a solution can be found and understood by a reasonably educated human are rarer still. Nevertheless, a colleague of mine, an engineer who should have known better, was once inspired to ask why I didn't just solve a particular equation analytically; and when I explained to him that the integration was provably intractable, he snorted in apparent disbelief, shrugged, and observed that at least computers make it trivial to compute numerical solutions. I'm not sure whether I disabused him of this notion in our subsequent conversation, but I surely tried.
Those equations which can be analyzed at all are usually analyzed by
exploiting symmetries, which deliver interesting qualitative
information even in the cases when they don't lead to a full solution.
Fourier analysis depends on translational symmetry; separation of
variables depends on a certain symmetry in the shape of the domain
where an equation lives; and familiar basic conservation laws
(conservation of energy, momentum, etc) are closely linked to other
symmetries (a fact proved by Emmy Noether). Dimensional analysis (or
the study of "dynamic similarities") is another type of symmetry
reduction, though most people who know what dimensional analysis is
probably have never heard of Sophus Lie or Emmy Noether; the matter is
only confused by the fact that dimensionless parameters are often
called "dimensionless groups," a name which bewildered me for quite a
while.

Equations which are almost symmetric are immensely interesting.
Symmetric systems show all sorts of behaviors that don't usually occur
if there's no symmetry -- such behaviors are "nongeneric" -- and
a perturbation which changes the symmetry therefore often alters the
solution enormously. At the same time, a lot of both the natural
world and the engineered world is almost -- but not quite --
symmetric; and so beams buckle, atoms bond to form molecules, shutters
buzz in a strong breeze, whirlpools form when the sink drains, and
dropped sheets of paper go flying all over the place. Well, my
papers fly every which way when I drop them;
perhaps your papers drop directly to the floor, in which case I can
only guess that you use really thick paper or that you live in a very
rarified atmosphere indeed. Either way: huzzah for symmetry breaking!
It makes the world a more interesting place.
Of course, to an unwary user who would like to simulate a physical system, symmetry breaking can herald interesting times indeed.
To solve a continuous problem on a computer, one "discretizes"
the problem: in some way, we have to approximate an
infinite-dimensional problem by something which is finite-dimensional
(as the speaker at a recent talk observed, "We do not need to go to
infinity, which is good, because that is too big"). One way to do
this is by difference approximations: instead of computing smooth
functions, we compute functions at a (large) number of discrete
points; and when we need a derivative (tangent at a point), we replace
it with a divided difference (a secant between successive points).
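The tangent-to-secant replacement is easy to see in a few lines of code. Here is a minimal sketch (the test function and step sizes are my own illustrative choices, not from the text):

```python
import numpy as np

# A minimal sketch: replace the derivative of f at x -- the slope of the
# tangent -- with a divided difference, the slope of a secant through
# neighboring mesh points.

def forward_difference(f, x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
exact = np.cos(x)                      # f = sin, so f'(x) = cos(x)
for h in [0.1, 0.01, 0.001]:
    approx = forward_difference(np.sin, x, h)
    print(f"h = {h:>5}: error = {abs(approx - exact):.2e}")
```

Shrinking h shrinks the error roughly in proportion: the one-sided secant is a first-order approximation to the tangent, and better difference formulas improve on it, but this is the basic move.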
This approximate system generally does not have all the same
symmetries as the original system. For example, if the original
problem remains the same if we move the coordinate system around, the
best the discrete system can do is remain the same if we move the
coordinate system around in a way that maps mesh points to other mesh
points. Or suppose the differential equation preserves some invariant
relationship involving a derivative; if we want a similar conservation
law to hold for the difference equation, we have to ask: which
difference? For a nice function of a single variable, there is
only one derivative at a point x; but there are two natural
differences, one involving the point to the left of x, and
one involving the point to the right of x. (There is a class
of integrators for Hamiltonian systems which approximately conserve a
differential relationship called a "symplectic form"; the
analysis of these methods is complicated by precisely the issue
indicated above, since the discrete system has two natural
analogues of the symplectic form for the continuous system).
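The payoff of choosing the right discrete analogue shows up even in a toy Hamiltonian problem. A sketch (my own example, not tied to any particular method from the text): explicit Euler advances q with the old p, symplectic Euler with the updated one; the schemes look almost identical on paper, but only the second nearly conserves the energy:

```python
# Two Euler variants for the harmonic oscillator H(q, p) = (p^2 + q^2)/2.
# Explicit Euler uses the old p to advance q; symplectic Euler uses the
# updated p -- one of the two natural discrete choices -- and in exchange
# it (nearly) conserves a discrete analogue of the energy.

def explicit_euler(q, p, h, steps):
    for _ in range(steps):
        q, p = q + h * p, p - h * q    # both updates use the old values
    return q, p

def symplectic_euler(q, p, h, steps):
    for _ in range(steps):
        p = p - h * q                  # update p first...
        q = q + h * p                  # ...then advance q with the new p
    return q, p

def energy(q, p):
    return 0.5 * (p * p + q * q)

h, steps = 0.01, 10000                 # integrate to time t = 100
for name, method in [("explicit", explicit_euler),
                     ("symplectic", symplectic_euler)]:
    q, p = method(1.0, 0.0, h, steps)
    print(f"{name:>10}: energy drift = {abs(energy(q, p) - 0.5):.4f}")
```

Explicit Euler multiplies the energy by exactly (1 + h^2) per step, so it drifts steadily; symplectic Euler exactly conserves the modified quantity p^2 + q^2 - hpq, so the true energy only oscillates within O(h) of its starting value.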
As another example, consider what happens if you want to know the few
lowest resonant frequencies of a gong. A gong is highly symmetric:
you can rotate it, flip it over, or reflect it across various planes,
and after you've finished your mutilation, it will still look the same
as when you started. Having great faith in the power of your
computer, and being unwilling to go through the pain of hand analysis,
you feed the problem to some standard finite element code, which is
built to solve exactly such equations. The program runs a standard
algorithm, and returns its estimate of the lowest few frequencies, and
you discover to your dismay that the computation takes a lot longer
than you thought, and misses some eigenvalues, too. Why? Because of
the symmetries in the original problem (because O(2) is
non-Abelian, if you like), many of the resonant frequencies of the
gong correspond to multiple eigenvalues -- which is a very rare case
for problems which lack such symmetries. The presence of these
multiple eigenvalues (called a "degeneracy") carries through
exactly if the system is discretized carefully so that the discrete
system has a symmetry that mimics the symmetry of the continuous
system; if the discrete system does not have such a symmetry, the
eigensolver might have less trouble, but you'll probably have to work
harder (use more mesh points) in order to get a decent answer.
There's no free lunch (or "tanstaafl," if you read too much
Heinlein in a misspent -- or maybe well-spent -- youth). Whether
it's exactly preserved or only approximated, the presence of a
it's exactly preserved or only approximated, the presence of a
degeneracy causes confusion for standard eigensolver algorithms.
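The mechanism is easy to reproduce in a setting smaller than a gong. A sketch (a ring of mesh points stands in for the gong; the sizes and the perturbation are my own choices): the discrete Laplacian on a ring inherits the ring's rotational symmetry, so most of its eigenvalues come in degenerate pairs, and a perturbation that singles out one point splits them:

```python
import numpy as np

# Discrete Laplacian on a ring of n points: second differences with
# wrap-around coupling, so the matrix has the ring's rotational symmetry.
n = 12
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, -1] = A[-1, 0] = -1.0             # wrap around: a ring, not a chain

# Eigenvalues are 2 - 2*cos(2*pi*k/n); modes k and n-k share a value, so
# all but the k = 0 and k = n/2 eigenvalues are doubly degenerate.
evals = np.sort(np.linalg.eigvalsh(A))
pairs = sum(1 for i in range(n - 1) if evals[i + 1] - evals[i] < 1e-10)
print(f"degenerate pairs before perturbation: {pairs}")   # 5 for n = 12

# Break the symmetry: weight one point a little differently.
B = A.copy()
B[0, 0] += 0.1
evalsB = np.sort(np.linalg.eigvalsh(B))
pairsB = sum(1 for i in range(n - 1) if evalsB[i + 1] - evalsB[i] < 1e-10)
print(f"degenerate pairs after perturbation:  {pairsB}")  # 0
```

Because the symmetry group of the ring is non-Abelian (rotations and reflections don't commute), the pairing is forced, not accidental; the small diagonal bump plays the role of a slightly asymmetric mesh.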
Now, suppose you're a very clever blobby, and have figured out that to find the resonant frequencies of a gong, you can just restrict your attention to specific types of motions. If you start the standard algorithm (shift-invert Lanczos iteration with partial reorthogonalization) at a special starting vector which obeys a specific symmetry -- reflection about some symmetry plane, say -- then all the subsequent iterates it looks at will also have the same symmetry. You've just managed to perform a symmetry reduction on your problem without changing the model at all! Of course, if you've had a course in numerical linear algebra which was sufficiently competently executed that you learned about Lanczos's iteration, you probably know what will go wrong. Slight differences in rounding errors act as a perturbation, causing the iterates to drift a little, so that your iteration no longer stays strictly symmetric -- and suddenly you're faced with symmetry-breaking behavior again, and it will make your life... interesting.
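The reduction being performed implicitly by that starting vector can be made explicit. A sketch (my own small construction, not a finite element model): if A commutes with a reflection P, the symmetric and antisymmetric subspaces are each invariant under A, and the big eigenproblem splits cleanly into two half-size ones:

```python
import numpy as np

# Explicit symmetry reduction: A commutes with a reflection P, so vectors
# split into symmetric and antisymmetric parts, each invariant under A.

rng = np.random.default_rng(0)
n = 8
P = np.fliplr(np.eye(n))               # reflection: reverses the entries

M = rng.standard_normal((n, n))
M = M + M.T                            # a symmetric matrix...
A = M + P @ M @ P                      # ...averaged with its reflection,
assert np.allclose(A @ P, P @ A)       # so A commutes with P

# Orthonormal bases for the two invariant subspaces, from P's eigenvectors.
p_vals, p_vecs = np.linalg.eigh(P)
Q_anti = p_vecs[:, p_vals < 0]         # eigenvalue -1: antisymmetric
Q_sym = p_vecs[:, p_vals > 0]          # eigenvalue +1: symmetric

# Two half-size eigenproblems whose spectra together give all of A's.
lam_sym = np.linalg.eigvalsh(Q_sym.T @ A @ Q_sym)
lam_anti = np.linalg.eigvalsh(Q_anti.T @ A @ Q_anti)
lam_full = np.linalg.eigvalsh(A)
print(np.sort(np.concatenate([lam_sym, lam_anti])))
print(lam_full)
```

Done as a pre-processing step like this, the reduction cannot be undone by rounding: the two half-size problems simply never see the other subspace.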
Now, if you're a reasonably clever blobby who has spent too much time thinking in detail about the behavior of floating point arithmetic, you might realize that there are situations in which the symmetry indicated will be preserved exactly, even in floating point. But it will only work that way in some situations; and though the situations aren't that hard to figure out, most people have fuzzy mental models of what actually goes on inside of the floating point unit on their machine, and will either be completely oblivious to the behavior described above, or will be immensely spooked by the fact that a program can be broken by changing the parentheses so that one expression is computed rather than an algebraically equivalent alternative. Of course, if you had enough knowledge of all the different pieces to realize such a subtle way to factor out a symmetry, you probably know enough to understand how to factor out the symmetry explicitly in a pre-processing step, and get rid of all the subtleties and potential sources of instability.
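The parenthesization effect itself takes only two lines to reproduce (a standard floating point illustration, not specific to the eigenvalue story above):

```python
# Two algebraically equal expressions; the intermediate sums round
# differently, so the computed results differ in the last bit.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)   # False: 0.6000000000000001 vs 0.6
```

Each individual addition is correctly rounded; it's the order of the roundings that differs, which is exactly the kind of tiny, structured perturbation that can nudge an iteration off a symmetric subspace.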
And this is the real art in numerical analysis: recognizing what symmetries and problem structures can be reasonably conserved under discretization or under the action of transformations used in a numerical method, and what symmetries can just be approximated; and then parlaying that knowledge into algorithms which are simultaneously fast and accurate.