Thursday, February 24, 2005

Library Loan

I ran into a colleague on my way back from the engineering library this afternoon, and we had a conversation that went something like this:
It's strange to see you carrying that book. I'd forgotten that I'd returned it to the library, and I was used to seeing it on my shelf.
I figured that was where it was; I've been checking on and off for a while.
It has been on my shelf for three or four years. You should have told me!
Eh, wasn't that important. I have some books like that, too.


Sometimes paper works better. Courtesy LifeHacker.

Wednesday, February 23, 2005


This is the third attempt to draft this entry. The first two times, I was cut short when my computer locked up; it does this from time to time -- due, I think, to a known bug in the graphics hardware -- but not so often that I'm willing to get rid of it and get a new computer. There are days, though, when I'm tempted to do just that. This laptop is sturdy, and has done well for the past 4+ years; but it's also getting long in the tooth in computer terms, and it would be nice to have something that didn't run out of juice in 10 minutes on battery power.

To begin again in the way that I began my first two attempts: we shipped a paper today. If you want to read about Elastic PMLs for Resonator Anchor Loss Simulation, you now know where to read. This is just a tech report so far, of course, and we still have to go through the review process; but it still feels nice to have that done.

On other notes, I spent an entertaining half hour reading Foolproof: A Sampling of Mathematical Folk Humor from the most recent Notices of the AMS. If you like math jokes, this goes from Abelian grapes to Zorn's lemons; if you think math jokes are incomprehensible, you still might be entertained by the analysis of mathematical folk culture (the paper was co-authored by a mathematician/physicist and a folklorist). I found this article by chasing links from a post on Mark Zimmermann's page; another such post led me to another interesting article: If a tree doesn't fall on the Internet, does it really exist? While I can't speak for journalism, I find myself in sympathy with the author: computer scientists are also not good at extending their searches to the libraries. I have friends in graphics with whom I've taken courses in computational mechanics, and they've observed that a lot of the physically-based animation research of today seems to be a rehash of technology discovered by mechanical engineers decades ago. Of course, an engineer has a different goal than a CG researcher: one wants simulations which reflect reality, though they may be slow; the other wants simulations which only reflect reality to the extent necessary to look plausible, and is otherwise happy trading accuracy for speed. Still, many of the techniques are the same, and we'd all be better off with research done by people well-versed in the relevant prior art -- including the stuff that's not online.

I think I'd planned to write something else. I'm sure there was something else in the earlier instances of this post. But now I've grown absent-minded, and should probably stop typing, pack my bag, walk home, and eat dinner.

Sunday, February 20, 2005


Elderflower and Spinach

It rained yesterday. Our original plan, to walk around Chinatown and watch New Year festivities, washed away with the rain. Instead, we went to have lunch at Ikea.

I've never visited an Ikea before, though I hear about it often. I shared a plate of Swedish meatballs, which were good, and had a croissant sandwich, which was stale. The restaurant was having a Lunar New Year special, but neither of us felt like trying it. We sat near the window and looked out at the rain and at the roof of the building across the street. The roof curled at the corners, and there were banners hanging from the windows with Chinese characters written on them. Fusion, said Winnie. I thought it looked Japanese, and said so; That's because they copied the style from us. It's like Macs and PCs, she replied.

I finished my coffee, and we explored the store floor. I'm a poor shopper, and was dazed by the weekend crowds; but soon enough we wandered to the food section, which was uncrowded and interesting. Signs hanging on the walls described the foods: lingonberries and elderberries, coffee and crispbreads. Lingonberries are like delicate cranberries, according to one sign; does that mean you have to add lots of sugar before they taste good? asked Winnie. I wasn't sure. As we left, we bought a carton of a drink made from elderflowers. It tasted a little like the cold chrysanthemum drink sold at some milk tea places.

For dinner, I tossed spinach with garlic, olive oil, salt, and a little vinegar. I thought it was great, but perhaps I should have used less salt. We brewed a pot of masala chai, which we sipped with some cookies. Tea, cookies, and public radio are wonderful accompaniments to a rainy evening, particularly when they're shared.

On the ride home, I read Newton's Cannon (Gregory Keyes). I finished it this morning. I read The Briar King last year or the year before; I wasn't blown away, but I enjoyed it enough to keep an eye out for his other work. Besides, I didn't want a great book to chew on, not right then. I wanted something diverting and easy to read, and I had what I wanted.

This afternoon, I have some quiet time to work, and that's a fine thing, too.

  • Currently drinking: Russian caravan blend

Thursday, February 17, 2005


The past month or so has been busy. Tomorrow afternoon, I'll be giving a tutorial on my resonator simulation code, and -- if the kami that govern my interactions with Office feel benevolent -- I will quickly finish the last bits of the slide, poster, and report that are due at the BSAC office for the upcoming industrial advisory board meeting. Then I'll have some breathing room for a while, which I plan to use to move two in-progress journal papers from mostly finished to submitted. That will be a relief to me, and I'm sure my collaborators will be pleased, too.

As I sat at the computer this evening, absent-mindedly humming along with the radio while I set up scripts to time two alternate methods for computing thermoelastic damping (conclusion: my method wins), I thought about how cheerful I am now. This is fun! I'm in a position where my job is to think about the things I've loved to think about since I was a kid, with the added bonus that the results of those thoughts seem relevant to people who are building interesting things. I have been playing with computer programs late into the nights for about two decades, now, and I still find it entertaining.

When I feel that I need a break, I clean around the house, or I read from interesting technical books, or I walk to the grocery store -- and those chores are usually fun, too.

I'm lucky to be able to find such joy in work and chores, but I've been this way for a long time, and so I usually don't think much about it. But preparing for the tutorial tomorrow reminded me of a time about three years ago when I had a similar busy stretch, also ending in a tutorial. I don't look fondly on that time; there were many reasons, but one major reason was that I disliked large parts of my research work. Most of the feedback I received was negative, a hazard common to anyone who writes software, since few users write an e-mail until they run into a bug -- and I was still figuring out how to shrug off the more rude (and usually more clueless) of those complaints. I was in the middle of a major revision to SUGAR (another simulator I worked on for a long time), and at any given time it seemed like more was broken than was working. I felt nigh to murderous toward some of my colleagues, and my feelings toward some of my friends were not markedly kinder. It was the second time as a graduate student that I seriously considered walking away.

There has not been a third time.

I would not wish to go back to being a young child -- though that sounds better than going back to middle or high school! -- but there is something to the idea that we should be as little children. I was a curious kid, and I loved to play with things that interested me. I'm taller now than I was, then, and I've changed in other ways as well. But I'm still curious, and I still love to play with the things that interest me. I've moved from pegboards to PDEs, and it has been a long time since I squeezed the color from the fallen petals of a crabapple tree; but I'm still able to spend my time at play. And when I can shrug off the irritants and unkindnesses and stupid mishaps that litter my days -- and those of everyone else -- the world still seems to be a fun and intriguing place, just as it did then.

However fun the world may be, though, I should go eat something and devote some time to contemplating the insides of my eyelids.

Wednesday, February 16, 2005

Numerical Computation and Kindergarten

Everything I Needed to Know, I Learned While Attempting Numerical Computations

  1. Some problems have many solutions and some have no solutions; a few have just one solution.
  2. Sometimes you get lucky. Often you don't. Unless you know what to expect ahead of time, it's hard to tell the difference.
  3. If you take the wrong approach, getting lucky means failing early.
  4. The difference between good enough and good may be the difference between easy and impossible.
  5. The hard part is not finding good solutions; it's knowing when you've found them.
  6. A good solution passed through the hands of a few careless people (or sensitive functions) can turn into a disaster.
  7. If an obvious bad solution is nearly the same as a good solution, it will be hard to avoid getting distracted by the bad one.
  8. Throwing more people (or processors) at a problem only slows things down if you don't know how to coordinate them effectively.
  9. Sometimes the best way to deal with a problem is to sleep on it.
  10. Religious zealots will propose the same approach to every problem, even if the approach is totally inappropriate.
  11. Keeping up with rapid changes is hard; recognizing slow changes is also hard. Interesting problems change rapidly in some ways and slowly in others.
  12. Finding a description that captures the important details and neglects the unimportant ones is an art; it's also the first step to finding a solution.
  13. Usually, someone else has solved the same problems you have. Always, someone else thinks they have the solution to your problems.
  • Currently drinking: Golden Monkey Tea

Monday, February 14, 2005

Day of Memory

I just remembered that this is an important day.

On Feb 14, 1943 -- 62 years ago today -- one of the great figures in modern mathematics passed away. Though he made many interesting contributions to geometry and analysis, David Hilbert is probably best known now for the 23 (then unsolved) problems he presented at the Second International Congress of Mathematicians in Paris. His biography can be found here.

Hilbert has the peculiar honor of sharing his death day with a physician and early Catholic saint who was arrested, condemned to death for his faith, beaten with clubs, and finally beheaded on Feb. 14, AD 270.

Turing Trains

I spent several hours at home this morning, making algebra mistakes and listening to NPR. Among other things, I heard a KQED Forum broadcast on Bush's plan to eliminate Amtrak subsidies. Then I came to work and saw this plan to build a Turing machine out of trains, courtesy Pete.

C'mon! Even if national pride doesn't spur us on -- our rail system hardly meets the standards of any other first-world country -- the ability to use giant locomotives as computers must be taken seriously.

Graphical Style

I bought a new copy of The Elements of Style on Friday. My old copy disappeared last year. I probably loaned it and forgot to make a note, but I hold no grudge against whoever has it now. I can afford a new copy, and perhaps the old one is doing some good where it is. Still, I am glad to have a new copy for my shelf.

After signing my name in the cover of my purchase, I read the text again. As I read, I was reminded of an article on style in technical illustration that I'd read on Friday: Pictures and Proofs by Bill Casselman [1]. Casselman's article on illustration and Strunk and White's book on writing offer similar advice: reduce clutter, think about how the material would be presented in spoken discourse, and ask whether the presentation really conveys the point. The analogy between writing and illustration is imperfect, though, and an ear for language and an eye for design are rarely given to people in equal measure -- as I know too well.

How I wish I had the equivalent of The Elements of Style for technical graphics! With the aid of the computer, I manage to illustrate my papers and talks well enough; but I cannot call my illustrations any more than competent. I never learned to draw as neatly as I write, though I'm nowhere near the phenomenal level of incompetence Poincaré is said to have achieved. My color vision is weak, and I have trouble choosing colors for my own graphics, just as I have trouble understanding the use of colors in other people's presentations. But I know no source of condensed advice on how I can improve my graphics. I should invest in Edward Tufte's books, but they are tomes; I would not expect to read them in their entirety in an hour and a half, or even in a day. Even if I were to encroach further on my office mate's shelf space to accommodate Tufte's books, I would like something thin and informative to use every day.

Any suggestions?

  1. Bill Casselman, Pictures and Proofs, Notices of the American Mathematical Society, vol 47, no 10, pp 1257-1266, Nov 2004.

Sunday, February 13, 2005

In the Details, Part V

While eating breakfast this morning, I watched Jim flipping through an outdoors magazine. Then something caught my eye. Wait, go back, I said. He turned the page back to an advertisement for Viagra. The word Viagra was in blue print on the background, and a middle-aged gentleman was standing in front of the V so that only the top corners of the letter showed. They looked like horns behind his head.

Were the advertisers really trying to show the man as a cuckold? Or has that symbol fallen so out of use that nobody recognizes it any more?

Friday, February 11, 2005

Lawgivers T. Eyelids

The title Good morning is innocuous enough, but I don't think I know anybody named Lawgivers T. Eyelids.

At least the spam that makes it past my filters sometimes has entertainment value.

Lamy Pen

For my birthday on Tuesday, Winnie got me a Lamy fountain pen. I'd call it a replacement for the Parker Vector pen that shattered in my pocket during a particularly unpleasant day three weeks ago, except that it's too nice for replacement to be at all appropriate. I told Mom about this on the phone this morning, and Mom said Your grandma would have approved; she loved good writing instruments, and hated poorly made pens. 'Those balky old things,' she called them. Did she shake her head and scowl a little when she said that? I asked. Yes, said Mom, just like I do. And my aunts -- and me, for that matter, I said. I guess some things remain constant through generations.

Wednesday, February 09, 2005

About Time

I took my qualifying exams today, about three years later than the usual recommended time. I've been rather embarrassed about the matter for a while, actually; usually the quals are done at the start of one's research program, not after (as one of my committee members has said) you have the material for four to six theses; certainly not after one has already begun informally interviewing. Nevertheless, untimely as it may have been, it's done and it went fine. I actually enjoyed myself enormously, as I had the chance to give mini-lectures on a variety of my favorite topics.

I will now be returning to my regularly scheduled pondering.

Gung Hay Fat Choy!

Goodbye, Year of the Monkey; hello, Year of the Rooster!

According to Winnie, the Chinese word for rooster is gai, the same as the word for chicken. This makes me wonder: would it be translated to Spanish as Año del Gallo or Año de la Gallina or Año del Pollo?

Tuesday, February 08, 2005

PS Handout

I just spent an hour or so putting together PS Handout, a script to turn PostScript slides (as produced by Prosper and other LaTeX-based slide packages) into PowerPoint-style handouts with slides at the top and lines for notes at the bottom.

You'd think someone else would have done this already, but darned if I could find it. Sometimes it really is quicker to do it yourself.

Page's Worth

I have four books on my out-from-the-library stack right now. The top two are almost identical in size; they are, respectively, Model Order Reduction Techniques with Applications in Finite Element Analysis (Qu) and Group Supermatrices in Finite Element Analysis (Zlokovic). As far as I can tell, the former book has at least twice the density of information per page -- not in a strict information-theoretic sense, of course -- that the latter has. This is mostly an illustration of the importance of compact notation: for about half the book on symmetry groups in finite elements, the author just shows page after page of large block matrices, most of which could be written in perhaps a tenth of the space by a wiser choice of notation. A picture is worth a thousand words? Bah! Only when the picture is chosen wisely.

  • Currently drinking: Coffee

Monday, February 07, 2005

Discrete Lagrangian Mechanics

I'm tired of thinking about one set of technical thoughts, so it's time to take a break and think about a different set of technical thoughts. These are technical thoughts, but since trying to typeset mathematics in HTML is generally about as pleasant as self-performed dental surgery, I will try to keep myself to a minimum of notation. Perhaps I will also achieve a minimum level of incomprehensibility in the process?

There are three main frameworks for writing the equations of classical mechanics. Students who passed high school physics are familiar with Newton's formulation, the oldest of the three. In Newton's equations of motion for a particle, the central equation is

F = ma,
where the force law F may depend on the particle position but, classically, not on the velocity (F becomes a function of the velocity v as well if the particle experiences friction).

Less familiar are the Lagrangian and Hamiltonian formulations. In Lagrange's formulation, also known as the principle of least action, the equations of motion are related to a scalar function called the action. The action S is the integral over [0, t_final] of the Lagrangian function L(q,v), where q is position and v is velocity. The particle follows the path q(t) which minimizes the action [1]. To find such a path, we use the technique taught in elementary calculus courses: differentiate the function with respect to the free variable, then set the derivative to zero. The only complication -- which is not so much of a complication, really -- is that we are differentiating with respect to q(t), a variable which is itself a function. Taking derivatives with respect to functions is the subject of the fancily titled calculus of variations. Anyhow, if we write down the stationarity condition (derivative = 0), we find that the minimizing function q satisfies the Euler-Lagrange equation:

D_1 L(q,v) - d/dt (D_2 L(q,v)) = 0
where D_i indicates partial differentiation in the i-th position.

The Lagrangian is usually written as L(q,v) = T(v) - U(q), where T(v) = (1/2) m v^2 is the kinetic energy and U(q) is the potential energy. Substituting this form into the Euler-Lagrange equation, we have

-dU/dq = ma,
which is precisely Newton's law with F = -dU/dq. That is, the Newtonian formulation and the Lagrangian formulation are equivalent; at issue, then, is which is more convenient for analysis.
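To make the least-action principle concrete, here is a small Python sketch (not from any code mentioned in this post; the harmonic potential U(q) = q^2/2 and unit mass are illustrative assumptions). It discretizes the action for the path q(t) = sin(t), which satisfies the Euler-Lagrange equation, and checks that wiggling the interior of the path while holding the endpoints fixed only increases the action:

```python
import math

def action(q, h):
    """Discretized action sum for L(q, v) = v^2/2 - q^2/2 (unit mass and
    stiffness), using forward differences for v and midpoint values for q."""
    S = 0.0
    for i in range(len(q) - 1):
        v = (q[i + 1] - q[i]) / h
        qm = 0.5 * (q[i] + q[i + 1])
        S += (0.5 * v * v - 0.5 * qm * qm) * h
    return S

# The true path with q(0) = 0, q(pi/2) = 1 is q(t) = sin(t).
n = 400
T = math.pi / 2
h = T / n
t = [i * h for i in range(n + 1)]
q_true = [math.sin(ti) for ti in t]

def perturbed(eps):
    """Same endpoints, wiggled interior: sin(2t) vanishes at t = 0 and pi/2."""
    return [qi + eps * math.sin(2 * ti) for qi, ti in zip(q_true, t)]

print(action(q_true, h) < action(perturbed(0.1), h))    # the true path wins
print(action(q_true, h) < action(perturbed(-0.1), h))
```

The interval [0, pi/2] is short enough that the true path is a genuine minimizer here; over longer intervals the action is only rendered stationary, as the footnote below notes.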

To solve differential equations on a computer, it is necessary to approximate them by finite systems of difference equations. There are different ways to approach this approximation, but one of the most intuitive is simply to take the original equations of motions and replace derivatives by differences. For example, we might follow Euler and write

q'(t) = (q(t+h)-q(t))/h + O(h).
As h becomes small, the divided difference becomes a good approximation to the derivative. This scheme will give good results if we sample t at sufficiently closely spaced points (i.e. if h is small), and if the resulting recurrences satisfy certain algebraic properties (the recurrences need to be stable). Different approximations to q'(t) lead to different numerical integration schemes, each of which has its own advantages and disadvantages.
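A minimal Python sketch of the idea (a hypothetical illustration; the oscillator q'' = -q with unit parameters is an assumed example): replacing derivatives with forward differences gives Euler's method, and shrinking h by a factor of ten shrinks the error by roughly the same factor, as expected for a first-order scheme.

```python
import math

def euler_oscillator(h, t_final=1.0):
    """Integrate q'' = -q (unit mass and stiffness) by replacing both
    derivatives q' and v' with forward differences, following Euler."""
    q, v = 1.0, 0.0                      # exact solution: q(t) = cos(t)
    for _ in range(int(round(t_final / h))):
        q, v = q + h * v, v - h * q      # simultaneous forward-Euler update
    return q

exact = math.cos(1.0)
err_coarse = abs(euler_oscillator(0.01) - exact)
err_fine = abs(euler_oscillator(0.001) - exact)
# First-order accuracy: the second error is roughly a tenth of the first.
print(err_coarse, err_fine)
```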

An alternative approach to discretizing the equations of motion is to use a discrete Lagrangian formulation. Instead of having a Lagrangian function that depends on q and v, we now have a function that depends on the values at consecutive time steps: L(q_k, q_{k+1}). And rather than having an action integral, we have an action sum. The minimization procedure is the same, however, and yields the discrete Euler-Lagrange equation:

D_1 L(q_k, q_{k+1}) + D_2 L(q_{k-1}, q_k) = 0
Unsurprisingly, if we choose the discrete Lagrangian to approximate the continuous Lagrangian, the numerical methods that we obtain from solving the discrete Euler-Lagrange equations are often the same as the methods we obtain from directly approximating Newton's equations of motion.
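This equivalence can be checked numerically. The Python sketch below (illustrative only; the particular discrete Lagrangian, the harmonic potential, and the secant-method solve are all choices made for the example) steps the discrete Euler-Lagrange equation directly, treating each step as a root-finding problem, and compares the result against a central-difference discretization of Newton's law; the two trajectories agree to within the tolerance of the numerical differentiation.

```python
def U(q):
    return 0.5 * q * q   # harmonic potential (a stand-in choice)

def Ld(a, b, h):
    """Discrete Lagrangian L(q_k, q_{k+1}) ~ h [T - U], unit mass, with
    velocity (b - a)/h and the potential evaluated at the left endpoint."""
    v = (b - a) / h
    return h * (0.5 * v * v - U(a))

def D1(a, b, h, d=1e-6):
    return (Ld(a + d, b, h) - Ld(a - d, b, h)) / (2 * d)

def D2(a, b, h, d=1e-6):
    return (Ld(a, b + d, h) - Ld(a, b - d, h)) / (2 * d)

def del_step(q_prev, q_cur, h):
    """Solve D_1 L(q_k, x) + D_2 L(q_{k-1}, q_k) = 0 for x = q_{k+1}
    by the secant method (for this L the equation is linear in x)."""
    def g(x):
        return D1(q_cur, x, h) + D2(q_prev, q_cur, h)
    x0, x1 = q_cur, q_cur + h
    for _ in range(50):
        g0, g1 = g(x0), g(x1)
        if abs(g1 - g0) < 1e-14:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
    return x1

h, n = 0.1, 50
q_del, q_newton = [1.0, 1.0], [1.0, 1.0]
for k in range(1, n):
    # Discrete Euler-Lagrange step:
    q_del.append(del_step(q_del[k - 1], q_del[k], h))
    # Central-difference discretization of Newton's law, U'(q) = q:
    #   (q_{k+1} - 2 q_k + q_{k-1}) / h^2 = -U'(q_k)
    q_newton.append(2 * q_newton[k] - q_newton[k - 1] - h * h * q_newton[k])

gap = max(abs(a - b) for a, b in zip(q_del, q_newton))
print(gap)   # tiny: the two discretizations coincide
```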

So why would we choose to use a discrete Lagrangian formulation rather than discretizing Newton's law? It is because the discrete Lagrangian formulation can guide our choice of numerical methods. We define the momentum of the continuous system to be

p = D_2 L(q,v).
Those who have been in a course on classical mechanics may vaguely remember that this definition is the first step in setting up a Legendre transformation, by which we can change from a Lagrangian formulation to a Hamiltonian formulation. For the discrete Lagrangian formulation, though, there are two natural definitions for the momentum:
p^+_k = D_2 L(q_{k-1}, q_k)
p^-_k = -D_1 L(q_k, q_{k+1}).
(Requiring that these two momenta match, p^+_k = p^-_k, at every step is exactly the discrete Euler-Lagrange equation.)
This is one of the things that makes discrete analogues to continuous problems tricky: in general, well-defined classical concepts like velocity and momentum have multiple reasonable analogues in the discrete world.

So we have two reasonable definitions for momentum in the discrete world, and similarly we end up with two copies of other related objects (e.g. there are two discrete canonical one-forms). But, by an almost magical stroke, there is only one canonical two-form associated with the discrete equations. Thus, the discrete Euler-Lagrange equations for a conservative system mimic the continuous Euler-Lagrange equations in that they conserve symplectic structure (i.e. they conserve the canonical two-form). To this day, I have no good intuitive explanation of what symplectic forms are and why they're important; nevertheless, they are important, because with the exact conservation of symplectic structure under discretization comes better approximate conservation of better-known quantities like energy and overall momentum. This behavior markedly differs from the behavior of many direct discretizations of Newton's law in which the energy of the discrete system tends to either grow or decay over time.
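The difference is easy to see in a toy experiment. In the Python sketch below (an illustration with assumed unit parameters, not a reproduction of any code mentioned here), forward Euler steadily pumps energy into a harmonic oscillator, while the symplectic Störmer-Verlet scheme keeps the energy hovering near its initial value even after many steps:

```python
def energy(q, v):
    """Total energy H = v^2/2 + q^2/2 for a unit harmonic oscillator."""
    return 0.5 * v * v + 0.5 * q * q

def forward_euler(h, steps):
    q, v = 1.0, 0.0
    for _ in range(steps):
        q, v = q + h * v, v - h * q      # not symplectic: energy drifts up
    return energy(q, v)

def stormer_verlet(h, steps):
    q, v = 1.0, 0.0
    for _ in range(steps):
        v -= 0.5 * h * q                 # half kick
        q += h * v                       # drift
        v -= 0.5 * h * q                 # half kick
    return energy(q, v)

E0 = 0.5                                 # initial energy
print(forward_euler(0.01, 100000))       # grows far above 0.5
print(stormer_verlet(0.01, 100000))      # stays near 0.5
```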

So why am I thinking of this now? Some time ago, I was challenged by one of my professors, W. Kahan, to show that a numerical scheme for integrating a particular type of differential equation -- a matrix Riccati equation -- would maintain a symmetry property present in the original differential equation. He had in mind something enormously complicated involving the group-theoretic structure of the problem, which he had once proved but never trusted, and which was lost in the morass of papers in his office. I just attacked the problem with algebra guided by intuition, and to our mutual surprise, found a proof. My proof is short (one page), and relies on a set of coordinate changes to go from unknowns which are morally symmetric -- symmetric but for the error imposed by discretization -- to unknowns which are symmetric even in the discrete equations. A few months later (or earlier?), I gave a brief summary of discrete Lagrangian mechanics in a computational mechanics seminar that I was attending. While rummaging through old files today, I found both my notes on discrete Lagrangian mechanics and my proof of the symmetry in Kahan's integrator. It seems to me that there is a philosophical connection between Kahan's integrator and the integrators derived from the discrete Euler-Lagrange equation: in both cases, the discretization is set up in such a way that some interesting properties can't help but be conserved. I wish I'd thought a little longer and made that connection more precise; alas, that thinking will have to wait for another day.

1. Technically, the action need not be minimized, but need only be rendered stationary. This distinction is important in the study of dynamics of constrained systems, where the action is typically not minimized.

Sunday, February 06, 2005

Rudin and Dunham

I've mentioned two short mathematical/historical books that recently joined my bookshelf: Euler: The Master of Us All by William Dunham and The Way I Remember It by Walter Rudin. I recommend them both, with the following cautionary comments about the intended audiences.

Dunham's book begins with a brief biography of Euler, but most of the book is dedicated to an exposition of Euler's life's work and characteristic style through his propositions and the proofs that he presented for them. The historical context -- as well as the mathematical context -- is an integral part of the text, and is not separated from the mathematical development. The book is self-contained, and almost all of it should be accessible to one who has taken elementary single-variable calculus and remembers a reasonable fraction of it. It is a good book to have if you enjoy playing with mathematical problems that do not involve too much intricate machinery.

In contrast, a large fraction -- more than half -- of Rudin's book is dedicated to a non-mathematical autobiography. This part is worthwhile on its own; all that is needed for enjoyment is an appreciation of good writing and a modicum of interest in history or biography. The latter chapters of the book are devoted to Rudin's mathematical work, and while they are written for mathematicians, and not just for analysts, I can't imagine reading them without the background of an introductory graduate real analysis sequence. For those interested in the biography but not the mathematics, I recommend skimming through the latter part of the book for Rudin's descriptions of interactions with his peers -- but skip the mathematical content unless you're either brave or have a math-phobic foe who you wish to terrify. For those who do have a graduate analysis course in their background, I recommend both halves of the book; it contains no proofs, but only theorem statements and high-level descriptions of the landscape, and so can be read faster than Rudin's monographs and texts.

  • Currently drinking: Coffee

Saturday, February 05, 2005

Strategy and Tactics

I'm writing the first draft of my thesis introduction and -- like anyone else -- I'm utterly frustrated by the task. Much of the middle of the thesis is written or partly written, but this is the first time I've seriously attempted to write an introduction. At least, this is the first time I've tried to write an introduction to my research as a coherent whole; I've certainly written introductions to pieces of my research. Whatever the case, I'm suffering writer's block, and this seems like a very good time to spend a few minutes on a tangent.

From a high level, I'm frustrated because I'm having a hard time explaining something which fundamentally isn't complicated. Parlett's book The Symmetric Eigenvalue Problem begins with the words

Vibrations are everywhere ... and so, too, are the eigenvalues (or frequencies) associated with them.

And so it is. There are typically two steps to finding the frequencies and mode shapes -- the eigenvalues and eigenvectors -- for a vibrating mechanical object. First, the continuous motion has to be approximated by something finite; and second, the frequencies of the finite problem must be computed numerically. My research concentration for the past couple of years has mostly dealt with different types of eigenvalue problems, infinite or finite, that have interesting structures. If you pay attention to that structure, you can find resonant frequencies more quickly and accurately than you could if you proceeded blindly; if you ignore too much problem structure, you'll often end up with no information at all! I often apply my methods to problems in microsystem (MEMS) engineering, and the special mathematical structures correspond to special physical structures in the devices; but the ideas apply much more broadly. As a computer scientist, I spend a lot of time not just in developing the mathematical theory needed to solve structured eigenvalue problems, but also in writing codes that implement the ideas. My thesis research, then, comes in three parts: there are the engineering problems that drive my development; there are the software packages I write to solve those problems; and there is the fundamental mathematics and physics framework that underpins both the software development and the interpretation of computed results.
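Those two steps fit in a few lines of Python. The sketch below (a toy model, unrelated to the resonator codes described in this post) approximates the lowest vibration frequency of a unit string: -u'' = λu on (0,1) with fixed ends is discretized by second differences, and the smallest eigenvalue, found by inverse power iteration with a tridiagonal solve, lands near the exact value π²:

```python
import math

def solve_tridiag(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system with subdiagonal a,
    diagonal b, superdiagonal c, and right-hand side d."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def smallest_eig(n, iters=100):
    """Smallest eigenvalue of the second-difference approximation to
    -u'' on (0,1) with u(0) = u(1) = 0, by inverse power iteration."""
    h = 1.0 / (n + 1)
    a = [-1.0 / h**2] * n
    b = [2.0 / h**2] * n
    c = [-1.0 / h**2] * n
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = solve_tridiag(a, b, c, x)             # y = A^{-1} x
        norm = math.sqrt(sum(yi * yi for yi in y))
        x = [yi / norm for yi in y]
        Ax = [b[i] * x[i]
              + (a[i] * x[i - 1] if i > 0 else 0.0)
              + (c[i] * x[i + 1] if i < n - 1 else 0.0)
              for i in range(n)]
        lam = sum(xi * axi for xi, axi in zip(x, Ax))  # Rayleigh quotient
    return lam

print(smallest_eig(100), math.pi ** 2)   # both near 9.87
```

Real resonator problems replace the second-difference matrix with finite element matrices and, as discussed below, have structure well beyond symmetric tridiagonal; but the discretize-then-solve pattern is the same.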

That's a reasonable nutshell description. But those sentences could describe a lot of possible theses. As always, the devil is in the details. What do I mean by structure? I mean certain symmetries of the devices, either in the geometry of the domain or in the structure of the governing equations. I mean continuous dependence on parameters, so that a single computation can be interpreted in the context of a family of very similar computations. I mean the existence of different physical effects -- like elasticity and heat flow -- which interact, but have very different time scales. I mean having a nice partial differential equation with not-so-nice boundary conditions (where nice and not-so-nice are obviously stand-ins for more complicated technical conditions). These structures are more subtle than, say, self-adjointness (a very strong sort of symmetry condition which the equations of both classical and quantum mechanics often have). And what do I mean when I say I apply my ideas to MEMS engineering? MEMS is a huge field: I've really been working specifically on studying very high frequency (high MHz-GHz) mechanical resonators of the type that might one day end up in your cell phone -- which already uses a mechanical filter to process signals, by the way. And as far as software goes, I write my own codes for solving PDEs using finite elements and spectral elements. I write my own implementations for my new numerical ideas, and otherwise I try to use other people's codes as much as possible -- which presents software interfacing problems which are of both academic and practical interest.

The trouble I'm having is essentially one of balance. How do I convey both the strategy and the tactics of my work? In the introduction, the strategy is key -- but the interplay between strategy and tactics is, in my not-so-humble opinion, what makes or breaks any sort of numerical computation. One of my favorite elementary numerics books is Numerical Methods That (Usually) Work by Forman Acton. The book is bound in red cloth, and the title is printed on the front in a metallic shade -- except the word Usually, which is indented into the cover but not otherwise filled in, making it impossible to see unless you look very closely. Chapter 7 of Acton's book bears the title Strategy versus Tactics, and it begins with the following words:

When we first approach a numerical problem we must take care lest we become immersed too soon in the detail without having thought through clearly the general strategy of our attack. There is, of course, the alternative danger that we will concern ourselves exclusively with the grand design while ignoring inconvenient details that turn out to be decisive -- thereby wrecking our plans with a thoroughness that borders on the catastrophic. The second danger, however, seems less severe than the first, against which we must accordingly warn more loudly.

Exactly! Without really understanding the engineering application, I would probably end up solving the wrong problem (even if it was the problem that was originally posed to me). I could easily pitch this as a thesis about computer-aided design of MEMS; I could pitch it as a computational mechanics thesis; I could pitch it as a thesis about numerical linear algebra; or I could pitch it as a thesis about numerical software engineering. None of these global views matches how I think about these problems, though. The different results are independently interesting, but I know people who are more competent than I am in each of those areas taken separately. The thing that excites me about this work -- and about most of the problems I work on, even as side projects -- is that it is interesting across multiple disciplines. Don't get me wrong: I think it's a lot of fun to prove theorems for purely aesthetic purposes, and I've been known to indulge myself in such at times; and while I'm not a strong designer and will never be known for the widgets I've dreamed up, I can appreciate people who do really pure engineering design work. But I dip a little into a lot of things -- that's simply how I make progress on problems.

So now I'm supposed to write an introduction which is comprehensible to a general audience in my field. But which field is it, exactly? Can I assume my readers will know something about MEMS, computational mechanics, PDE theory and functional analysis, linear algebra, numerical analysis, and software engineering? Many people -- maybe most -- who would find this work interesting are perhaps more expert than I in at least one or two of those areas, but not all.

I am, of course, far from alone in facing this difficulty. Besides Acton's book, I brought home with me two other books that I admire for the way they combine mathematically deep ideas, physically-guided intuition, and a generally accessible presentation. Those books are Applied Analysis by Lanczos and Chebyshev and Fourier Spectral Methods by Boyd. I'm still on a reader's honeymoon with that last book, and I don't know if my current admiration will weather the test of time as my admiration for the books of Lanczos and Acton has -- but I suspect it will. Whether or not I continue to refer to Boyd's book regularly, though, I think I will continue to agree with these sentiments from his introduction:

It is not that spectral concepts are difficult, but rather that they link together as the components of an intellectual and computational ecology. Those who come to the course with no previous adventures in numerical analysis will be like urban children abandoned in the wilderness. Such innocents will learn far more than hardened veterans of the arithmurgical wars, but emerge from the forests with a lot more bruises.


I have also tried, not as successfully as I would have wished, to stress the importance of analyzing the physics of the problem before, during, and after computation. This is partly a reflection of my own scientific style: like a sort of mathematical guerrilla, I have ambushed problems with Pade approximants and perturbative derivatives of the KdV equations as well as with Chebyshev polynomials; numerical papers are only half my published articles.

However, there is a deeper reason: the numerical agenda is always set by the physics. The geometry, the boundary layers and fronts, and the symmetries are the topography of the computation. He or she who would scale Mt. Everest is well-advised to scout the passes before beginning the climb.
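Boyd's "intellectual and computational ecology" is easy to glimpse in miniature. Here's a small sketch of my own (not from Boyd's book, and assuming NumPy is available): interpolating Runge's function at Chebyshev points, the textbook case where equispaced interpolation fails badly but Chebyshev points give rapid convergence.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def cheb_interp_error(f, n, ntest=1001):
    """Max error of degree-n interpolation of f at Chebyshev points on [-1, 1]."""
    xk = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev extreme points
    c = cheb.chebfit(xk, f(xk), n)              # interpolant in the Chebyshev basis
    xt = np.linspace(-1.0, 1.0, ntest)
    return float(np.max(np.abs(cheb.chebval(xt, c) - f(xt))))

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)            # Runge's function

for n in (8, 16, 32):
    print(n, cheb_interp_error(runge, n))       # error shrinks geometrically in n
```

The geometric decay of the error is the hallmark of spectral accuracy, and the same Chebyshev points reappear in spectral differentiation and quadrature -- which is part of the linked-together ecology Boyd is describing.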

Perhaps my audience is what Lanczos would call parexic analysts. Parexic analysis, in Lanczos's parlance, is the range of analysis techniques that lie between pure analysis (analysis of infinite processes) and numerical analysis (analysis of numerical algorithms). Lanczos describes his terminology clearly in the introduction to Applied Analysis:

The Greek word parexic (with the roots para = almost, quasi, and ek = out) means nearby. Hence the term parexic analysis can well be adopted to mean that we do not want an exact but only a nearby determination of a certain quantity. We can then speak of parexic methods, parexic expansions, parexic viewpoints, in contradistinction to the corresponding methods, expansions, and viewpoints of pure analysis which aim at arbitrary accuracy with the help of infinite processes.

In my experience, most mathematically-educated computational scientists and engineers appreciate this sort of analysis; the unwary -- who often rush where angels fear to tread -- never seem to realize that the computer cannot be trusted to deal gracefully with infinite sums. I know no effective way to caution those brash souls to prudence before their computations go awry (and often no way of convincing them afterward that there are some computations they can trust).
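The point about infinite sums is easy to demonstrate. Here's a small cautionary example of my own devising (not from Acton or Lanczos): summing the Taylor series for exp(x) naively at x = -20, where intermediate terms of size around 4e7 must cancel down to an answer of size 2e-9, versus summing the all-positive series for exp(20) and taking the reciprocal.

```python
import math

def exp_taylor(x, nterms=120):
    """Sum the Taylor series for exp(x) term by term, with no safeguards."""
    term, total = 1.0, 1.0
    for n in range(1, nterms):
        term *= x / n               # next term: x**n / n!
        total += term
    return total

x = -20.0
naive = exp_taylor(x)               # huge alternating terms cancel catastrophically
safe = 1.0 / exp_taylor(-x)         # all-positive series, then take the reciprocal
exact = math.exp(x)
print(naive, safe, exact)
```

The roundoff in the naive sum is on the order of the largest term times machine epsilon, which here swamps the true answer entirely; mathematically, of course, both computations sum the same convergent series.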

As far as the mathematics goes, it needn't be that frightening. I think most of what I do can readily be absorbed by someone with no strong understanding of functional analysis -- or even of numerical analysis! All that's needed is enough mathematical maturity to follow the high-level pictures. This is another thing I like about Lanczos's writing; he explicitly assumes such an audience. Quoting from the preface to Applied Analysis:

Furthermore, the author has the notion that mathematical formulas have their secret life, behind their Golem-like appearance. To bring out the secret life of mathematical relations by an occasional narrative digression does not appear to him a profanation of the sacred rituals of formal analysis but merely an attempt to a more integrated way of understanding. The reader who has to struggle through a maze of lemmas, corollaries, and theorems, can easily get lost in formalistic details, to the detriment of the essential elements of the results obtained. By keeping his mind on the principal points he gains in depth, though he may lose in details. The loss is not serious, however, since any reader equipped with the elementary tools of algebra and calculus can easily interpolate the missing details. It is a well-known experience that the only truly enjoyable and profitable way of studying mathematics is the method of filling-in details by one's own efforts. This additional work, the author hopes, will stir the reader's imagination and may easily lead to stimulating discussions and further explorations, on both the university and the research levels.

I agree! I agree! So tell me again... how did I plan to write this introduction?

  • Currently drinking: Vanilla-flavored tea

Friday, February 04, 2005


I'm going to see a performance by a company of Japanese drummers this evening. I'm looking forward to it.

Thursday, February 03, 2005


The Way I Remember It by Walter Rudin is a wonderful little book. I read a chunk of it on my way home, after finishing the Mencken collection, and I think I may read some more of it when I go home tonight.

Let me explain first who Rudin is, and why I would want to pick up his memoir. Rudin is an accomplished and fairly famous analyst, and is the author of a few classic texts in the field. The best known of these is probably Principles of Mathematical Analysis, a book which covers material that might be taught in a typical upper-division undergraduate math course with the title Advanced Calculus or Analysis: the rigorous development of the basic concepts of the continuum, and of differential and integral calculus. Rudin's mathematical writing style is distinctive: it is clear, but it is terse. The terseness, together with the high quality, is probably what most people remember about the book; whether they remember it with fondness or with fear, people who have read the book do remember Rudin's brevity. I certainly remember it, though (perhaps surprisingly) I don't own my own copy -- my undergrad course was taught from Protter and Morrey's text, with Rudin as a backup reference. I do have a copy of Rudin's Real and Complex Analysis, though, a much-treasured reference that I received as a gift a few years ago; and the writing there has the same quality.

Rudin grew up in Austria, but his family fled when the Nazis came. You can find this, and a little of his family's history, in the book description on Amazon. What you won't find in that information is how lucky his escape through France was; nor will you find that he spent time as a radio operator in the British service; nor will you find out about his unconventional undergraduate career at Duke.

It's a short, tightly-written, interesting book; and I'm glad and impressed to see that Rudin's nontechnical writing is as stylish as his technical work.

Coincidental day

Tuesday is the eve of the Year of the Rooster; it is Mardi Gras; and it is my birthday. This is one of those coincidences that entertains me immensely.

State of the Union

I did not watch the State of the Union address last night. I was having an excellent dinner at the time; I think I win. I did read the transcript, though, and saw a few segments of the speech on the news while I was waiting in the airport for my flight home. Leaving aside the content of the speech, which predictably made my stomach ache, I was struck by both the written and the spoken form. As a piece of writing, it was standard political cant, written by someone basically competent, if not too inspired. I thought the prose was more than a little purple, but most political speeches suffer that flaw. Somehow, though, when Bush spoke those words, they sounded rather less eloquent.

I finished reading The Vintage Mencken on the plane ride. One of the last pieces in the collection is In Memoriam: WJB, perhaps the most scathing eulogy I've ever read. I'm not sure how many parallels can be drawn between GWB and WJB, but power of presentation is not one of them.

Mencken on WJB:

This talk of sincerity, I confess, fatigues me. If the fellow was sincere, then so was P.T. Barnum. The word is disgraced and degraded by such uses. He was, in fact, a charlatan, a mountebank, a zany without sense or dignity.