Monday, March 28, 2005

Empire Star

... Once she rather surprised both Jo and me by saying during one lesson when he seemed particularly recalcitrant and demanded another reason for why he had to improve his Interling: Besides, think how tiring your clumsy speech will be to your readers.

My what? He had already, with great difficulty, mastered his final consonants.

You have undertaken an enterprise of great pith and moment, and I am sure someday somebody will set it down. If you don't improve your diction, you will lose your entire audience before page forty. I suggest you seriously apply yourself, because you are in for quite an exciting time, and it would be rather sad if everyone abandoned you halfway through because of your atrocious grammar and pronunciation.

-- Samuel R. Delany, Empire Star (p. 32)

  • Currently drinking: Lychee black tea

Friday, March 25, 2005

Some Sums

Probably everyone reading this has heard the (likely apocryphal) story of Gauss being given sums as a school exercise. At least, I think it was Gauss. Anyhow, the story goes that the teacher told the class to sum the numbers from one to a hundred, and Gauss came back with the answer a few moments later. The numbers from one to a hundred can be grouped into 50 pairs, each of which sum to 101: 1 and 100, 2 and 99, 3 and 98, ..., and 50 and 51. Therefore the total sum is 50*101 = 5050.
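The pairing argument gives a general formula: the numbers 1 through n form n/2 pairs that each sum to n+1, for a total of n(n+1)/2. A quick sketch (in Python, just to check the arithmetic):

```python
# Gauss's pairing argument: 1..n forms n/2 pairs, each summing to n + 1,
# so the total is n*(n+1)/2. (Integer division keeps this exact for odd n,
# since one of n and n+1 is always even.)
def pair_sum(n):
    return n * (n + 1) // 2

assert pair_sum(100) == sum(range(1, 101)) == 5050
```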

If you happen to have studied computer science, you've probably heard this story more than once, because taking such sums is something you do all the time when analyzing algorithms. For example, suppose you want to multiply a vector by an n-by-n upper triangular matrix (solving an upper triangular linear system is about the same). Then you end up doing n(n+1)/2 multiplies and n(n-1)/2 adds. There's one multiply for every nonzero entry in the matrix, of which there are n in the first row, n-1 in the second row, and so forth; and for each row you have one fewer addition than multiplication.
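A brute-force tally confirms those formulas; this Python sketch just counts operations the way the text describes, row by row:

```python
# Count multiplies and adds in an upper triangular matrix-vector product.
# Row i (0-based) of an n-by-n upper triangular matrix has n - i nonzeros:
# one multiply per nonzero, and one fewer add than multiply in each row.
def triangular_matvec_counts(n):
    mults = adds = 0
    for i in range(n):
        row_len = n - i
        mults += row_len
        adds += row_len - 1
    return mults, adds

assert triangular_matvec_counts(10) == (10 * 11 // 2, 10 * 9 // 2)
```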

Now, most of us don't bother to analyze things that precisely. We look at an algorithm like triangular matrix multiply and say it's quadratic in n, not even bothering with the constant 1/2 in front. Sometimes if we're being persnickety we'll say that the time is n^2/2 + O(n) -- we're willing to keep track of the constant in front of the fastest growing term, but the rest is relatively unimportant, so we'll just point out that it's bounded from above by something linear in n.

All this is well and good, because it captures the dominant cost of the algorithm, and usually it's the most expensive part of your algorithm that causes you to think. Also with a little practice you can just look at simple algorithms and see that they cost c*n^k + O(n^(k-1)) time, where you know what c and k are. The trick is that, asymptotically, sums look like integrals, so that if you ever need to know what the sum from i = 1 to n of i^2 is, for example, you know immediately that it's n^3/3 + O(n^2), because the integral of x^2 is x^3/3 and, for large n, the integral and the sum look an awful lot alike.
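Here is the sum-versus-integral comparison for i^2, sketched in Python: the gap between the sum and n^3/3 grows only like n^2, as claimed.

```python
# sum_{i=1}^{n} i^2 versus the integral estimate n^3/3.
# The exact sum is n(n+1)(2n+1)/6 = n^3/3 + n^2/2 + n/6, so the
# difference from n^3/3 is n^2/2 + n/6 -- bounded by n^2 for n >= 1.
def sum_squares(n):
    return sum(i * i for i in range(1, n + 1))

n = 1000
assert sum_squares(n) == n * (n + 1) * (2 * n + 1) // 6
assert abs(sum_squares(n) - n**3 / 3) <= n**2
```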

I was, therefore, a little bewildered a couple weeks ago when someone asked me what the exact operation count was for some algorithm. He had converted his problem into a sum, and was stuck because he couldn't remember the exact formula for the sum of i^3 -- and he wasn't satisfied to know that it was about n^4/4. So I made an observation and he went away happy -- though I didn't manage to convince him that he was probably working very hard for some information he ultimately wouldn't care about.

The trick is this: if you have a sum of p(i) for i = 1 to n, and you know that p is a polynomial, then you know that the sum is going to have the form q(n), where q is some polynomial whose degree is one higher than the degree of p. Furthermore, you know how to evaluate q at n = 1, 2, 3, etc. -- just compute the sum by hand for those values. But a polynomial of degree d is uniquely determined by its values at d+1 points, so finding q from this data is no trouble at all! Here's the MATLAB/Octave script to find the coefficients of q from the coefficients of p:

  % q = sumn(p)
  %   Compute q(n) = sum_{i=1 to n} p(i),
  %   where q and p are polynomials.

  function q = sumn(p)
  n  = length(p);              % deg(q) = deg(p) + 1 = n
  x  = 1:n+1;                  % n+1 points determine a degree-n polynomial
  qx = cumsum(polyval(p,x));   % q(1), q(2), ..., q(n+1) by direct summation
  q  = polyfit(x,qx,n);        % interpolate to recover q's coefficients

If I were less lazy, I could probably find a more efficient algorithm than this -- but probably not one more efficient in terms of typing and thinking time. (I could fit the entire script on one line, actually, but then I might get confused, making it a false economy.)
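For anyone without Octave handy, here's the same trick sketched in Python with NumPy (np.polyfit and np.polyval play the roles of polyfit and polyval):

```python
import numpy as np

# q = sumn(p): coefficients of q(n) = sum_{i=1}^{n} p(i), found by
# evaluating the sum at deg(q) + 1 points and interpolating.
def sumn(p):
    n = len(p)                        # deg(q) = deg(p) + 1 = n
    x = np.arange(1, n + 2)           # n + 1 sample points
    qx = np.cumsum(np.polyval(p, x))  # q(1), q(2), ..., q(n+1)
    return np.polyfit(x, qx, n)

# Sum of cubes: q(n) = n^4/4 + n^3/2 + n^2/4, i.e. (n(n+1)/2)^2.
q = sumn([1, 0, 0, 0])                # p(i) = i^3
assert np.allclose(q, [0.25, 0.5, 0.25, 0.0, 0.0], atol=1e-6)
```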

More generally, I don't think computer scientists should ever patiently grind through operation counts for simple algorithms. If it's at all easy to get lost in the bookkeeping, be lazy and make the computer do it! Once you know the functional form of the operation count as a function of n -- for instance, if you know that it's a cubic polynomial -- you can run the algorithm a few times, let the computer tally the counts, and then fit the free parameters in your functional form to get the general case.
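As a sketch of that approach (the function names here are invented for illustration): instrument a naive matrix multiply to count its multiplies, then fit the known cubic form through a few sample sizes rather than deriving the count by hand.

```python
import numpy as np

# Run the algorithm at a few sizes and fit the known functional form
# (a cubic in n) to the measured counts.
def matmul_mult_count(n):
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1            # one multiply per (i, j, k) triple
    return count

ns = np.array([1, 2, 3, 4])
counts = [matmul_mult_count(n) for n in ns]
coeffs = np.polyfit(ns, counts, 3)    # a cubic through four points is exact

assert np.allclose(coeffs, [1, 0, 0, 0], atol=1e-8)   # the count is n^3
assert round(np.polyval(coeffs, 10)) == matmul_mult_count(10)
```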

Alas, if you think it's fun to spend a couple hours grinding through these sums, I probably won't change your mind.

Thursday, March 24, 2005

More Writing Comments

Good writing is to bad writing as idiomatic usage is to idiotic usage.

Culture of Life

Does anyone else think of Petri dishes when they hear the phrase culture of life?

Absent-Mindedness, Part Deux

Pete has things to say about my comments on absent-mindedness. In fact, Pete has many things to say about the topic. I think most of them are more interesting than my original comments, actually.

Wednesday, March 23, 2005

Absent-Minded Thoughts

Do you ever find yourself staring at a written word and thinking how strange it looks on the page? Or do you hear a clearly-spoken English sentence which you're unable to immediately process? What about looking for glasses while they're perched on your nose -- or sitting in clear sight on the desk in front of you? All these things happen to me regularly: at least once a week for the first two, and more often than that for the third. It never lasts long, and I'm sure Pete would tell me that it happens to everyone else, too, and has name X.

I've noticed a curious thing recently, though: the instances in which I find myself wondering at how strange a word looks on the page often happen just before the pieces to some long-standing personal puzzle click into place. The same thing happens when I program: a bit of code will suddenly look strange, and I'll realize that I should do something differently -- and the thing I should do differently is not always related to the bit of code that looks strange.

There's no novelty in the observation that our language shapes our thoughts. Perhaps it's the sudden changes in how we perceive our language that indicate that we've made new connections. It can be as simple as the recollection of a frustratingly elusive name or word -- which usually happens to me after I've quit trying to remember it -- or the realization that Hey, the sound bite that candidate has been using sounds good, but it makes no sense!

I have no illusions that my ability to lose my glasses is a sign of penetrating thought. It's a sign of nothing beyond absent-mindedness. But perhaps sometimes that sudden disconnect comes from something more subtle than absent-mindedness? I'm not certain, but it is an interesting... um...

Thought. It's an interesting thought.

Tuesday, March 22, 2005


I just read a Slashdot post in which Ramanujan is referred to as some Indian math guy. I regard this as about the same as calling Einstein some German physics guy.

Composing My Thoughts

This weekend the latter half of my last Amazon order came. I now have a copy of Mathematical Writing (Knuth) and Approximation Theory (Cheney). The former was much less intriguing than I thought it would be; but Teacher in America (Barzun) was part of the same order, and it was better than I expected, so in the mean I think I was pleasantly surprised. Anyhow, most of Mathematical Writing consists of scribe notes and handouts from a class on mathematical writing that Knuth taught at Stanford. The introduction ends with the following comment:

Caveat: These are transcripts of lectures, not a polished set of essays on the subject. Some of the later lectures refer to mistakes in the notes of earlier lectures; we have decided to correct some (but not all) of those mistakes before printing this report. References to such no-longer-existent blunders might be hard to understand. Understand?

Unpolished though the book may be -- perhaps I should just call it a bound report? -- it does include interesting exercises and handouts, and some references to books that sound intriguing. And between sometimes-dull transcripts of the highlights of class conversations, there are a few very interesting comments. In one such comment, Knuth noted that he usually writes his first drafts in longhand, because the speed at which he writes is almost perfectly matched to the speed at which he thinks. When he types, he types too fast for his thoughts to keep up. I'm glad I'm not alone in this.

Knuth's comment quickly came to mind this evening when I started to type out a response to a paper I'm helping to edit (I'm the third author, I think). The current draft had enough errors of notation and syntax that I got lost easily, and somewhere in the middle I just lost the thread of it. Frustrated, I sat down with pen and pad and started to write a letter to a friend about how confused and irate I was, and how I thought the exposition should proceed; and seven pages and a few hours later, I had a clear picture in my mind of where I think the paper gets lost, and how I think the problem can be fixed. I think I'll have to make a copy of this letter before I send it. I've had technical inspirations while writing letters before, and regretted not keeping a copy to re-read later. There is a proverb that says the palest ink is better than the best memory; and while my memory is good enough that I remember the technical ideas that come out of such inspirations, it is not so good that I can always recover the path of thought that led me to a fruitful perspective.

  • Currently drinking: Golden Monkey tea

Movies and China

The Elephant Pharmacy on Shattuck Avenue is a fancy sort of place, and most of the things it sells are expensive. But the store also rents DVDs, and the DVD rentals are a dollar a night. I only discovered how cheap the rentals were this weekend, and I perhaps went overboard by spending four dollars to rent Once Upon a Time in China, Iron and Silk, Harry Potter and the Prisoner of Azkaban, and Matrix Revolutions. I have little to say about the latter two movies, which I enjoyed, but which I'm glad I didn't pay full price to see. But Once Upon a Time in China and Iron and Silk were more interesting.

Once Upon a Time in China stars Jet Li as Wong Fei-Hong, a legendary martial artist from some time when the Manchurians were in power. I first heard about Wong Fei-Hong after Winnie and I watched Around the World in 80 Days; in that movie, Wong Fei-Hong turned up as the father of Jackie Chan's character. Winnie heard the theme music and started dancing around: Wong Fei-Hong! That's his theme music! I have to make you see a real Hong Kong kung fu movie with him in it, she said. She was right. I think we both enjoyed the movie, and there was the additional entertainment (for me) of picking out the few Cantonese phrases I actually knew, and (for Winnie) of poking my shoulder during all the action sequences and asking can you do that?

Iron and Silk is a movie adaptation of Mark Salzman's book of the same name; in fact, Salzman plays himself, as does Teacher Pan (as the text in the credits said, Teacher Pan played himself because it is inconceivable that anyone else could). I enjoyed the movie, though I enjoyed the book more, but there was one flaw. Whoever made the movie must have decided that the lack of a love interest was a fatal flaw, because one was grafted on -- and the graft didn't take. I think I would have known that there was something off about those scenes even if I hadn't read the book. That's a shame, because the scenes that were taken from the book were a lot of fun, and I would have liked to see a few more of them.

Saturday Dinner

I had friends over for dinner on Saturday. It's the first time I've invited people over in a bit over a month, and in that month I've had several good meals with friends: we made gyoza once, and we had a chicken dinner with all the trimmings another time. So I felt it was my turn.

We started with bread and tomatoes, steamed asparagus, and salad. Winnie handled the asparagus, which she peeled before steaming. I had never thought of peeling asparagus before, which is perhaps unsurprising given that half the time I'm too lazy to peel the stickers off apples and pears before I eat them. But there must be something to the peeling, because Winnie's asparagus was much more tender than mine usually is. The salad was Tracy's contribution, and it was also delicious: besides the greens and onions, there were steamed eggplant slices (Japanese eggplant, I think) and sweet toasted walnuts. Mike and Tracy brought a vinaigrette dressing with them, but I don't remember using it. A little salt for the eggplant was all I needed.

Next came a white bean soup. I don't think I'd made white bean soup since last winter, but it's a recipe I really like. I use recipe in the loosest sense, of course: the soup consisted of some ham, onion, bell peppers, celery, white beans, carrots, and seasonings, but I did not carefully measure the proportions. I remember that there were two cans of beans and two bay leaves, and that it was salty, but no more than that. You can't make the same soup twice, anyhow, any more than you can step in the same river twice; even if you follow a recipe, the character of the batch of ingredients will change the flavor. McDonald's has found a partial solution to this problem: bleach all the natural flavor from the food and then add it back in the form of flavoring additives (see Fast Food Nation). For my part, I like the variety.

The main course was polenta and chicken. I baked the chicken ahead of time, then spread it over rounds of polenta cut from a roll. I made a spicy tomato sauce to spread over top of that -- just an ordinary sort of sauce (tomatoes, onions, peppers) with a little jalapeno kick -- and then added a layer of fresh sweet basil. I topped the whole thing with cheese and baked it in the oven for half an hour at 350. Oh, it was good! It was filling, too, and I still have enough polenta left over for a few lunches.

We waited a while for dessert, which was fruit salad. It was an ordinary sort of fruit salad, with apples, oranges, bananas, and nectarines. But Winnie added some lime juice to the apples to keep them from browning, and then I drizzled everything with a mixture of cranberry juice and orange juice, and so the salad had a little extra sweet-sour taste. Even if we hadn't done that, though, what's not to like about fruit salad?

Ah, food. Ah, friends. Ah, leftovers!

  • Currently drinking: Gen mai cha


Late Saturday morning, Winnie and I walked to the Cheeseboard for bread. I can't see how any other bakery within a mile stays in business, said Winnie; I replied that other bakeries stay open longer. Besides, baked goods go well with coffee, and any city that can support as many cafes as Berkeley supports must also be able to support more than one bakery.

The Cheeseboard is a co-operative, which explains both the all-too-short business hours and the very reasonable prices. The store sells both bread and cheese, but I almost always stick with the bread. We each bought two rolls on Saturday morning, one sweet and one salty. I had an olive focaccia and a pecan roll, and Winnie had a provolone-olive roll and a corn-cherry scone. We each had a few bites of the other's bread, and it was all good. We saved the scone for last, and savored it (with coffee in my case) while sitting in Berkeley Espresso.

Beside the rolls, we bought a loaf of provolone-olive bread to go with dinner. I've mentioned this bread before: it's soft and salty and cheesy, and it goes great with heirloom tomatoes in the summer. We still bought some tomatoes to go with the bread at dinner, and though the tomatoes weren't as wonderful as summer tomatoes, we did choose a variety that had more flavor than so many red tennis balls. It was still a good combination.

  • Currently drinking: Coffee

Friday, March 18, 2005

To Blog

I can verb any noun you give me, said my undergrad linguistics teacher. She may have exaggerated, since I'm not sure how to use sky or desk as verbs. Still, I can think of many terms that prove her point: to email, to google, to code, to spam, to TeX, and to blog. People create new tools, feel dissatisfied with the current vocabulary for describing how the tools are used, and the language grows. But while I can motor or barrel down the road, or perhaps keep on truckin', I cannot (yet) Hummer or SUV down the road. I must remain content to drive Hummers or SUVs -- or rather, to be wary of others who do so. Why is it possible to Google someone but not to Hummer him?

There exist adequate old words to describe some of our new activities. I can write or search instead of e-mailing or googling; and while to spam has a certain flair with its evocation of potted meats, to harass also describes the primary activity of spammers. A journalist writes or reports: he does not journal. In an opinion piece, he may editorialize; the new verb indicates a specific type of writing, one which is supposed not to mix with reporting. Why, then, does a blogger blog? The noun blog describes a format which can be filled with many types of writing; superficially, to blog is little better as a verb than to legal pad. Perhaps to blog seems natural because to log is an accepted, related verb; but we do not say that a web designer webs, and to web seems an adequate verb, too.

I write to question, not to criticize. Etymology is fascinating, and the histories of new words are sometimes as interesting as the histories of the old ones.

Monday, March 14, 2005

Happy pi day!

It's 3.14, and there's mathematical graffiti for pi day all over the sidewalks in front of Soda Hall. Whee!

Thursday, March 10, 2005


If I roll two dice and I take the sum, I'm more likely to roll seven than to roll two. But is it possible that I could weight a pair of dice so that all possible sums are equally probable?

If you like mathematical puzzles, I strongly encourage you to stop here and play with this problem for a while. It's a fun one, and there are all sorts of generalizations. What if I use more than two dice? What if I use dice with four sides, or with twenty sides? This isn't an original problem, by the way. I learned about it from Bill Gasarch and Clyde Kruskal when I was an undergraduate (in 1998, I think); they have a paper on it.

I was reminded of this problem yesterday afternoon when one of my office mates asked me about a problem with the eigensystem for a nearly-Toeplitz matrix. It doesn't sound closely related, but it is. Both problems can be reduced to problems with polynomials, and to multiplying polynomials.

How would I compute the probability distribution associated with a sum of two independent dice rolls, anyhow? The answer is not hard:

P(die1 + die2 = i) = sum_{j+k=i} P(die1 = j) P(die2 = k).
Now, here's a remarkable thing. Suppose that I have two polynomials
g(x) = g_0 + g_1 x + ... + g_n x^n
h(x) = h_0 + h_1 x + ... + h_n x^n.
What are the coefficients of f(x) = g(x) h(x)? The answer looks suspiciously familiar:
f_i = sum_{j+k=i} g_j h_k.
The formulae for computing the distribution of a sum of random variables or for computing the coefficients of the product of polynomials are the same! This formula defines a convolution product of two sequences. Convolutions show up in the theory of polynomials, in statistics, in Fourier analysis, in image processing... all over the place.
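A quick numeric check of that identity, sketched in Python: convolving the distributions of two fair (0-5) dice gives exactly the coefficients of the product of their generating polynomials.

```python
import numpy as np

# Distribution of a sum of two independent fair dice (faces 0..5),
# computed two ways: as a convolution of distributions and as the
# coefficient list of a product of generating polynomials.
die = np.full(6, 1 / 6)

dist_of_sum = np.convolve(die, die)                       # P(die1 + die2 = i)
poly_product = np.polynomial.polynomial.polymul(die, die)

assert np.allclose(dist_of_sum, poly_product)   # same convolution either way
assert np.isclose(dist_of_sum.sum(), 1.0)       # still a distribution
assert dist_of_sum.argmax() == 5                # the middle total is likeliest
```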

Back to dice. Because distributions of sums of random variables and coefficients of products of polynomials can be computed in the same way, it makes sense to identify discrete distributions with polynomials. If X is a random variable which can take on values 0, ..., n, the generating polynomial for the corresponding distribution is a polynomial g(x) of degree n such that

g_i = P(X = i).
As we have seen, if X and Y are independent random variables with associated generating polynomials g(x) and h(x), then X+Y has generating polynomial g(x) h(x). It's worth noting here that not every polynomial is a generating polynomial: the coefficients of a generating polynomial have to be nonnegative and sum to one, or else they don't correspond to a probability distribution.

With the connection to generating polynomials in hand, we can rephrase the original problem. The generating polynomial for a uniform distribution over 0, ..., n is f(x) = c_n(x)/(n+1), where c_n is

c_n(x) = 1 + x + x^2 + ... + x^n.
When n+1 is prime (as it is for n = 10 below), c_n(x) is the (n+1)st cyclotomic polynomial. It can also be written as:
c_n(x) = (x^{n+1} - 1)/(x - 1).
This latter form shows that these polynomials are easy to factor: the roots of c_n are the (n+1)st roots of unity, except for the root at +1.

A uniform distribution for the roll of two six-sided dice corresponds to c_10(x)/11, a polynomial of degree 10 (in my world, by the way, dice are labeled 0-5 rather than 1-6). This polynomial has five pairs of complex-conjugate roots. If I have two weighted dice whose sum is uniformly distributed, I need the corresponding generating polynomials to have a product proportional to c_10; that means the generating polynomial for the first die must have roots at some subset of the roots of c_10, and the generating polynomial for the second die must have roots at the remaining roots. Furthermore, since I need my generating polynomials to be real, I can't split up a complex-conjugate pair of roots: if the generating polynomial for a die has a root at z, it must also have a root at conj(z).

I have ten roots; I have to split them into two groups of five; and the roots come in pairs that I can't split up. Each group would have to be built from whole pairs, so it would need an even number of roots -- but five is odd. The task is impossible, and so we have the answer to the question: there is no way to weight two ordinary dice so that the sum is uniformly distributed.
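The root-splitting argument can be checked numerically. This Python sketch verifies that all ten roots of c_10 are non-real, so a real degree-5 factor (which, having odd degree, must have at least one real root) cannot be assembled from them.

```python
import numpy as np

# The roots of c_10(x) = 1 + x + ... + x^10 are the 11th roots of unity
# other than 1: all on the unit circle, none real. A real degree-5
# generating polynomial has odd degree, so it must have a real root --
# but there are no real roots of c_10 to give it.
c10 = np.ones(11)            # coefficients of x^10 + x^9 + ... + 1
roots = np.roots(c10)

assert len(roots) == 10
assert np.allclose(np.abs(roots), 1.0)       # all on the unit circle
assert np.all(np.abs(roots.imag) > 0.1)      # none are real
```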

Wednesday, March 09, 2005

Books Again

I haven't written anything for a while about the books I've been reading. I've gone through a batch of good ones in the past two weeks or so, and if I say nothing about them now, I may forget.

  • Isaac Newton (James Gleick)

    I like Gleick's writing. I know I've written before about his book Faster, and I think I've written about Chaos, a book I read in high school. Isaac Newton is short (191 pages in the text), well-written, and thoroughly researched (56 pages of endnotes and sources). A biography of Newton could be written in many ways: a paean to a hero, a story of the time in which he lived, or a technical account of his scientific work. What Gleick wrote is an account of Newton's life, his interactions with the world, and his scientific discoveries. There are no mathematical details, though Gleick describes the impact of Newton's mathematical and physical thinking. Gleick paints a broad picture of the times, with details where relevant, but he writes mostly about Newton, not Newton and his times. I thought the book was well-balanced and well worth reading.

  • Just for Fun (Linus Torvalds and David Diamond)

    On the page between the introduction and the table of contents to this book is the text of an e-mail from Torvalds to Diamond, in which he writes If you think we can make a fun book, and more importantly if you think we can have fun making it, let's go for it. I think they succeeded. This book is not profound in any way, but it is fun, and it was a great book to read on the BART train. The book includes some interesting things about Finland, beyond the facts that it is cold and that the Finns love cell phones. There is a chapter at the end in which Torvalds prognosticates for a bit -- under some duress, it seems. But the book is mostly a wise-cracking autobiography by someone who likes to play with computers and doesn't take himself too seriously, and that's what it's supposed to be.

  • Babel-17 (Samuel R. Delany)

    The last time I was in the science fiction section at Black Oak Books, I saw that they'd put several fancy leather-bound volumes in between the paperbacks. The proprietors of Black Oak are some of the savviest book managers I know -- to my regret, sometimes, since (unlike at Half Price or Pegasus and Pendragon) I never walk out of Black Oak with a book I think is a steal -- so when they choose to put expensive leather-bound science fiction books into their stacks, I take notice. I pulled one of these books from the shelf to admire the binding and to read the foreword, then put it back and bought the much-less-expensive paperback copy of the same book: Babel-17, printed together with the novella Empire Star.

    I walked around in a daze for about half an hour after I read this one. It's a quick read, though not as quick as you might expect from the length; and it contains some interesting ideas and beautiful sentences. It is by far the most literary science fiction work I've read in the past six months -- maybe longer, since I no longer remember exactly when I picked up the anthology of Modern Classics of Science Fiction, which is the last thing I read to which I might make a reasonable comparison. I'll recommend this one to some of my friends and family who don't usually read science fiction.

  • A Passion for Books (Harold Rabinowitz and Rob Kaplan)

    The title of this book was chosen well. This collection of lists, essays, cartoons, and short narrative pieces is not a book about reading, though reading is a part of it. This is a book about books, the things you hold in your hand and smell and collect. The subtitle is A Book Lover's Treasury of Stories, Essays, Humor, Lore, and Lists on Collecting, Reading, Borrowing, Lending, Caring for, and Appreciating Books. This is a good book to read bit by bit: a little while waiting for tea water to boil, a little before going to sleep, and a little more while sitting outside and enjoying the sun. I haven't found it to be a particularly inspiring book, though, and while I've had fun reading it -- and will likely go back through some of the lists at some point in the future -- I doubt I'll re-read it.

  • Currently drinking: Black tea flavored with peach

Monday, March 07, 2005

Programming Languages

I use several programming languages day-to-day, and a few more languages occasionally. I have a passing familiarity with several more that I use rarely, as do most students with recent computer science degrees. They are our basic tools. They are so basic that I sometimes assume everyone knows about them, until I get brought up short with a question. What's a lambda function, David? Uh...

Well. Let me pick a few of my favorite and least favorite languages and, at least for once, write about them rather than the things I write with them.

  • C

    UNIX is written in C. Windows is written in C. I'm having a hard time thinking of an operating system in current use which was not written in C. Okay, BeOS may have been written in C++, and NeXTStep was written in Objective-C; but those both are children of C, and neither operating system is exactly a major concern any more.

    The C language and the UNIX operating system co-evolved. The first C compiler ran on a DEC PDP-11 machine, and the language has been (perhaps uncharitably) characterized as high-level PDP-11 assembly language. The language did not really depend on the machine, though; this seems like a no-brainer now, but at the time the UNIX operating system was remarkable because it was written at a sufficiently high level that it could be ported from machine to machine. The prevailing wisdom before that was that operating system kernels had to be written in machine-specific assembly language. The C language, though, worked well -- and works well -- as a systems programming language. C lets us program close to the metal, enough so that it's pretty easy to work out the instruction sequences the compiler will generate for a given piece of C code (it's not at all easy to predict what an optimizing compiler will generate, particularly on some modern architectures, but that inconvenient fact isn't really stressed in lower-division CS coursework). At the same time, there is enough stuff built into the language to support a more abstract programming style, one in which the programmer doesn't think that much about what the hardware is doing underneath. Also, C is a small language. It often takes students a long time to understand pointers, or to feel comfortable with some common C idioms, but it's possible to become acquainted with most of the syntax in the course of a couple days. The definitive reference -- The C Programming Language (Kernighan and Ritchie) -- takes scarcely more shelf space than the Elements of Style.

    By way of comparison, The C++ Programming Language (Stroustrup) is over 800 pages of smallish print. C is a small language; most of C's successors are not.

    C is alive and well. Standard C runs on almost anything, and it is still the language of choice for programming operating systems, embedded systems, and other low-level things. Some of us like it for higher-level libraries as well.

    I used C for the core of SUGAR 2.0 and 3.0, for CLAPACK, for ECFS (an Extended Cryptographic File System that I co-authored as a class project), and to write a kernel for my undergraduate OS class. I also use it for a variety of small side projects.

  • C++

    C++ is sometimes lumped together with C in a single breath: C/C++. However convenient it is to write, and however much some people try to think of C++ as a better C, the impression that the phrase C/C++ conveys is inaccurate: C and C++ are very different programming languages. It is true that C++ grew out of C, and that most C compilers now are also C++ compilers. But it's possible to identify a C programmer writing in C++ for the first time, or vice-versa. I really learned C++ before I learned C well, and I was immensely frustrated when I first switched. I know other people who have had the opposite experience, and I know some people who manage to really write C in C++. Reading their code is like reading prose written by a writer familiar with some European language -- Russian, perhaps -- with a good-but-imperfect command of English. Even if the syntax is perfect, the idioms get mixed up.

    C++ is sometimes called an object-oriented language. The idea of object-oriented design is that a program is partitioned into objects which contain both data and routines to act on the data. For example, in one program I might use several different dictionaries: a dictionary object would consist of a data structure which contained both keys and data -- words and associated definitions, perhaps, or employee numbers and employee records -- and associated operations like add an entry, delete an entry, or look up a key. The details of how the data is stored are hidden, since what's really important is what you can do with the data; this is the principle known as encapsulation. For some types of dictionary data structures, all that really matters is that I need some way to compare keys: the exact details of what goes into the keys are secondary. I can use the same code to delete an entry whether I have keys that are strings (words) or numbers: this is a type of polymorphism. Or I can take an existing dictionary type and -- without modifying the original code -- write a dictionary with new functions, like the ability to guess which word is meant when the user is a bad speller. I can still use my dictionary-with-spell-check in any place I use a dictionary; it's just that I've given it an extra new ability. This is an example of inheritance.

    C++ really doesn't force any sort of object-oriented thinking: it's just that it provides a syntax that makes it easier to express (some) object-oriented designs. It's possible to write object-oriented code in C, too; it's just that there is more supporting syntax in C++. Even more important than the supporting syntax is the fact that a C++ compiler will enforce the semantics of an object system: if you tell the compiler that you're going to pass a dictionary to some function, it will let you pass dictionaries and only dictionaries. If you try to pass around penguins in place of dictionaries, the compiler will yell at you; in the same situation, a C compiler will sometimes let you substitute a penguin for a dictionary. Then the penguin will bite you when you actually run the program. Birds do not take well to being force-fed employee records.

    C++ sometimes gets a bad reputation for being slow, clumsy, and complicated. It is complicated, but the charge that it's slow and clumsy is misdirected. C++ can actually be faster than C, if it's used properly; because of strong typing (the property that the compiler won't allow you to confuse a dictionary with a penguin), the optimizer has some opportunities that it wouldn't otherwise have. The trouble is that in order to write really fast code -- in any language -- it's necessary to have a good mental model of how the programming language structures are actually implemented. C++ supports much more complicated constructions than C does, and it takes more time and training to really understand not only the semantics of those added features, but also to understand the implementation well enough to know how much the features will cost. Similarly, object-oriented methods are a supplement to -- not a substitute for -- good design taste. It's possible to make a bad generalization which causes confusion and extra work with little benefit: for example, partial differential equations often behave very differently in 2D than in 3D, and a framework in which 2D and 3D problems are treated too similarly will lead to headaches. But some things about 2D and 3D problems really are the same, and should be treated with the same code. How do you tell whether to treat 2D or 3D the same or differently? It's a matter of good taste, and that's a hard thing to teach.

    I used C++ at my job at Tri-Analytics (and later at Digital Innovation) during my high school and college years. I have used it for various other projects since. The core of the HiQLab simulator is written in C++.


    FORTRAN 77 was standardized in 1977, the same year that I was born. I sincerely hope that FORTRAN 77 dies out before I do. The language does support complex arithmetic -- something that C lacked until very recently -- and often it's easier for an optimizer to generate fast code from FORTRAN 77 than from C. But the language syntax is outmoded and irritating. Why should I have to format my program for a punch card? And why should I have to restrict the length of my function names to six characters? FORTRAN also has very weak type checking; think penguins and dictionaries again, except now I can get away with calling a penguin an integer, which even C will disallow. Also, FORTRAN 77 provides no facilities for dynamic memory allocation: if I make a dictionary, I have to fix its maximum size at compile time. It's possible to get around this limitation (and doing so was one of my major contributions to the FORTRAN-language finite element code FEAP), but it takes a degree of cleverness and some mental gymnastics. FORTRAN 77 does not support recursion, does not have pass-by-value arguments, does not have pointers (or any good substitutes for pointers), and generally does not have the modern amenities.
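    The workspace trick alluded to above -- every F77 code's substitute for malloc -- can be caricatured in a few lines of Python (purely illustrative; this is not FEAP's actual scheme):

```python
class Workspace:
    """Mimic FORTRAN 77 storage: one big array whose size is fixed up
    front, with user 'arrays' handed out as integer offsets into it."""
    def __init__(self, size):
        self.store = [0.0] * size   # the one big array, sized at 'compile time'
        self.next = 0               # high-water mark

    def alloc(self, n):
        """Reserve the next n slots and return their starting offset."""
        if self.next + n > len(self.store):
            raise MemoryError('workspace exhausted; recompile with a bigger array')
        offset = self.next
        self.next += n
        return offset
```

    Real F77 codes pass these integer offsets around alongside the workspace array itself, which is exactly the sort of mental gymnastics I mean.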

    I like having the modern amenities. I enjoy running water and flush toilets; and I enjoy dynamic memory allocation and long function names. I do not generally think either preference is a weakness. That being said, there is a lot of well-designed numerical code in the world which is, alas, written in FORTRAN 77. And I am willing to use and modify such codes, which means that I am willing to continue to use FORTRAN 77.

    There are more recent dialects of FORTRAN than F77: we have F90, F95, FORTRAN 2000, and now FORTRAN 2003. I actually rather like these languages, though they are more complicated than the older FORTRAN dialects, and they show some of the symptoms of committee design. I'm not sure why these languages haven't caught on in the US, but so far they seem to be more popular in Europe. I see only one reason to continue using FORTRAN 77 rather than a more recent dialect: the free GNU compiler, which a lot of people use and like, currently only supports FORTRAN 77. But that will change with the next major release of the GNU compiler suite, which pleases me immensely.

    The FORTRAN language is dead. Long live the FORTRAN language!


    MATLAB was originally short for MATrix LABoratory, and the language is ubiquitous among those of us who do any sort of matrix computations. Not only is MATLAB often used to develop numerical algorithms, but MATLAB's syntax is often used in a purely mathematical context to express indexing operations which are common in numerical linear algebra. In addition to providing a concise syntax for linear algebra operations, MATLAB provides basic libraries for a variety of numerical tasks, including numerical quadrature (computing integrals), numerical integration (computing solutions to ordinary differential equations), and graphics and plotting in many flavors.

    Of course, there aren't that many numerical linear algebra specialists in the world, and most of us don't make that much money; the MathWorks makes most of its money from electronic designers, and particularly from DSP designers. Besides the basic capabilities to which every MATLAB user has access, the MathWorks sells toolboxes of routines for DSP design, filter design, control theory, system modeling, statistics, symbolic math, and PDEs. There are also a variety of third-party toolboxes (both free and commercial).

    I love MATLAB for its matrix syntax. I like the fact that it includes a wide range of numerical routines. But I'm not so attached to the rest of the language. MATLAB only allows one globally-visible function to be provided per file; that quickly becomes annoying for large programs. MATLAB does have explicit language support for OO design, but it's awkward. And if MATLAB strings seem well-designed compared to strings in C and FORTRAN, that's only because string manipulation in C and FORTRAN is so miserable.

    Almost every numerical code I've written, large or small, has started life as a MATLAB program. My MATLAB interface to FEAP (FEAPMEX) and the SUGAR and HiQLab simulators all combine a computational core in some compiled language with a MATLAB user interface. I've also used a mixture of MATLAB and compiled languages in my work on structured eigensolvers and in my work on network tomography. I sometimes use Octave, which is a free numerical system with a MATLAB-like syntax; but Octave only recently added decent sparse matrix support (which I use a lot), and its graphics aren't as nice. So I still mostly use MATLAB.

  • LISP

    LISP is a LISt Processing language; or is it Lots of Irritating Strings of Parentheses? In any event, LISP is one of the oldest high-level languages. It is also one of the simplest. LISP programs work on lists, or lists of lists nested in various ways. LISP programs also are lists themselves: the main data structure is not only used to represent data, but also the instructions that operate on the data. Consequently, for a LISP programmer it is very natural to write a function which takes a function as input and produces another function as output. Deep, no? The elegance and simplicity of such a language is rarely appreciated by students who learn Scheme (a LISP dialect) in a first programming course.

    At a superficial level, LISP confuses the uninitiated because it is a prefix language. For instance, in LISP I would write 1+2*3 as (+ 1 (* 2 3)): the operator comes before the operands, not between them. If Shakespeare wrote in LISP, Hamlet might have said (or to-be (not to-be)) (and in a post-fix language like Forth, it would be tobe tobe not or). The difference between prefix and infix notation is not deep, though, and most people figure it out pretty quickly, particularly if they ever used an HP calculator.
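    The prefix convention is easy to mimic; here is a toy evaluator in Python for nested-tuple expressions in the (+ 1 (* 2 3)) style (an illustration I wrote for this post, not a real LISP):

```python
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def eval_prefix(expr):
    """Evaluate ('+', 1, ('*', 2, 3)) the way LISP evaluates (+ 1 (* 2 3))."""
    if not isinstance(expr, tuple):
        return expr                        # a bare number evaluates to itself
    op = OPS[expr[0]]
    args = [eval_prefix(e) for e in expr[1:]]
    result = args[0]
    for a in args[1:]:                     # LISP operators take many arguments
        result = op(result, a)
    return result
```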

    At a deeper level, LISP is a functional language. A well-written LISP program typically will not involve assignments and loops; instead, operations are expressed recursively. If I wanted to sum the numbers from 1 to n in LISP (and didn't remember the formula n(n+1)/2), I might write something like

      (defun sumit (n)
          (if (= n 1)
              1
              (+ n (sumit (- n 1)))))

    That is, if n is 1, then the sum is 1; otherwise, the sum is n plus the sum through n-1. In contrast, if I were writing in an imperative style in a language like MATLAB, I might write

      function sum = sumit(n)
      sum = 0;
      for k = 1:n
        sum = sum + k;
      end

    It's possible to write in a functional style in a language like C, and it's possible to write in an imperative style in LISP. As with the distinction between C and C++, the difference is not in what the language can do; it is in the idioms to which the language lends itself. The same can probably be said of the technical jargon of different disciplines, but that's a topic for a different time.

    I used LISP in my Artificial Intelligence course (like every other person to take such a course). I also use Emacs LISP from time to time, and on a few occasions I wrote short functions in a LISP dialect which was used as one of the configuration languages for Collector (the flagship software product produced by the company I worked with in high school and college).

  • Pascal and Delphi

    I took AP Computer Science in high school -- sort of. You see, I'd already learned to program at the time, and it quickly became clear that it would be pointless for me to walk through the usual exercises. So I got to pursue my own projects in a semi-independent study; and I helped the other students in the class, which was also entertaining. The class was still taught in Pascal -- and the AP exam was still given in Pascal -- so I taught myself Pascal. I have few things to say about it, except that the support for character strings was pretty lame (again), and that I always found the C style of using semicolons to end statements to be more natural than the Pascal style of using semicolons to separate statements.

    Pascal was an adequate language when I used it, and I suppose it's fine for instruction. But Delphi was a completely different matter. We used TurboPascal for class, and whatever I thought of the basic Pascal language, the Turbo compiler was a delight -- and the folks who wrote the Turbo compiler extended the language in a good way (with better string support, for one thing). But Delphi! Delphi was -- is -- Borland's answer to Visual BASIC. The programming language is an extended version of Pascal which bears roughly the same relation to standard Pascal as C++ bears to standard C. Like Visual BASIC, a lot of Delphi programming was centered around GUI-driven events: click on this button and that piece of code is executed, for instance. But Pascal is a nicer programming language than BASIC, and the whole business was so much cleaner in Delphi that there was hardly a comparison. The Delphi compiler, like the Turbo compiler, was amazingly fast; the code it generated was pretty fast, too. As far as I could tell, it was in almost all ways technically superior to VB.

    I used Delphi to write user interface code, both for Digital Innovation and for the Department of Communication Services (DCS) at the University of Maryland.

  • Lua

    One of my favorite languages for the past couple years has been a little scripting language called Lua. The name comes from the Portuguese word for moon; the language and the name evolved from SOL, a Simple Object Language (sol is sun in Portuguese). SOL, in turn, was a descendant of DEL, a Data Entry Language. DEL, SOL, and Lua were all inspired by the needs of engineers setting up petroleum engineering simulations, so one of the criteria in the minds of the designers was can someone who only knows FORTRAN 77 pick this up? At the same time, the language was designed by computer scientists, and it has a number of lovely and useful features: functions as first-class objects, syntactic sugar to support object-oriented programming, and a very natural table-based data description syntax. Yet it's not a complex language: the complete syntax description fits on one page, and the language manual -- including the description of the extension API and standard libraries -- is about sixty pages. The interpreter is small, fast, and portable.

    One of the nice things about Lua is how well it serves as an extension language. Python, Perl, Tcl/Tk, and various other high-level scripting languages are easy to extend, but they are not as easy to use as extensions. Size is one issue: Python, Perl, and Tcl are not little languages. Syntax is another issue: when I start programming in Perl after a hiatus, I always forget half the dollar signs; and in Tcl, I always forget which sort of braces I should use. There are a few Scheme interpreters that are billed as extension languages, and from personal experience I know that I like Scheme for extensions. But -- the examples of AutoCAD and CADENCE notwithstanding -- I'd rather not spend my time teaching mechanical engineers how to use LISP. Besides, I like infix notation. Lua wins because it supports a variety of programming styles; because it is easy to call from C and can easily be extended from C; because it has an intuitive syntax; and because it's small and fast. I think the language is underappreciated, though some of the video game industry has realized how nice Lua is: Lua is used to script the most recent Monkey Island games, among others.

    I use Lua in the HiQLab simulator both to describe my meshes and to script out analysis tasks. I used Lua for the same purpose when I designed SUGAR 3, though at that time I felt obliged to extend the Lua syntax in order to support some features I'd built into the input language I wrote for SUGAR 2. Let me give an example of why Lua works so nicely as a mesh description language. When you solve a differential equation -- partial or ordinary -- one of the things you always have to worry about is the behavior at the boundary. In HiQLab, I have a very flexible way to define boundary conditions: I write a Lua function to do the job. For instance, the code fragment

        if x == 0 then return 'uu', 0, 0; end

    tells my code that all of the points along the line x = 0 in my two-dimensional mesh should not move. The 'uu' means that I'm specifying displacements in both the x and y directions; the two zeros are the values of the displacements. I can write my code this way because Lua treats functions as first class values: I can define a function and pass it as an argument to another function, just as I can in LISP. Also, I've defined an object-oriented interface for my geometry definitions: I have a mesh object (called mesh, strangely enough), and that object has a method called set_bc.
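    The shape of this interface is easy to sketch in Python (the names mesh and set_bc follow the text above; everything else here is invented for illustration and is not HiQLab code):

```python
class Mesh:
    def __init__(self, nodes):
        self.nodes = nodes     # list of (x, y) node coordinates
        self.bc = {}           # node index -> boundary condition

    def set_bc(self, bcfun):
        """Call a user-supplied function at every node; keep what it returns."""
        for i, (x, y) in enumerate(self.nodes):
            result = bcfun(x, y)
            if result is not None:
                self.bc[i] = result


mesh = Mesh([(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)])
# Pin both displacement components along the line x = 0:
mesh.set_bc(lambda x, y: ('uu', 0, 0) if x == 0 else None)
```

    The boundary-condition function is just data to be passed around, which is the whole point of functions as first-class values.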

    I generate the interfaces between Lua and C/C++ code using an interface compiler that generates the glue code I need from a cleaned-up header file. I liked the features of the tolua interface compiler well enough that I added some of them to the matwrap interface compiler that I use to generate interfaces between MATLAB and C/C++ code.

    One summer at Digital Innovation, a co-worker and I worked on an interpreter for a language called EEL (Expression Evaluation Language) which we hoped would replace the LISP dialect used to script the Collector database system. I think Lua is what we might have hoped to design, but didn't quite achieve -- though when I mentioned this to Guy, he thought that EEL would still kick ass. I suppose it's a moot point, now.

  • AWK and Perl

    AWK was the first language I learned with really good text string processing. I suppose it's reasonable that I would be impressed by AWK's text processing: after all, the whole point of the language is pattern scanning and text processing. I use AWK for a lot of little things, like massaging text files full of data as I pass them from one program to another. I have a pair of shell scripts that I use a lot, awk-process and awk-unprocess, which let me quickly apply complicated edits to several programs and then undo those edits if I make a mistake.
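    For readers who haven't used AWK, the flavor of these little massaging jobs is easy to convey. Summing the second column of a whitespace-separated data file is a one-liner in AWK, awk '{ s += $2 } END { print s }', and might look like this in Python (a made-up example, not one of my actual scripts):

```python
def sum_column(lines, col):
    """Sum one whitespace-separated column, AWK-style (columns numbered from 0 here)."""
    total = 0.0
    for line in lines:
        fields = line.split()
        if len(fields) > col:    # quietly skip records without that field
            total += float(fields[col])
    return total
```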

    Perl (Practical Extraction and Reporting Language or Pathologically Eclectic Rubbish Lister, depending on who you ask) includes the capabilities of AWK and much more. However, I rarely write an AWK script that's more than a page, and so I rarely find I want the extra capabilities I could get from Perl. I know enough Perl to be able to read and edit other people's code, but I write almost as little new Perl as I write new Fortran 77. If I want to write new scripts and I need something more than AWK, I'll use Python. Why Python? Because, unlike Perl, it doesn't look like someone accidentally transcribed modem line noise onto the screen. Perl's very powerful, but the syntax is something only a camel could love.

    I suppose Tcl/Tk is also an option for the same sort of high-level scripts that I might write in Perl or Python. Tcl/Tk is an older scripting language; its best point, in my opinion (and in the opinion of some others) is its integrated GUI toolkit (which is what Tk is). Tcl/Tk makes it possible to very quickly write graphical applications for Windows or UNIX, and I used it to make a graphical user interface to the program I wrote for my undergraduate research (TkBTG, a code for statistical prediction of spatial data). But while Tk is nice, I always had trouble debugging Tcl. You see, in Tcl there are specific meanings associated with {}, (), and []; and at critical times, I kept forgetting which was which. That said, I know that OpenSEES -- a locally-developed open-source simulator for determining the response of structures in earthquakes -- uses Tcl as a scripting language.

  • PostScript

    It seems only fair to end with PostScript. PostScript is primarily a document description language, but it's a full-featured language which is capable of more than drawing documents (though I think there's a certain perversity involved in using PostScript to write a SETI@Home client, for instance). PostScript is the lingua franca for UNIX printer drivers, and many high-end printers will accept PostScript directly.

    PostScript is a postfix language. That is, instead of putting the functions and operators before the data on which they operate (as happens in LISP), one puts the operators after the data. The PostScript interpreter maintains a stack, and whenever it sees an operation -- like + -- it will take an appropriate number of input operands off the top of the stack and push a result onto the top of the stack. The fact that the interpreter hardly needs anything more than a stack and a table of variable definitions means that the interpreter can be pretty simple. Also, because the interpreter can usually do something immediately when it sees a word -- like push something onto the stack, or run an operation -- a PostScript interpreter needs to include only a minimal parser.
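    A stack machine of this flavor takes only a few lines to sketch in Python (a toy I wrote for this post, handling just a few PostScript-style operators):

```python
def run(tokens):
    """A toy PostScript-flavored stack machine: numbers push themselves,
    names pop operands off the stack and push results back on."""
    stack = []
    for tok in tokens:
        if tok == 'dup':
            stack.append(stack[-1])        # duplicate the top of stack
        elif tok == 'exch':
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif tok == 'add':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == 'mul':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(tok)              # a literal number
    return stack
```

    Feeding it the arithmetic from the loop body below with n = 2 on the stack leaves [2, 684] -- that is, n and -18*n+720, just as the stack comments claim.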

    To give an idea how this all works, let me give a simple example. The following program prints the numbers 1-20 in a column running down the side of the screen:

      /Times-Roman findfont 15 scalefont setfont
      1 1 20 {
        dup                % stack: n, n
        -18 mul 720 add    % stack: n, -18*n+720
        72 exch            % stack: n, 72, -18*n+720
        moveto             % stack: n
        3 string cvs show  % stack:
      } for
      showpage

    The first line sets the font (15 point Times Roman), and the last line shows the page. The lines in between are the interesting part. In PostScript braces denote a procedure: these are the only things that aren't immediately processed in full when the interpreter sees them. Here, the interpreter pushes the loop bounds (1 through 20 in steps of 1), then the loop body procedure. Then it calls the for command, which says pop three elements from the stack and use them as loop bounds; for each value of the index (from 1-20, here), push the index value on the stack and then execute the procedure inside. The procedure inside computes a position on the page (the point 72,-18*n+720), moves there, and prints a three-character string containing the value of n.

    If you think that's intuitive, you've either programmed in PostScript before or your mind has been warped by too much time with an HP calculator. Or, possibly, your mind works better with postfix notation than mine does. Postfix languages are attractive because they are simple to implement and because it's simple to automatically generate code for them -- not necessarily because it's simple for a human being to write code for them.

    Most of the time, I only work with PostScript through automated tools (sometimes produced by others, sometimes devised by me). But the idea of a stack-based machine isn't good just for generating printer commands. The Java Virtual Machine is stack-based; so is the virtual machine that Lua uses internally. The x87 floating point coprocessors and the integrated floating-point units in some later Intel chips are stack-based; this last is a mixed blessing, but that's a story for some other day.

Thursday, March 03, 2005

How Long?

Know this feeling?

Structure and Speed

Hit a tuning fork on a table and it will ring -- for a while. Over time, the ringing will gradually decay away, its energy spent. Some of the energy of resonance goes into the air; some of it gets absorbed into motion of defects in the metal; and some of it will be absorbed by the hand that holds the tuning fork. If you have a very tiny tuning fork made out of a nice material like silicon (in which the damping due to defects is low) and you ring it in a vacuum, another source of damping becomes important. Solids, like gases, change temperature when they are expanded or compressed. In a vibrating tuning fork, different parts of the structure are expanded and compressed at different rates; as a result, there are differences in temperature from one part to another. Heat likes to even itself out, so these differences in temperature tend to smooth themselves out over time. Once the heat has diffused, it cannot be turned back into mechanical energy of motion, and so there is a little less energy in the vibrating tuning fork. This mechanism is known as thermoelastic damping; it was explored by Zener (of Zener diode fame) in a long sequence of papers in the late 30s and early 40s, and has more recently been of immense interest to designers of microscopic mechanical resonators that are used for different types of sensors and signal processing components.

The equations for linear elasticity are pretty nice. So are the equations for diffusion of heat. The fully coupled equations of thermoelasticity -- including thermal expansion and temperature variations driven by changes in volume -- are a mess. The coupling terms screw up all the nice structure that makes elasticity problems and heat flow problems so (relatively) easy to solve on their own. But those coupling terms are small, and so it's possible to do something clever: figure out what the mechanical vibrations would be if there were no thermoelastic coupling, and then add a term to correct the shape and frequency of the vibration (to first order).
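The correction described here is the standard first-order eigenvalue perturbation recipe; in generic terms (this is the textbook version, not HiQLab's actual formulas):

```latex
% Perturbed eigenproblem: small coupling parameter \epsilon
A(\epsilon) = A_0 + \epsilon E, \qquad
\lambda(\epsilon) \approx \lambda_0 + \epsilon \, \frac{y^* E x}{y^* x}
```

where x and y are right and left eigenvectors of the uncoupled A0 for the eigenvalue lambda0. One uncoupled eigensolve plus a few inner products gives the first-order change in both frequency and damping.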

HiQLab, my latest toy, can calculate the effects of thermoelastic damping by either solving the fully-coupled equations in a brute force fashion (relatively speaking -- even the brute force calculation requires some care), or by using the perturbation technique that I described above. Here's a comparison of the two methods on a sample problem using the stand-alone version of the program (there's also a MATLAB version). Notice the elapsed time lines.

I'm so pleased when things work as they should.

HiQlab 0.2
Copyright   : Regents of the University of California
Build system: i686-pc-linux-gnu
Build date  : Thu Mar  3 19:29:03 PST 2005
Bug reports :
Lua 5.0.2  Copyright (C) 1994-2004 Tecgraf, PUC-Rio
> method = 'full'
> dofile('driver_beam.lua')
Shift w0         : 88 MHz
Damped w (real)  : 87.6735 MHz
Damped w (imag)  : 0.00500905 MHz
Q                : 8751.51
Elapsed time     : 87
> method = 'pert'
> dofile('driver_beam.lua')
Shift w0         : 88 MHz
Damped w (real)  : 87.6735 MHz
Damped w (imag)  : 0.00501454 MHz
Q                : 8741.93
Elapsed time     : 7
[dbindel@localhost beam]$
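One sanity check on the numbers above: for a lightly damped resonance, the quality factor is Re(omega)/(2 Im(omega)), and the log is internally consistent (the function below is mine, just for checking):

```python
def quality_factor(w_re, w_im):
    """Q = Re(w) / (2 Im(w)) for a lightly damped resonance."""
    return w_re / (2.0 * w_im)

# Values from the run log above:
q_full = quality_factor(87.6735, 0.00500905)   # full coupled solve: ~8751.5
q_pert = quality_factor(87.6735, 0.00501454)   # perturbation method: ~8741.9
```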

Wednesday, March 02, 2005


The Mythical Man Month: Essays on Software Engineering (Brooks), like Elements of Style (Strunk and White), has a proud place on my shelves. Brooks's book is now thirty years old; I have the twentieth anniversary edition, which is still in print. The technical references are dated; the ideas about writing, design, and engineering organization remain fresh.

The last section of the chapter Ten Pounds in a Five-Pound Sack is titled Representation is the Essence of Programming. The section begins with these words:

Beyond craftsmanship lies invention, and it is here that lean, spare, fast programs are born. Almost always these are the result of strategic breakthrough rather than tactical cleverness. Sometimes the strategic breakthrough will be a new algorithm, such as the Cooley-Tukey Fast Fourier Transform or the substitution of an n log n sort for an n^2 set of comparisons.

Much more often, strategic breakthroughs will come from redoing the representation of the data or tables. This is where the heart of a program lies. Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious.

The terms flowcharts and tables are dated (in The Cathedral and the Bazaar, Raymond substitutes code and data structures); but the meaning remains relevant. Representation is the essence of programming. It is also the essence of physics, of engineering, of mathematics, and of written language. When I build a physical model, which details do I include, and which do I ignore? Do I write equations for particle dynamics or wave mechanics, or something completely different; and can I go from one to the other when it makes my life easier? When I learn how a microprocessor is designed, do I think about a billion transistors, or about caches and pipelines and arithmetic units?

Our impression of the world comes through our senses; our understanding of the world comes through the representations we build from those impressions.

  • Currently drinking: Coffee