Thursday, August 25, 2005

Fall 05

Classes start next week. How about that?

A couple years ago, I was the TA for the local graduate parallel computing course. That was a lot of fun, though it was also a lot of work. Kathy Yelick has done a lot with performance tuning and language design for parallel machines; and I'm a numerical analyst who happens to know something about systems. So I think the folks taking the class were exposed to a pretty good range of topics, more than they might have been if (for example) I were the TA for my advisor.

Since the parallel computing course, I haven't been formally assigned as a TA. I've had a number of students come by my office to ask questions related to projects in numerical linear algebra, and would have probably been the TA for the numerical linear algebra course with Kahan a couple semesters ago if Jim hadn't nixed the idea (perhaps correctly). However, there is -- correctly -- a teaching requirement for graduate students in the CS PhD program; and because it was a graduate course, the parallel computing class didn't fulfill that requirement for me. So I will be teaching this coming semester, in addition to wrapping up my thesis, writing job applications, and trying to make forward progress on my research.

It looks like it will be a busy semester. And it starts next week! And I'm delighted about it. I'm sure I'll be busy, but won't it be fun? I work well when I have several distinct tasks to occupy my time: a few hours proving theorems, a few hours writing code, and a few hours scribbling notes. And yes, a few hours to eat and exercise goes in there, too. A few hours of teaching will fit nicely, I think.

The class for which I'm the TA is CS 164: Compilers and Language Design. Yes, this is a somewhat peculiar assignment: if we went by background, I would probably first be assigned to teach a numerical computing course, an algorithms course, or even an operating systems course. But all the numerical computing courses at Berkeley live in the math department, and thus wouldn't fulfill my requirement; and I'm certainly sufficiently broadly qualified to be a TA for many courses in CS, including compilers. It's been a while since my undergrad course, but I do know the material -- I've even used it for my work (academic and commercial) a few times. Besides, I really like this material. The theory is elegant, accessible, and obviously and immediately practical (with the great advantage that I'm thus in a position to be able to crank out a big chunk of the class project code in one or two evenings -- maybe I'll write later about writing an LL(1) parser generator in Common Lisp). And the tradeoffs in language design often center around aesthetic issues of how things are most naturally expressed, issues that can be appreciated by almost anyone who has written a program.
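Since I mentioned the LL(1) parser generator: the heart of any such generator is the FIRST-set computation, which is a tiny fixed-point iteration. Here's a minimal sketch in Python rather than Lisp, over a toy expression grammar of my own invention (not the CS 164 project language):

```python
# A minimal FIRST-set computation, the core of an LL(1) parser generator.
# The toy grammar below is an invented example.  None stands in for epsilon.
GRAMMAR = {
    "E":     [["T", "Etail"]],
    "Etail": [["+", "T", "Etail"], []],   # [] is the empty (epsilon) production
    "T":     [["id"], ["(", "E", ")"]],
}

def first_sets(grammar):
    """Iterate to a fixed point.  FIRST(A) holds the terminals that can
    begin a string derived from nonterminal A."""
    first = {a: set() for a in grammar}
    changed = True
    while changed:
        changed = False
        for a, prods in grammar.items():
            for prod in prods:
                before = len(first[a])
                for sym in prod:
                    if sym not in grammar:            # terminal: it can start A
                        first[a].add(sym)
                        break
                    first[a] |= first[sym] - {None}
                    if None not in first[sym]:        # sym can't vanish; stop
                        break
                else:                                 # every symbol was nullable
                    first[a].add(None)
                if len(first[a]) != before:
                    changed = True
    return first

fs = first_sets(GRAMMAR)
print(fs)   # E and T start with 'id' or '('; Etail starts with '+' or is empty
```

From FIRST (and the analogous FOLLOW computation), building the LL(1) parse table is a matter of bookkeeping -- which is why the whole generator fits comfortably in an evening.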

There are about fifty students in the class. So I'll be responsible for two sections of about twenty-five each. With that many people, I would be very surprised if there weren't other people who took just as much joy in the subject as I do. I'd probably also be surprised if there weren't a few who turn out to be the most irritating sorts of language lawyers, but it takes all types -- and I have quite a few friends among the irritating language lawyer crowd.

Anyhow, one of the current requirements for GSIs at Berkeley is that we should take an online course on Professional Standards and Ethics for Graduate Student Instructors. So I came to the cafe to sit and work through the first module. It was, I suppose, an educational experience. But I've already decided that I have some comments for the course feedback system, once I finish the last section:

  • Yes, we're not supposed to discriminate based on race, creed, sexual preference, ethnicity, hair style, eye color, shoe size, taste in music, etc. Do you have to list all those things more than once? At some point, the eye tires of lists; I don't know about other readers, but I have to force myself to actually pay any attention beyond the first item or two, and in this case I see no point in bothering. You're supposed to be teaching about teaching -- do you really think that legalese is the best language for pedagogy?
  • Do not write about groups of individuals. What else would make up the groups? Garden vegetables, perhaps? You don't clarify anything with the extra words. This is supposed to be a course on pedagogy, not a course on advertising or professional business obfuscation. Let your yes be your yes, and let your group be your group! Otherwise people like me will get bees in their bonnets and run in circles crying Ow! Ow! Ow! rather than actually absorbing the rest of your sentence.
  • Similarly, what's the point in writing that discrimination is the granting of preferential treatment to some but not to all? It's a bad definition. Preference implies exclusivity; the phrase to some but not to all is just so much extra verbiage. Even worse is the phrase granting of: treatment is already a noun, you know. And if you remove the excess, this sentence defines discrimination as preferential treatment, which seems to me to be a pretty obvious definition. Bah!
  • Bees! Ow! Ow! Ow!

With this sort of training, I'm sure that I'll be enabled to act effectively as a more useful and pedagogically sound instrument toward student learning and fulfillment. Or maybe I'll just become a crotchety ass.

  • Currently drinking: Black coffee

Wednesday, August 24, 2005

Local names

Modern programming languages support the notion of restricted lexical scope: the meaning assigned to a name can be made to hold only over restricted sections of the program text, allowing the programmer to re-use the same symbol to refer to different things depending on the context. It's hard to overstate how useful this is. It's so useful, in fact, that we fake it when we don't have it; if you've ever seen an old C program with function names like my_package_foo() and my_package_bar(), this is exactly what's going on.
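To make the contrast concrete, here is a tiny Python illustration (all names invented for the example): the same symbol gets three independent meanings, one per scope, with no prefixing gymnastics required.

```python
# Lexical scope: the same name can mean different things in different
# contexts.  All names here are invented for illustration.
x = "outer"                  # module-level meaning of x

def area_of_square(side):
    x = side * side          # a local x, shadowing the one above
    return x

def greeting():
    x = "hello"              # another local x, independent of both
    return x

print(area_of_square(3), greeting(), x)   # the three x's never collide
```

Without this, the only defense is the `my_package_` prefix convention -- lexical scope done by hand.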

It's more difficult to restrict the scope of a definition in an ordinary prose document. We can fake it by saying for this section only, let x be ..., but it's awkward, and readers get to the start of the next section and forget that you no longer mean the same thing by x. This is, I think, a problem with how math is often presented: usually we have an informal way of introducing local variables (anything defined inside the proof of a theorem is usually local to that proof, unless the author makes noises about it being important elsewhere). But sometimes I want a definition just for a few lines, and it's hard to do that with ordinary English text.

When you don't have enough control over where a particular symbol means what, you're left constantly grabbing for more symbols. Programmers sometimes refer to this problem as namespace pollution. A lot of the tedium of mathematical writing -- and mathematical reading, for that matter -- boils down to managing namespace pollution. After a few years, most people get used to it: yes, my paper involves five different concepts, all of them attached to the label B, but I can figure out from context which is which -- and if not, I can always grab another letter. Right? Except that, in any piece of writing longer than a few pages and dealing with more than a few concepts, it is terrifyingly easy to run through all of the Greek and English alphabets (lower case and upper case). It's even easier to run out of symbols if you choose not to bewilder your readers by letting n be a small quantity, epsilon be an integer going to infinity, t be a quantity with units of length, and x be a quantity with units of time.

This issue of notation and namespace pollution is a constant nuisance to me when I'm writing proofs and when I'm reading codes written by mathematicians who aren't experienced programmers. When I'm writing proofs, I run out of symbols too quickly, because I start with a quasi-English notation in which the scope of the assigned meanings is clear. Let C mean this thing in block 1; let it mean that thing in block 2; and let it mean something else in block 3. But when blocks turn into paragraphs, I have a problem. When I'm reading code written by inexperienced programmers (not just mathematicians, but also engineers and physicists), I see the same problem in reverse: they clearly started from a mental model in which they didn't have any locality of definitions, and so their codes are crowded with global variables, or with local variables with single-character names which are used for seven different purposes over the course of a subroutine. Bah! It makes a code difficult to understand and nearly impossible to debug.

I dare say that, if we had some way of indicating locality of definitions in spoken and written English, it would be a lot easier to convince people when they were making logical fallacies (particularly those involving circular reasoning). But perhaps this is just wishful thinking.

Tuesday, August 23, 2005

Essays interspersed

Richard Gabriel, Paul Graham, and Peter Norvig all have sites of essays. Norvig has more technical stuff than Graham or Gabriel, but all three collections are worth browsing. I've already mentioned Graham's essays, but if you're unfamiliar with the other two: Norvig is Director of Search Quality at Google, but I know him better as the author of The AI Book (together with Russell); and I first learned of Richard Gabriel from his essay on the rise of Worse is Better, which you can learn about from his site. All three are also Lisp hackers extraordinaire, which is why I was fishing through their writing.

I particularly liked Norvig's essay Teach Yourself Programming in Ten Years. Exactly. Norvig's essay is reminiscent of the sentiment expressed by Keith Devlin in his essay on staying the course. The difference between the popular mis-understanding of mathematics and of computer science as disciplines is that so many people who know nothing of mathematics think that it must be inhumanly difficult, while so many people who know nothing of computer science think it's possible to become an expert in three days. Or perhaps thirty days at the outside. Going through a formal computer science education, I've been told, is pointless: you learn so many things that you'll never use! The same logic -- or lack thereof -- would indicate that a professional carpenter really only needs a screwdriver and a hammer, or that a professional writer in the English language needn't be able to form compound or complex sentences. There is such a thing as useless knowledge, which is either so contorted or so fundamentally mistaken (or both) that it represents a backward step with no potential for interesting development or discovery. But for the most part, discarding a misunderstood tool as useless is a sign of a certain lack of imagination.

You needn't be a professional to enjoy and appreciate either mathematics or computation. In the case of computer science, in fact, being a computing professional or having a computer science degree seems to be surprisingly uncorrelated with enjoying and appreciating computer science. There are a variety of folks without formal CS degrees who are nevertheless highly competent both as programmers and as computer scientists -- in fact, CS is a sufficiently young discipline that most of the older generation of CS faculty can be held up as an example of this. And there are also a variety of folks with formal CS degrees who are nevertheless fairly incompetent both as programmers and as computer scientists. If you spend all day sitting in front of a computer and writing tedious, redundant code, you should really wonder: couldn't I make the computer do this? But a lot of folks don't (or, rather worse, they do, but they're overruled by management). Yes, in twenty-one days you can probably learn to write simple programs. But that type of programming is no more the end goal than summing columns of numbers is the end goal of mathematics, or pounding nails into boards is the end goal of carpentry.

Take a look, too, at Mob Software, by Richard Gabriel. I think he has it right, too. And this is the hidden potential of open source as a software model: that it can inspire amateur efforts, in the original and best sense of an amateur as one who does a thing for the sheer love of it. Up with open source projects, with ham radio, with mathematical puzzles and with the coffee mugs that children make for their parents! The drive toward professionalism may make money, but the drive toward amateurism makes things worth doing.

Tuesday, August 16, 2005

Day in the life

  • Get up. Coffee. Tease flatmate.
  • Taylor's theorem, integration by parts, Cauchy-Schwarz. Rinse and repeat. Get error estimate.
  • Check numerically whether estimate holds for simple test case. Discover algebra errors in the proof. Correct proof.
  • Discover algebra errors in the test program. Correct test program.
  • Discover errors in the compiler. Swear. Let out cat, who is spooked. Lower optimization level.
  • Decide result looks too simple not to be standard. Search through references for something similar.
  • Discover promising reference. Wade through ten pages of dense functional analysis, decide that it's not quite the same, but close enough. Add citation to my notes. Let the cat in.
  • Add page to thesis. Get up, stretch, dance in circles to a song on the radio. Spook the cat, who I let out again.
  • Lunch. Let cat back in.
  • Go onto campus. Check mail. Read blogs for half an hour.
  • Add feature to HiQLab.
  • Index error, index error, type error, core dump. Swear. Rinse and repeat.
  • Fix index errors. Run test case. It fails. Swear.
  • Discover error in the test case. Fix it, discover a sign error meant I was computing the negative of the quantity I wanted. Fix sign error. Test passed! Code crashes on exit.
  • Swear. Spend half an hour figuring out and working around quirk in the dynamic linking system which led to the crash.
  • Walk home. Talk to Winnie on the phone as I go. Make dinner, which is salad, since there's nothing much in the fridge and I actually remembered to eat lunch.
  • Recommence work on HiQLab. Stall out. Take a tea break.
  • Tease flatmate.
  • Decide to work on compilers stuff for a while. Write an LL(1) parser generator in Lisp. Chortle after I reduce the code size to under 200 lines. All tests passed! Realize that it's 1 am.
  • Brush teeth. Fall into bed.

Sunday, August 14, 2005

Starts with P

  • Public transportation

    I visit Winnie in San Jose most Saturdays. The full trip proceeds in three legs: a BART ride to Fremont, a bus ride to a transfer station at the Great Mall transit center, and a light rail ride to a stop near Winnie's apartment. Depending on how long I spend waiting between each leg (and whether I meet Winnie at her apartment or at Great Mall), the trip takes between two and three hours. Usually it takes about two and a half hours, which means that on Saturdays I spend a total of five hours or so riding the local public transportation.

    I do different things in those five hours, depending on my mood and on the contents of my backpack. Some days, I read; some days, I listen to the radio; some days, I sleep; and some days, I stare out the window and ponder. When I walked out the door this morning, it felt like a reading day, so I tucked David Copperfield into my bag.

    I didn't touch David Copperfield after that. Here's the story of what I did instead.

  • Projections

    Pretend for the moment that you're interested in the solution to a large system of equations. The system is so large that it's inconvenient to do anything direct -- there are just too many equations and too many unknowns. What do you do to get an approximate solution?

    There really is only one thing to do: get rid of some unknowns. Perhaps you know a little in advance about your solution, so that you're able to say that some components are going to be almost zero; so you pretend that they are zero. Now, if you started with equal numbers of equations and unknowns, you create a problem by assuming some of the unknowns are zero: there are too many equations, now, and they can't all be satisfied. So you throw away some equations, too, until you have a system that you can solve again. And this gives you an approximate solution, which you can then evaluate or refine.

    Now, you can usually rearrange the unknowns in your problem before doing anything else, so that you really have some confidence that the unknowns you discard will be almost zero. And you can usually rearrange the equations, too, so that the equations you discard are the ones that don't tell you much. For example, your original system of equations might be the stationarity conditions that have to be satisfied at a minimum of some energy (i.e. the conditions that say derivative equals zero at an extremum); in this case, your reduced set of equations might be the equations that minimize the energy subject to the constraint that the components you threw away are zero. But the idea behind this method of approximation, of assuming something about the solution so that you can throw away a bunch of the equations and unknowns, is very general. The high-level idea goes under many names, but I usually call any method of this form a projection method (or sometimes a Galerkin projection method). And projection methods, together with perturbation methods, make up a toolkit that I use almost constantly.
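For a finite linear system, that recipe is easy to write down. Here's a little NumPy sketch (the matrix and subspace are contrived so that the discarded unknowns really are zero): restrict the unknowns to the range of a tall matrix V, and keep only the equations obtained by testing against the columns of V.

```python
import numpy as np

# Galerkin projection for A x = b: assume x lies in the range of a tall
# matrix V (x = V y), and keep only the "tested" equations V^T(A V y - b) = 0.
# The matrix and subspace below are a contrived example, chosen so the
# assumption "the discarded unknowns are zero" holds exactly.
n, k = 200, 5
A = np.diag(np.arange(1.0, n + 1))         # an SPD test matrix
x_true = np.zeros(n)
x_true[:k] = [2.0, -1.0, 0.5, 3.0, 1.0]    # solution supported on 5 components
b = A @ x_true

V = np.eye(n)[:, :k]                        # basis for the trusted subspace
y = np.linalg.solve(V.T @ A @ V, V.T @ b)   # a k-by-k solve instead of n-by-n
x_approx = V @ y

print(np.linalg.norm(x_approx - x_true))    # essentially zero here
```

In practice, of course, the art is in choosing V so that the true solution is only nearly in its range, and the whole question is how much that "nearly" costs.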

    Projection methods are used for solving partial differential equations (which are infinite-dimensional problems), as well as for solving large finite-dimensional linear systems of equations. If you've heard of the finite element method, or of the conjugate gradient method, they're both examples. If you've ever used a truncated Fourier series to approximate an ODE or PDE solution, you used a projection method there, too. The cool thing about classifying these methods together is that you can analyze lots of different methods in a unified framework -- which, if you believe in the power of carefully directed laziness, is a very good thing.

    If you don't believe in the power of carefully directed laziness, then neither mathematics nor computer science is probably the discipline for you.

    Anyhow. The task of analyzing any of these projection methods really can be partitioned into two sub-tasks. First, one figures out how nearly the approximate solution matches the best approximation consistent with the simplifying assumptions: how well do you approximate the unknowns that you didn't throw away? Then one figures out whether the approximating subspace contains a good estimate for the true solution: are the unknown values that you discarded really small? The second step is problem- and method-dependent, but the first step can be done in a general setting.

    At least, the first step looks like it can be done in a general setting. In practice, you can't get very far without using some additional hypotheses. The strongest results hold when the system of equations is symmetric and positive definite (in a PDE setting, where positive definite systems can nevertheless be arbitrarily close to singular, you need coercivity). If the equations are nonsymmetric but you still have coercivity, then you can say a little less, but the result is mostly the same. But once you've lost coercivity, suddenly there's much less you can say. If you make an additional assumption (called an inf-sup or Babuska-Brezzi condition in the PDE setting; a lower bound on a minimum singular value in the finite setting), then you can at least prove that your approximate problem has a unique solution which is within some factor of being optimal -- but that factor may be much larger than it was in the positive-definite case. Besides, proving an inf-sup condition often involves problem-specific work, which is work that we'd like to avoid as much as possible in the interest of constructive laziness.
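For concreteness, here are the two standard quasi-optimality bounds being contrasted, in generic textbook notation (nothing specific to this post): a(·,·) is the bilinear form, bounded with constant M; u is the true solution and u_h the Galerkin solution from the subspace V_h.

```latex
% Coercive case (Cea's lemma): if a(v,v) >= alpha ||v||^2, then
\[
  \|u - u_h\| \;\le\; \frac{M}{\alpha}\, \inf_{v_h \in V_h} \|u - v_h\| .
\]
% With only an inf-sup (Babuska) condition with constant beta > 0,
\[
  \|u - u_h\| \;\le\; \left(1 + \frac{M}{\beta}\right) \inf_{v_h \in V_h} \|u - v_h\| ,
\]
% where beta may be far smaller than alpha -- hence the abrupt weakening.
```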

    There's something bothersome about changing to a much weaker bound the instant coercivity is lost. There are cases when a minor change makes the difference between an easy problem and an impossible one; but this is not such a case. For the problems I think about, a mild loss of coercivity should not be a mortal sin. How can I develop error bounds which transition gradually between the theory that holds in the coercive case and the theory that holds for general indefinite problems?

    This is what I was thinking about for a big chunk of the train rides today, and I think I have a reasonable answer. It generalizes and unifies a hodge-podge of results that I know about in the finite- and infinite-dimensional cases, and it involves some mathematical ideas that I find very beautiful (involving perturbation theory and the behavior of the field of values of an operator). I doubt any of it is new in a fundamental way, but I'm not sure these results have been cleanly tied together before. At any rate, I'm excited about it.

  • Polya

    There was already a book in my backpack when I threw in David Copperfield. George Polya's How to Solve It is just as classic as Copperfield (though in a different setting). It's also much shorter, which is why, after pondering projections for a while, I chose to read it over the Dickens book. How to Solve It is about how one goes about solving mathematical problems -- or other problems, for that matter. The techniques that go into such problem-solving are collectively known as heuristic (which is distinguished from the usual definition of heuristic reasoning, since part of the problem-solving process is the rigorous argument that comes after the initial attack). Polya wrote particularly for teachers of mathematics, and I admit that my decision to buy my own copy was partly influenced by a quote from a review:

    Every prospective teacher should read it. In particular, graduate students will find it invaluable. The traditional mathematics professor who reads a paper before one of the Mathematical Societies might also learn something from the book: 'He writes a, he says b, he means c; but it should be d.'

    -- E.T. Bell, Mathematical Monthly, December 1945.

  • Practical Common Lisp

    I have a few Lisp books on my shelf, of which I only really use one: ANSI Common Lisp by Paul Graham. The other books aren't bad, but they're more dedicated to how to program in general, as opposed to how to program in Lisp. But there's something limiting about having just one book on a language, just as there's something limiting in only having a single dictionary, a single history book, or a single book on almost any other topic. The local book stores stock two Lisp books: Graham's book, and Practical Common Lisp by Peter Seibel. After discovering the latter book on the store shelf a couple months ago, I found that it was available online as well, and so I started reading it.

    It is an excellent book, one that I've already found very useful. For example, the chapter on Lisp packages and symbol resolution (Chapter 21) includes a section on Package Gotchas, which, in the course of about a page, revealed the source of the woes I'd been having with my attempts to use a Lisp lexer generator that I grabbed from the net. Excellent! Seibel also has good descriptions of the Common Lisp format and loop constructs, better than the corresponding descriptions in Graham's book (probably because neither construct is very Lispy).

    I am not good at resisting the call of good books. I got a paper version when we went to the book store.

    I spent some time on the return bus ride reading through the first several chapters of the book. At the end of the ninth chapter, there's a short section entitled Wrapping Up, which begins with a brief summary of the code in the chapter, and then goes on to review the development process:

    It's worth reviewing how you got here because it's illustrative of how programming in Lisp often goes.

    Illustrative of programming in Lisp? Illustrative of programming in general! Or, for that matter, illustrative of how one approaches a broad variety of problem-solving tasks. Remove all the specific references to Common Lisp and rewrite the text in Polya's distinctive style, and the last page of the chapter could come directly from How to Solve It.

Friday, August 12, 2005

Messiness and computing

Heidi recently wrote an entry in praise of messiness in science, which set me thinking about messiness in mathematics and in computing. While reading some of Dijkstra's old writings, I came across this tidbit, which seemed apropos:

We should never forget that programmers live in a world of artifacts, a fact that distinguishes them from most other scientists. A programmer should not ask how applicable the techniques of sound programming are, he should create a world in which they are applicable; it is his only way of delivering a high-quality design.

To which I should add a quotation from EWD898 (1984):

Machine capacities now give us room galore for making a mess of it. Opportunities unlimited for fouling things up! Developing the austere intellectual discipline of keeping things simple is in this environment a formidable challenge, both technically and educationally.

  • Currently drinking: Coffee

Three books

  1. The Structure of Scientific Revolutions. Thomas Kuhn

    You've heard of this book, even if you don't think you have. Kuhn gave the modern meaning to the phrase paradigm shift. You may recall that I mentioned this book -- a month ago, perhaps? It took me a while to read it, just as it took me a while to read it the first time I was exposed to it some ten years ago.

    I tried briefly to summarize my thoughts about the contents of this book. It was hard. The difficulty, I think, is that SSR was not written as a book; it was written as an essay, which just happens to be the length of a short book. Indeed, Kuhn refers repeatedly to this essay, and occasionally mentions a book that he hoped to write (and never did, as far as I know) with further details and evidence for all his arguments. The writing is dense, so that if my eyes glazed over briefly, I had to go back and re-read what I'd missed in order to keep the sense of the arguments.

    Bah! you may say, what use are you if you can't bring yourself to summarize a few highlights, at least? At least, you might say that if you were inside my head. The only good response I can give is that I will try again in another few years. If that's too long, you may read it and try for yourself.

  2. University, Inc: The Corporate Corruption of Higher Education. Jennifer Washburn

    Are student journalists taught that nobody will read their work unless they describe the physical appearance of any major characters? I ask this seriously, because this is not the first book I've read in which I thought why on earth are you taking time from your argument to tell me that this guy has blue eyes and wild hair? Unless the subject of such scrutiny is an Einstein (or maybe a Carrot-Top), I'll pass on the descriptions of hair styles.

    Apart from tickling that particular pet peeve, I enjoyed this book. By enjoyed, I mean that I thought it was coherent, well-argued, and worth reading. However, I was also troubled by it. The title neatly summarizes the book's thesis, which is that, in the past two decades, commercial influences have increasingly compromised the university's role as a home for educators, long-term scientific researchers, and independent experts. In particular, Washburn focuses mostly on medicine and bio-engineering (and a little on other areas of engineering), where the Bayh-Dole act had an enormous impact by allowing academic researchers to easily patent and commercialize ideas developed in a university setting.

    I know little of medical schools, so I will say simply that I was horrified by the behaviors documented in the chapter Are Conflicts of Interest Hazardous to Our Health. There seems to me to be something tawdry and unethical in a professor agreeing to sign off on a paper ghostwritten by a pharmaceutical company, and that was far from the worst of it. But in a way, I found the rest of the book more worrisome. I know and work with many people who are intensely interested in the commercialization of their widgets. It seems to me that many of these folks juggle their commercial and academic interests pretty well; the problem is that those commercial interests tend to creep into other areas of the academic landscape as well. An academic proposal is a sales pitch, but it bothers me when the sales pitch is not for an idea, but for profits to be had from an idea. And that seems to happen a lot.

    The problem is that what universities are good at -- or are supposed to be good at -- is building new tools and ideas, and then teaching the world about those tools and ideas. And, in some cases, teaching the world might involve developing a useful application and pushing it all the way to market (I use the scare quotes because the market need not be a commercial market -- witness the growth of various open software products). But I would prefer if the drive were not profits, but passion for making a good idea known. I'm reminded of something Edsger Dijkstra wrote, which summarizes my feelings nicely:

    I grew up with an advice of my grandfather's: Devote yourself to your most genuine interests, for then you will become a capable man and a place will be found for you. We were free not to try to become rich (which for mathematicians was impossible anyhow). When Ria and I had to decide whether we should try to start a family, we felt that we had to choose between children and a car; Ria had a Raleigh, I had a Humber, we decided that they were still excellent bicycles and chose the children. Today's young academics, whose study is viewed as an investment, should envy us for not having grown up with their financial distractions.

    Envy, indeed.

    I recommend this book to your reading lists.

  3. Flinch

    I was sitting at the bus stop, absorbed in a book, when he tapped me on the shoulder. Do you know anything about pre-Roman Italy? I said that I didn't, and recommended him to a library. Everyone I ask seems to say that, he said, but the books in the libraries are full of lies. Nobody knows the truth, so I'm writing my own book. He carried a yellow pad; I glanced down, and saw that Golgotha was printed in neat block capitals across the top, followed by several paragraphs of text written in a small, neat hand. The word Roswell was written in the corner and underlined.

    You see, he continued, all Europeans were cannibals back then. The rest of the world was fine, and everyone got along; then the Europeans went exploring, and spread their sickness throughout the world. I learned in public school that history only goes back 3000 years; how can that be? It's because of the cover-up. So I'm writing a book about it. I think people know about it; all the white people I tell flinch when I tell them about my book, because they've been trying to hide the truth. But there's signs of it everywhere, and when Bush cheers on the Texas Longhorns, that's really the sign of the devil that he's making. It makes me sick. I think my book will come out next September, if you'd like to hear the title?

    The bus had arrived as he told me this, and he asked the last question as I was waiting in line to board. I told him I thought he might be mistaken, paid my fare, and boarded. Sitting on the bus, I heard the question again through the window, asked to someone else: Do you know anything about pre-Roman Italian history?

    I don't recommend this one for your reading lists.

Thursday, August 11, 2005

Backstroke of West

There's an old story about the phrase the spirit is willing, but the flesh is weak being translated to Russian and back to English to yield the vodka is good, but the meat is rotten. I thought that was funny. I nearly hurt myself reading about this English-subtitled Chinese edition of Revenge of the Sith.

Back to Sobolev estimates.

Tuesday, August 09, 2005

Little things

I like pen and ink drawings, the kind in which every line is critical to the picture. I enjoy short poems, sonnets, and lyrics in which each sound and connotation is placed just so. I appreciate Strunk and White, Kernighan and Ritchie, and Rudin. I love ripe summer tomatoes with naught but a sprinkle of salt.

Romantic landscape paintings, Wagner's Ring Cycle, Gibbon's history, and extravagantly designed chocolate desserts have places of pride, and I would not wish to live in a world without them. But for the most part, I prefer simplicity. One graceful note leaves such room for the imagination, does it not?

Friday, August 05, 2005

Search terms 2

Searching for Lisp tokenizer or Lisp parser -- nearly useless.

Searching for Lisp regular expression -- much more useful.

Google -- priceless.

Macro expanding disclaimers

From the CLiki Common Lisp resource site:

Imagine a fearsomely comprehensive disclaimer of liability. Now fear, comprehensively.

Wednesday, August 03, 2005

Linear B

I gave my advisor an early draft of my thesis early in May; he returned the marked-up version to me a couple weeks ago. There was a section in my draft, lifted from previous documentation of one of my codes, in which I described the mixed-language structure of the code as a combination of C++, Fortran, Lua, MATLAB, and Minoan Linear B script. I forgot to cut the last one from the list before giving the draft to Jim. If this is too obscure: Linear B was one of the written forms of the language used by the ancient Minoan culture, and is now mostly of interest to archaeologists -- something which cannot, alas, be said of Fortran 77.

In the marked-up version, Jim simply scribbled ref? beside that last language. I'm unsure whether this means he was exercising his own sense of humor, or just didn't recognize the reference. I'm also unsure whether I should remove that bit, or if I should put in the reference as suggested and see whether anyone notices...