Wednesday, September 28, 2005
Frequently searched questions
- The word you're thinking of is "teetotal." It's an adjective that describes one who does not drink. It is a pun, since I prefer tea to alcohol (though I've decided a cup of cider from time to time is okay, too).
- There are two aspects of MATLAB EXternal interface (MEX) programming that cause
most of your headaches. First: when you use the mex script, the compiler flags
are set in such a way that most of the warnings are turned off. This is bad mojo.
Where possible, put most of the functionality for your program into a separate
file, so that you can compile it to object files (and perhaps even link it to
a testing stub) without involving mex (see the sketch after this list). Then use mex to link everything
together. Second: mex creates dynamically loadable modules, and some of the
dependency checks for those modules are not performed until load time or (if
you're on OS X) possibly even until the unlinked routines are hit at run time.
That's why you get errors about "unresolved symbols" and "invalid mex files."
Check the order of the libraries on your link line, and make sure you have all the libraries you need.
- You don't want to write your own eigensolver, LU factorization routine, etc.
Not in C++, not in C, not in Fortran. For dense numerical linear algebra,
go look at LAPACK and a
good BLAS implementation
(BLAS = Basic Linear Algebra Subroutines). If you're writing in C++, and you have trouble linking to a Fortran code like LAPACK, make sure you use the extern "C" directive in order to turn off the C++ compiler's name-mangling. If you're writing in C on most Unix-y platforms, you can call Fortran code by simply appending an underscore; e.g. call dgemm_ from C instead of dgemm (see the sketch after this list). Of course, this is somewhat platform dependent. There is a C translation of LAPACK, which I was responsible for a while back, but I think it's worth the effort of learning to link to Fortran. Still, you might also check out the other packages on Netlib, to see if any of them fit your needs.
For large, sparse systems, you'll
want something different from what you use for dense stuff.
I use Tim Davis's UMFPACK system for sparse linear
solves, and ARPACK for computing eigenvalues of sparse matrices.
- When I wrote before about tensors and duality, the code was there purely for illustration. It is not how I would typically structure a computation, or at least it's not how I'd structure it if I wanted to run something big in a reasonable amount of time. You might check out the Matlisp bindings if you're interested in something serious.
- Am I really so pessimistic about my code? For those searching on "crash," check out Valgrind. This tool will help you. It surely helps me.
- If you really are interested in code to do number-theoretic computations, check out Magma, Maxima, or perhaps Mathematica or Maple. If you want to crunch numbers, consider MATLAB, Octave, or perhaps R. There exist C++ number theory libraries, but I'm not sure how much use they'll be (unless you're trying to write a distributed code-cracking system or something -- in which case it's already been done, and you should probably use existing sources).
- For information on the Euler totient function, the lengths of repeating fractions, or a variety of other such things, I refer you to Eric Weisstein's MathWorld.
- If you want a Lisp lexer generator or LL(1) parser generator, you can grab one from my software page. But for the parser generator, you may prefer something that handles LALR grammars, and for the lexer -- why bother?
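Two quick sketches to make the MEX and LAPACK items above concrete. Both are my own illustrations rather than part of the original answers, so the function names, file layout, and build commands are assumptions.

For the MEX item, the point is to keep the numerical work in an ordinary C file with no MATLAB dependencies, so it can be compiled with warnings on and tested by itself; the gateway only unpacks and repacks mxArrays.

/* vec_sum.c -- plain C, no MATLAB headers; compile and test this on its
   own (e.g. cc -Wall -c vec_sum.c plus a little test driver) before mex
   ever gets involved. */
double vec_sum(const double *x, int n)
{
    double s = 0;
    int i;
    for (i = 0; i < n; ++i)
        s += x[i];
    return s;
}

/* vec_sum_mex.c -- thin MEX gateway: unpack the arguments, call the plain
   C routine, pack the result.  Build roughly as: mex vec_sum_mex.c vec_sum.o */
#include "mex.h"

double vec_sum(const double *x, int n);

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    int n;
    if (nrhs != 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("expected one double vector");
    n = (int) (mxGetM(prhs[0]) * mxGetN(prhs[0]));
    plhs[0] = mxCreateDoubleScalar(vec_sum(mxGetPr(prhs[0]), n));
}

For the LAPACK/BLAS item, here is what calling the Fortran routine dgemm through the underscore convention looks like from C. The hand-written declaration is the usual one on Unix-y systems, but the exact symbol name and the hidden string-length arguments vary by compiler, so treat it as a template rather than gospel; from C++ you would wrap the declaration in extern "C".

#include <stdio.h>

/* Fortran BLAS dgemm: C := alpha*op(A)*op(B) + beta*C.
   Every argument is passed by pointer, because that is what Fortran expects. */
void dgemm_(const char *transa, const char *transb,
            const int *m, const int *n, const int *k,
            const double *alpha, const double *a, const int *lda,
            const double *b, const int *ldb,
            const double *beta, double *c, const int *ldc);

int main(void)
{
    /* 2x2 matrices stored column-major (Fortran order). */
    double a[4] = { 1, 3, 2, 4 };   /* A = [1 2; 3 4] */
    double b[4] = { 5, 7, 6, 8 };   /* B = [5 6; 7 8] */
    double c[4] = { 0, 0, 0, 0 };
    int n = 2;
    double one = 1.0, zero = 0.0;

    dgemm_("N", "N", &n, &n, &n, &one, a, &n, b, &n, &zero, c, &n);

    printf("C = [%g %g; %g %g]\n", c[0], c[2], c[1], c[3]);  /* [19 22; 43 50] */
    return 0;
}

Link against a BLAS (-lblas, or whatever vendor library you have); depending on the compilers involved, the Fortran runtime library may also need to come along on the link line.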
Monday, September 19, 2005
Time out for reading
There is a new Half Price Books in Berkeley, along Shattuck Avenue a couple blocks west of campus. Curiously, they're in the same space as the dollar store where I got some pans and utensils just after I moved to Berkeley. As part of their opening celebration, they have an additional 20% off. So I wandered in, saw some familiar faces, and picked up a few books:
- Adventures of a Mathematician (S. Ulam) --
This autobiographical book is the only one that I've started. There's a preface that describes Ulam's work, which covers a broader range of pure and applied mathematics than I'd realized. And I thought, "excellent, this will be very interesting." Then I went to the Au Coquelet cafe to read the prologue and sip something warm, and I realized that this will be very interesting. This book didn't start life as an autobiography; initially, Ulam thought to write a biography of von Neumann. But (on page 5):
When I started to organize my thoughts, I realized that up to that time -- it was about 1966, I think -- there existed few descriptions of the unusual climate in which the birth of the atomic age took place. Official histories do not give the real motivations or go into the inner feelings, doubts, convictions, determinations, and hopes of the individuals who for over two years lived under unusual conditions. A set of flat pictures, they give at best only the essential facts.
Thinking of all this in the little plane from Albuquerque to Los Alamos, I remembered how Jules Verne and H. G. Wells had influenced me in my childhood in books I read in Polish translation. Even in my boyish dreams, I did not imagine that some day I would take part in equally fantastic undertakings.
The result of all these reflections was that instead of writing a life of von Neumann, I have undertaken to describe my personal history, as well as what I know of a number of other scientists who also became involved in the great technological achievements of this age.
What follows is a remarkable book, part autobiography, part biography, and part introspection on the workings of memory and of the mathematical mind.
- The Education of Henry Adams (H. Adams) --
I'm not sure where I first heard about this book. I think it was from reading Jacques Barzun, either Teacher in America or A Stroll with William James; that it would be from Barzun is a pretty good guess, though, since Barzun's books are usually crammed with references to books that I decide I want to read (and then quickly lose in my list). Either way, I know in advance to expect more than basic autobiography from this one.
- Fool's Errand (R. Hobb) --
A little lighter reading. I enjoyed the first trilogy, but every time I looked for the start of the second trilogy on the local bookshelves, I could only find the second book onward. So now I have something for the next time I feel like a novel.
Right now I'm switching back and forth between Ulam's book and the most
recent SIAM Review for my evening reading. This issue of
SIAM Review is a really good one, both for technical content and
for style. J.P. Boyd, who has written a book on spectral methods which
I've mentioned before, has an article on "Hyperasymptotics and the Linear Boundary Layer Problem," which is both informative and highly
entertaining. To quote again:
Unfortunately, asymptotics is usually taught very badly when taught at all. When a student asks, "What does one do when x is larger than the radius of convergence of the power series?", the response is a scowl and a muttered "asymptotic series!", followed by a hasty scribbling of the inverse power series for a Bessel function. "But of course, that's all built-in to MATLAB, so one never has to use it any more." Humbug! ... Arithmurgy [number-crunching] hasn't replaced asymptotics; rather, number-crunching and asymptotic series are complementary and mutually enriching.
The article refers several times to a longer article (98 pages)
entitled "The Devil's invention: Asymptotics, superasymptotics, and hyperasymptotics,"
which I think I would have to put on my
reading list just for the title, even if I didn't know that I found
the author so entertaining. But given the state of my reading list
right now, perhaps it will have to go onto the longer list rather
than the shorter one.
Most of my reading time recently has gone to technical material (excepting Tools for Teaching by B. Davis, which is the textbook for the teaching course). This is unfortunate, because what I read tends to strongly influence what things I think about for casual conversation. So if I'm only reading about number theory, Gershgorin disks, and the error analysis of non-self-adjoint PDE eigenvalue problems, I tend to end up talking about those things to anyone who will listen. Those topics are really cool, and I do have buddies who are willing to go have a coffee, or a cider and some pizza, and have a conversation that wanders in and out of technical realms. Nevertheless: a little light reading in the evenings seems to make conversations in the day go more smoothly.
Multiples of q
You probably learned this trick in grade school: to see whether a number is divisible by three, add all the digits and check if the sum is divisible by three. The same thing works for nine. Ever wonder why?
Actually, three and nine are just special cases of something very general. What does it mean if q divides n evenly? It means that there is ultimately no remainder in the division. So if r represents the remainder at each step of long division, then I should be able to write
r = 0
for d = digits
    r = mod(r * 10 + d, q)
end
If r = 0 at the end of the loop, then the number represented by the given string of digits is evenly divisible by q. Now, it's not too difficult to show that I could rewrite the update of r as
r = mod(r * (10 + k * q) + d, q)
for any integer k. In particular, this means that I could let p = mod(10,q), and write the update formula as
r = mod(r * p + d, q)
The trick with division by three and division by nine works because mod(10,3) = mod(10,9) = 1.
Once you realize how this works, you can think of quick checks for all sorts of divisibility properties. For example, take the number 86834; is it evenly divisible by 11? Yes! I can tell because 10 and -1 are equivalent modulo 11, so that my remainder update looks like
r = mod(-r + d, 11)
I can thus check divisibility by 11 by looking at alternating sums of the digits. So since 8 - 6 + 8 - 3 + 4 = 11 is divisible by 11, so also is 86834. Or if I wanted to know whether a number written in octal (base 8) was divisible by 7, I could add the octal digits and see whether seven divided the sum.
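To make these checks concrete, here is a small C sketch (my own illustration, not from the original post) of the general digit test: stream over the decimal digits and apply r = mod(r*p + d, q) with p = mod(10, q).

#include <stdio.h>

/* Return 1 if the decimal string `digits` is divisible by q, else 0.
   Uses the update r = (r*p + d) mod q with p = 10 mod q. */
int divisible(const char *digits, int q)
{
    int p = 10 % q;
    int r = 0;
    for (; *digits; ++digits)
        r = (r * p + (*digits - '0')) % q;
    return r == 0;
}

int main(void)
{
    printf("%d\n", divisible("86834", 11));   /* 1: 86834 = 11 * 7894 */
    printf("%d\n", divisible("123456", 3));   /* 1: digit sum is 21 */
    printf("%d\n", divisible("123457", 9));   /* 0: digit sum is 22 */
    return 0;
}

For q = 3 or 9 this reduces to summing the digits, and for q = 11 it reduces to the alternating sum, exactly as above.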
Or I could play around for five minutes to find that the binary representation of any multiple of three matches the regular expression
(0 | 1 (01*0)* 1)*
Hooray! A meaningful regular expression which isn't absolutely trivial to parse! So such toy problems in number theory not only amuse me, but they also actually serve some use as a source of exercises for my students. We'll see whether any of them appreciate the entertainment value as I do.
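Since the regular expression above is really just a finite state machine in disguise, here is the same fact written out as a three-state machine in C (my own sketch, not part of the original post): the state is the value of the bits read so far, modulo 3.

#include <stdio.h>

/* Return 1 if the binary string `bits` represents a multiple of 3.
   State s = value-so-far mod 3; transition s -> (2*s + bit) mod 3. */
int multiple_of_three(const char *bits)
{
    int s = 0;
    for (; *bits; ++bits)
        s = (2 * s + (*bits - '0')) % 3;
    return s == 0;
}

int main(void)
{
    printf("%d\n", multiple_of_three("10101"));  /* 21 -> 1 */
    printf("%d\n", multiple_of_three("101"));    /*  5 -> 0 */
    printf("%d\n", multiple_of_three("1100"));   /* 12 -> 1 */
    return 0;
}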
- Currently drinking: Mint tea
Friday, September 16, 2005
Monday, September 12, 2005
A little long division
Suppose p and q are relatively prime integers, and 0 < p < q. If q has any prime factors besides 2 and 5, then a decimal expansion of p/q will eventually start to repeat. How long will the repeating part be?
Looking at the sequence of digits is actually a very difficult way to approach this problem. It's much simpler to look at the sequence of remainders. Remember how long division works. At every step, we have a remainder r, 0 <= r < q. To get the next digit in the expansion, we figure out how many times q goes into 10*r; this gives us a digit and a new remainder. In C++ code, I might write the long division algorithm this way:
/* Compute ndigits digits of p/q */
r = p;
for (i = 0; i < ndigits; ++i) {
    d[i] = (10*r) / q;
    r = (10*r) % q;
}

where those unfamiliar with C should remember that C's integer division gives an integer result (in C, we have 1/2 == 0, but 1.0/2 == 0.5). The operator % gives the remainder of the division. If we keep computing digits, we will eventually see one of two things: a zero remainder, or a repeat of a remainder seen in a previous step. In the former case, p/q can be written as a non-repeating decimal; in the latter case, we have a repeating fraction, where the length of the repeating part is the same as the distance between occurrences of the same remainder in the division algorithm.
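As a small extension of that loop (my sketch, not from the original post), one can measure the length of the repeating part directly by recording the step at which each remainder first appears and stopping at the first repeat, or at a zero remainder. The function name and the bound on q are arbitrary choices.

#include <stdio.h>

/* Length of the repeating part of p/q in base 10, or 0 if the expansion
   terminates.  Assumes 0 < p < q < 100000. */
int repeat_length(int p, int q)
{
    static int first_seen[100000];  /* step at which a remainder first appeared */
    int step, r;

    for (r = 0; r < q; ++r)
        first_seen[r] = -1;

    r = p;
    for (step = 0; ; ++step) {
        if (r == 0)
            return 0;                       /* terminating decimal */
        if (first_seen[r] >= 0)
            return step - first_seen[r];    /* distance between repeated remainders */
        first_seen[r] = step;
        r = (10*r) % q;
    }
}

int main(void)
{
    printf("1/7:  period %d\n", repeat_length(1, 7));    /* 6 */
    printf("1/6:  period %d\n", repeat_length(1, 6));    /* 1 */
    printf("1/47: period %d\n", repeat_length(1, 47));   /* 46 */
    return 0;
}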
The operator % is sometimes called the modulo operator, but that's actually a bit of an odd name. In much of computer science, we speak of "the mod operator," and mean the operator that computes the remainder in a division. By contrast, if you're working with abstract algebra or number theory and you say that two things are "equivalent modulo" something, you mean that there is an underlying relation that divides your structure into equivalence classes, and the two things under investigation happen to lie in the same equivalence class. The usual example of this is time: 11 pm today and 11 pm yesterday are not the same instant in time, but they are equivalent modulo the twenty-four hour cycle we use for time keeping. The connection between modulo-as-operation and modulo-as-equivalence is that when one takes a remainder after dividing by q, one gets a single canonical representative of a whole equivalence class of numbers which differ from each other by some multiple of q. It's like using 11 pm today to refer to 11 pm any day; they're all equivalent modulo a 24 hour day.
Anyhow. The C code for long division computes a sequence of remainders, which look like r[k] = (p*10^k) % q. To keep the story simple, suppose q has no factors of 2 or 5 at all (such factors only contribute a non-repeating prefix to the expansion). Then all of the remainders are prime relative to q, and they simply cycle: the first time a remainder repeats, we have seen one full period of the repeating fraction. The remainders are all units in the ring of integers modulo q (Z_q), which is just a fancy way of saying that they are prime relative to q (and therefore have inverses in Z_q). For instance, 4 and 7 are relatively prime, so 4 is a unit in Z_7; the inverse is 2, since 4*2 = 7+1 = 1 modulo 7. The set of units in Z_q is an important object in number theory, so important that its size is given a special name: the number of natural numbers less than q which are prime relative to q is written phi(q); it is called Euler's phi function or Euler's totient function.
Hmm. I assume "totient" is related to "quotient," but I actually don't know the origin of this word. I'll have to look that up.
One reason why phi(q) is so important is that it connects the structure of multiplication and addition in Z_q. For example, for any m which is a unit in Z_q, I have m^phi(q) = 1; for instance, phi(7) = 6, and 4^6 = 4096 = 1 modulo 7. That means, for instance, that I have an easy way to write m^(-1); it's just m^(phi(q)-1). There are all sorts of other entertaining things you can do with this sort of manipulation -- including figuring out something about the question we originally asked, which is how long the repeating part in the decimal expansion of p/q will be. Without loss of generality, I can just consider 1/q; for more general p/q, I will generate a sequence of remainders which has the same period as for 1/q. Anyhow, if I start from the remainder 1 and run long division, the remainders will generate a cyclic subgroup of the group of units in Z_q -- I'll keep generating new remainders until at some point I get the remainder 1 back. Since I'm always multiplying by 10, the cycle thus formed is sometimes called the orbit of 10 in the group of units in Z_q. The order of this cycle -- the number of elements -- is the number of digits in the repeating part of the decimal expansion. There is a wonderfully versatile little theorem, due to Lagrange, which says that the order of a subgroup must divide the order of the group in which it is contained; in this case, that means that the number of digits in the repeating part must divide phi(q).
Cool, no? So the number of digits in the repeating part of a decimal expansion of p/q must divide phi(q). Actually, if you walk through the arguments I just made, you find that this is true for any base. So, for example, take q = 47; since phi(q) = 46 = 2*23, I can take an expansion in any base and find that the length of the repeating part is either 0, 1, 2, 23, or 46. Base 2, base 10, base 16 -- they all have to generate repetitions with a period that divides 46. Which divisor of 46 you actually get depends on the base in which you do the expansion, though.
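Here is a little C sketch of the whole story (my own illustration, not part of the original post): compute phi(q) by brute force, compute the period of 1/q in base 10 as the order of 10 among the units of Z_q, and watch the period divide phi(q) every time.

#include <stdio.h>

static int gcd(int a, int b)
{
    while (b) { int t = a % b; a = b; b = t; }
    return a;
}

/* Euler's totient, brute force: count 1 <= k < q with gcd(k, q) = 1. */
static int phi(int q)
{
    int k, count = 0;
    for (k = 1; k < q; ++k)
        if (gcd(k, q) == 1)
            ++count;
    return count;
}

/* Order of 10 in the group of units of Z_q (q prime relative to 10):
   the smallest e > 0 with 10^e = 1 mod q, i.e. the length of the
   repeating part of 1/q in base 10. */
static int order10(int q)
{
    int e = 1, r = 10 % q;
    while (r != 1) {
        r = (10 * r) % q;
        ++e;
    }
    return e;
}

int main(void)
{
    int qs[] = { 7, 13, 21, 41, 47 };
    int i;
    for (i = 0; i < 5; ++i) {
        int q = qs[i];
        printf("q = %2d: phi(q) = %2d, period = %2d, phi mod period = %d\n",
               q, phi(q), order10(q), phi(q) % order10(q));
    }
    return 0;
}

The last column comes out zero every time, which is Lagrange's theorem at work; and q = 47 does indeed show the full period of 46 in base 10.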
So while computing the length of the repeating part of p/q is difficult in general, we can quickly say that it divides phi(q). When q is prime, phi(q) = q-1, and so we could potentially get a repeating part as long as q-1 digits. In fact, we often do get a repeating part as long as q-1 digits... which is why writing numbers in terms of repeating decimals isn't a very good choice of representation for most q. If it takes log10(p) + log10(q) < 2*log10(q) digits to represent a rational number in the form p/q, but as many as q-1 digits to represent the number as a repeating decimal, then the repeating decimal representation is exponentially longer than the rational number representation. Not good! At least, it's not good if you want to actually do arithmetic; if you want to play with number theory on the other hand, or with finite state machines, then this is lots of fun.
As an amusing aside, you can use the development above to write a wholly impractical scheme for cracking the RSA cryptosystem. You see, the RSA system is secure because we don't have good algorithms for computing the Euler totient function for large numbers. Yes, I know you've heard that the reason that RSA is secure has something to do with factoring, but the difficulty of factoring large numbers is really just a proxy for the difficulty of computing Euler's totient. The actual connection is that if m and n are large prime numbers then there's a simple formula for the totient of the product: phi(m*n) = (m-1)*(n-1). But if we do long division in several different bases, we're almost inevitably going to run into a base in which the length of the repeating part is equal to phi(q). So the totient function is actually easy to compute, and any talented grade schooler who knows about long division and repeating decimals can be taught how to crack RSA, right? Well, not exactly. The fact that computing phi(m*n) in this way would take time on the order of m*n -- rather than on the order of log(m*n) -- means that your precocious grade schooler would take at least 130 years to crack a 32-bit key, assuming that he never slept and could get a digit a second right.
A fast computer using a more effective attack algorithm could break a 32-bit RSA key in a matter of minutes (maybe less, I'm not sure). But those algorithms tend to run into the same explosive cost increases as the long-division cracking algorithm that I just proposed, so the key doesn't have to get that much longer to be much more secure. Currently, most transactions are guarded by keys at least 128 bits long, and the keys I use for remote login have either 1024 or 2048 bits. So unless you want to go into number theory as a career, you probably will find that the easiest way to compromise my keys is to completely go around them -- by finding my office and kicking me in the shins until I'm willing to hand them over, for instance.
Tuesday, September 06, 2005
Honest exercise
Alas, we didn't get through all the Common Lisp exercises I'd planned for section today. This is too bad, because I would have liked to have people think about the last exercise: given a collection of (programmer program) pairs, how can you find the sets of programmers with structurally identical codes? It took me about five minutes to do this one (though this may be because I already had a solution in mind when I wrote the question). I mention the solution here:
(defun code-to-struct (code)
  "Replace all atoms in a code with 'foo in order to compare structures"
  (when code
    (if (listp code)
        `(,(code-to-struct (car code)) . ,(code-to-struct (cdr code)))
        'foo)))

(defun cluster-structure (codes)
  "Find programmers who have structurally identical codes."
  (let ((ht (make-hash-table :test #'equal))
        (groups nil))
    (dolist (code codes)
      (push (car code) (gethash (code-to-struct (cdr code)) ht)))
    (maphash #'(lambda (key val) (push val groups)) ht)
    groups))
Let me restate that in case it went by too fast: it's possible to write a Common Lisp code which will find cheaters, and to do it in under 20 lines of code. Furthermore, that code is reasonably efficient: unless the Lisp implementation chooses a really terrible hash function, the cost of the above code is linear in the sums of the lengths of the programs. It doesn't take long to write, it doesn't take long to run, it doesn't take long to improve.
When I took compilers as an undergrad, a bunch of folks in the class got caught plagiarizing each other's code. This is, I think, about the stupidest setting for plagiarism imaginable: here you are in a class, the whole point of which is to automatically analyze programs. You are being taught by someone who is, presumably, very good at writing tools to automatically analyze programs. What, it seems like writing a program to find similar programs would be an impossible task for this guy? In Common Lisp, it takes an overworked graduate student five minutes to write a detector for structurally identical codes; it would take me another fifteen minutes to modify code-to-struct to catch workarounds (e.g. re-ordering of instructions or other superficial changes to program structure). It would take somewhat longer for other languages, but not so long as you might think -- this sort of detector doesn't need to know about every syntactic bell and whistle in a given language (and anyhow, it's possible to find complete grammars for many languages for free on the net).
"Don't cheat in a compilers class" seems to me to be only slightly more difficult to figure out than "don't piss on the electric fence." Here's hoping that this class feels the same...
Monday, September 05, 2005
Law and linear algebra
The Economist has an article on using linear-algebra-based network analysis tools to identify important legal cases. This is the same technology used by some search engines to find important web sites (hubs and authorities).
I learned about this from a post on NA Digest.
- Currently drinking: Coffee
Cycling for digits
There's a computer language called Nickle which grew out of something Keith Packard started working on two decades ago because he needed to do a little arbitrary-precision calculation. One of the entertaining features of the language, particularly if you're trying to teach people about compilers, is the treatment of rational numbers. Every rational number can be written as a decimal expansion that starts with some fixed digit string, and then has a repeating part, and this is the form Nickle uses for printing and (I assume) reading rational numbers. For example, the number 5/6 in Nickle would print as 0.8{3}, where the stuff between braces indicates the repeating part.
An entertaining exercise is to write a routine to read a Nickle number and convert it into the form p/q, where p and q are (relatively prime) integers. This turns out to be pretty simple to do in Lisp. Also entertaining is to write a routine that prints a Nickle number. If you find these entertaining, too, you can check out the class page in a couple weeks. I'm not going to post my solution until I've given the exercise for folks to think on between weeks three and four, but I will say that it fits in a page and a half of Lisp, including comments -- the front of the page for reading Nickle strings into Lisp, the back for generating Nickle strings from Lisp rationals.
I think it would be highly entertaining to work through the consequences of different internal representations of these numbers -- for example, if you represented Nickle numbers internally as strings like 0.{3}, then even something as simple as equality becomes nontrivial (since 0.{3} = 0.{33}, even though the two strings are different). You'd probably want to convert the strings into a unique minimal form for comparison, which boils down to a problem in state machine compression. But then you might want to do something different for actual arithmetic. Unfortunately, while I might find it entertaining, these sorts of issues are not likely to be of much interest to the class. So I won't say anything about it unless someone asks.
What I might ask about -- because it actually provides a very entertaining exercise in manipulating finite state machines -- is how you can recognize that a repeating decimal expansion can be written as p/q for some specified q. And what if I make the strings finite by allowing rounding? If I allow rounding, it's most natural to introduce nondeterministic finite automata (NFAs) and then convert them to deterministic equivalents (DFAs); this is also the usual chain of tools one uses in building tokenizers for programming languages. If you find such things amusing, I recommend the following two special cases: write a finite state machine to recognize decimal expansions of p/6 which are correctly rounded in the last digit, and another for p/7. You may be able to do the first case in your head, but the latter isn't so obvious. Or is it? I claim that, having done it once, I can now recognize by inspection a correctly-rounded decimal expansion of p/7 for any p. Your calculator probably displays enough digits that you can see the pattern just by playing around with it.
A similar sort of pattern holds in the case q = 17, actually, and in the case q = 47. But observing those patterns on a pocket calculator might be a little harder, unless your calculator displays more digits than does mine. And while I can now remember enough to pretty easily write down what the decimal expansion of p/7 looks like for any given p, remembering the same information for 17 or 47 is not so simple or useful.
But the washer just buzzed, which means that I should move my quilt over to the drier and get back to work.
Saturday, September 03, 2005
On the radio
The frequency of the AC power provided by the local power company is a bit over 60 Hz.
Consequently, most of the clocks in the house (excepting the battery-powered
ones) end up running fast. They all got reset to the correct time when daylight
savings started; now they're about fifteen minutes out of sync. Consequently,
my alarm goes off at 7:15, now, even though the clock thinks it's 7:30. I usually
don't mind this too much. I don't get out of bed any earlier, but the extra few
minutes gives me a chance to wake up gradually while I listen to Morning
Edition
on NPR.
Listening to the radio in the mornings is a lot more fun when there's happy news. For the past few days, I've awakened in the morning to news of the aftermath of Katrina. I was in New Orleans for a conference less than a month ago; there were a lot of friendly folks there, and I wish I knew that they were still there and still doing okay. But I don't. So I make my monetary contribution and hope that it can help someone, even if I fear that it won't.
Listening to the aftermath of Katrina scares me. It took too long for help to
get there, and that means that those without the resources to get out are in
a pickle. Listening to the radio this morning, I heard an interview with one man who had carefully planned in advance what he would do if "the big one" hit. "The big one" to me has come to mean the next big earthquake; New Orleans has just had its "big one," and in the Bay Area we all spend a little time waiting and planning for ours. Hearing about the slow response to the situation in New Orleans makes me think "poor people"; it also makes me think "are we next?"