Probably everyone reading this has heard the (likely apocryphal) story of Gauss being given sums as a school exercise. At least, I think it was Gauss. Anyhow, the story goes that the teacher told the class to sum the numbers from one to a hundred, and Gauss came back with the answer a few moments later. The numbers from one to a hundred can be grouped into 50 pairs, each of which sums to 101: 1 and 100, 2 and 99, 3 and 98, ..., and 50 and 51. Therefore the total sum is 50*101 = 5050.

If you happen to have studied computer science, you've probably
heard this story more than once, because taking such sums is
something you do *all the time* when analyzing algorithms.
For example, suppose you want to multiply a vector by an n-by-n upper
triangular matrix (solving an upper triangular linear system is
about the same). Then you end up doing n(n+1)/2 multiplies and
n(n-1)/2 adds. There's one multiply for every nonzero entry in the
matrix, of which there are n in the first row, n-1 in the second
row, and so forth; and for each row you have one fewer addition than
multiplication.
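
If you'd rather see those counts fall out of the loops than out of the algebra, here's a quick Python tally (my own throwaway sketch, just counting iterations, not part of the original):

```python
# Count the multiplies and adds in an n-by-n upper triangular
# matrix-vector multiply by walking the loop structure.
def tri_matvec_counts(n):
    mults = adds = 0
    for i in range(n):       # row i has n - i nonzeros (columns i..n-1)
        mults += n - i       # one multiply per nonzero entry
        adds += n - i - 1    # one fewer add than multiply in each row
    return mults, adds

print(tri_matvec_counts(100))  # (5050, 4950): n(n+1)/2 and n(n-1)/2
```

For n = 100 those are exactly 100*101/2 = 5050 multiplies and 100*99/2 = 4950 adds, matching the formulas above.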

Now, most of us don't bother to analyze things that precisely. We look at an algorithm like triangular matrix multiply and say it's quadratic in n, not even bothering with the constant 1/2 in front. Sometimes if we're being persnickety we'll say that the time is n^2/2 + O(n) -- we're willing to keep track of the constant in front of the fastest growing term, but the rest is relatively unimportant, so we'll just point out that it's bounded from above by something linear in n.

All this is well and good, because it captures the dominant cost of the algorithm, and usually it's the most expensive part of your algorithm that causes you to think. Also with a little practice you can just look at simple algorithms and see that they cost c*n^k + O(n^(k-1)) time, where you know what c and k are. The trick is that, asymptotically, sums look like integrals, so that if you ever need to know what the sum from i = 1 to n of i^2 is, for example, you know immediately that it's n^3/3 + O(n^2), because the integral of x^2 is x^3/3 and, for large n, the integral and the sum look an awful lot alike.
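
If you don't trust the integral heuristic, a few lines of Python (a quick check of mine, not part of the original argument) make the point numerically:

```python
# Compare the sum of i^2 up to n against the integral n^3/3.
# The gap should grow only like O(n^2), so the leading term matches.
def sum_squares(n):
    return sum(i * i for i in range(1, n + 1))

for n in (10, 100, 1000):
    s = sum_squares(n)
    print(n, s, s - n**3 / 3)  # difference is n^2/2 + n/6, an order lower
```

The leftover n^2/2 + n/6 is exactly the O(n^2) slop the asymptotic estimate promises to sweep under the rug.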

I was, therefore, a little bewildered a couple weeks ago when someone asked me what the exact operation count was for some algorithm. He had converted his problem into a sum, and was stuck because he couldn't remember the exact formula for the sum of i^3 -- and he wasn't satisfied to know that it was about n^4/4. So I made an observation and he went away happy -- though I didn't manage to convince him that he was probably working very hard for some information he ultimately wouldn't care about.

The trick is this: if you have a sum of p(i) for i = 1 to n, and
you know that p is a polynomial, you *know* that the sum is going
to have the form q(n), where q is some polynomial of degree one
higher than the degree of p. Furthermore, you know how to
evaluate q at n = 1, 2, 3, etc -- just compute the sum by hand
for those numbers. But a polynomial of degree d is uniquely
determined by its values at d+1 points, so finding q from this
data is no trouble at all! Here's the MATLAB/Octave script to
find the coefficients of q from the coefficients of p:

  % q = sumn(p)
  %
  % Compute q(n) = sum_{i=1 to n} p(i),
  % where q and p are polynomials.

  function q = sumn(p)
    n  = length(p);
    x  = 1:n+1;
    qx = cumsum(polyval(p,x));
    q  = polyfit(x,qx,n);

If I were less lazy, I could probably find a more efficient algorithm than this -- but probably not one more efficient in terms of typing and thinking time. (I could fit the entire script on one line, actually, but then I might get confused, making it a false economy.)
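
For anyone without Octave handy, here's the same trick in Python with NumPy (my translation, not part of the original; numpy's polyval/polyfit mirror the Octave calls):

```python
import numpy as np

def sumn(p):
    """Given coefficients p (highest degree first) of a polynomial p(i),
    return coefficients of q with q(n) = sum_{i=1}^{n} p(i).
    q has degree len(p), one higher than p, so sample len(p)+1 points."""
    n = len(p)
    x = np.arange(1, n + 2)            # n+1 sample points: 1, 2, ..., n+1
    qx = np.cumsum(np.polyval(p, x))   # q(1), q(2), ..., q(n+1)
    return np.polyfit(x, qx, n)        # degree-n fit through those points

# Sum of i^2: p(i) = i^2 has coefficients [1, 0, 0].
print(np.round(sumn([1, 0, 0]), 4))
```

For p(i) = i^2 the fitted coefficients come out as [1/3, 1/2, 1/6, 0], which is the familiar closed form n(n+1)(2n+1)/6 in disguise.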

More generally, I don't think computer scientists should ever patiently grind through operation counts in simple algorithms. If it's at all easy to get lost in the bookkeeping, be lazy and make the computer do it! Once you know the functional form of the operation count as a function of n -- for instance, if you know that it's a cubic polynomial -- you can run the algorithm a few times and let the computer track the counts, then fit the free parameters in your functional form to get the general case.
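
As a concrete sketch of that measure-then-fit recipe (my own toy example, with invented names): instrument naive matrix multiply with a counter, run it for a handful of sizes, and fit a cubic to recover the formula.

```python
import numpy as np

def matmul_op_count(n):
    """Count the multiply-adds in naive n-by-n matrix multiply."""
    ops = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                ops += 1   # one multiply-add per innermost iteration
    return ops

ns = np.arange(1, 6)                 # a cubic needs 4 points; take 5
counts = [matmul_op_count(n) for n in ns]
coeffs = np.polyfit(ns, counts, 3)   # expect roughly [1, 0, 0, 0], i.e. n^3
print(np.round(coeffs, 6))
```

Since the measured counts are exactly n^3 here, the fit recovers the cubic with no error to speak of; for messier algorithms the same few lines save you the bookkeeping.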

Alas, if you think it's fun to spend a couple hours grinding through these sums, I probably won't change your mind.