Modern programming languages support the notion of restricted lexical scope: the meaning assigned to a name can be made to hold only over restricted sections of the program text, allowing the programmer to re-use the same symbol to refer to different things depending on the context. It's hard to overstate how useful this is. It's so useful, in fact, that we fake it when we don't have it; if you've ever seen an old C program with function names like my_package_foo() and my_package_bar(), this is exactly what's going on.
It's more difficult to restrict the scope of a definition in an ordinary prose
document. We can fake it by saying for this section only, let x be ...,
but it's awkward, and readers get to the start of the next section and forget
that you no longer mean the same thing by x.
This is, I think, a problem
with how math is often presented: usually we have an informal way of introducing
local variables (anything defined inside the proof of a theorem is usually
local
to that proof, unless the author makes noises about it being
important elsewhere). But sometimes I want a definition just for a few lines,
and it's hard to do that with ordinary English text.
When you don't have enough control over where a particular symbol means what,
you're left constantly grabbing for more symbols. Programmers sometimes
refer to this problem as namespace pollution.
A lot of the tedium of
mathematical writing -- and mathematical reading, for that matter -- boils down
to managing namespace pollution. After a few years, most people get used to it:
yes, my paper involves five different concepts, all of them attached to the
label B,
but I can figure out from context which is which -- and if not,
I can always grab another letter. Right? Except that, in any piece of writing
longer than a few pages and dealing with more than a few concepts, it is
terrifyingly easy to run through the entire Greek and English alphabets (lower
case and upper case). It's even easier to run out of symbols if you choose not
to bewilder your readers by letting n be a small quantity, epsilon be an integer
going to infinity, t be a quantity with units of length, and x be a quantity with
units of time.
This issue of notation and namespace pollution is a constant nuisance to me, both when I'm writing proofs and when I'm reading codes written by mathematicians who aren't experienced programmers.

When I'm writing proofs, I run out of symbols too quickly, because I start with a quasi-English notation in which the scope of each assigned meaning is clear: let C mean this thing in block 1, let it mean that thing in block 2, and let it mean something else in block 3. But when blocks turn into paragraphs, I have a problem.

When I'm reading code written by inexperienced programmers (not just mathematicians, but also engineers and physicists), I see the same problem in reverse: they clearly started from a mental model with no locality of definitions, and so their codes are crowded with global variables, or with local variables with single-character names which are used for seven different purposes over the course of a subroutine. Bah! It makes a code difficult to understand and nearly impossible to debug.
I dare say that, if we had some way of indicating locality of definitions in spoken and written English, it would be a lot easier to convince people that they are committing logical fallacies (particularly those involving circular reasoning). But perhaps this is just wishful thinking.