A *graph* is a set of vertices connected to each other by
edges. The vertices and edges can represent all sorts of things:
cities and roads, transistors and wires, computers and network
cables, web pages and links, neighborhood gossips and conversations,
people and friendships, tasks and scheduling dependencies, blocks
and lines in a flow chart, or any of a number of other things. If
the edges go one way (one web page links to another, but not
vice-versa), the graph is called *directed*; if the edges
have numbers associated with them (distances between cities, say),
the graph is called *weighted*.
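As a small sketch of these definitions, here is a weighted, directed graph stored as a dictionary mapping each vertex to its outgoing edges (the city names and distances are made up for illustration):

```python
# A weighted, directed graph as a dict of out-edges.
# Each key is a vertex; each value maps a neighbor to an edge weight.
graph = {
    "San Francisco": {"Chicago": 2100, "New York": 2900},
    "Chicago": {"New York": 800},
    "New York": {},
}

# Directed: the edge San Francisco -> Chicago exists,
# but Chicago -> San Francisco does not.
# Weighted: each edge carries a number (here, a rough distance).
```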

Graph theory is one of the main mathematical tools in computer science. Graphs have lots of applications; they're relatively easy to explain; and they can be represented in an utterly intuitive way by a sort of connect-the-dots scribbling that most of us learn to do as youngsters. Is it any wonder computer scientists love them?

There are other ways to represent graphs which -- though they lack
the charm of connect-the-dots -- are easier for a computer to
manage. They are also, perhaps, more dignified. One of these
representations is called an *adjacency matrix*. For a graph
with a finite number of vertices (the usual case), we can number
each vertex in turn; then if vertex i is connected to vertex j, we
put a nonzero value -- the edge weight, or simply 1 -- in the ij
entry (the entry at row i, column j) of the adjacency matrix.
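A minimal sketch of building such a matrix, assuming edges arrive as (i, j, weight) triples (the particular vertices and weights below are invented for illustration):

```python
def adjacency_matrix(n, edges):
    """Build an n x n adjacency matrix from (i, j, weight) triples,
    where each triple means vertex i connects to vertex j."""
    A = [[0] * n for _ in range(n)]
    for i, j, w in edges:
        A[i][j] = w  # for an undirected graph, also set A[j][i] = w
    return A

# Three numbered vertices (say, 0 = San Francisco, 1 = Chicago,
# 2 = New York) with made-up distances as weights.
edges = [(0, 1, 2100), (1, 2, 800), (0, 2, 2900)]
A = adjacency_matrix(3, edges)
```

Since the edges above are directed, A[0][1] is nonzero while A[1][0] stays zero.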

So: a square matrix can represent a graph. It can also represent a
linear operator acting on a finite-dimensional vector space. You
might ask if weighted graphs and linear operators aren't really the
same thing in some sense; the answer is that they really are
*different*, though there is a close connection. Why are
they different? If I label the major cities by numbers, it doesn't
make much difference if I call San Francisco 1 and New York 2
or vice-versa (except, perhaps, to some residents and
sports fans in said cities). But I can't book a flight between the
sum of San Francisco and New York and the difference of the two
cities, with a stop-over at pi times Chicago. Vertices aren't
vectors, even though we can represent them as such; and it takes a
certain amount of imagination to go back and forth between the two.

The difference between an adjacency matrix representation of a graph
and a matrix representation of a linear operator is really in the
legitimate *isomorphisms* that can be applied to each. I can
represent a linear operator by many different matrices, depending on
a choice of basis (do I walk a mile north and a mile east, or
sqrt(2) miles to the northeast?): the different matrices that can
represent the same operator are called *similar*. I can also
represent a graph by many different adjacency matrices, because I
can permute the labels of the vertices. But there are far fewer ways
for me to rearrange labels than there are for me to change bases in
general. That is, any two adjacency matrices representing the same
graph are similar -- but just because two matrices are similar
doesn't mean they represent the same graph.
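The relabeling operation can be sketched directly: permuting vertex labels permutes the rows and columns of the adjacency matrix in lockstep, which is exactly a similarity transform by a permutation matrix (a very special change of basis). The 3-cycle graph below is an invented example:

```python
def permute(A, perm):
    """Adjacency matrix after relabeling vertices:
    new vertex k corresponds to old vertex perm[k].
    Equivalent to the similarity transform P A P^T."""
    n = len(A)
    return [[A[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]           # directed 3-cycle: 0 -> 1 -> 2 -> 0

B = permute(A, [1, 0, 2])  # swap labels 0 and 1: same graph, new matrix
```

A and B are different matrices, but both describe a directed 3-cycle; they are similar via a permutation matrix. Most similarity transforms (arbitrary changes of basis) are not of this restricted form, which is the asymmetry the paragraph above describes.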

This sort of non-obvious connection is one of the main reasons why mathematics is so useful. Seemingly disparate areas of inquiry connect together, and suddenly you can turn information about the connections between web pages into page rankings -- by turning a problem in graph theory into a problem in linear algebra (an eigenvalue problem, to be specific).
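To make the page-ranking example concrete, here is a rough sketch of the idea: treat the link graph as a linear operator and find its dominant eigenvector by power iteration. The tiny three-page link graph and the damping factor of 0.85 are illustrative assumptions, not details from the text:

```python
def pagerank(links, n, damping=0.85, iters=100):
    """Power iteration on a link graph.
    links is a list of (i, j) pairs meaning page i links to page j.
    Returns a rank score per page (the dominant eigenvector of the
    damped link matrix). Assumes every page has at least one out-link."""
    out = [0] * n
    for i, _ in links:
        out[i] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - damping) / n] * n
        for i, j in links:
            new[j] += damping * rank[i] / out[i]
        rank = new
    return rank

# Pages 0 and 1 both link to page 2, which links back to page 0,
# so page 2 ends up with the highest rank.
r = pagerank([(0, 2), (1, 2), (2, 0)], 3)
```

Repeatedly applying the matrix to a vector like this converges to the eigenvector for the largest eigenvalue, which is what makes the graph problem an eigenvalue problem.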