There's another sense of the word ``closure'', still within the subject
of computing, but different from the one you've understood so far.
The sense you know, the one used by Matthias Felleisen and David Storrs,
is the one introduced by Peter Landin in 1964.
In ``[t]he mechanical evaluation of expressions'', Peter Landin defines
``closure'' in the sense now popular in computing. (This is the
earliest use of the word in that sense, as far as I know.)
Also we represent the value of lambda-expression by a bundle of
information called a "closure", comprising the lambda-expression
and the environment relative to which it was evaluated. We must
therefore arrange that such a bundle is correctly interpreted
whenever it has to be applied to some argument. More precisely: a
closure has
an /environment part/ which is a list whose two items are:
(1) an environment
(2) an identifier or a list of identifiers,
and a /control part/ which consists of a list whose sole item
is [a lambda expression].
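Landin's bundle can be sketched in code. The following is a minimal
toy evaluator in Python (not Landin's notation; the expression forms
``var'', ``lam'', and ``app'' are my own invention for illustration).
It represents the value of a lambda expression as exactly that bundle:
the expression's parts plus the environment relative to which it was
evaluated, and arranges for the bundle to be correctly interpreted
when applied to an argument.

```python
# Toy evaluator: a closure is a bundle of the lambda expression's
# parameter and body plus the environment of its evaluation.
# Expression forms (hypothetical): ("var", name),
# ("lam", param, body), ("app", operator, operand).

def evaluate(expr, env):
    kind = expr[0]
    if kind == "var":
        return env[expr[1]]
    if kind == "lam":
        # The "bundle of information called a closure": the
        # lambda expression and the environment relative to
        # which it was evaluated.
        return {"env": env, "param": expr[1], "body": expr[2]}
    if kind == "app":
        closure = evaluate(expr[1], env)
        arg = evaluate(expr[2], env)
        # Interpret the bundle: extend its captured environment
        # with the argument binding, then evaluate the body there.
        call_env = dict(closure["env"])
        call_env[closure["param"]] = arg
        return evaluate(closure["body"], call_env)

# ((lambda (x) x) y) evaluated where y = 42
prog = ("app", ("lam", "x", ("var", "x")), ("var", "y"))
print(evaluate(prog, {"y": 42}))  # 42
```

The environment part is what lets an inner lambda remember bindings
from where it was defined: applying `("lam", "x", ("lam", "_", ("var",
"x")))` yields a closure whose captured environment still holds `x`.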
This is how most computer scientists use the word. Mathematicians, on
the other hand, usually mean something else by ``closure''.
For example,
``a group is an algebraic structure consisting of a set of
elements equipped with an operation that combines any two elements
to form a third element. The operation satisfies four conditions
called the group axioms, namely /closure/, associativity, identity
and invertibility.'' -- Wikipedia
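The closure axiom is easy to check mechanically. Here is a small
illustration (my own example, not from the Wikipedia article): the set
{0, 1, 2, 3} under addition modulo 4 is closed, because combining any
two elements yields another element of the same set.

```python
# Closure in the mathematical sense: applying the operation to any
# two elements of the set yields an element of the same set.
elements = {0, 1, 2, 3}

def add_mod4(a, b):
    return (a + b) % 4

closed = all(add_mod4(a, b) in elements
             for a in elements for b in elements)
print(closed)  # True
```

By contrast, the same set is not closed under ordinary addition, since
3 + 3 = 6 falls outside it.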
But this more mathematical sense is quite relevant in computing too.
The best example I know comes from Harold Abelson (and Gerald Sussman)
in the Lisp Lectures of 1986. Abelson is the one who uses the term in
this mathematical sense, applying it to the design of functions.
Here's every mention of the word in the lectures.
``So I almost took it for granted when I said that cons allows you
to put things together. But it's very easy to not appreciate that,
because notice, some of the things I can put together can
themselves be pairs. And let me introduce a word that I'll talk
about more next time, it's one of my favorite words, called
closure. And by closure I mean that the means of combination in
your system are such that when you put things together using them,
like we make a pair, you can then put those together with the same
means of combination. So I can have not only a pair of numbers,
but I can have a pair of pairs.'' -- Harold Abelson, lecture 2B
``Compound Data'', 1986.
Then again in the next lecture:
``And again just to remind you, there was this notion of closure.
See, closure was the thing that allowed us to start building up
complexity, that didn't trap us in pairs. Particularly what I
mean is the things that we make, having combined things using cons
to get a pair, those things themselves can be combined using cons
to make more complicated things. Or as a mathematician might say,
the set of data objects in Lisp is closed under the operation of
forming pairs. That's the thing that allows us to build
complexity. And that seems obvious, but remember, a lot of the
things in the computer languages that people use are not closed.
So for example, forming arrays in BASIC and Fortran is not a
closed operation, because you can make an array of numbers or
character strings or something, but you can't make an array of
arrays.'' -- Harold Abelson, lecture 3A, ``Henderson Escher
Example'', 1986.
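Abelson's point can be sketched outside of Lisp too. Below, Python
tuples stand in for cons pairs (a stand-in of my own choosing, not the
lecture's Scheme): because the result of `cons` is itself a valid
argument to `cons`, the means of combination is closed, and we can
build pairs of pairs.

```python
# Pairs via tuples: the means of combination is closed, so the
# things we combine can themselves be combinations.
def cons(a, b):
    return (a, b)

def car(p):
    return p[0]

def cdr(p):
    return p[1]

pair_of_numbers = cons(1, 2)
# Closure property: a pair whose elements are themselves pairs.
pair_of_pairs = cons(cons(1, 2), cons(3, 4))
print(car(car(pair_of_pairs)))  # 1
print(cdr(cdr(pair_of_pairs)))  # 4
```

A fixed-element-type array, as in the BASIC and Fortran of the quote,
lacks this property: an array can hold numbers or strings, but not
other arrays, so complexity cannot be built up by nesting.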
A few lectures later, Gerald Sussman uses the term, but in the sense of
Peter Landin.
``First of all, one thing we see, is that things become a little
simpler. If I don't have to have the environment be the
environment of definition for procedure, the procedure need not
capture the environment at the time it's defined. And so if we
look here at this slide, we see that the clause for a lambda
expression, which is the way a procedure is defined, does not make
up a thing which has a type closure and an attached environment
structure. It's just the expression itself. And we'll decompose
that some other way somewhere else.'' -- Gerald Sussman, lecture
7B, ``Metacircular Evaluator'', 1986.
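The distinction Sussman describes can be made concrete with a toy
evaluator sketch (my own construction, not the lecture's metacircular
evaluator). Under lexical scope, a lambda expression evaluates to a
closure that captures the environment of definition; under dynamic
binding, it can remain a bare expression, with free variables resolved
in whatever environment is current at call time. The two strategies
give different answers for the same program:

```python
# Toy procedures are triples (param, body, defining_env); under
# dynamic binding the defining_env slot is simply unused (None).

def apply_proc(proc, arg, call_env, lexical):
    param, body, def_env = proc
    # The crucial difference: where do free variables get looked up?
    base = def_env if lexical else call_env
    env = dict(base)
    env[param] = arg
    return evaluate(body, env, lexical)

def evaluate(expr, env, lexical):
    kind = expr[0]
    if kind == "var":
        return env[expr[1]]
    if kind == "lam":
        # Lexical: capture the environment (a closure).
        # Dynamic: no need to capture anything at definition time.
        return (expr[1], expr[2], env if lexical else None)
    if kind == "app":
        proc = evaluate(expr[1], env, lexical)
        arg = evaluate(expr[2], env, lexical)
        return apply_proc(proc, arg, env, lexical)

# make = (lambda (y) (lambda (_) y)): the inner lambda has free y.
make = ("lam", "y", ("lam", "_", ("var", "y")))
get = ("app", make, ("var", "one"))  # define f where y = 1

lex_f = evaluate(get, {"one": 1}, lexical=True)
dyn_f = evaluate(get, {"one": 1}, lexical=False)

# Call f later, from a place where y = 2.
print(apply_proc(lex_f, 0, {"y": 2}, lexical=True))   # 1
print(apply_proc(dyn_f, 0, {"y": 2}, lexical=False))  # 2
```

With lexical scope the answer is the `y` from the environment of
definition; with dynamic binding it is the caller's `y`, which is why
the dynamic evaluator's lambda clause needn't build a closure at all.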
In the last two appearances of the word, Abelson calls our attention
to the importance of the idea.
``The major point to notice here, and it's a major point we've
looked at before, is this idea of closure. The things that we
build as a means of combination have the same overall structure as
the primitive things that we're combining. So the AND of two
things when looked at from the outside has the same shape. And
what that means is that this box here could be an AND or an OR or
a NOT or something because it has the same shape to interface to
the larger things.''
``It's the same thing that allowed us to get complexity in the
Escher