Shortly before all financial support for the old guard in Silicon Valley --
even from their wealthy compatriots like Federico Faggin (Boundary
Institute) -- evaporated under the combined weight of the DotCon collapse
and simultaneous mass immigration to Silicon Valley from Asia, Tom Etter
(attendee of the 1956 Dartmouth AI Summer with Solomonoff) wrote this
summary of where his life had brought him.

Whether you believe it was a good thing guys like Tom were dumped or not,
it is probably a good idea to keep this essay around -- just in case it
turns out to not have been such a great idea.

Outline of a New Science
<https://web.archive.org/web/20130511033418/http://www.boundaryinstitute.org/bi/articles/Outline_New_Science.pdf>
Tom Etter
Boundary Institute
Jan 16, 2002
Abstract
To be written.

Section 1. The old science

Why would we want a new science? What’s wrong with the one we’ve got?
According to John Horgan, author of “The End of Science” [1], nothing. The big
discoveries have all been made, and all that’s left to do is fill in the
details, a job
we’ll eventually pass on to our smart computers.

We’ve heard this kind of talk before, usually just before a scientific
revolution.
The feeling that we've come to the limits of the world creates in some
people the
restless urge to venture beyond these limits, to take on something bigger.
Suddenly it becomes apparent that the supposed edge of the world is really
the
limitation of our understanding. We become aware not only of our ignorance
but
of our confusion. In working to achieve greater clarity we discover
essential
simplifications, which in turn increase our power to deduce important and
surprising consequences from what we know. Eventually the whole intellectual
landscape is transformed and expanded. That was the story of the new science
that arose in the 17th Century. On a lesser scale it has been the story of
20th
Century science. I believe that, on an even greater scale, it will be the
story of the science of the 21st.

Before turning to the new science, let’s briefly pay our respects to the
old, and in
so doing remind ourselves of some lessons we should take with us into the
new
territory.

The primary categories of the old science are *space, time and matter*.
These
concepts, in their modern sense, were abstracted from the flux of human
experience over a period of several thousand years, beginning in earnest
with
the pre-Socratics, and coming to a high point with Newton. This abstraction
was
an enormous achievement, and the explanatory power of purely physical
thinking
eventually became so impressive that it led some people to imagine that we
can
reduce everything to matter in motion. Such reductionism has lost much of
its
appeal today, however, partly because quantum mechanics has made both
matter and motion more problematical, and partly because of the rapid
growth of
information science, to which we’ll turn in Section 2.

The laws of Newtonian mechanics, unlike the laws of chemistry or genetics or
quantum mechanics, can be easily observed in the behavior of everyday
objects
like billiard balls. In this we were lucky. Simple Newtonian objects don’t
exist
under water, so dolphins or cephalopods, no matter how intelligent, would
find
mechanics very much harder than we did. Air is close enough to empty space
so
that mechanical things can easily be isolated and studied in small
combinations.
Under water it's a far different story; there the medium is a much bigger
part of
the action, and mechanical interactions are never simple. Smart cephalopods
would discover complexity theory long before they hit on f = ma.

It was our good fortune that the deep laws of mechanics lie so close to the
surface, so-to-speak. Is this kind of good fortune a thing of the past? Has
the
surface gold all been mined?

In this series of papers I shall try to show that, on the contrary, there
is almost
enough surface gold left for a whole new science. A few necessary items do
require some deeper digging, and have only come to light thanks to certain
esoteric discoveries in modern physics. But the good news is that the most
important job is simply to pay more attention to what is before our eyes.
Of course we must pay the right kind of attention. That Newton’s laws lie
close to
the surface does not mean we simply stumbled across them. On the contrary,
people have been observing Newtonian phenomena for thousands of years
without any inkling that that’s what they were observing. Aristotle, who
was an
excellent observer, got them all wrong. Their discovery involved something
else
besides observing accurately and seeing the patterns in what we observe, and
this something else, which Galileo almost single-handedly brought to bear
on the
science of motion, is what the Greeks called theory.

So what is theory? Basically, it means staying focused on what is
essential. This
is the art of theory, and there are no simple rules for it. Why was it
right in
Galileo's time to carefully study falling cannonballs but to ignore falling
sticks and
feathers? Never mind why, it was right. Though it’s essential to pay
attention to
the right objects, we must also make the right distinctions. Many of
Galileo’s
contemporaries talked about acceleration, but it was only Galileo who
carefully
distinguished between acceleration as change of velocity with time and
acceleration as change of velocity with distance. Without this distinction,
his law
of falling bodies cannot even be stated, much less tested.

When we stay focused on what is essential, it’s much easier to draw out the
consequences of what we know. Of course we must not only draw them out but
also publish them and stand up for them, which can sometimes be a nuisance.
Theory leads us from what is exposed to what is hidden. The Greeks
speculated
that matter is made of very small objects called atoms, but it was only
after the
development of Newtonian theory that it became possible to think about the
essential properties of these small objects in a way that leads to
important and
useful knowledge.

Though the Greeks never discovered mechanics, they invented theory itself,
and
the high point of theory in ancient times was Euclidean geometry. Galileo
was
very conscious of the Greek genius that went into this invention, and
brought its
spirit to bear on dynamics. Aristotle, in his physics, had taken a very
different
course, which is to observe and catalogue the motions of various kinds of
objects, much as he had observed and catalogued the properties of plants and
animals. The empiricists of the Middle Ages and Renaissance followed suit.

Galileo, however, realized that the only way to gain real understanding of
motion
is to concentrate on its simple “points and lines”, so-to-speak. We must
study the
ideal falling body, the ideal projectile, starting with the simplest cases
and
working with systematic precision toward the more complex cases. This was a
brand new idea at the time, but it caught on fast, and it was only forty
years before
Newton’s Principia, the modern counterpart of Euclid’s Elements, appeared on
the scene.

 It’s important to realize that geometry for the Greeks was about actual
physical
things and places; after all, geometry literally means measuring the Earth.
The
axioms of Euclidean geometry are apparent to the eye as well as to the
mind; in
this sense geometry is an observational science (the fifth postulate was an
exception; its axiomatic status was under suspicion from the start, and
finally lost
all claim to that status with the invention of non-Euclidean geometry in
the 19th
century). The genius of the Greeks was to treat what is obvious in what we
see
as something not quite of this world, something that is an ideal
simplification of
what the eye actually takes in. The eye sees an edge, but the theorizing
mind
sees a line. The eye sees a corner, but the theorizing mind sees a point,
and in
so doing notices that two straight lines can meet in at most one point.

The mechanics of Galileo and Newton was of course built on the foundation of
Euclidean geometry, and Newton, though he had used his new calculus to
actually derive his results, first presented them to the world in the
language of
geometry. Geometry has a modern analogue, which is not so much a system of
results as a style of thinking, but to give it a short name I’ll call it
structure theory.
This new style of mathematics came into its own in the last century with
the move
toward greater abstraction and generality, and led to a wealth of new
mathematical forms: an arithmetic of infinite numbers, new kinds of algebras
such as group theory and vector algebra, new kinds of “spaces” such as
non-Euclidean geometries, topologies, spaces of mechanical states, spaces of
wave
functions (Hilbert space) etc. The new style was even applied to logic,
which
gave us for the first time a precise general definition of mathematical
proof.

Structure theory, as described above, resembles the diverse body of
geometric
discoveries made by the early Greeks rather than the tight axiomatic system
of
Euclid. Russell and Whitehead, in their book “Principia Mathematica”, tried
to
become the Euclid of their time by capturing the essence of structure
theory in a
single axiomatic system they called Relation-arithmetic. Their version had
serious bugs, however, and never caught on. It turns out that some of these
bugs are easily fixed, and I see the fixed version becoming an important
tool of
the new science [9]. We’ll see in Sections 3 and 4 how an axiomatization of
structure theory based solely on the concept of identity can be used to
unify and
clarify the key concepts of our new science. But first we’ll turn to
something
closer at hand, which is the science of information.

Section 2. *Structure, process and information*

Beginning in the early 20th Century, *space, time and matter* began to
encounter
competition from another basic trio of categories, namely structure,
process and
information. We have already met structure as a kind of generalized
geometry.
We can think of process as structure in motion. Process adds to structure
the
dimension of time, and process philosophers like Whitehead made considerable
use of the new style of structural thinking; indeed, Whitehead tried to
interpret
space-time as an aspect of the structure of process. The third member of the
trio, information, was surface gold un-mined until Shannon turned the
theorist’s
eye on its ground and realized that its essence, at least in the context of
information storage and transfer, is simply the narrowing of a range of
possibilities. Today information in this sense has come to occupy a large
part of
the ecological niche once filled by matter, thereby giving its name to our
so-called
information age. Information is a new kind of stuff, parceled out in bits
rather
than in pounds or grams, and shaped and transformed by information
processing
devices.
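Shannon’s sense of information as the narrowing of a range of possibilities can be stated in one line of arithmetic. The sketch below is merely illustrative; the function name and the card-picking scenario are mine, not Shannon’s or this essay’s:

```python
import math

def bits(n_before, n_after):
    """Information gained, in bits, when n_before equally likely
    possibilities are narrowed down to n_after of them."""
    return math.log2(n_before / n_after)

print(bits(8, 1))  # pinpointing one of 8 alternatives yields 3.0 bits
print(bits(8, 4))  # ruling out half of them yields exactly 1.0 bit
```

Each halving of the range counts as exactly one bit, which is why this new kind of stuff is parceled out in bits rather than pounds or grams.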

So is information science the new science? A lot of people think so. My own
answer is a qualified no.

Information science is certainly a new science, but to take the place of
the old
science of matter in motion it would have to give it a more fundamental
grounding. This means that, among other things, it would have to make sense
of
quantum mechanics, which it has not done. The basic problem here is in its
overly narrow conception of process. This topic is covered in detail in a
number
of the other papers (e.g. [5]), so I’ll be brief here.

For information science, a process is a structure that evolves
probabilistically
with fixed transition probabilities. If these transition probabilities are
confined to 0
or 1, as in the case of a computer, we call the process deterministic. The
more
general process is called a Markov chain, or more exactly, a homogeneous
Markov chain. It’s safe to say that for the vast majority of scientists,
even those
who have never heard of Markov chains, to explain something is to present
it as
a homogeneous Markov chain. This is why Bell’s theorem created such a flap;
it
showed that there are physical processes, specifically the so-called EPR
process, that cannot possibly be Markov chains unless information can travel
faster than light, something that physics has declared to be impossible.
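To make the definition concrete, here is a minimal sketch of a homogeneous Markov chain; the states and probabilities are invented for illustration. Because the transition probabilities are fixed once and for all, one step function serves every moment of the process, and confining the probabilities to 0 or 1 gives the deterministic case:

```python
import random

# A homogeneous Markov chain: one fixed transition table applied at every
# step. States and probabilities here are illustrative, not from the essay.
P = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, transitions, rng):
    """Draw the next state according to the fixed transition probabilities."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in transitions[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def run_chain(start, transitions, n_steps, seed=0):
    """Run the chain for n_steps, returning the whole state history."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n_steps):
        states.append(step(states[-1], transitions, rng))
    return states

# The deterministic special case: all transition probabilities are 0 or 1,
# which is how a computer's state evolution fits the same scheme.
P_det = {"a": {"b": 1.0}, "b": {"a": 1.0}}

print(run_chain("sunny", P, 5))
print(run_chain("a", P_det, 4))  # a, b, a, b, a
```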
The reason my answer was a qualified no is that there is a way to
generalize the
mathematical definition of homogeneous Markov chain that in fact does
encompass quantum phenomena like EPR, without any nonsense about
faster-than-light signaling [5]. When we do so, it turns out that the core of
quantum
mechanics can be factored out of physics as a purely mathematical feature of
these new Markov chains, much as the second law of thermodynamics can be
factored out of physics as a purely mathematical feature of ordinary Markov
chains. Like the second law, this quantum core says nothing about space,
energy or matter. In effect, it belongs to information science: it has
emigrated from physics to information science, so-to-speak, bringing with it very
it very
important knowledge, among which is just that part of quantum physics needed
for the logical design of quantum computers.
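The remark about the second law being a purely mathematical feature of ordinary Markov chains can be seen in miniature. For a doubly stochastic transition matrix (one whose rows and columns each sum to 1), the Shannon entropy of the state distribution can never decrease from step to step; this is a standard result, demonstrated numerically below with a made-up matrix:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def evolve(dist, P):
    """One Markov step: new_j = sum_i dist_i * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# A doubly stochastic matrix (rows AND columns each sum to 1); for such
# chains entropy never decreases -- an abstract "second law".
P = [
    [0.5, 0.3, 0.2],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]

dist = [1.0, 0.0, 0.0]  # start fully ordered: zero entropy
for t in range(6):
    print(t, round(entropy(dist), 4))
    dist = evolve(dist, P)
```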

Let’s call the science of these generalized Markov chains New Science One.
There is now reason to believe that New Science One can absorb not only the
quantum core but much if not all of present-day theoretical physics. Does
this
make it the new science? It’s certainly a step in the right direction, and
I believe
that had this step not been taken the new science would remain out of
sight. But
New Science One still doesn’t directly address the greatest failure of the
old
science, which is its inability to bridge the gap between our understanding
of
matter and our understanding of mind (we’ll return to this in the next
section).
However, New Science One does have something to say about a related failure
of the old science, which is its inability to deal with so-called psi
phenomena,
those strange physical phenomena that seem to reside in an almost dreamlike
borderland between matter and mind. The observational and laboratory
evidence for these phenomena, or at least for their physical side, is by now
overwhelming; that they are still ignored or dismissed by mainstream
science is a
scandal. Why has science so badly lapsed from its empiricist credo? The
kindest answer is that the phenomena make no scientific sense. They are
unthinkable, and therefore negligible. To put it more bluntly, science as
we know
it isn’t up to the job.

I’m hardly the first person to say so, nor for the same reason. The
physicist
Pauli, one of the pioneers of quantum mechanics, was deeply aware of the
conceptual problems presented by psi. He developed a close friendship with
the
psychologist Jung whose lifelong interest in psi had led to his break with
Freud
(Freud, curiously enough, finally came around to Jung’s way of thinking and
said
that if he had his life to live over again he would devote it to
parapsychology).
Jung challenged Pauli to create a new science that would unite physics with
psychology at a fundamental level, a level deep enough to make sense of psi.
Unfortunately, Pauli left no systematic record of his thoughts on this
subject, but
there are some intriguing fragments in his letters to his protégé Fierz [2].

In one such letter he made the extraordinary claim that quantum phenomena,
the
deepest and most universal manifestations of matter, are in fact a special
kind of
psi phenomena. In his words "… the quantum is that domain of synchronicity
that lies closest to causality". The word “synchronicity” was Jung’s term
for a
postulated domain of acausal order within which he located psi. At the time
of
Jung’s and Pauli’s exploration of these matters there was no formalized
concept
of acausality. Pauli’s vision, however on target it may have been, didn’t
have much of a chance of
becoming a mathematical science. That situation, at least, has been changed
by
New Science One.

This point is discussed at length in [5], and is beyond the scope of the
present
paper. Suffice to say here that causality, in so far as it is formalized at
all in the
old science, is given by the structure of the transition matrix in standard
Markov
theory. New Science One extends standard Markov theory in two basic ways:

First, it introduces a new formal concept into process theory that is
analogous to
velocity in mechanics [3]. One result of this step is to make the general laws
of
process symmetrical in time, just as the laws of mechanics are symmetrical
in
time. Markov chains in the old sense are analogous to moving bodies whose
velocity always decreases exponentially. Seen more abstractly, this kind of
exponential “slowing down” is the second law of thermodynamics. For Markov
“chains” in the new sense, entropy is the sum of two parts, one increasing
in time
and one decreasing in time; such processes so-to-speak evolve both forward
and
backward in time simultaneously [4]. Just how this extends our ordinary
notions of
causality is discussed in detail in [5]; to sum it up in one sentence, adding
a
velocity component to the chain state is equivalent to putting independent
boundary conditions on the beginning and end of the process, just as it is
in
mechanics, except that now the conditions are on probabilities.
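What it means to place independent boundary conditions on both ends of a process can be illustrated even with ordinary non-negative probabilities. The toy code below is my own sketch, not the formalism of the cited papers: a two-state chain is pinned to a chosen final state, and Bayes reweighting converts the bare transition probabilities into endpoint-conditioned ones:

```python
# Conditioning a chain on BOTH endpoints: a sketch of what "independent
# boundary conditions on the beginning and end of the process" can mean.
# All names and numbers here are illustrative.

def matpow_row(P, i, n):
    """Row i of P**n: the distribution n steps after starting in state i."""
    row = [1.0 if k == i else 0.0 for k in range(len(P))]
    for _ in range(n):
        row = [sum(row[a] * P[a][b] for a in range(len(P)))
               for b in range(len(P))]
    return row

def bridge_step(P, i, end, steps_left):
    """Transition probabilities out of state i, conditioned on landing in
    `end` after `steps_left` more steps (Bayes reweighting)."""
    n = len(P)
    weights = [P[i][j] * matpow_row(P, j, steps_left - 1)[end]
               for j in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

P = [[0.9, 0.1],
     [0.5, 0.5]]

# Unconditioned, state 0 stays put 90% of the time; pinned to end in
# state 1 two steps from now, the first step is pushed toward state 1
# (its probability rises from 0.1 to about 0.36).
print(bridge_step(P, 0, end=1, steps_left=2))
```

The reweighted probabilities depend on the remaining distance to the fixed endpoint, so the conditioned process is no longer homogeneous: the boundary condition at the far end reaches back into every earlier step.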

The second step is simpler but more radical: probabilities are allowed to go
negative.

Both steps are necessary to make sense of quantum phenomena as processes.
The first, at least, would seem to be necessary to make sense of
precognition,
and is a crucial first step in going beyond causality in the direction
envisioned by
Jung and Pauli. Thus I believe that adding “velocity” to the concept of
process
will be necessary to get our new science started, just as adding velocity
to the
concept of state, which happened in the early 17th Century, was necessary
to get
mechanics started.

Necessary does not mean sufficient, however. The mathematics of the new
process theory is simple and straightforward, but it doesn’t tell us much
about
what the above-mentioned departures from the chain law mean for experience.
For this we need the deeper science that Jung proposed to Pauli, a science
that
not only unites matter with information but with mind.

Section 3. Phenomenology as theory

Psi is more than just a physical or informational anomaly – it points to
the need
for a fundamentally better understanding of the place of mind in nature. The
physical sciences, if they are to meet the challenge of psi, must enter
into some
kind of union with psychology.

How should we proceed toward this goal? One thing we cannot do is simply
lump the discoveries of physics and psychology together. As they stand,
these
two subjects have too little in common to mix into a coherent whole.
Physics is
without question the more impressive of the two sciences. For a while it
was the
fashion to try to “reduce” psychology to physics, physics being understood
rather
loosely as our knowledge of material things. In more recent times this has
given
way to the idea that psychology belongs to information science. In its most
extreme form, this new fashion would have it that the mind is a program
running
in a computer called the brain. We’ll pass here on both of these options,
and
start out with a much cleaner slate by availing ourselves of certain
insights of that
philosophical movement known as phenomenology.

Phenomenology began in Germany at the end of the 19th century as a school of
psychology whose best-known member was Brentano. Husserl later gave it a
more philosophical turn, and it has since become, in one form or another,
the
dominant philosophical movement on the continent, in contrast to the
analytic
school of philosophy that has prevailed in England and America. The guiding
insight of phenomenology, which dates from Brentano, is known as the
doctrine
of intentionality. Here is a quotation from Introduction to Phenomenology by
Robert Sokolowski [6], a recent and very good book to which I’ll refer often
in this
and other essays:

“The doctrine of intentionality … states that every act of consciousness is
directed toward an object of some kind. Consciousness is essentially
consciousness “of” something or other. Now, when we are presented with this
teaching, and when we are told that this doctrine is the core of
phenomenology,
we might well react with a feeling of disappointment. Why should
phenomenology make such a fuss about intentionality? Isn’t it completely
obvious to everyone that consciousness is consciousness of something, that
experience is experience of an object of some sort? Do such trivialities
need to
be stated?”

I should add that this triviality resembles the triviality that two
straight lines can
meet in at most one point. What is not trivial is the body of
consequences that
follow from such “trivialities” in the right context. To return to
Sokolowski:

“They do need to be asserted because in the philosophy of the past three or
four
hundred years, consciousness and experience have come to be understood in a
very different way. In the Cartesian, Hobbesean and Lockean traditions,
which
dominate our culture, we are told that when we are conscious, we are
primarily
aware of ourselves or our own ideas. Consciousness is taken to be like a
bubble
or an enclosed cabinet; the mind comes in a box.” [ibid p 11]

In contrast, phenomenology, when properly understood, bursts this bubble,
opens this box:

“Phenomenology shows that the mind is a public thing, that it acts and
manifests
itself out in the open, not just inside its own confines. Everything is
outside.” [ibid
p 12]

I’ll not dwell on this issue here; for an in-depth treatment, read
Sokolowski. Let
me only bring up one point in this connection, which is that it is just as
natural to
say “We are aware” as it is to say “I am aware”. The conscious subject can
be
plural as well as singular; consider “People know better” and “The common
man
knows better”. We hardly notice the difference (I hardly notice the
difference, you
hardly notice the difference, the common man hardly notices the difference).
Clearly who knows better is the same in both statements. We’ll return to
singular
and plural subjects shortly.

“Intentionality”, and its cognates “intending”, “intention”, “intentional
act”, etc. are
technical terms in phenomenology, and as such they can be confusing. Again,
Sokolowski:

“The phenomenological use of these words is somewhat awkward because it
goes against ordinary usage, which tends to use “intention” in the practical
sense. However … there is no way of avoiding them in the discussion of this
philosophical tradition. We have to make the adjustment and understand their
meanings as primarily mental or cognitive, and not practical. In
phenomenology,
“intending” means the conscious relationship we have to an object.” [ibid p
8,
slightly reworded]

Note that intending is characterized here as a relationship. Let’s
visualize this
relationship as an arrow from subject to object. We ordinarily think of the
subject
and object as things that preexist the arrow of intending. As I understand
phenomenology, it reverses this order, taking the arrow of intending to be
primary, the subject and object being its tail and head, so-to-speak. To
make an
analogy, we don’t start out with a husband and a wife and then marry them;
rather, it is by marrying them we make them into husband and wife. By the
same
token, it is the intentional act that makes two things into subject and
object.

Husserl and his followers have made an extensive and impressive
investigation
of the many varieties of intentionality. Our present enterprise is not
investigative,
though, but theoretical, in the sense discussed in Section 1. Our first
problem is
that the theoretical tools available to the scientist of mind are very
primitive, much
more primitive than those available to the physicist or geometer. We should
not
try to pretend otherwise. Let’s then imagine ourselves to be pre-Socratics,
still
wandering around in a new intellectual territory, curious but bewildered,
and
happily alighting on simple essences like points, lines and circles.

I believe that phenomenology has already handed us our lines as the arrows
of
intentionality and our points, which are now of two kinds, as the “heads”
and
“tails” of intending.

It’s worth pursuing this analogy. In section 1, points and lines were
introduced as
otherworldly abstractions from corners and edges. But points and lines are
also
abstractions from many other kinds of worldly things. A point can be a
corner,
but it can be a small object, where small depends on context – in cosmology
a
star is a very small object. A line can be an edge, or a line-of-sight, or
a rigid rod,
or a plumb line, or any other kind of taut string. One is tempted to say
that
there are many kinds of points and many kinds of lines, as if the
relationship
between the idea and its manifestations is one of genus and species. But
this is
not quite right. “Point” and “line” are simple words for simple ideas, even
though
they can be brought into diverse contexts in diverse ways. One should not
confuse the diversity of these ways with a variety among possible meanings
of
“point” and “line”.

So it is with subject, object and intention. The investigative
phenomenologist
may find that there are many varieties of intention, and also of other
basic ideas
like presence and absence that we’ll come to soon. But the theorist sees the
situation differently; precisely what makes these ideas so valuable for
theory is
that they are simple. That the subject of an intention can be either
singular or
plural, for instance, means that the simple idea of intention can be brought
into a
real-world context in these two ways. It’s not that the investigative
phenomenologist is wrong. His discoveries are perfectly valid. It’s just
that
where he sees variations on the simple concept, the theorist tries to see a
variety
of ways in which the simple concept can relate to other simple concepts.

Another basic concept of phenomenology is the contrast between presence and
absence. Sokolowski speaks of this as one of three structural forms that
occur
constantly in phenomenological analysis, the other two being parts and
wholes
and identity in a manifold. The latter two are found in all of philosophy,
but
presence and absence are new:

“However, the theme of presence and absence has not been worked out, in any
explicit and systematic way, by earlier philosophers. The issue is original
in
Husserl and in phenomenology.” [ibid p 22]

Presence and absence are also technical terms, though their technical
meanings
come closer to the vernacular than the technical meaning of intentionality.
Like
intentionality, they get their philosophical status through “trivial”
truths, such as
that we can think about things that are out of sight. Husserl distinguished
between so-called filled intentions, in which the intended object is
present, and
empty intentions, in which the intended object is absent. To illustrate this
distinction, Sokolowski gives the example of looking at a cube. We can see
at
most three sides of a cube; these are the sides that are present. To see
them is
to have filled intentions. Since we are seeing the cube as a cube, however,
we
also implicitly intend the sides we can’t see, the absent sides. As
Sokolowski
puts it, our perception of the cube is a blend of filled and empty
intentions.

There are many varieties of absence: things hidden, things far away, things
remembered, things anticipated. There are also gradations of presence and
absence, e.g. foreground, middle ground, background. And then there are
questionable cases. Is the house I plan to build absent, even if I never
build it?
Is the present king of France absent? Can something be absent if it cannot
possibly be present? Is the rational square root of two absent? As I
understand
phenomenology, its answer would be yes, since the “intentional object”
is to be
regarded as “bracketed” against all considerations concerning its worldly
status.

And yet it’s not clear to me that one can bracket an object of consciousness
against a complete breakdown of rational coherence. But be that as it may,
the
concept of an intentional object, like that of a point, has an essential
simplicity
that is not affected by problems about borderline cases, and it’s this
essential
simplicity that makes it a good candidate for a key technical concept.

The value of technical concepts comes from their ability to bond with other
technical concepts to reveal structure that may otherwise be inaccessible.
There
is a crucially important bond of this sort between the pair
presence-absence and
the concept of identity. To quote Sokolowski again:

“There is a dimension of presence and absence, of filled and empty
intentions,
that we have not yet examined. It is the fact that both the empty and the
filled
intending are directed toward one and the same object. One and the same
thing
is at one time absent and at another present. In other words, there is an
identity
“behind” and “in” presence and absence. The presence and absence are “of”
one and the same thing. … If I talk to you about Leonardo’s painting, you
and I
intend one and the same painting, the same one that we will see directly
when
we walk into the room where it is present. The presence is the presence of
the
painting, the absence is the absence of the same painting, and the painting
is
one and the same across presence and absence. … The presence and absence
belong to the being of the thing identified in them. Things are given in a
mixture
of presences and absences, just as they are given in a manifold of
presentations.
We should also notice that it is this identity, this invariant in presence
and
absence, to which we refer when we use words to name a thing.”

Another key technical idea, identity, has made its first brief appearance
on stage.
It will now temporarily exit while we continue with the first two, but it
will be back
in force in the next section.

We have called subject and object the “points” of our new “geometry”. There
is a
problem here. To speak of the subject is to make it the object of our
intention,
which is of course to make it into an object. And yet we have already
introduced
the subject as the opposite pole of the object in the intentional act. It
is the
intentional act itself that creates the polarity of subject and object. It
would thus
seem that to make the subject into an object is nonsense; it’s like making
right
into left. Does this mean that we should not even try to speak about the
subject?
In fact phenomenology exercises a good deal of restraint in this regard;
Sokolowski often uses indirections such as “the dative of awareness”.
Still, there
is a serious need in our theorizing to face the subject directly, as we do
in
everyday life when we say “me” or “you” or “him” or “her”. Furthermore,
there is
a natural way to do so.

To understand this way requires a brief excursion into another topic, which
is the
contrast between singular and plural. Phenomenologists tend to favor the
singular case; they speak of the object of an intention rather than the
objects of
an intention. And yet there are certainly times when we are aware of two or
more
things at once. Our language is full of smooth gradation between singular
and
plural: sand – gravel – rocks, overcast – cloudy – clouds. Consider “The
crowd
cheered” vs. “The people cheered”. Our awareness of the crowd cheering has
the same intentional object as our awareness of the people in the crowd
cheering. Identity can cross the divide of singular and plural, just as it
can cross
the divide of presence and absence.

Singular and plural apply to the subject as well as the object – we’ve
already
taken note of this above. There can be a kind of identity between “I” and
“we”
that results from these being the same subjective pole of an intentional
act. I and
we can be aware of the same thing or things; indeed, this is why the
intentional
object is public. But if it is the arrow of intentionality that creates the
object as its
head, the identity of different presentations of that object should carry
over to an
identity of the datives of the presentation, to use Sokolowski’s term.
The identity of “I” and “we” can come and go, and when it fades, objects
called
“you” or “him” or “her” emerge that are the “others” in the “we”. I propose
that
this fading out is the birth of the so-called subject. A subject is an
object that has
faded out of “we” and can potentially fade back in.

This definition of subject will do for family and friends, but it does not
have the
generality of a basic theoretical concept, which is what we need. We must be
able to extend the concept of subject to strangers and foreigners and
animals, at
least higher animals, and if we hope to gain a basic theoretical
understanding of
the relationship between mind and matter, we must be able to extend it into
nature at large, perhaps even into inorganic nature.

The key to making this extension is the recognition that there can be
several
“we’s” that overlap without merging. It may happen that I form a “we” with
Bill
and at the same time Bill forms a “we” with Mary, but the three of us
together do
not form a “we”. However, when Bill fades out of our “we”, his status as a
subject
enables me to grasp his subjective identity with Mary within their “we”,
thereby
transferring to Mary the status of subject. This indirect way of seeing
objects as
subjects is transitive; it can in principle be extended indefinitely by
means of
overlapping “we’s” [10]. Whether it can be extended in practice is another
matter, but that is not really the point. Just as Einstein made free use of
galaxy-sized rigid measuring rods in his thought experiments, we’ll avail
ourselves of galaxy-sized chains of overlapping “we’s”.
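This chaining can also be pictured mechanically: treat each “we” as a set of individuals, and indirect subject-hood as reachability along chains of overlapping sets. A toy sketch in Python (the `reachable` helper and the setup are my illustration, not anything from the essay):

```python
# Toy model: each "we" is a set of individuals. Two individuals are
# linked when some "we" contains both, and indirect subject-hood is
# reachability along chains of overlapping "we's".

def reachable(wes, start):
    """Return everyone linked to `start` by a chain of overlapping 'we's'."""
    linked = {start}
    changed = True
    while changed:
        changed = False
        for we in wes:
            # A "we" that overlaps the linked group merges into it.
            if linked & we and not we <= linked:
                linked |= we
                changed = True
    return linked

# I form a "we" with Bill, and Bill forms a separate "we" with Mary;
# Mary becomes an indirect subject for me through the chain.
wes = [{"I", "Bill"}, {"Bill", "Mary"}]
print(sorted(reachable(wes, "I")))  # ['Bill', 'I', 'Mary']
```

In graph terms this is just computing a connected component, which is why the extension is transitive and can in principle run to any length.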

Presence and absence apply to subjects as well as to objects, but in a
complementary fashion. That is, when Bill is fully present as an object, his
subjective identity within “we” is absent; he is merely a thing.
Conversely, when
Bill is fully merged into “we”, he is absent as an object, the only present
objects
being those that we together fully intend. His identity as Bill is
preserved across
these presences and absences, however; Bill the object and Bill the subject
are
one and the same.

The subject as presented here is clearly a very different kind of being
from the
Cartesian subject, the “I” of cogito ergo sum. And yet our construction of
the
subject started out with “I”. Why shouldn’t it simply end there? What about
self-awareness? Doesn’t that reveal the subject directly? If we can’t even get
started without “I” and “we”, isn’t our definition circular?

Indeed it is circular, but unavoidably so. This circle is not an unpaid
debt,
however, but a progressive recursion. First we use “I” and “we” to
construct the
direct subject as he or she who drops in and out of “we”. Next we chain
overlapping “we’s” to construct indirect subjects. This enables us to
describe and
empirically study the formation of “I” in a social setting, which in turn
reveals a far
richer and more complex being than that which appears in the snapshot called
introspection (William James once said that introspection is like trying to
turn the light on fast enough to see the darkness). The eye can only see
itself in a mirror.
An essential component of self-awareness is a certain kind of “third person”
reflection back from others. I and Bill form one “we”, Bill and I form
another “we”,
and by chaining these two “we’s”, I become better acquainted with a subject
called “me”.

We cannot leave the topic of phenomenology without remarking on a split
within
the movement between those who see it as a “transcendental” science whose
truths do not overlap those of the natural sciences, and those who, like
myself,
see it as an essential part of what natural science must someday become if
it is
to live up to its promise. Sokolowski, following Husserl, is of the first
school. I
believe he is clearly right in distinguishing between the so-called
transcendental
and natural attitudes (see [6], pp. --). It does not follow, however, that we
must, as
he claims, sharply divide human inquiry into two separate disciplines called
philosophy and natural science. How these two activities can and should
relate
to each other remains to be worked out, but to break off all relations
between
them is to doom both to sterility and oblivion.

Section 4. Identity theory (out for revisions)

Section 5. Summary and speculations
(Rewrite.)
Enough of the axiomatic and the oracular – it’s now time for the arcane.
As I said in Section 1, most of the new science will grow out of things
that are
under our very noses, things that we have simply neglected. But there are a
few
miracles that must happen before these neglected things can begin to grow
into
science. Actually, some of them have already happened. Twentieth Century
physics took some truly miraculous leaps. So, in fact, did Nineteenth
Century
physics, one of which was Hamilton’s amazing discovery that subject and
object,
in the mathematical sense, are interchangeable within the mathematical
formalism of Newtonian mechanics.

What Hamilton discovered was a big generalization of the relativity of
position
and of uniform motion. By the early nineteenth century, physicists were
working
on very difficult problems in multi-body mechanics, and picking the right
coordinate system was often the key to their solution. Hamilton pushed this
method to its ultimate limit. In keeping with the new abstract spirit, he
generalized the concept of space to that of so-called phase-space, which is
a
many-dimensional space in which every degree of freedom of position and
momentum in a mechanical system is a coordinate, and the state of the system
is a single moving point. He discovered the most general class of coordinate
transformations that would preserve the laws of mechanics, which he called
canonical transformations. He then showed, to his and everyone else’s
surprise,
that the time evolution of the state of the mechanical system can be
described as
the unfolding of a one-dimensional group of these canonical
transformations.

In other words, the evolution of the objective state of the system, however
complicated, is indistinguishable from a continuous uniform change in the
viewpoint or “state” of the subject.
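A concrete instance may help: for the unit-frequency harmonic oscillator, Hamilton's time evolution is literally a rigid rotation of the (q, p) phase plane, the simplest one-parameter group of canonical transformations. A minimal numerical sketch (the oscillator is my choice of example, picked purely for simplicity):

```python
import math

def evolve(q, p, t):
    """Unit harmonic oscillator flow: a rotation of the phase plane.
    Composing flows adds the parameters, so {evolve(., ., t)} is a
    one-parameter group of canonical transformations."""
    return (q * math.cos(t) + p * math.sin(t),
            p * math.cos(t) - q * math.sin(t))

q0, p0 = 1.0, 0.0
for t in (0.5, 1.0, 2.0):
    q, p = evolve(q0, p0, t)
    # The Hamiltonian H = (q^2 + p^2)/2 is invariant under the flow.
    assert abs((q * q + p * p) - (q0 * q0 + p0 * p0)) < 1e-12

# Group property: evolving by 0.3 and then by 0.7 equals evolving by 1.0.
qa, pa = evolve(*evolve(q0, p0, 0.3), 0.7)
qb, pb = evolve(q0, p0, 1.0)
assert abs(qa - qb) < 1e-12 and abs(pa - pb) < 1e-12
print("flow is a one-parameter group; H is conserved")
```

The same trajectory can be read either as the state point moving through a fixed phase space or as a fixed point watched from a uniformly rotating coordinate frame, which is the subject-object interchangeability in miniature.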

This subject-object symmetry carries over to quantum mechanics, where it
actually takes a much simpler and more general form. In a quantum context
it is
sometimes called the equivalence of the Schrödinger and Heisenberg
representations, though in deference to its originator I’ll continue to
refer to it as
Hamilton’s symmetry. What it says in a nutshell is that, for a mechanical
system, it is a matter of viewpoint whether the appearance of change results
from a
change in the state of the object or a change in the viewpoint of the
subject.
We have no grounds for extending Hamilton’s symmetry beyond mechanics, and
indeed it would seem to have no analogue for irreversible processes like
Markov
chains. Still, the question remains whether, within the context of
mechanics, it
might apply to itself. Might it be that the change of viewpoint from
regarding the
change in an object J as objective to regarding it as subjective has an
objective
counterpart in the change of some binary state variable of the object J?
The physicist Pauli, though he didn’t do much actual work on Jung’s proposed
project to unify physics and psychology, had a powerful vision toward the
end of
his life that the secret of this unification lay in the imaginary number i
[ref and ref].
Pauli was hardly a new-age airhead, and his judgments as to what is worth
pursuing in physics were so highly regarded that they earned him the title
“the
conscience of physics”. It would therefore seem advisable not to dismiss
this
vision out-of-hand. What could it possibly mean?

We calculate the probability of an event in quantum mechanics by squaring
the
modulus of a complex number called the amplitude of that event. The
mathematician George Mackey showed that the quantum-mechanical use of
complex amplitudes can be regarded as shorthand for a kind of symmetry
imposed on a more general form of quantum mechanics in which the amplitudes
are always real [8]. An equivalent way of stating this is to say that all
objects in
“real” quantum mechanics have a certain binary quantum variable in common,
call it C, whose state is unobservable. Now the binary variable that we
hypothesized to be the objective manifestation of Hamilton’s symmetry, call
it B,
would of course have to be unobservable. Could it be that C is B?
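Mackey's observation can at least be illustrated at the level of a single amplitude: the observable probability is the squared modulus, which is blind to a global phase rotation, so a variable fixing that phase behaves like an unobservable piece of state. A bare-bones sketch (my toy illustration, not Mackey's actual construction):

```python
import cmath
import random

amp = complex(0.6, 0.8)   # a quantum amplitude
prob = abs(amp) ** 2      # the observable: squared modulus (here 1.0)

# Rotating the amplitude by any global phase leaves the probability
# unchanged, so the phase acts like an unobservable state variable.
for _ in range(5):
    theta = random.uniform(0.0, 2.0 * cmath.pi)
    rotated = amp * cmath.exp(1j * theta)
    assert abs(abs(rotated) ** 2 - prob) < 1e-12

# Equivalently, write the amplitude as a pair of reals (a, b); the
# hidden phase is then a rotation mixing the two real components.
a, b = amp.real, amp.imag
assert abs((a * a + b * b) - prob) < 1e-12
print("probability is phase-blind:", prob)
```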

This hypothesis does in fact pass one mathematical test that might have
shot it
down. However, to put it to an empirical test, we would have to move into a
broader domain of process than that of standard complex quantum mechanics.
Within quantum mechanics as it stands, such a domain is not even
conceivable.
Fortunately, the enlarged domain of Markov processes mentioned in Section 2
does give us the necessary mathematical room. Within this enlarged domain,
quantum processes are a very special case, distinguished from the others by
certain basic symmetries [5]. Such symmetrical processes can coexist with and

fade into less symmetrical processes, much as the (almost) flat parts of
empty
space, as portrayed by general relativity, can fade into regions warped by
gravity.
What this shows is that we can imagine matter as an extended field-like
structure
that is mostly quantum-symmetrical with respect to subject and object, but
which
is modulated by regions of broken symmetry in which the subject-object arrow
has a preferred direction. I propose that this is the bare beginning of an
intelligible conception of the relationship between matter and mind.

It is a very bare beginning, however. The image of a “subject-object” field
with
lumps of broken symmetry rests too heavily on physics. It scarcely reaches
out
to psychology at all. For that to happen, we need something deeper.

My best present hope for this deeper thing to emerge is the prospect that
identity
theory will provide a conduit between physical science and phenomenology.
Freud once remarked that our best psychologists are our novelists and
dramatists. One of the reasons why phenomenology is so attractive is that we
can at least begin to see our beliefs and doubts and confusions and hopes
and
dreams within its austere abstract categories. A purely mathematical “field
theory” of subject and object offers no such attractions.

In support of this hope I can report that the enlarged notion of process
that
provides room for the “subject-object field” is very naturally grounded in
identity
theory [8].

In particular, identity theory reveals that the seemingly odd step of
allowing “negative probabilities” is a consequence of the symmetry among the
several varieties of identity that are needed to describe a process as an
identity
structure.

In conclusion, let me briefly return to the society of my cells, and of
yours.
Each of these cells, and there are many billions of them, is presumably a
sentient
being. This means that each of them has in some way broken the perfect
subject-object symmetry that characterizes dead matter. But how do these
minute individual breaks in subject-object symmetry add up to anything more
than a soft hiss? As an intended world, a blend of presence and absence,
what
could be the intended objective counterpart of their collective
subjectivity other
than a slightly restless void?
I said I would enter the arcane mode. Now I’ll wade even deeper into it, and
become oracular again too.
You’ve probably heard of the “entangled” state of a pair of two-state
quantum
particles that exhibit the “paradox” of EPR. One feature of these two
particles is
that if you measure their states from any (quantum) viewpoint, these
measurements always agree. Now there is also a similarly entangled state for
three or more two-state quantum particles. However, for many-particle
entangled

systems there turns out to be only one viewpoint from which their measured
states are perfectly correlated. Curiously enough, there is also a
complementary
viewpoint on such systems from which their measured states are almost
completely independent, the only departure from independence being that one
of
them is either the odd or even parity of the others, depending on whether
the
number of particles in their group is odd or even. Now if any member of a
set of
binary variables is the parity of all the others, then every member of that
set is
the parity of all the others, so this departure from independence is a
symmetrical
property of the set as a whole. Notice that in case the group has only two
members, this departure from independence is complete, i.e. the two members
are perfectly correlated, which is why the EPR pair is such an interesting
special
case.
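The parity claim is easy to verify mechanically: if any one bit of a set equals the XOR of all the others, the XOR of the whole set is 0, and then every bit equals the XOR of the rest. A quick exhaustive check over three-bit states:

```python
from itertools import product

def total_xor(bits):
    """XOR of all the bits; total_xor(bits) ^ b is the XOR of the others."""
    t = 0
    for b in bits:
        t ^= b
    return t

# Over all 3-bit states: the states where the first bit is the parity
# of the others are exactly those where EVERY bit is the parity of the
# others (both conditions say the total XOR is 0).
for bits in product((0, 1), repeat=3):
    first_is_parity = bits[0] == (bits[1] ^ bits[2])
    all_are_parity = all(b == (total_xor(bits) ^ b) for b in bits)
    assert first_is_parity == all_are_parity
print("the parity constraint is symmetric in all members")
```

For a two-member set the same constraint forces the two bits to be equal, which is the sense in which the EPR pair is the extreme special case.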
“We are one for all and all for one and we are happily busy together”. So
think
the busy bees as together they calculate where to find today’s sugar water.
And
together they can calculate very well; experiments have shown that they can
even catch on to the human experimenter’s plan of moving the sugar water
further away every day in a geometric progression [ref].

Perhaps our cells are like that. Suppose that together they are in something
resembling a quantum-entangled state. The viewpoint called “I” is, in
quantum
terms, part way between the cell’s eye view in which “we cells” are
perfectly
correlated, seeing as one and marching as one in perfect synch, and the
crowd’s
eye view of a featureless blur that “we cells” would see from the viewpoint
that
has each going its own way. This middle viewpoint would have to be the
subjective pole of the broken subject-object symmetry in the macro-region
of off-quantum matter that I call my body. This symmetry would be broken in
such a
way as to create a subject-object polarity that is literally an objective
feature of
my body, just as the polarity of “up-down” is an objective feature of space
in the
neighborhood of a massive body. From this middle viewpoint, my cells would
act
with just the kind of organized independence necessary to create a
“supersubject” who can think, namely me.

Perhaps there will someday be a physics of the “subject-object field”
capable of
carrying out experiments that reveal this polarity. This would make it
possible in
principle to construct a chain of overlapping “we’s” that reveal my inner
life to the
scientific community at large, even if I were an alien blob. The people who
talk
about smart computers may have an intimation of this, but computers are not
where it’s at, and we can now see why. Which brings us back to psi.
Of course psi is a grab-bag term, covering a variety of things we don’t
understand which may have little in common. What does seem to be common to
many of the frequently reported psi phenomena is that they resist causal
explanation. This is what led Jung to coin the term “synchronicity”,
meaning an
acausal ordering principle. In today’s science, to explain an event
literally means
to find its causes. “Why did that happen?” “Because of such-and-such”. And
yet

when you ground a causal explanation at the level of structure theory or
identity
theory, you find that it has a very specialized “shape”, which is that of a
Markov
chain. But Markov chains do not even cover quantum processes, let alone the
myriad of more exotic processes that result from breaking strict quantum
symmetry.

We have briefly seen how the need to generalize the concept of process,
which
in turn comes from the need to incorporate the insights of phenomenology
into
natural science, forces us to take “acausality” to be the norm. Causality is
the
organizing principle of the everyday practical world, but the everyday
practical
world is a very special place. Today’s conception of psi phenomena will seem
quaintly archaic in the new science. Perhaps we should retain the word
“psi”,
however, for that horizon of mystery that will always belong to the quest
for
understanding.

References
1. John Horgan, The End of Science, Helix Books, 1996.
2. Reproduced in: Paavo Laurikainen, Beyond the Atom.
3. Tom Etter, Digram States in Markov Processes, work in progress.
4. Tom Etter, On the Occurrence of Familiar Processes Reversed in Time,
1960, www.boundaryinstitute.org.
5. Tom Etter and H. Pierre Noyes, Process, System, Causality and Quantum
Mechanics, Physics Essays, Dec. 1999; also on the Boundary Institute site.
6. Robert Sokolowski, Introduction to Phenomenology, Cambridge University
Press, 2000.
7. W. V. Quine, Philosophy of Logic, Harvard University Press, 1955.
8. George W. Mackey, Mathematical Foundations of Quantum Mechanics,
W. A. Benjamin, Inc., 1963.
9. Tom Etter, Relation Arithmetic Revived, project report for the
Hewlett-Packard E-Speak project, 2000; also on the Boundary Institute website.
10. Tom Etter, Third Person Presence, work in progress.
11. Suppes etc.

OUT TAKES
Consider a very simple language. It contains the words “and”, “or”, “not”,
and the
phrases “for some” and “for all”, together with parentheses and an
inexhaustible
supply of pronouns (“it”, “this”, “that”, “these”, “those”, “this2”,
“this3”, etc.). It also
contains the word “Red”, though with a new syntax; instead of saying “This
is
Red” we say “Red(this)”. The grammar of this language is the normal minimal
grammar of these words and phrases in English, supplemented by parentheses
for grouping. Thus “(For all these)(Red(these) or not Red(these))” is a
sentence,
and a true sentence.
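The sample sentence can be checked mechanically over any finite domain: it comes out true however “Red” is interpreted. A toy evaluation in Python (the domain and the particular interpretation of Red are arbitrary choices of mine):

```python
# Interpret the pronoun "these" as ranging over a small finite domain,
# and "Red" as some one-place predicate on that domain.
domain = ["apple", "sky", "grass"]
red_things = {"apple"}  # one arbitrary interpretation of Red

def Red(x):
    return x in red_things

# "(For all these)(Red(these) or not Red(these))" is true under this
# interpretation, and indeed under every interpretation of Red.
sentence = all(Red(x) or not Red(x) for x in domain)
print(sentence)  # True
```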

Subject-object, in a mathematical context, is the contrast between the
manner in
which a thing is represented and the thing itself that is being
represented, e.g.
the coordinate system and the space. There is only one mathematical subject,
however, which is the “we” so dear to the mathematician, the “we” who
understand what he is saying.

Etc.

Appendix. The fundamental theorem of identity theory

By an axiom system we’ll mean a system formalized in first-order logic
(predicate
calculus) with an identity predicate that satisfies the principle of
substitution for
the other predicates, i.e., if x=x’ then P(x) iff P(x’). We’ll assume that
there are no
primitive constant terms, and that other terms are only introduced as
notational
conveniences, which can be eliminated by definite descriptions. We’ll also
assume there are a finite number of primitive predicates. These plus
formulae
constructed from them using AND, OR, NOT, SOME , ALL and “=” will be
referred to collectively as statements.

By a sameness predicate will be meant a two-term predicate that satisfies
the
three axioms of an equivalence relation, namely
• Sameness 1. x is the same as x.
• Sameness 2. If x is the same as y then y is the same as x.
• Sameness 3. If x is the same as y and y is the same as z then x is the
same as z.
By a pairing predicate will be meant a three-term predicate Pair(p,x,y),
read “p is
the ordered pair <x,y>”, that satisfies the following three axioms:
• Pairing 1. Pairing is universal in x and y. More formally,
∀x,y∃p(Pair(p,x,y)).
• Pairing 2. p is a function of x and y. More formally,
∀x,y,p,p’((Pair(p,x,y) and Pair(p’,x,y)) ⇒ p = p’).
• Pairing 3. x and y are functions of p. More formally,
∀x,x’,y,y’,p((Pair(p,x,y) and Pair(p,x’,y’)) ⇒ (x = x’ and y = y’)).
A rough statement of the fundamental theorem is that any axiom system with
the
power to define a pairing predicate also has the power to define three
sameness

predicates that can completely replace its original primitive predicates.
Here is a
more precise statement:

Fundamental theorem. Let S be an axiom system with primitive predicates
P1,P2,P3…Pn and identity x=y. If it is possible to define a pairing
predicate
Pair(p,x,y) in terms of the Pi together with identity, then it is also
possible to
define three sameness predicates R, C and V in terms of the Pi together with
identity, such that the Pi and x=y can in turn be defined in terms of R, C
and V.
Let’s briefly discuss the meaning and implications of this theorem before
turning
to its proof.

The main thing to notice about this theorem is that it gives us the ability
to
translate any axiomatized branch of mathematics into a language whose only
concept is sameness. Every statement, be it an axiom, a theorem or a defined
predicate, is translated into a statement about sameness. Let’s call this
an RCV
translation. Whether an RCV translation leads to new insights or is useful
for
solving mathematical problems has to be decided in particular cases. But, as
we’ll see, the RCV translation procedure is quite natural, and there is
reason to
hope that it will strip away some of the arbitrariness that mars the
set-theoretic
“encoding” of concepts like function and relation and invariant.

Another thing to notice, and this is rather strange, is that there seems to
be a
certain minimum of expressive power required in a system for it to have an
RCV
translation. I say “seems” because we don’t yet know what that minimum is;
all
we know is that the ability to define ordered pairs is sufficient. What is
strange is
that it’s not the other way around; you’d think that weaker languages would
be
easier to translate. It would appear, though, that in order to translate
certain
simple concepts at all, a certain complexity of “entanglement” is required
among
the sameness predicates, and this is only found when they are abstracted
from a
relatively expressive language. Of course a simple system S can always be
translated by first translating a more complex system of which it is a part.
However, when this is done, R, C and V are not intrinsic to S, and there is
no
direct way to translate S without first rising to a more complex level.
What sort of systems can define pairing? For a start, any reasonable
version of
set theory can. In most versions, <x,y> is defined as {{x,y},{x}}. However,
other
set-theoretic encodings are possible; for instance, Suppes [11] defines an
ordered
n-tuple as a function on the first n natural numbers, which would make
<x,y> a
function on {1,2}. Arithmetic can also define pairing; for instance we can
define Pair(p,m,n) to mean p = 2^m·3^n. Clearly this encoding satisfies the
functionality axioms
for pairing, and the fundamental theorem of arithmetic guarantees that it
satisfies
Pairing 3.
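Both encodings can be tested directly: the Kuratowski pair {{x,y},{x}} (modeled below with frozensets) and the arithmetic pair 2^m·3^n each determine their components uniquely, which is the content of Pairing 3. A small sketch (the helper names are mine):

```python
def set_pair(x, y):
    """Kuratowski pair {{x, y}, {x}}, modeled with frozensets."""
    return frozenset([frozenset([x, y]), frozenset([x])])

def arith_pair(m, n):
    """Arithmetic pair p = 2^m * 3^n for natural numbers m, n."""
    return 2 ** m * 3 ** n

def arith_unpair(p):
    """Recover (m, n) from p = 2^m * 3^n by dividing out the primes;
    unique factorization is what guarantees Pairing 3 here."""
    m = n = 0
    while p % 2 == 0:
        p //= 2
        m += 1
    while p % 3 == 0:
        p //= 3
        n += 1
    return (m, n)

# Pairing 3: the components are functions of the pair.
assert set_pair(1, 2) != set_pair(2, 1)   # order is recoverable
assert arith_pair(2, 1) == 12             # 2^2 * 3^1
assert arith_unpair(arith_pair(4, 7)) == (4, 7)
print("both encodings recover their components on these examples")
```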

Conversely, what kind of systems can’t define pairing? Axiomatic Boolean
algebra almost certainly can’t, and the same goes for other austere systems
such as linear orders.
There are certain partial orders, though, that can.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T52749d62f73acb31-Mc3874483fc53c023e3793b13