Re: Are we simulated by some massive computer?

2004-05-11 Thread Eric Hawthorne
I saw the documentary movie Tibet: Cry of the Snow Lion the other day.

In one scene, a group of monks is sitting in a circle, and the Dalai Lama is
overseeing.

The monks are industriously and methodically placing individual tiny
coloured beads (there are maybe 4 or 5 colours)
around the perimeter of an enormous circular mandala pattern (made of tens
of thousands of beads). The pattern has grown to almost two metres in
diameter, and it
features an extraordinarily elaborate kaleidoscopic pattern with perfect
radial symmetry,
and large complex patterns built on tiny patterns.

If someone places a single bead out of its proper place in the pattern,
the pattern will be distorted and it will not be possible to maintain the
growing recursive pattern. But if every bead is placed correctly, the
perimeter can grow by one bead width while maintaining the order of the
pattern, and the process can repeat, growing larger and larger.

OBSERVABLE REALITY IS LIKE THE MANDALA. EVERYTHING MUST BE
JUST SO, TO MAINTAIN THE OBSERVABLE ORDER OVER A LARGE
PERIMETER. ALMOST EVERY CHOICE (ABOUT WHERE TO PLACE BEADS) OR ABOUT
PROGRAM NEXT STEPS, LEADS TO CHAOS RAPIDLY. A SELECT FEW PATHS
CAN MAINTAIN THE ORDER.
P.S. Later in the movie, they return to this scene, with the monks around
an enormous, wondrously complex circular pattern. A monk takes a wooden
yardstick, and with a few brief sweeps, obliterates the pattern, leaving
chaos. The chaos, the sand of beads, is cleared to one side, and a monk
places a single bead in the centre of the circle.

That last part is the real lesson of the mandala.

Eric

George Levy wrote:

Bruno,

Bruno Marchal wrote:

And a priori the UD is a big problem because it contains too
many histories/realities (the white rabbits),
and a priori it does not contain an obvious means
to force those aberrant histories into
a destructive interference process (unlike
Feynman histories).


It may be that using the observer as a starting point will force White
Rabbits to be filtered out of the observable world.


George






More on mandalas

2004-05-11 Thread Eric Hawthorne
The other thing to note about mandalas is that there can be more than 
one possible pattern
that would maintain order and recursive complexity as it expands outward 
(i.e. forward in time).
However, an observer subpattern embedded in one mandala (and created by 
ITS rules of order)
can only see whatever order is in its own mandala pattern.

A different mandala pattern, with slightly different rules, or
with a different initial pattern, might arguably contain a White Rabbit
subpattern, but alas the White Rabbit cannot be seen by our first observer,
and vice versa, because the attempt to see the contents of another
mandala pattern would necessarily destroy our own mandala pattern.

Whatever computational paths would destroy the self-consistent mandala
pattern of our universe are
inherently unobservable by us. One way of looking at it is that light
seen by an observer A can only
illuminate A's universe pattern. That's kind of a definition of light, 
and of A, and of universe pattern,
all at once.



Re: Definition of Observers

2004-04-27 Thread Eric Hawthorne




pattern
  |
physical pattern (a constraint on the arrangement of matter and energy
in space and time)
  |
physical process (a physical pattern whose essence is some regular, often
localized, yet complex form of change; can be described as comprised of
states, events, and subprocesses)
  |                                |
physical computational process     physical sensing process
  |                                |
  +--------------------------------+
                  |
mind-of-intelligent-observer


The | relation is "is-a" inheritance.

Does that help successfully communicate what I mean by a pattern that
computes and stores information about
its surroundings?

Eric
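The "is-a" chain above can also be rendered as code. A minimal sketch (my illustration, not from the original post; all class names are invented to mirror the diagram):

```python
# Illustrative only: these class names mirror the "is-a" diagram in the
# post; nothing here is from the original text.

class Pattern:
    """Any constraint on possible configurations."""

class PhysicalPattern(Pattern):
    """A constraint on the arrangement of matter and energy in space and time."""

class PhysicalProcess(PhysicalPattern):
    """A physical pattern whose essence is regular, often localized change;
    describable as states, events, and subprocesses."""

class PhysicalComputationalProcess(PhysicalProcess):
    """A physical process that transforms and stores information."""

class PhysicalSensingProcess(PhysicalProcess):
    """A physical process that registers information about its surroundings."""

class MindOfIntelligentObserver(PhysicalComputationalProcess,
                                PhysicalSensingProcess):
    """Inherits from both branches: it computes AND senses."""

# Every mind is-a physical process, and hence is-a (physical) pattern:
assert issubclass(MindOfIntelligentObserver, PhysicalProcess)
assert issubclass(MindOfIntelligentObserver, Pattern)
```

The multiple inheritance at the bottom captures the diagram's two branches rejoining at the mind.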

Brent Meeker wrote:

Eric Hawthorne wrote

  

An observer is a pattern in space-time (a physical process) which engages
in the processing and storage of information about its surroundings in
space-time.
  
This seems like a failure to communicate because of mixing levels
of description. If you're going to define "observer" as a pattern
you need to say what kind of pattern it is.  If you skip to a
functional, "processing and storage" or intentional "engages in"
level of description then you introduce terms with no definite
relation to patterns.

Brent Meeker


  





Re: Definition of Observers

2004-04-26 Thread Eric Hawthorne
An observer is a pattern in space-time (a physical process) which 
engages in the processing and storage
of information about its surroundings in space-time. Its information 
processing is such that the observer
creates abstracted, isomorphic, representative symbolic models of the 
structures and processes surrounding
it, as well as other, purely abstract informational model structures. 
The observer has subprocesses of itself
which process its representative models in such a way as to model, 
simulate, or calculate relations between
informationally connected local parts of the space-time surroundings of 
the observer. These cognitive
subprocesses also model, simulate, or calculate relations between the 
observer process itself and its
surrounding structures and processes in space-time.

An observer is constrained to exist as a substructure of an
informationally self-consistent medium,
and a medium in which notions of change, locality, and metric space and
time can be defined.

Further, an observer is constrained to exist in a locale which has a
thermodynamic range of variation,
and a fine-grained structural variety suitable for the random
coalescence of structures (slow localized processes)
which can attain autopoietic (pattern-self-sustaining) properties
relative to alternative patterns of organization of
matter and energy. As a restatement and refinement of that constraint:
the locale of the observer must be suitable
for the emergence and growth of stable, organized complex systems
with adequate degrees of freedom to explore
many possibilities for their form and function. Only in such a
constrained environment could an observer (a
general-information-processing-and-representing-and-abstracting process)
arise spontaneously and maintain itself
long enough to do meaningful observation of its surroundings.

An observer is constrained to perceive only informationally 
self-consistent states (with respect perhaps to some
notion of locality and metric space-time) that its medium exhibits. It is
conceivable that the medium exhibits other, informationally mutually 
inconsistent states, but any aspect of the
extent of these other pseudo-states of the medium can in principle
not be perceived by any information receiver and processor  such as the 
observer.

Hal Ruhl wrote:

I would like to explore just exactly what the various members of the 
list mean by observer as in the following from Wei Dai's post.

Hal




Re: The difference between a human and a rock

2004-04-17 Thread Eric Hawthorne
How does a human differ in kind from a rock?

- Well, both are well modelled as slow processes (i.e. localized
states and events) in spacetime.
- A process is a particular kind of pattern of organization of some
subregion of spacetime.
- We share being made of similar kinds of matter particles that stay
close to each other in spacetime for
some finite time period, and some finite spatial extent.

Oh, but you said how do we differ?

Well, a human organism is a sub-unit of a longer-lived species pattern
within an organic emergent ecosystem pattern.
A rock does not appear to have that much complexity of form and
autopoietic function.

A rock is one of those kinds of local spacetime patterns or systems that
doesn't have much choice about how it is.
The laws of physics, the nature of the rock's components, and the
thermodynamics of its vicinity are such that it
pretty much collects into how it's going to be at some time, then is
physically constrained to stay just that way,
at macro scales anyhow, for a long period of time. Of course, being a
big physical process pattern subject to
the laws of thermodynamics, it is, actually, changing, and usually
dissipating (disorganizing), just very, very slowly.

A human organism pattern exists at a thermodynamic range
internally, and in a thermodynamic regime in its
environment, that allows for more options for how (and, e.g., where) to
be (over short time scales). Interestingly,
this makes for the presence of all kinds of other similar organic 
patterns with options, and interesting behaviours
(like eating you for dinner, or infecting you and eating your cell 
structure.) In other words, this thermodynamic
regime, and the particular kinds of atoms and chemical bonds in 
ecosystems, make for active competition for
which should be the dominant pattern of organization of matter and 
energy in the vicinity. i.e. You can't always
just be a rock, because there might be a creature with a hammer wanting 
to break you down into cement.
Or you can't live for ever, as an organism, because something else wants 
to re-pattern your matter and energy;
that is, the matter and energy your pattern has competed successfully to 
borrow for its form for a while.

Clear as oozing primordial subterranean sulphur-vent mud?

Ok, but here's the interesting part of the story. Because there are
options for how to be, i.e. how to hold together
at our organic ecosystem thermodynamic regime, there is
pattern-competition for who is the most autopoietic
(i.e. which forms of matter and energy collection can hold together best,
at the expense of others).

And it turns out that life-like ecosystem patterns, species patterns, 
and organism patterns win out for a time,
precisely because their main function is autopoiesis, and they 
eventually, through natural selection, get very
good at it.

And it may turn out that the way you survive best as a pattern in
spacetime, assuming you have a certain
thermodynamic range to work with, is to store inside yourself
INFORMATION about that which is
outside yourself and nearby, i.e. about your environment. In other words:
pattern, if you want to live, get
out there and start RE-PRESENTING aspects of your environment WITHIN
YOURSELF (in some
partly abstract form within some aspect of your own form.)
Eventually, if you do that, simple representation
of your environment ("Ouch, that hurt. I'm going to flail the other way
outa here." or
"Hmmm, my complex molecules like the smell and molecular fit of YOUR
complex molecules")
will give way to complex representation within the organism of its
environment, and complex action plans
to be carried out to protect the organism's (and its kin's) pattern from
nastier aspects of the environment.
So we get "Hmmm. I think that guy and his army is out to get me and
mine. I think I will pre-emptively
strike on that other guy's country because he vaguely looks like the
first guy." Ok, bad example.
Or you get "Hmmm. What an intelligent (accurate
environment-representer), capable (effective environment
modifier and pacifier), and beautiful (pattern-form-average-conformant)
woman she is. I'll ask her to marry me."

Or something like that.

And that's the major difference between humans and rocks. Our
thermodynamic regime necessitates that
we navigate options for our existence/non-existence as stable patterns
by representing informationally, then
navigating and affecting, our surrounding space, time, matter, and
energy forms.

Eric

Hal Ruhl wrote:

Hi Stephen:

Observers:

In this venue dances interact and change each other discontinuously by 
mutual collision or by exchanging smaller dances.

How then does a human differ in kind from a rock?  Should we expect 
them to differ in kind?

Yours

Hal




Re: Computational irreducibility and the simulability of worlds

2004-04-17 Thread Eric Hawthorne


Hal Finney wrote:

How about Tegmark's idea that all mathematical structures exist, and we're

living in one of them?  Or does that require an elderly mathematician,
a piece of parchment, an ink quill, and some scribbled lines on paper in
order for us to be here?
It seems to me that mathematics exists without the mathematician.
And since computer science is a branch of mathematics, programs and
program runs exist as well without computers.
 

Ok, but real computers are math with motion. You have to have the
program counter touring
around through the memory in order to make narrative sense of anything
happening.

Mathematics, being composed of our symbols, is an abstract
re-presentation. I think what Tegmark
must be saying is that something exists which is amenable to
description by all self-consistent
mathematical theories (logical sentence sets), and by no inconsistent
theories. To me, this is just
equivalent to saying that all possible configurations of differences 
exist and that any SAS that
represents its environment accurately (e.g. via abstract mathematics) is 
constrained, by its own
being part of the information structure, to only perceive 
self-consistent configurations of differences
as existing. Self-consistency of mathematical theory, as it translates 
from the representation level
to the represented level, just means that things perceived can only be 
one way at a time, and that's
the kind of thing that a consistent mathematical theory describes.



Re: The difference between a human and a rock

2004-04-17 Thread Eric Hawthorne


Hal Ruhl wrote:

I see nothing in the rest of your post that makes me believe there is
a difference of kind between rocks and humans.


I believe it is a mistake to concentrate only on the reductionist theory
of the very small, and to assume that there
is nothing else interesting about systems that are larger. Theories of
spacetime and matter's unit composition
are not the be-all and end-all. To explain emergent system behaviour,
you have to have a theory whose language
is a vocabulary of various kinds of complex properties. This is because 
emergent systems, as one of their
interesting properties, do not depend on all of the properties of their 
substrate. They only depend on those properties
of the substrate which are essential to the interaction constraints that 
determine the macro behaviour of the system.
Thus, in theory, you can change the system's substrate and still have 
the same complex system, at its relevant
level of description.

However, that being said, I think, Hal, that we're on a similar
wavelength re: fundamental info physics.
Ref. my previous everything-list posts on the subject:

Riffing on Wolfram http://www.escribe.com/science/theory/m4123.html
Re: The universe consists of patterns of arrangement of 0's and 1's? 
http://www.escribe.com/science/theory/m4174.html
Re: The universe consists of patterns of arrangement of 0's and 1's? 
http://www.escribe.com/science/theory/m4183.html
Constraints on everything existing 
http://www.escribe.com/science/theory/m4412.html
Re: Constraints on everything existing 
http://www.escribe.com/science/theory/m4414.html
Re: Constraints on everything existing 
http://www.escribe.com/science/theory/m4427.html
Re: Running all the programs 
http://www.escribe.com/science/theory/m4525.html
Re: 2C Mary - How minds perceive things and not things 
http://www.escribe.com/science/theory/m4534.html
Re: are we in a simulation? http://www.escribe.com/science/theory/m4566.html
Re: Fw: Something for Platonists 
http://www.escribe.com/science/theory/m4594.html
Re: Why is there something instead of nothing? 
http://www.escribe.com/science/theory/m4896.html
Re: Why is there something instead of nothing? 
http://www.escribe.com/science/theory/m4900.html
Re: Is the universe computable? 
http://www.escribe.com/science/theory/m4950.html

Warning: my vocab in these posts is a little informal. Go for the
fundamental concepts if you can get them out of the writing.
Cheers, Eric



Re: Gravity Carrier - could gravity be push with shadows not pull?

2004-02-26 Thread Eric Hawthorne
Caveat: This post will likely demonstrate my complete lack of advanced 
physics education.

But here goes anyway.

Is it possible to model gravity as space being filled with an 
all-directional flux of inverse gravitons? These would be
particles which:
1. Zoom around EVERYWHERE with a uniform distribution of velocities (up 
to C in any direction).
2. Interact weakly with matter, imparting a small momentum to matter (in 
the direction that the iGraviton
was moving) should they collide with a matter particle. The momentum 
comes at the cost that the
iGraviton which collided with mass either disappears or at least 
reduces its velocity relative
to the mass's velocity.

So note that:
1. If there were just a single mass, it would not receive any net
momentum from collisions with iGravitons,
because iGravitons with an even distribution of velocities impact it
from all sides with equal probability,
no matter what the mass's velocity. (This is true because C is the same
for each mass no matter how
it's travelling, so an even distribution of velocities up to C is also
the same from the perspective of each
mass regardless of its velocity.)

2. If two masses are near each other, they shadow each other from the 
flux of iGravitons which
would otherwise be impacting them from the direction in between them. 
This shadowing would
be proportional to the inverse square of the distances between the 
masses, and would be proportional
to the probability of each mass colliding with (i.e. absorbing) 
iGravitons, and this probability would
be proportional to the amount of each mass.   
(So the iGraviton shadow between the masses would have properties like a 
gravitational field).

3. The mutual shadowing from momentum-imparting flux from all directions 
means that net momentum
would be imparted on the masses toward each other (by nothing other than 
the usual collisions
with iGravitons from all other directions.)

4. The deficit of iGravitons (or deficit in their velocity) in between
absorptive masses
could be viewed as inward curvature of space-time in that region. The
amount or velocity distribution
of iGraviton flux in a region could correspond in some way with the
dimensionality of space in that region.

I find this theory appealing because
1. its fundamental assumption for the causation of gravity is simple (a
uniformly-distributed-in-velocity-and-density
flux of space-involved (i.e. space-defining) particles.)
2. The paucity of iGravitons (or high iGraviton velocities) in a region 
corresponding to inward-curving space
is an appealingly direct analogy. You can visualize iGravitons as 
puffing up space and a lack of them
causing space there to sag in on itself.

I'd be willing to bet that someone has thought of this long before and 
that it's been proven that
the math doesn't work out for it. Has anyone heard of anything like 
this? Is it proven silly already?

Cheers,
Eric
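For what it's worth, this is essentially the old Fatio/Le Sage "push gravity" idea, and the inverse-square behaviour of the shadow in point 2 is easy to check numerically. A rough sketch (my own illustration, not from the post; it assumes a fully absorbing sphere of radius R seen from distance r):

```python
import math

def shadow_fraction(R, r):
    """Fraction of an isotropic iGraviton flux arriving at a point that is
    blocked by a fully absorbing sphere of radius R at distance r (r > R):
    the solid angle subtended by the sphere, divided by 4*pi."""
    half_angle = math.asin(R / r)                        # angular radius of the sphere
    solid_angle = 2 * math.pi * (1 - math.cos(half_angle))
    return solid_angle / (4 * math.pi)

# For r >> R the shadow (and hence the net inward push) falls off as
# 1/r^2, like Newtonian gravity: f * r^2 is roughly constant.
for r in (10.0, 20.0, 40.0):
    f = shadow_fraction(1.0, r)
    print(r, f, f * r * r)
```

(As Eric suspects, the idea is old and known to fail: among other problems, absorbing the flux would deposit enormous heat in masses, and a finite-speed flux produces drag on moving bodies.)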



Re: Gravity Carrier - could gravity be push with shadows not pull?

2004-02-26 Thread Eric Hawthorne


Hal Finney wrote:

Again, this is not really a multiverse question.  I hate to be negative,
but there are other forums for exploring nonstandard physics concepts.

Alright, I take your chastisement somewhat, while also grumbling a bit
about list-fascism.

For one thing, it's possible that such a model, were it a valid
reformulation, might be easier to equate
to a computational/information-theoretic model of the
universe/multiverse (which is in list-scope) than
the standard formulation, in that it (the push model) gives a
discretizable, local-interaction-based
model for the curvature of space-time.

Eric





Re: Fw: Gravity Carrier - could gravity be push with shadows not pull?

2004-02-26 Thread Eric Hawthorne


Eric Cavalcanti wrote:

But the main flaw, if I recall it, is that objects moving around in space
would feel a larger flux of 'iGravitons' coming against the direction
of movement, causing a decrease in velocity. So much for inertia...
 

Ok, but let's say (for fun) that the iGravitons were all moving at C in
all directions with uniform density.
Since C is perceived the same by an object no matter what the
object's velocity, there would
be no additional iGraviton drag against the direction of the object's
motion, because the iGravitons
coming up from behind would still be approaching at C.
This is exactly the property I was trying to convey about the
iGravitons: that they don't
cause drag no matter the velocity of the mass.

Maybe that's just impossible, but there's something very weird about C,
remember.



Continuation of group selection and emergence discussion

2004-02-11 Thread Eric Hawthorne






Tianran Chen wrote (in private reply to my earlier post, but I thought
this discussion 
generally interesting, hope that's ok Tianran):

i do agree that many very valuable points of view have been
criticised unfairly due to their 'group selection' nature.
however, i am quite convinced that there are fundamental
problems embedded in it.

first of all, i think to look at evolution through an
individual or group point of view is always dangerous, since
they are not the fundamental unit of evolution. evolution
(in the sense in which it is referred to in most biology
contexts) DOES NOT manipulate individuals NOR species directly;
instead, it is always the small packets of genetic
information (such as genes or memes) that are being
manipulated. so a safe theory about evolution should
always be able to be translated into language in terms of genes
or memes, and if a theory cannot, then it is not safe to use.
and this is exactly the problem of 'group selection'.
  

But just because it is the genes that are being manipulated DIRECTLY does
not mean that
other factors are not important to understanding what's going on, and
understanding what
factors may be most ESSENTIALLY driving a particular evolutionary
direction.
For example, biology now understands that it is not single genes, but
sets of genes acting together in
regulatory networks, that are the fundamental units of "functionality"
and therefore, of
"adaptive or maladaptive functionality" in organisms.



  
BECAUSE THE EVOLVABLE "GOAL" IS NOT SIMPLY TO MAXIMIZE THE
CHANCE OF SURVIVAL OF AN ORGANISM OF THE NEXT SHORT-TERM ENCOUNTER.
THE "GOAL" IS TO MAXIMIZE THE PROBABILITY OF SURVIVAL OF THE SUM TOTAL
OF ALL OF THE ORGANISM'S ENCOUNTERS UP TO WHEN THE ORGANISM REPRODUCES.

  
  
disagree. a gene's 'goal' is to maximize the availability of
its own copies in the entire gene pool. so to look at it at the
individual level, it implies that an individual is more
likely to behave in such a way that it tends to maximize the
chance of some gene to replicate. and 'some gene' here
refers not only to genes in its own body, but also to genes in
others' bodies. one thing to notice here is that very often, an
individual will try to do so at the cost of its own chance to
breed. such behavior can be found commonly in social
animals, symbiosis systems, etc.

again, here i mention the 'goal' of a gene. but what i
really mean is that due to the selection pressure, genes
that have survived selection behave as if they had
the 'goal', although they are really blind about the future. so
the 'goal' is simply a short-hand notation; do not take it
literally.
  

You'll note that I was the one who started the practice of putting
"GOAL" in quotes, indicating
that it is not to be taken literally, but as a stand-in or short-hand
for a complex set of factors
that lead to a tendency of evolution to support one kind of trait over
another.

valid theories have to be general enough to explain all sorts
of things in the domain, not just part of them. and now
better theories do exist (such as the selfish gene, and memetic
evolution). so 'group selection' SHOULD be marked as
obsolete.
  


I disagree with your last clause.
To me, a fan of general theories of emergent complex ordered systems,
of which life's evolution is only one example, one of the most
fundamental questions
is what scope-boundary best defines the most
"interesting" or
"systematic" or "robust" system. What I mean by this is that we have a
degree of free rein
about which elements (of the world, universe, what have you) we
choose to include in
our definition of a system. Or in other words, for ANY particular set
of elements that have something
to do with each other, some crazy guy will have the right to call THAT
collection a perfectly 
valid system; his most important system perhaps. So you can imagine
possible "system-scopes"
or "system-boundaries" as being an infinitely variable set of
concentric spheroids overlapping,
Venn-diagram-like, being at wide ranges of spatiotemporal scales, and
including/excluding 
different elements.

Faced with such a scenario, one is forced to ask: are there any
universal principles that would
let me decide which "boundary-spheroid" is the most "systematic" or
consideration-worthy system
(at this spatiotemporal scale in this vicinity, anyway)?

Or, both generalizing and specializing a bit; given that, for example
natural systems tend to be
"fractally functional"; that is, comprised of nested layers of
smaller-scale functional systems, we can
ask: how (at what scale boundaries) do we best divide this natural
system up into nested layers (i.e. where,
if anywhere, are the best-defined layer boundaries, those layer
boundaries that are best at separating
distinct functionalities), and for EACH spatio-temporal-scale layer of
this natural system, what are the
best-defined system-scope spheroids? What are the system-scope
spheroids that 

Re: measure and observer moments

2004-02-06 Thread Eric Hawthorne





Given temporal proximity of two states (e.g. observer-moments),
increasing difference between the states will lead to dramatically
lower measure/probability
for their co-occurrence as observer-moments of the same observer (or
co-occurrence in the
same universe; is that maybe equivalent?).

When I say two states S1, S4 are more different from each other whereas
states S1, S2 are less different
from each other, I mean that a complete (and yet fully abstracted, i.e.
fully informationally compressed) informational
representation of the state (e.g. RS1) shares more identical
(equivalent) information with RS2 than it does with RS4.

This tells us something about what time IS. It's a dimension in which
more (non-time) difference between
co-universe-inhabiting states can occur, with a particular probability
(absolute measure), as the states
get further from each other in the time of their occurrence. Things
(states) which were (nearly) the same can only
become more different from each other (or their follow-on most-similar
states can, anyway) with the passage
of time (OR with lower probability in a shorter time).

Maybe?

Eric 
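The "shared information between fully compressed representations" idea has a concrete, computable stand-in: normalized compression distance. A rough sketch (my illustration, not from the post), using zlib as a crude approximation to an ideal compressor:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for informationally
    similar inputs, near 1 for unrelated ones."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Three toy "states": S2 differs from S1 by one byte, S4 is structurally
# quite different.
s1 = b"abcdefgh" * 100
s2 = b"abcdefgh" * 99 + b"abcdefgX"
s4 = bytes(range(256)) * 3

# S1's compressed representation shares much more with S2's than with S4's:
print(ncd(s1, s2), ncd(s1, s4))
```

On Eric's reading, states close in time should have small distance, and the distance achievable with a given probability grows with temporal separation.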

Saibal Mitra wrote:

  - Original Message -
From: Jesse Mazer [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, February 05, 2004 12:19 AM
Subject: Re: Request for a glossary of acronyms


  
  
Saibal Mitra wrote:

This means that the relative measure is completely fixed by the absolute
measure. Also the relative measure is no longer defined when probabilities
are not conserved (e.g. when the observer may not survive an experiment as
in quantum suicide). I don't see why you need a theory of consciousness.

The theory of consciousness is needed because I think the conditional
probability of observer-moment A experiencing observer-moment B next should
be based on something like the "similarity" of the two, along with the
absolute probability of B. This would provide reason to expect that my next
moment will probably have most of the same memories, personality, etc. as my
current one, instead of having my subjective experience flit about between
radically different observer-moments.

  
  
Such questions can also be addressed using only an absolute measure. So, why
doesn't my subjective experience ''flit about between  radically different
observer-moments''? Could I tell if it did? No! All I can know about are
memories stored in my brain about my ''previous'' experiences. Those
memories of ''previous'' experiences are part of the current experience. An
observer-moment thus contains other ''previous'' observer moments that are
consistent with it. Therefore all one needs to show is that the absolute
measure assigns a low probability to observer-moments that contain
inconsistent observer-moments.



  
  
As for probabilities not being conserved, what do you mean by that? I am
assuming that the sum of all the conditional probabilities between A and all
possible "next" observer-moments is 1, which is based on the quantum
immortality idea that my experience will never completely end, that I will
always have some kind of next experience (although there is some small
probability it will be very different from my current one).

  
  
I don't believe in the quantum immortality idea. In fact, this idea arises
if one assumes a fundamental conditional probability. I believe that
everything should follow from an absolute measure. From this quantity one
should derive an effective conditional probability. This probability will no
longer be well defined in some extreme cases, like in case of quantum
suicide experiments. By probabilities being conserved, I mean your condition
that ''the sum of  all the conditional probabilities between A and all
 possible "next" observer-moments is 1'' should hold for the effective
conditional probability. In case of quantum suicide or amnesia (see below)
this does not hold.

  
  
Finally, as for your statement that "the relative measure is completely
fixed by the absolute measure", I think you're wrong on that, or maybe you
were misunderstanding the condition I was describing in that post.

  
  
I agree with you. I was wrong to say that it is completely fixed. There is
some freedom left to define it. However, in a theory in which everything
follows from the absolute measure, I would say that it can't be anything
else than P(S'|S) = P(S')/P(S).


Imagine
the multiverse contained only three distinct possible observer-moments, A,
B, and C. Let's represent the absolute probability of A as P(A), and the
B, and C. Let's represent the absolute probability of A as P(A), and the
conditional probability of A's next experience being B as P(B|A). In that
case, the condition I was describing would amount to the following:

P(A|A)*P(A) + P(A|B)*P(B) + P(A|C)*P(C) = P(A)
P(B|A)*P(A) + P(B|B)*P(B) + P(B|C)*P(C) = P(B)
P(C|A)*P(A) + P(C|B)*P(B) + P(C|C)*P(C) = P(C)
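This condition is exactly the statement that the absolute measure P is a stationary distribution of the matrix of conditional probabilities. A toy numerical check (all numbers invented for illustration, not from the thread):

```python
# Invented 3-state example: T[i][j] = P(j | i), the conditional
# probability that observer-moment i is followed by j. Each row sums
# to 1 (every moment has SOME successor, per the quantum-immortality
# assumption discussed in the thread).
T = [[0.7, 0.2, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.2, 0.7]]

# Candidate absolute measure over the moments A, B, C.
P = [1/3, 1/3, 1/3]

# Jesse's condition: sum_i P(j|i) * P(i) = P(j) for every j.
lhs = [sum(T[i][j] * P[i] for i in range(3)) for j in range(3)]
assert all(abs(l - p) < 1e-12 for l, p in zip(lhs, P))
```

(This particular T happens to be doubly stochastic, so the uniform measure satisfies the condition; a generic T would require solving for its stationary P.)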

Re: More on qualia of consciousness and occam's razor

2004-02-01 Thread Eric Hawthorne


Stathis Papaioannou wrote:

; you might even be able to read the brain, scanning for neuronal 
activity and deducing correctly that the subject sees a red flash. 
However, it is impossible to know what it feels like to see a red 
flash unless you have the actual experience yourself.

So I maintain that there is this extra bit of information -subjective 
experience or qualia - that you do not automatically have even if you 
know everything about the brain to an arbitrary level of precision. 
Moreover, it cannot be derived even in theory from the laws of physics 
- even though, of course, it is totally dependent on the laws of 
physics, like everything else in the Universe.

I'll grant you that the subjective experience of red etc. cannot be
derived from a theory of physics.
However, by Occam's Razor we can say that the qualia that other people 
experience are the same as those that we experience.
The reasoning is as follows:

The theorem that the qualia are the same is justifiable on the simple 
theory that near-identical physical brain structure and function
(amongst humans) leads to near-identical perception of the qualia of 
consciousness.

What simple theory, consistent with the rest of our scientific
knowledge, would justify the claim that the qualia are significantly
different? Right now, in the absence of such a
qualia-difference-explaining theory, and with a plausible, simple,
non-revolutionary, and reasonable theory of qualia-sameness, the
scientific-thinking default assumption should be qualia-sameness.


Long aside: Parallel example:
A similar Occam's Razor argument can explain why the 
scientific-thinking default assumption should be in the non-existence
of God, except for the undeniable existence of God as a human abstract 
concept, like the concept of Nation-State.

There is a simple and reasonable theory of intelligent co-operating 
agent behaviour which runs something like this:
1. We do a lot of reasoning about how agents, and in particular animal 
agents and intelligent human agents, affect
the outcomes in the world.
2. We do a lot of reasoning about how to influence these agents to act 
on the world as we would wish.
3. An unknown-agent proxy is an easy-to-understand extension to such 
an agent-behaviour and effects theory.
4. We can extend the same attitudes of obeisance and desire to please to 
the unknown-agent-proxy as we would
to any powerful animal agent or powerful human (king, warlord) agent. If 
we do (we would reason), we may
obtain the unknown-agent-proxy's favour and the outcome of 
unknown-agency events might come out in our favor.

Aside:
Note that the fundamental fallacy in the ancients' God-theory here is 
the ascription of unknown-cause events
as being the effects of intelligent agency. This is an example of a 
theory that is elegant, simple, and wrong. Physical
science and mathematics have by now provided alternative explanations 
(which have the advantage of being consistent with each other
and with observation, i.e. of being logical and scientific) for the vast 
majority of the types of events: cosmic and planetary
origin, life and human origin, weather, illness, love (reflection 
and elaboration of mating instincts into stories at the
conscious level of the brain, in an information-processing model of 
brain/mind), crop failure, failure or success of various
forms of psychological make-up and group-organizational behavior 
(reasons that kings might be successful or not), etc.

5. Humans with intellect and other leadership qualities would also see 
how to harness the power implicit in the populace's
fear of and desire to be obeisant to the unknown-agent-proxy (i.e. the 
god). By proclaiming that they have special
access to the god, knowledge of its intentions, ability to influence it 
etc. they can harness the psychologically based
power that the god has over the believers' actions, and turn it into 
power that they themselves (the priesthood, the
god-kings or just kings-by-divine-right) have over the populace. 
Convenient. Too convenient not to result in a whole
entrenched societal structure of rules and hierarchical authority 
connected ultimately to the authority of the god itself.

6. Such an organised religion structure, or god-empowered government 
structure, if it succeeds in organizing
people for an extended period of time, as it seems they did, would 
naturally tend to take on a life of its own, a
self-reinforcing aspect, an autopoietic function as one of its 
functions. This self-preservation subfunction of
the god-empowered governance organization would take the form of 
religious education about the great history
of beneficial acts and mercies and wisdoms conferred on the people over 
their glorious history by the god via
the god-henchmen.

In my view, the governance aspect, that is, the societal cohesion and 
organization aspect, always was the genuine
essence of organized religions, and also of divine-right governments. 
The god-basis was just a 

Re: More on qualia of consciousness and occam's razor - tiny addendum

2004-02-01 Thread Eric Hawthorne


Eric Hawthorne wrote:

6. Such an organised religion structure, or god-empowered government 
structure, if it succeeds in organizing
people for an extended period of time, as it seems they did, would 
naturally tend to take on a life of its own, a
self-reinforcing aspect, an autopoietic function as one of its 
functions. This self-preservation subfunction of
the god-empowered governance organization would take the form of 
religious education about the great history
of beneficial acts and mercies and wisdoms conferred on the people 
over their glorious history by the god via
the god-henchmen.
I should add that the other half of the autopoietic (self-preservative) 
subfunction of the
god-fear-and-god-obeisance-empowered organization is of course the 
enforcement branch: Mechanisms would
develop for enforcement-of-membership, rule-adherence, and enforcement 
that members conform to (express) the
orthodox forms (orthodox in that particular organization of course) of 
belief in the deity.

Thus we have religious intolerance; we have shunning, outcasting, and 
excommunication; we have the dehumanization of adherents to other 
(incorrect and defiant) religious orthodoxies as worthless infidels and 
enemies; and also, of course, the stigmatization and devaluing (not to 
mention torture and execution) of those who profess not to believe in 
the god (or any god) at all.

If I were living in a time (or a present-day place) of overwhelming 
and brutal dominance of god-empowered governance
organizations (e.g. everywhere before the beginning of the last century, 
in a number of fundamentalist-Islamic
states, and perhaps southern US states, today), I would have to profess 
belief in God to survive, and just hope that no one heard
the quotation marks in my statement, which indicate belief in the power 
of the god-myth concept in human psychology and
thus in human society.



Flaw in denial of group selection principle in evolution discovered?

2004-02-01 Thread Eric Hawthorne
Blast from the recent past.
This is pertinent to the previous discussions on evolution
as a special case of emergent-system emergence.
It was argued that group selection effects have been discredited in
evolutionary biology. I counterargued that denying the possibility of
a selection effect at each more-and-more-complex system level in
a multi-layer, complex-ordered emergent system (such as ecosystems,
biological species, etc.) denies the likelihood of spontaneous emergence of
those complex systems at all.
I think I've found the source of the confusion regarding group selection
effects. It goes like this:
A species can evolve a group-benefit behaviour so long as the development
of the behaviour does not, on average, reduce the reproductive success 
of individuals that engage in it, and so long as the behaviour does
confer, on average, a benefit to the reproductive chances of each 
individual in the well-behaving group.

The key is in how we interpret "average". The question is whether an 
individual organism always acts in each short-term encounter in a manner 
which maximizes its chance of survival-to-breeding-age IN THAT ENCOUNTER, 
or whether it is possible for the individual to wager that taking a 
slight risk now (and believing or observing that others will also do so) 
will lead to a better chance that the individual will survive ALL
ENCOUNTERS from now up until it breeds. The organism doesn't have to be 
smart enough to believe in this wager. It is sufficient that the wager 
be, on average, beneficial to the individual. In that case, through 
repeated trials by multiple individuals, the behaviour which is 
group-adaptive and individually lifetime-average adaptive can evolve.

BECAUSE THE EVOLVABLE GOAL IS NOT SIMPLY TO MAXIMIZE THE
CHANCE OF SURVIVAL OF AN ORGANISM OF THE NEXT SHORT-TERM ENCOUNTER.
THE GOAL IS TO MAXIMIZE THE PROBABILITY OF SURVIVAL OF THE SUM TOTAL
OF ALL OF THE ORGANISM'S ENCOUNTERS UP TO WHEN THE ORGANISM REPRODUCES.
So it is just a time-scale misunderstanding. Group-adaptive behaviours 
increase the member's probability of surviving to reproductive age, even 
if they slightly increase the chance of the individual losing some 
particular encounter.
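That time-scale point can be illustrated with a toy calculation (the survival
probabilities below are invented purely for illustration, not data):

```python
# Invented numbers for illustration: per-encounter survival probabilities
# over 100 encounters before breeding age. Lifetime survival is the
# product of the per-encounter probabilities.
def lifetime_survival(per_encounter_probs):
    p = 1.0
    for q in per_encounter_probs:
        p *= q
    return p

# "Selfish": maximizes each single encounter's survival chance.
selfish = [0.99] * 100

# "Cooperative": accepts slightly worse odds in 10 risky encounters,
# gaining a group benefit that improves the other 90.
cooperative = [0.98] * 10 + [0.995] * 90

p_selfish = lifetime_survival(selfish)
p_coop = lifetime_survival(cooperative)

# Despite losing ground in individual encounters, the cooperative
# strategy has the higher probability of surviving ALL encounters.
assert p_coop > p_selfish
```

The wager pays off over the whole sequence of encounters even though it
loses in each risky encounter considered alone.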

True extreme altruistic behaviour which conveys CERTAINTY of death in a 
single encounter may not fit into this model, but it can be argued 
whether the altruistic individual believes they are going to die for 
certain in many incidents or not, or whether they hold out faint hope, 
in which case the argument above could still hold. In any case, true 
certain-death altruistic behaviour is an extreme anomaly case of 
group-adaptive behaviour. Most group-adaptive behaviours are not of that 
kind, so extreme, definitely fatal altruism is not a good model for them.

Eric






Re: Incompleteness and Knowledge - errata

2004-01-31 Thread Eric Hawthorne
Corrections inserted here to the following paragraph of my previous 
post. (Apologies for the sloppiness.)

Eric Hawthorne wrote:

 so truth itself, as
a relationship between representative symbols and that which is 
(possibly) represented, is probably a limited
concept, and the limitation has to do with limits on the information 
that can be conveyed about one structure  (e.g. all of reality)
 BY another structure (e.g. a formal system which is itself part of 
that reality.). 

Clearly an embedded structure (e.g. formal system or any finite 
representative system) cannot convey all information about both itself 
and the
rest of reality which is not itself. There is not enough information 
in the embedded structure to do this.




Re: Modern Physical theory as a basis for Ethical and Existential Nihilism

2004-01-30 Thread Eric Hawthorne


Stathis Papaioannou wrote:

fact vs. value;
formal vs. informal;
precise vs. vague;
objective vs. subjective;
third person vs. first person;
computation vs. thought;
brain vs. mind;
David Chalmer's easy problem vs. hard problem of consciousness:
To me, this dichotomy remains the biggest mystery in science and 
philosophy. I have very reluctantly settled on the idea that there is 
a fundamental (=irreducible=axiomatic) difference here, which I know 
is something of a copout. I really would like to have one scientific 
theory that at least potentially explains everything. As it is, even 
finding a clear way of stating the dichotomy is proving elusive.

Some previous posts in the current thread have attacked this idea by, 
for example, explaining ethics in terms of evolutionary theory or game 
theory, but this is like explaining a statement about the properties 
of sodium chloride in terms of the evolutionary or game theoretic 
advantages of the study of chemistry. Yes, you can legitimately talk 
about ethics or chemistry in these terms, but in so doing you are 
talking meta-ethics or meta-chemistry, which I think is what Bruno 
means by level shift.

I really think that to get a good grasp on this kind of issue, one has 
to get over oneself. Step outside for a moment and
consider whether your feeling conscious is as amazing or inexplicable 
as you think. Consciousness may very well just be
an epiphenomenon of a self-reflection-capable world-modelling 
representer and reasoner such as our brains.
Minsky's "society of mind" idea isn't fully adequate as a consciousness 
explanation, but it makes inroads.
Some of the most exciting work in this area IMHO is being done by the 
neurologist Antonio Damasio. Here is a
review of his book on the topic of the feeling of consciousness:

http://homepage.ntlworld.com/anthony.campbell1/bookreviews/r/damasio-2.html

One of his key ideas is that the lowest level of consciousness is just 
the brain's representation of the sensor data about
what our body is doing (how it is positioned and moving, whether it 
aches anywhere, and what we're seeing and hearing in each instant,
etc.). He says this is the brain's representation for the purpose of 
homeostasis, i.e. the instantaneous status of the body.
This homeostasis awareness (reflection of sensor data in the brain) he 
calls the proto-self.

Then comes a level (he calls it core consciousness) at which those 
low-level sense data are integrated at a conceptual
(or object-modelling) level to form a continuous stream-of-consciousness 
feeling. This is the "watching a movie, but
you are in the movie" sense.

Finally, at the high level, ideas from the memory and planning 
facilities of the higher brain are added (or filled in).
So what we are doing here is adding in ideas about things which take 
time. We are adding in (to help explain
the stream-of-consciousness object-movie that we're in) a whole bunch 
of remembered specific episodes and
facts and generalized space-time-world-situation-model concepts that we 
produced by processing experience
after experience after experience. And we are adding in hypotheses about 
how things could go (i.e. explorations, within the
object-movie-that-we're-in, of counterfactuals and hypotheticals and 
desired future states, and plan run-throughs for getting
there). This is just using the same watching-the-object-movie-that-I'm-in 
capability, but to daydream (remember, or wish, or plan) alternative 
scenarios rather than the sense-data direct movie
of the core self. This highest-level self he calls the 
autobiographical self, because the highest-level sense of
consciousness is, in effect, us writing the story of ourselves (that 
we're in) as well as reading the story of ourselves (that we're in)
at the same time. It is a story, and not just a stream of consciousness, 
because it has added in memories and
experiences from the past, to provide a meaningful causal narrative to 
ourselves about what is going on now, and
what is going to happen next.

So highest-level consciousness IS an autobiographical story of ourselves 
and our doings, and of present-time but
past-experientially-interpreted experiences.

And that is just the back-and-forth-in-time (or sideways, to 
hypotheticals/counterfactuals) extension of the
core-self "movie that I'm both watching AND sensing that I'm in" sense, 
which itself is the
CONCEPTUAL-OBJECT-INTERPRETATION of the continuous stream of homeostasis 
raw sense data
that the brain is continually receiving and processing in real time to 
know what the state of the body is
and what it senses to be around it.

This makes PERFECT sense (and feels almost adequate, as an explanation 
of the feeling of consciousness) to me.

Eric

p.s. before someone jumps in about how off-topic this is, I think that's 
narrow-minded, because understanding
consciousness is integral to understanding observers and their role in 
physics.



Re: Modern Physical theory as a basis for Ethical and Existential Nihilism

2004-01-29 Thread Eric Hawthorne


Stathis Papaioannou wrote:

Take these two statements:
(a) Dulce et decorum est/ Pro patria mori (Wilfred Owen)
(b) He died in the trenches during WW I from chlorine gas poisoning
The former conveys feelings, values, wishes, while the latter conveys 
facts. The former is not true or false in the same way as the latter 
statement is. This has always seemed obvious to me and it has been 
stated in one form or another by philosophers of an empiricist bent 
since David Hume. Does anyone subscribing to this list really disagree 
that (a) and (b) are different at some fundamental level?


Well, since I don't really read Latin, this will be a little tough. 
Luckily this website does read Latin.
http://lysy2.archives.nd.edu/cgi-bin/words.exe?Dulce+et+decorum+est
http://lysy2.archives.nd.edu/cgi-bin/words.exe?Pro+patria+mori

So I'll assume that the second one is something like "It's good to die 
for one's country."

So what is this saying? It may simply be explaining that countries 
would do better if people were willing
to die for them. If one were to do some kind of game-theory model of 
geopolitical evolution,
one might conclude that this is factually true.

What does the first one say? "Flattery is pleasing"? Or "sweetness is a 
virtue"?

I'm sure that given enough time, one could show that both of these have 
a basis in evolution and specifically
the evolution of successful cooperative social behaviour.

Moral truths are complex truths. That doesn't make them less true. Just 
harder to explain.

Eric




Re: Subjective measure? How does that work?

2004-01-26 Thread Eric Hawthorne






Wei Dai wrote:

  On Sun, Jan 25, 2004 at 03:41:55AM -0500, Jesse Mazer wrote:
  
  
Do you think that by choosing a 
different measure, you could change the actual first-person probabilities of 
different experiences? Or do you reject the idea of continuity of 
consciousness and "first-person probabilities" in the first place?

  
  
The latter. I came to that conclusion by trying to develop a theory of 
first-person probabilities, failing, and then realizing that it's not 
necessary for decision making. If someone does manage to develop a theory 
that makes sense, maybe I'll change my mind.

No one has tried to answer my other objection to an objective measure,
which is that since there are so many candidates to choose from, how can
everyone agree on a single one?
  

I think that a notion of measure which is so flexible that there are
infinite numbers of possible measures
to choose from is a wrong, or non-useful, definition of measure. I
think people have to try harder
to find a stronger and even more objective notion of measure.

I would argue that all of the observers who co-exist should agree that 

1. their universe has a very high measure, and
2. their universe generates complex order

They should say "it's overwhelmingly most likely that we're observing a
high-measure universe which generates
complex order." 

I think the form of any high-measure universe which can generate
complex order is exceedingly
constrained, because the two constraints (high measure) and (generates
complex order) can only be satisfied together under
onerous constraints on the form of the universe (physical law, etc.).

Eric





Re: Subjective measure? How does that work?

2004-01-25 Thread Eric Hawthorne






Wei Dai wrote:

  On Sat, Jan 24, 2004 at 11:49:09PM -0500, Jesse Mazer wrote:
  
  
But measures aren't just about making decisions about what to *do*, the main 
argument for a single objective measure is that such a measure could make 
predictions about what we *see*, like why we see regular laws of physics and 
never see any "white rabbits". Although Bob can decide that only universes 
where gravity is repulsive matter to him in terms of his decision-making (so 
that he'd be happy to bet his life's savings that a dropped ball would fall 
up), he'll have to agree with Alice on what is actually observed to happen 
when a particular ball is dropped. 

  
  
Well, when the ball is dropped, in one universe it falls down, and Bob has
to agree with Alice, and in another universe it falls up, and Alice has to agree
with Bob. Alice thinks the second universe is less important than the 
first, but Bob thinks it's more important. How do you break this symmetry?
  

Well, each of us only experiences a single universe (and further, all
of the other humans that we observe
are also observing the same universe we are observing.) Even if one
believes a strong version of MWI in
which there are untold numbers of other us's experiencing other
universes, it's still true that each of those
duplicates only gets to experience a single universe. That's something
about the nature of observation and
observable universes themselves.

So if Alice and Bob are IN the same universe, where balls fall down,
they'd both be well-advised to
"believe in" the facts of their own universe, and not some speculative,
or at the very least completely
inaccessible, alternate universe. From the perspective of an observer
(within a universe), the universe
they inhabit is more important. 

PRINCIPLES:
---
1. A UNIVERSE IS WHERE ONLY ONE OF THE POSSIBILITIES FOR ANYTHING
HAPPENS. 

2. EACH OBSERVER ONLY EXPERIENCES ONE UNIVERSE

3. COUNTERPART OBSERVERS SHOULD BE CONSIDERED DIFFERENT OBSERVERS,
BECAUSE
THERE IS NO SUCH THING AS CROSS-UNIVERSE ACCESSIBILITY OR EXPERIENCE.

I simply do not believe in the notion of an observer being able to
access or meaningfully experience the life
of the observer's other-universe counterparts, even if "counterpart" is a
well-formed notion.
I'm familiar with all of the various logic variations of the notion
of trans-world identity, and I find them to be model-level concepts
(matters of representative opinion) more than
object-level concepts. What I've learned about identity is that there
is a mixture of objectivity and subjectivity
(affected by focus of concern) to it as a concept. What trans-world
identity means (or is useful for), if a premise of
total inter-world inaccessibility is accepted, is questionable.

Eric



  
  
  
Without an objective measure, I don't 
think there's any way to explain why we consistently see outcomes that obey 
the known laws of physics (like why we always see dropped balls fall towards 
the earth).

  
  
What good are the explanations provided by an objective measure, if I
choose to use a different subjective measure for making decisions? How do
these explanations help me in any way?
  

Choosing a measure from some other universe that you speculate exists
(with, necessarily, no evidence) is
risky and counterproductive to your survival in your universe. I'd
advise you to get out of the path of
the falling ball. Even if "counterparts" makes sense (not granted), if
all counterparts made decisions
based on their speculations about other-world likely happenings, then
all counterparts of that particular
observer would quickly die off for sure. Observers like that would not
evolve. Only home-body observers
(with local-universe concerns) would.






Re: Subjective measure? How does that work?

2004-01-24 Thread Eric Hawthorne
Can you explain briefly why the choice of measure is subjective? I 
haven't read any of the
books you mentioned (will try to get to them) but am familiar with 
computability theory
and decision theory.

In my favourite interpretation of the multiverse, as  a very long 
(possibly lengthening)
qubitstring containing all of the possible information-states implied in 
such a long bitstring,
the absolute measure of any information-state (instantaneous state of 
some universe)
would be the same as any other state of the same bitstring length.

In that framing of things,  I guess there's another definition of 
measure, which goes something
like this:

Let Ui be an internal-time-ordered set of information-states 
s1,s2,...,s(now)comprising
an observable universe.

Ui, to be observable, is constrained to be an informationally 
self-consistent
(too complex a concept to get into right here) set of information-states.

There is a constraint on any information-state which qualifies to be 
s(now+1) in any observable
universe path s1,s2,...,s(now). Specifically, any information-state that 
can be s(now+1)
must be informationally consistent (not law-violating) in conjunction 
with s1,s2,...,s(now).

Furthermore, the history that has evolved as s1,s2,...,s(now) has the 
result of determining
the Ui-relative probability of any particular other information-state 
being able to become
s(now+1) in that observable path.

That now-in-an-observable-universe-relative probability of successorhood 
in that universe
of any other information-state is then a universe-specific measure 
value, or more specifically,
a now-state-of-universe specific measure value.

That now-in-an-observable-universe measure (for potential successor 
information states for that
universe state-set) may correspond to the probabilities of  all the 
outcomes of all the wave equations
of quantum-states which are observable in the now moment in that universe.
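As a comp-sci-flavoured sketch of this successor-state measure (my own toy
illustration: the names `successor_measure` and `one_bit_law`, and the choice
of a uniform measure over consistent candidates, are invented assumptions):

```python
from itertools import product

# Toy sketch: states are bitstrings; a "physical law" is a consistency
# predicate on (history, candidate successor); the now-state-relative
# measure is taken to be uniform over the consistent candidates.
def successor_measure(history, n, consistent):
    candidates = [''.join(bits) for bits in product('01', repeat=n)]
    allowed = [s for s in candidates if consistent(history, s)]
    return {s: 1.0 / len(allowed) for s in allowed}

# Example invented "law": a successor state may flip at most one bit
# of the latest state s(now).
def one_bit_law(history, s):
    return sum(a != b for a, b in zip(history[-1], s)) <= 1

m = successor_measure(['00'], 2, one_bit_law)
# The allowed successors of '00' are '00', '01', '10', each with
# history-relative measure 1/3; '11' is excluded as "law-violating".
```

The point of the sketch is only that the measure over candidate successors
is relative to the history so far, via the consistency constraint.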

As a comp sci person and not a physicist, I look forward to your read on 
where my interpretation
is misguided, and for a better interpretation.

Eric

Wei Dai wrote:

I have to say that I sympathize with Caesar, but my position is slightly
different. I think there is a possibility that that objective morality
does exist, but we're simply too stupid to realize what it is. Therefore
we should try to improve our intelligence, through intelligence
amplication, or artificial intelligence, before saying that objective
morality is impossible and therefore we should just pursue other goals
like survival, comfort or happiness.
Some people have argued that in fact survival is an objective goal,
because evolution makes sure that people who don't pursue survival don't
exist. But if we assume that everything exists, the above statement has to
be modified to an assertion that people who don't pursue survival have low
measure. However the choice of measure itself is subjective, so why
shouldn't one use a measure in which people who don't pursue survival have
high measure (e.g., one which favors universes where those people
survive anyway through good luck or benevolent gods)?
 




Re: Subjective measure? How does that work?

2004-01-24 Thread Eric Hawthorne






John M wrote:

  I find some inconsistencies in your post:
  
  
qubitstring containing all of the possible information-states implied in
such a long bitstring,...

  
  possible, of course, to OUR knowledge (imagination). Anthropomorph
thinking about the MW.
  

I'm really talking about convertible-to-binary-representation
information states here, i.e. the formal notion
of information, i.e. a count and structuring of discrete differences. As
such,
the number of information-states representable in a qubitstring of
length n is 2^n.


  
 Let Ui be an "internal-time-ordered" set of information-states
s1,s2,...,s(now)comprising an observable universe.

  
  How 'bout the Uis where 'time' has not evolved? Excluded?
  

Those Uj's are not observable (unless we change the conventional
meaning of that word.)
"Observe" as conventionally meant is defined with respect (at least
indirectly) to notions
of time. 

  Observable by what means? 

Any means where information can be conveyed from something outside of
the observer SAS,
at the speed of light or lower, to the representing mechanism inside
the observer.

BY THE WAY, I'M NOT A PHYSICIST. Can someone who knows please clarify
the answer to
the rather basic question of whether something like the
double-slit experiment means anything (or DOES
anything to the quantum phenomena of the photons) in the absence of a
perceiving observer like
ourselves? I'm fairly basically and profoundly ignorant on that score.
I.e., can
"the measuring experiment machine itself", without a person (or AI
etc., or dog, say) to perceive
the result, still cause a difference in "what happens" to the photons?


  We have a pretty narrow range in mind.
Would you restrict the MWI to our cognitive inventory of 2004?
Does that mean that the MW was "smaller" in 1000 (with the then
epistemized contents of cognition)?

  

The observable, classicized portion of the Ui observable universe was
smaller in 1000, or at any
previous time-within-itself than now, yes. Of course, to be precise,
now actually means here-now,
as these are inseparable in relativistic physics.

  
  
... must be informationally consistent (not law violating) in conjunction

  
  ...
what "law"? presumed omniscient?
  

Observed and verified physical laws of the Ui universe.

  
Just malicious remarks. I appreciate to try and to criticize.
I have no better ones.

  

No problemo

  JM
  

Eric






Re: Subjective measure and turing machine terminology

2004-01-24 Thread Eric Hawthorne






Wei Dai wrote:

  On Sat, Jan 24, 2004 at 12:21:40PM -0800, Eric Hawthorne wrote:
  
  
Can you explain briefly why the choice of measure is subjective? I 
haven't read any of the
books you mentioned (will try to get to them) but am familiar with 
computability theory
and decision theory.

  
  
Since you do not mention that you're familiar with the theory 
of algorithmic complexity, I suggest that you read the first book on that 
list ASAP. The following response might not make sense until you do.

  

I took some small smattering of that stuff in comp sci undergrad, but
essentially
what it lets me understand is that some algorithms are O(1), O(n),
O(n log n), O(n^2), O(e^n), etc.
I'm also generally familiar with Turing machine concepts, but I'm rusty
on the details.
I'm a bit confused as to what is meant by a string having a lower
algorithmic complexity.
Does that mean that the shortest program that could result in a symbol
string of that form
has a certain algorithmic complexity that is lower than the algorithmic
complexity that
could compute some other string? What are these strings anyway? Symbol
strings which
are a finite subpart of the Turing machine's tape, conceptually?

A question that would arise with that definition of the
"algorithmic complexity of
a string" is: shortest algorithm that could generate that string,
starting with what as its
input? Surely if the input were a string that was, say, just one value
in one tape position
different from the output string, then any output string could be
computed by a trivial Turing
machine program (one step or so) from that special input. So how do you
define what the input is in assessing "the algorithmic complexity of a string?"

Or is the string a sequence of instructions and datastore positions
comprising the Turing machine program itself,
and we're discussing the inherent computational complexity of that
particular program, for any (or average, or whatever)
input?

I guess I have more trouble mapping directly in my head from Turing
machine programs to multiverse states than
I do mapping raw bitstrings to multiverse states.

The general question I asked above would seem to come down to "isn't
the complexity of getting to
some subsequent information state determined by what the previous
information state is?"


Second terminology thing: when you say "each universal Turing machine",
again I get confused.
Isn't "a Turing machine" just the abstraction consisting of the movable
read/write head and a tape?
Isn't the correct terminology "each Turing machine PROGRAM" which is
NP-complete, or which is
"universal"? How can we have different machines themselves? Or is it
conventional to say that
"a Turing machine" is "the movable head, plus its current position,
plus a particular set of values
on a tape (i.e. a particular program)"? In normal computing
terminology, the machine is the machine,
the software program is the software program, and the data is the
data.

If you can just help me a little with these terminology stumbling
blocks, I'm sure I (and other 
computational-complexity-theory-tourists on the list) can understand
the concepts.




  Basically, all of the sensible proposed measures are based on the
universal distribution, which assigns larger probabilities to strings
that have lower algorithmic complexities. However, there's actually an
infinite class of universal distributions, one for each universal Turing
machine, and there's no objective criterion for determining which one
should be used.

Another problem is that using the universal distribution forces you to 
assume that non-computable universes do not exist. If one does not want to 
make this assumption, then a more dominant measure needs to be used (for 
example, based on a TM with an oracle for the halting problem, or
the complexity of a string's logical definition), but then there are even 
more measures to choose from (how high up the computability 
hierarchy do you go? how high up the set-theoretic hierarchy?).

Now suppose that two people, Alice and Bob, somehow agree that a measure M
is the objectively correct measure, but Bob insists on using measure M' in
making decisions. He says "So what if universe A has a bigger measure than
universe B according to M? I just care more about what happens in universe
B than universe A, so I'll use M' which assigns a bigger measure to
universe B." What can Alice say to Bob to convince him that he is
not being rational? I don't see what the answer could be.
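As a crude, concrete illustration of "lower algorithmic complexity" (my own
sketch, not from the thread): true Kolmogorov complexity is uncomputable, but
compressed length gives a computable upper-bound proxy for a string's shortest
description, and a universal-distribution-style weight then favours the
regular string.

```python
import hashlib
import zlib

# Compressed length as a crude, computable stand-in for a string's
# algorithmic complexity (the real quantity is uncomputable).
def approx_complexity(s: bytes) -> int:
    return len(zlib.compress(s, 9))

# A highly patterned 1000-byte string: admits a very short description.
regular = b'01' * 500

# A deterministic but structureless 1000-byte string (chained hashes),
# which is effectively incompressible.
chunks, seed = [], b'seed'
while sum(len(c) for c in chunks) < 1000:
    seed = hashlib.sha256(seed).digest()
    chunks.append(seed)
irregular = b''.join(chunks)[:1000]

# The patterned string gets the much larger 2**(-complexity) weight.
assert approx_complexity(regular) < approx_complexity(irregular)
```

The choice of compressor here plays the role of the choice of universal
Turing machine in the quoted passage: a different compressor gives different
absolute numbers, though regular strings still come out shorter.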

  






Re: Is group selection discredited?

2004-01-23 Thread Eric Hawthorne
Unfortunately, disallowing notions of group selection also disallows
notions of emergent higher-level-order systems. You must allow for
selection effects at all significantly functioning layers/levels of the
emergent system to explain the emergence of these systems adequately. For
example, ant colonies (as an emerged system) live for 15 years, whereas
the ants live for at most a year. Yet the colony (controlling for colony
size) behaves differently when it is a young colony (say, its first five
years) compared to when it is in its old age. (Essentially, the colony's
behaviours become more conservative, i.e. less amenable to changes of
tactics.) It would be very difficult to explain this solely from the
perspective of the direct benefit to any individual ant's genes. For the
benefit of ant-genes in general in the colony, yes.

I think that it's just been too difficult to get adequate controlled
studies to determine whether a group selection effect is happening,
because the individuals tend not to live at all if removed from their
group.

I think it is still an open debate. Group selection being discredited is 
just Dawkins
and  some like-minded people's favorite theory right now.

Group selection is now discredited as an evolutionary force.

See http://www.utm.edu/~rirwin/391LevSel.htm for some class lecture
notes discussing group selection.
 





Re: Modern Physical theory as a basis for Ethical and Existential Nihilism

2004-01-22 Thread Eric Hawthorne


Stathis Papaioannou wrote:

This sort of argument has been raised many times over the centuries, 
both by rationalists and by their opponents, but it is based on the 
fundamental error of conflating science with ethics. Science deals 
with matters of fact; it does not comment on whether these facts are 
good or bad, beautiful or ugly, desirable or undesirable. These latter 
qualities - values - are necessarily subjective, and lie in the domain 
of ethics and aesthetics.

Saying that life is worth living, or that you believe it is bad to 
kill, are simply statements of your values and feelings, and as such 
are valid independently of any scientific theory.


It may not be an error to equate science and ethics. Science continually 
moves into new domains.

I'm of the opinion that there is a valid utilitarian theory of 
co-operating intelligent agent ethics.

Utilitarian because the purpose of the ethical principles can be shown
to be group success (i.e. emergent-system survival/success in the
competition with other potential variants of emergent intelligent-agent
systems that don't include ethical principles as behaviour guides for
their agents).

Note the subtlety that the utility need NOT accrue to an individual agent
directly, but may only accrue to individuals in the group, ON AVERAGE,
due to the ethics and moral rules generally obeyed by the group members,
and the consequent floating of (almost) all boats.

One of the common debates is between ethical/moral relativism and
absolutism. I call this a confusion due to oversimplification of the
issue, rather than a debate. In this regard, this debate is as silly as
the nature-vs-nurture debate about, say, human behaviour, in which the
answer is of course that it's a complex feedback loop involving the
interaction of inherited traits and the accidents of life. Duh! There is
no nature vs nurture. It's always nature AND nurture. Arguing about which
is more fundamental is truly unproductive hair-splitting. We should be
researching exactly how the feedback loops work instead.

So completely analogously, with absolute, and relative morals and ethics.

My position is that there are absolute ethical principles and moral
rules, but that those are all general rules, not instantiated rules
(i.e. absolutes in ethics/morals are all universally quantified rules
that apply to general classes of situations and actions).

Relativism is justified in so far as it is simply debate about how the
absolute general ethical and moral principles should map (do map) onto
the current particular situation at hand. This mapping may not be
simple. A single situation can be
boundary-scoped
differently, for example, or its agents can be seen as engaging in 
several different kinds
of acts, with many effects for each act, and the importance to the 
essence of the situation
of each act and effect can be debated from different perspectives that 
involve the interests
and knowledge of different agents. So the single situation may map 
validly to several
different instantiations of several ethical principles. And the moral 
rules applicable to
the situation may be subject then to legitimate debate.

Relativism may also question whether some moralist group's absolute 
moral principles
are general enough, and may argue with some validity that they are not 
general enough
to be applied without frequent error (and tragedies of injustice).

e.g. "Don't Eat Pork" -- Yeah, whatever.

However, "Don't eat the kinds of meat that are often rotten and
disease-ridden in our climate, like pork" may be a valid moral rule at
some historical time and place.

e.g. "Thou shalt not kill." -- Well, that's an easy-to-remember
simplification, but a little oversimplified and too specific.
How about:

Minimize the amount of quality-life-years lost in this encounter.

So, women and children first into the lifeboats. You old geezers are 
shark-bait.

Or.. Take out the guy wearing the bomb. Now.

And relativism is also justified in so far as it is the correct
observation that many (most) situations of complex interaction between
multiple intelligent agents can be described from multiple perspectives
(and/or multiple situation-scope inclusions/exclusions). A specific
situation can (probably validly) be described as coincident instances of
several different general ethical principles.

A to B
Our people have lived here from time immemorial. And your grandfathers 
killed my grandmother.
You are pestilent invaders. Get out or we will have a just war against you.

B to A
Our people have lived here from time immemorial. And your grandfathers 
killed my grandmother.
You are pestilent invaders. Get out or we will have a just war against you.

Clearly, it is easy to imagine a situation in which both A and B are
factually correct, except perhaps in their use of the word "just".

Most complex interaction situations requiring application of ethics and 
moral rules are 

Re: Ethics and morals (brief addendum)

2004-01-22 Thread Eric Hawthorne
Oh and

Do unto others as you would have them do unto you.

That's not Christianity. That's a successful strategy in game theory.



Re: Ethics and morals (brief addendum)

2004-01-22 Thread Eric Hawthorne
I don't think there's just one successful game theory strategy.

"Do unto others as you would have them do unto you"
is a kind of planning-ahead strategy, if you believe that others
are going to use tit-for-tat. Maybe?
And besides, I'm talking about a strategy that is beneficial to the group
(and to group members indirectly thereby), not a strategy that is most
beneficial on average to the individual at each encounter.


Frank wrote:

Actually, the successful game theory strategy is "tit for tat",

which would be equivalent to: "an eye for an eye".
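Both points can be checked in a toy iterated prisoner's dilemma (the payoff values and strategy implementations below are my own illustrative assumptions, using the standard ordering T=5 > R=3 > P=1 > S=0):

```python
# Payoffs to the row player: T=5 > R=3 > P=1 > S=0 (standard assumption).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"   # repay in kind

def always_defect(my_hist, their_hist):
    return "D"

def golden_rule(my_hist, their_hist):
    return "C"   # do unto others: cooperate unconditionally

def play(a, b, rounds=100):
    """Iterated prisoner's dilemma; returns (score_a, score_b)."""
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        sa += PAYOFF[(ma, mb)]
        sb += PAYOFF[(mb, ma)]
        ha.append(ma)
        hb.append(mb)
    return sa, sb

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): barely exploited
print(play(golden_rule, always_defect))  # (0, 500): badly exploited
```

The group-level point: a population of cooperators (tit-for-tat or golden-rule) earns 300 each per pairing, while a population of defectors earns only 100 each; the unconditional cooperator only thrives when the group's norms protect it.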
 




Re: Are conscious beings always fallible?

2004-01-20 Thread Eric Hawthorne
How would they ever know that, I wonder?
Well, let's see: I'm conscious and I'm not fallible. Therefore... ;-)
David Barrett-Lennard wrote:

I'm wondering whether the following demonstrates that a computer that can
only generate thoughts which are sentences derivable from some 
underlying
axioms (and therefore can only generate true thoughts) is unable to 
think.

This is based on the fact that a formal system can't understand sentences
written down within that formal system (forgive me if I've worded this
badly).
Somehow we would need to support free parameters within quoted 
expressions.
Eg to specify the rule

It is a good idea to simplify x+0 to x

It is not clear that language reflection can be supported in a completely
general way.  If it can, does this eliminate the need for a 
meta-language?
How does this relate to the claim above?

- David
 

I don't see the problem with representing logical meta-language, and
meta-meta-language... etc. if necessary, in a computer. It's a bit tricky
to get the semantics to work out correctly, I think, but there's nothing
extra-computational about doing higher-order theorem proving.

http://www.cl.cam.ac.uk/Research/HVG/HOL/

This is an example of an interactive (i.e. partly human-steered)
higher-order theorem prover. I think with enough work someone could get
one of these kinds of systems doing some useful higher-order logic
reasoning on its own, for certain kinds of problem domains anyway.
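For instance, the quoted rule "simplify x+0 to x", with its free parameter x, can be represented at the meta-level as a rewrite over quoted expression trees. A minimal sketch (my own toy representation, nothing to do with HOL's actual term language):

```python
def simplify_plus_zero(term):
    """Recursively apply the rewrite rule  x + 0  =>  x  (with free
    parameter x ranging over whole subterms). Terms are nested tuples
    ("+", left, right); leaves are variable names (strings) or numbers."""
    if isinstance(term, tuple) and term[0] == "+":
        left = simplify_plus_zero(term[1])
        right = simplify_plus_zero(term[2])
        if right == 0:
            return left     # the rule fires: whatever x is, x + 0 => x
        return ("+", left, right)
    return term

# (y + 0) + (0 + 0)  simplifies to  y
# (note: as written, the rule matches x + 0 but not 0 + x)
print(simplify_plus_zero(("+", ("+", "y", 0), ("+", 0, 0))))  # 'y'
```

The point is only that the free parameter in the quoted rule becomes ordinary pattern-matching over the representation of expressions; a real prover adds typing, soundness bookkeeping, and a trusted kernel on top of this idea.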

Eric



Re: Modern Physical theory as a basis for Ethical and Existential Nihilism

2004-01-20 Thread Eric Hawthorne
Sorry. Can't help myself: Is there any point in completing that term
paper, really?

On a few points.

I don't believe in the nihilist point of view of "everything will happen
in the multiverse anyway, regardless of what I do."
My reasons are a little vague, but here's a stab at it:

1. I look at us, the group of human observer SASs, as results of and
guardians of emerged complex order in our universe. In fact I believe our
universe (its temporal arrow etc.) is only observable because it is the
set of paths through the multiverse that has all this emerged complex
order in it. I believe these potentially observable sets of paths through
the multiverse's general disorder are rare (of small measure).

2. Somehow, all of us human observers are clearly in, or observing,
the SAME set of paths through the multiverse. Now that is significant. It
tells us that in the emergent-order paths of multiverse info-state
evolution, those paths are observable consistently to ANY observer that
emerges as part of the emerged complex order present in those paths.

3. I see humans (or other intelligent lifeforms) as, in some strange
ways, the smart-lookahead guardians of the particular piece of emergent
order they're most a part of (their planet, their ecosystems, their
societies, themselves). The reason we emerged (or are still here) is
because we have helped make our emergent complex system successful
(robust).

4. For some strange reason, I value the most complex yet elegant and 
robust emergent order (for itself). This is why
for example, I'm an environmental activist in my spare (hah!) time.

5. I think if one values elegant, robust complex order, and if one is an
active part of the elegant, robust, complex order, which emerged
precisely so that a SAS of the emerged system could sense and make sense
of the surroundings, and could model and influence the future, and guard
the SAS's own existence and that of the whole emerged system of which it
is a part, then guard away, I say, actively, not nihilistically. Model
your world. Predict its different possible futures, and use your emerged
(and cultivated, same thing) wisdom to steer yourself, and your society,
and your ecosystem, and your planet, away from harm and too-soon
reduction to entropy. In the very, very end, it is said, entropy wins
(like the house wins in Vegas). But why not have as good a game as
possible before it ends, in a billion or trillions of years.

6. Of course, it doesn't make sense to try to protect (and advance in 
elegance) an emergent order that is indeed truly
robust, does it? But my point back there was that we are supposed to be 
part of the emergent system's self-defense
mechanism, because we can think and plan, and change things in our universe.

7. So can we change the multiverse as a whole? Probably not. But all 
that observers can ever co-observe
is a single self-consistent universe in the multiverse. Look at earth 
and earthlife like a surfboard and surfer surfing
this big coherent wave of informationally self-consistent order that is 
our universe. What we as the surfer can
do is look ahead, and steer the board, and prolong the ride, and make it 
as amazing as possible before it
tumbles into the vortex. That's enough control to say let's delay 
nihilism til the very last possible moment at least,
shall we. Let's see where we might wash up if we keep riding well. 
Enough. Enough. This tortured analogy is
killing me.

8. You may say that there's all these other virtual doppelganger surfers 
and surfboards (even on our same order-wave universe)
so why bother steering anyway? One of us will make it. Yeah well I don't 
think so. I think all the emergent systems
kind of compete with each other to organize things, and there's winners 
and losers, and the losers are all just info-noise.

9. I guess the above is premised on the supposition that we CAN steer;
that we have any say over when and how our part of our universe degrades
into entropy (info-noise). This is really vague, but I have some strange
sense that what observing AGENT (actor) systems such as ourselves are
doing is choosing (or having a part in choosing) the way in which their
quantum world becomes their classical world. I think there's the
possibility of free will there. It's like they're steering the NOW
wavefront itself (in their shared universe). If the possibly ordered
paths through multiverse infospace near these observers are more than
one possible path, maybe it's the observers, by the sum total of their
collective actions, that micro-manage the choice of future info-paths
that will still be consistent with the path(s) they're all on. Maybe the
set of possible consistent and ordered paths gets narrower and narrower
as time goes on for them, but I think there are still choices to be made.
It's possible that that's an illusion, but choice being an illusion is a
concept for the theoretical meta-level, for OUTSIDE our universe path.
Inside our path(s), our paths and the 

The Facts of Life

2004-01-18 Thread Eric Hawthorne


CMR wrote:

Indeed. The constraints to, and requirements for, terrestrial life have had
to be revised and extended of late, given thermophiles and the like. Though
they obviously share our dimensional requisites, they do serve to highlight
the risk of prematurely pronouncing the facts of life.
 

Just to be mischievous, I'll here pronounce "the facts of life", or more
precisely a sketch of a theory of the emergence of life, which will serve
the purpose of partially constraining/defining what is meant by "life".
This is a hobby project.

The Emergence of Life Via Weak (Stochastic) Physical Pattern Replication
==
Definitions:

"pattern": a form of order or regularity, which can be described by a
finite and usually simple set of constraints.

"living organism": a subtype of spatially organized pattern of matter and
energy, with some distribution over a time period in some spatial region;
in other words, a subtype of "physical pattern in space-time".

"ecosystem" (or "supporting environment") of an organism: also a subtype
of physical pattern in space-time.

"species": also a subtype of physical pattern in space-time, ranging over
a larger span of time than an organism pattern, and including, over time,
instances of the subpatterns that constitute the individual organisms of
the species.

Abstract:
-
The natural selection process that results in the evolution of lifeforms 
as we know them can be extended
backwards in time further than is traditionally assumed, to fully 
explain the emergence of life from
chance-occurring patterns of matter and energy. A model of the form of 
this earliest natural selection
process is presented, in terms of three specific weakenings of the 
self-replication and metabolism processes
that lifeforms exhibit.

Characteristics of a living organism:
---
1. It self-replicates (aka reproduces).
   Part of what this means is that the organism assimilates surrounding 
matter and energy so that
   they become part of its species pattern, if not necessarily of its 
own individual organism pattern.

2. It metabolizes. It ingests matter and energy and converts them to a 
form more directly usable for the maintenance
of the form and function of the organism pattern and for its reproduction.

3. It is an autonomous agent (within some environmental constraints).
The matter and energy that is inside the organism pattern can replicate
the pattern, and metabolize pattern-external matter and energy, in a
relatively diverse set of surroundings (relative to its own form and
function constraints, anyway), and it can do these things substantially
by itself, so long as an appropriate supporting environment (which may
not itself qualify as an organism, but has some form and function
constraints itself) is maintained near it. In a sense, this autonomous
replicating and metabolizing criterion just helps us define a boundary
around what matter and energy belongs to the organism and what belongs
to its environment.

Thesis
---
1. Before there was "strong" individual-organism self-replication,
there was weak (stochastic) replication of weakly constrained (and
possibly physically dispersed) pre-organism patterns of matter and
energy. The only property (constraint on form and function) that these
patterns had to exhibit was just enough probability and frequency of
just-roughly-accurate pattern reproduction so as to maintain the order
(i.e. the pattern constraints) of the "pre-species" pattern against the
various forms of pattern-dissolution attacks that occurred in its
environment. These attacks don't need to be explained much. They are
comprised just of
a. the natural tendency of any physical system to increasing entropy
(disorder), and
b. active processes of dissolution of the pattern or its resources in
its supporting environment, where those active processes are the result
of the actions of competing weakly-replicating, weakly-metabolising
physical patterns in the vicinity.

2. Before there was a "strong", organism-internalized metabolism
process, there was weak (stochastic) pseudo-metabolism. That is, there
were processes of energy conversion (and temperature regimes and
matter-mobility regimes (e.g. liquid phases)) IN THE VICINITY OF A
WEAKLY REPLICATING PATTERN which were such as to support the (at least
probabilistic) carrying-on of the weak replication process of the
pattern. That is, early metabolism could be defined as happening both
within and in the environment of the pattern. Since the weakly
replicating pattern initially may have been somewhat spatially
distributed, and only stochastically present at various time intervals,
it's just as well that we don't require that the pattern-supporting
energy conversion processes (heat-engine processes) be carried out
initially entirely WITHIN the pattern (pre-organism)
itself.
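The survival condition in thesis point 1 can be illustrated with a toy branching-process simulation (all parameters hypothetical): a weakly replicating pattern persists only when its per-step replication probability outweighs the dissolution attacks.

```python
import random

def expected_copies(p_replicate, p_dissolve, steps=100, trials=100,
                    cap=500, seed=1):
    """Monte Carlo of a weakly replicating pattern: each time step, every
    copy of the pattern is dissolved with probability p_dissolve, and each
    surviving copy makes one (roughly accurate) extra copy with
    probability p_replicate. Returns the mean copy count after `steps`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pop = 1
        for _ in range(steps):
            nxt = 0
            for _ in range(pop):
                if rng.random() < p_dissolve:
                    continue          # pattern-dissolution attack succeeds
                nxt += 1
                if rng.random() < p_replicate:
                    nxt += 1          # weak, stochastic replication
            pop = min(nxt, cap)       # crude cap: finite local resources
            if pop == 0:
                break                 # the pattern's order has dissolved
        total += pop
    return total / trials

# Below break-even (mean offspring (1-p_dissolve)*(1+p_replicate) < 1)
# the pattern order dissolves; above it, the pattern persists and spreads:
print(expected_copies(0.05, 0.10))        # near 0: sub-critical
print(expected_copies(0.20, 0.10) > 1.0)  # True: super-critical
```

So "just enough probability and frequency of roughly accurate reproduction" has a sharp quantitative reading here: the expected number of surviving-plus-copied patterns per step must exceed one.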

Weak Replication and Weak Metabolism Concepts

Re: The Facts of Life and Hard AI

2004-01-18 Thread Eric Hawthorne


CMR wrote:

I think it's useful here to note that from the strong AI point of view
"life as it could be" is emphasized, as opposed to "life as we know it".
It's also worth pointing out that the latter is based upon a single data
point sample of all possible life, that sample consisting of life that
(apparently) evolved on our planet. Given that, defining life in the
universe, and certainly in all universes, based only upon that sample is
speculative at best. (Unless, as some claim, our biosphere is truly unique;
I doubt this is the case).
 

Just to be clear, I'm not at all attempting to dis the possibilities of
hard artificial intelligence. I studied it to postgrad level in the past,
and would hope to be able to work in that field for real some day.

The "Emergence of Life" paper is talking specifically about those sorts
of life that can emerge WITHOUT THE ASSISTANCE OF AN ALREADY SMARTER,
MORE-ORGANIZED AGENT. That's why that kind of life (natural life) is a
truly emergent (emergent-from-less-order) system.

One way of looking at A.I. is that it may become in some attributes
life-like (I prefer just to say it will become a true cognitive agent,
i.e. a true thinker (active modeler)), without NECESSARILY also
independently being a fully self-sufficient life-form. If WE can be
considered part of the environment of AIs, then they are a life-form
that uses US to reproduce (at least initially).

It's traditional to think of the environment of a lifeform as less
ordered than the lifeform itself, so this AI case, where the environment
includes extremely ordered self-emergent SASs (ourselves), is a somewhat
strange situation and hard to categorize.

With AI, it's probably best just to say that there is another emergent 
system emerging, which is
(at this stage) a combination of humans (the human-species pattern and 
its behaviours) and  the software
(informational) and computing hardware technological/cultural artifacts 
we produce, acting together
to form the new emergent system.

People do talk about AI computers/robots and nano-tech, in combination 
perhaps, becoming self-sufficient
(self-replicating and self-advancing/adapting independent of their human 
creators.)

I have no trouble believing that this is in-principle possible. I just 
want to point out that
the properties for true long-term sustainability of pattern-order are 
HARD  (difficult, onerous)
requirements, not easy ones. Natural life (in the admittedly single case 
we know) is highly constrained
because of the constraints on its long-term survival and incremental 
improvement in a less-ordered
environment.

It seems easier (but is it much easier really?) to get AIs to 
self-improve/self-sustain purely as virtual (informational) patterns
or entities (i.e. as software and data ie. pure-informational 
entities/thinkers/knowledge-bases) rather than as informational/physical
hybrids as we are. I suppose some of the people on the everything-list, 
myself included, may see the
distinction between informational and physical as more just a matter of 
degree than of substance,
so this is a puzzling area. Certainly both human-built computers and
physical machines (robots, e.g. Mars rovers, nanobots etc.) have a long
way to go, not only in their basic FUNCTIONAL development, but perhaps
more significantly, and certainly with more difficulty, in their
ROBUSTNESS (lack of brittleness) AND EVOLVABILITY (META-EVOLVABILITY?)
criteria, and in their choice of raw materials (natural life uses
primarily the most commonly-occurring-in-the-universe chemically-bondable
elements (hydrogen, carbon, oxygen, nitrogen etc.) for good reason),
before they could hope to be very self-sustainable.

It is interesting to speculate that the mechanisms available to a future 
AI robot/nanotech-conglomerate/web-dweller
for self-adaptation might be far more flexible and wide-ranging than 
those available to early natural life on Earth,
because we are building AI's partly in our image, and
we, after all, by becoming general thinker/planners (information 
maestros if you will) have managed
to increase enormously the range of ways we can adapt the environment to 
our needs. (Caveat: as an eco-aware person, however, I can tell you the
jury's out on whether we're doing this to system-survival levels of
sophistication, and the jury's leaning toward "guilty of eco-cide", or
more precisely "guilty of severe eco-impoverishment and disordering".)



BTW I'm most excited today in the AI field by the possibilities of
combining the WWWeb's information, as accessed via Google (and similar),
with AI insights/technologies. The web is not a big distributed brain
yet, but it could get there.

Eric











Computational complexity of running the multiverse

2004-01-17 Thread Eric Hawthorne




Georges Quenot writes:

  
I do not believe in either case that a simulation with this level
of detail can be conducted on any computer that can be built in
our universe (I mean a computer able to simulate a universe
containing a smaller computer doing the calculation you considered
with a level of accuracy sufficient to ensure that the simulation
of the behavior of the smaller computer would be meaningful).
This is only a theoretical speculation.

  
  Hal Finney responded:
What about the idea of simulating a universe with simpler laws using such
a technique?  For example, consider a 2-D or 1-D cellular automaton (CA)
system like Conway's "Life" or the various systems considered by Wolfram.


One of the issues is the computational complexity of "running all the
possible (i.e. definable) programs" to create an informational
multiverse out of which consistent, metric, regular, observable
info-universes emerge. If computation takes energy (as it undeniably
does WITHIN our universe), then an unfathomably, impossibly large amount
of "extra-universal" energy would be required to compute all
info-universes.

(def'n: qubitstring = a bitstring capable of simultaneously holding all
of its possible values, i.e. all possible combinations of 1s and 0s in a
bitstring of that length)

For example, say that we have a relatively tiny info-multiverse
consisting of a qubitstring that is only 1 billion bits long, i.e. it
simultaneously exhibits (if queried appropriately) any of
(2 raised to the power 1 billion) different information-states.

Now let's imagine computation upon this qubitstring multiverse, in order
for a god-computer to "tour" some set of info-states of the qubitstring
in some "time-order", in order to simulate the operation of some
universe within the qubitstring multiverse.

Let's further imagine that the god-computer doesn't like to
discriminate amongst its potential universes within
the multiverse qubitstring, so it wants to try (just the next
computation step, for now, in) all possible computations 
upon the multiverse qubitstring.

How many ways of choosing the next computational step are there? Again,
there are 2 to the power 1 billion ways at each step.
So if the god-computer wants to simulate only 1 million discrete
computing steps (defined as different-info-state-selecting steps) of
each universe simulation, but to do this for all possible "potential
universes", i.e. state-change traces, in the billion-qubit multiverse,
then the number of ways of doing this (and we're saying all of these
ways are going to get done, because the god-comp is non-favoring of one
potential universe over another), i.e. the number of ways of simulating
a million comp-steps in each universe, is 2 to the power
(1 million billion) (expressed as 2 ^ 1,000,000,000,000,000).

This "number of possible computing-step-sequences to compute all of
these million-step-old universes
is the same as the number of computing-steps NECESSARY to compute all
these universes.
-
Now let's make the numbers more realistic. Our current universe has the
following statistics (roughly!):
# of protons = 10 ^ 78
# of material particles and photons = 10 ^ 88 (give or take)

Entropy H = 10 ^ 88 = "the log of the number of possible velocities
and positions of the material particles and photons"

Bitstring length needed to represent a single "instantaneous state" of
our universe
IS THE SAME AS THE ENTROPY, so the bitstring needed to represent a
"shortest distinguishable moment" of
our universe is 10 ^ 88 bits long. So the qubitstring needed to
"simultaneously" represent all possible moment-states
of our universe is also 10 ^ 88 bits long.

So, how many "shortest-distinguishable-moments" have their been in our
universe since the big bang anyway?
Well the shortest distinguishable moment of time is the Planck time
unit = 10 ^ -43 seconds.

And there have been 3 x 10 ^ 60 Planck time units in our own universe's
lifetime so far.

So, putting it all back together in the qubitstring-computational
framework,

The number of possible ways of choosing computing steps, to compute all
possible info-universes up to those universe evolution stages of the
same age and particle+photon population and entropy as ours, is (+-
fudge factors): (Drum-roll please.) 42. (Just kidding. It is:)

2 to the power (Entropy * the number of
distinguishable time moments) =

2 to the power (bitstring-length * the number of distinguishable info
states in each universe's sim. computation i.e. history) = 

2 to the power ( (10 ^ 88) * (3 * 10 ^ 60) ) = 

==
2 to the power (10 ^ 148) (approximately)

Which is the number of computing steps that must be done to compute the
simulations of 
the histories of all possible universes of comparable age and particle
population and entropy as our own.
==
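The arithmetic above can be checked directly with Python's arbitrary-precision integers (a quick sketch; the input numbers are the rough ones quoted in the post):

```python
from math import log10

# Toy example from earlier in the post: a billion-qubit string, a million
# computing steps, with all possible next-step choices at every step:
toy_exponent = (10**9) * (10**6)
assert toy_exponent == 10**15          # i.e. 2 ** 1,000,000,000,000,000

# The "realistic" numbers (rough, as stated above):
entropy_bits = 10**88                  # bits per instantaneous universe state
planck_moments = 3 * 10**60            # Planck times elapsed since the big bang

# Steps to run all possible histories: 2 ** (entropy_bits * planck_moments)
exponent = entropy_bits * planck_moments
print(round(log10(exponent), 1))       # 148.5, i.e. 2**(10**148)-ish
```

So the exponent is really 3 x 10^148, and "2 to the power (10 ^ 148) (approximately)" is the order of magnitude after dropping the factor of 3.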

-

So 

Re: Computational complexity of running the multiverse - errata

2004-01-17 Thread Eric Hawthorne


Eric Hawthorne wrote:

So probably, the extra-universal notion of computing all the 
universe simulations is not traditional computation
at all. I prefer to think of the state of affairs as being that the 
multiverse substrate is just kind of like a
very large, passive qubitstring memory, capable of holding the qubits, 
that is of exhibiting, if appropriately
queried, the different bit values in the information states (all 10 ^ 
148 information-states in the histories of
universes as big and old as ours, that is.)
Correction:

That last part should read: (all 2 ^ (10 ^ 88) information states that 
could be the instantaneous
state of all possible universes as big and old as ours, that is.)

Eric



Re: Tegmark is too physics-centric

2004-01-17 Thread Eric Hawthorne


Kory Heath wrote:


Tegmark goes into some detail on the
problems with other than 3+1 dimensional space.


Once again, I don't see how these problems apply to 4D CA. His 
arguments are extremely physics-centric ones having to do with what 
happens when you tweak quantum-mechanical or string-theory models of 
our particular universe.

Well, here's the thing: the onus is on you to produce a physical theory
that describes some subset of the computations of a 4D CA, and which can
explain (or posit, or hypothesize if you will) properties of observers
(in that kind of world), and properties of the space that they observe,
which would be self-consistent and descriptive of interesting,
constrained, lifelike behaviour and interaction with environment, and
sentient representation of environment aspects, etc.

My guess is that that physical theory (and that subset of computations 
or computed states) would end up being proven to
be essentially equivalent to the physical theory of  OUR universe. In 
other words, I believe in parochialism, because
I believe everywhere else is a devilish, chaotic place.
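For concreteness, the classic 2-D CA of the kind under discussion, Conway's Life, fits in a dozen lines, and its glider is about the smallest example of the robust, self-maintaining order that any "physical theory" of such a world would have to account for:

```python
from collections import Counter

def life_step(cells):
    """One update of Conway's Life on an unbounded grid; `cells` is the
    set of live (x, y) coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step iff it has 3 live neighbours, or has 2
    # and is already alive (the B3/S23 rule).
    return {pos for pos, n in neighbour_counts.items()
            if n == 3 or (n == 2 and pos in cells)}

# A glider: a five-cell pattern that propagates diagonally forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True: shifted by (1, 1)
```

The glider's persistence illustrates the point about select paths: almost any five-cell pattern dissolves into debris within a few steps; only a precisely arranged few maintain their order indefinitely.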

You can't just say "there could be life and sentience in this
(arbitrarily weird) set of constraints" and then not bother to define
what you mean by life and sentience. They aren't self-explanatory
concepts. Our definitions of them only apply within universes that
behave at least roughly as ours does.

You'll have to come up with the generalized criteria for generalized N-D 
SAS's (what would constitute one)
before saying they could exist.

Eric





Re: Peculiarities of our universe

2004-01-10 Thread Eric Hawthorne


Hal Finney wrote:

One is the apparent paucity of life and intelligence in our universe.
This was first expressed as the Fermi Paradox, i.e., where are the aliens?
As our understanding of technological possibility has grown the problem
has become even more acute.  It seems likely that our descendants
will engage in tremendous cosmic engineering projects in order to take
control of the very wasteful natural processes occurring throughout space.
We don't see any evidence of that.  Similarly, proposals for von Neumann
self reproducing machines that could spread throughout the cosmos at a
large fraction of the speed of light appear to be almost within reach
via nanotechnology.  Again, we don't see anything like that.
 

So why is it that we live in a universe that has almost no observers?
Wouldn't it be more likely on anthropic grounds to live in a universe
that had a vast number of observers?
Could be that
1. It's extremely rare to have a window for biological evolution to our 
level. (I highly recommend
the well-written, basic-level but accurate and comprehensive new book 
called Origins of Existence
by Fred Adams ISBN 0-7432-1262-2 which gives a complete summary of what 
had to  happen
for our emergence, and all the many ways things could have gone 
differently, very few of which
would lead to life anything like we know it.)

2. We're a distinguished member of the successful evolvers in the first 
available window-of-opportunity
club.

3. If you believe 1 and 2, then note that we ourselves have not yet made 
galactically observable construction
projects or self-replicating space-probes. Sure, we talk, but we haven't 
put our money where our mouth
is yet. The (few, lucky to have emerged unscathed) other intelligent 
lifeforms in our observable universe may
also not have done this within our lightcone (space-time horizon) of 
observability yet.





Re: Why no white talking rabbits?

2004-01-09 Thread Eric Hawthorne
Hal Finney wrote:

What about a universe whose space-time was subject to all the same
physical laws as ours in all regions - except in the vicinity of rabbits?
And in those other regions some other laws applied which allow rabbits
to behave magically?
 

While this may be possible, we seem to have found so far that the 
universe admits of many
simple regularities in its complex systems and its fundamental laws. 
Therefore many of the
essential properties (future-form-and-behaviour-determining properties) 
of these complex
systems admit of accurate description by SIMPLE, SMALL theories that 
describe these
simple regularities in the complex systems.

I challenge you to come up with a simple, small, (thus elegant), and  
accurately explanatory
theory of how space-time could be as you propose above, and also how 
this wouldn't
mess up a whole bunch of other observed properties of the universe.

My point is that I don't think you (or anyone) would ever be able to come up with a small, simple,
yet explanatory theory of the white rabbit universe you suggest.

AND THAT THEREFORE, at least according to how we've always seen the 
essential aspects
of the universe conform to simple elegant theories and laws before, THE 
RABBITS SCENARIO
(bizarrely strange yet still straightforwardly observable spacetime 
pockets)
IS UNLIKELY TO BE THE TRUE STATE OF AFFAIRS in the universe.

Could such a bizarre universe exist? Well possibly, (I personally think 
not an observable one),
but in any case it would be a highly difficult universe (unmodellable 
with simple models) and
physicists would be unemployed in that universe, as their predictions 
based on simple, clever
theories would never turn out to work. Magicians and wizards (those able 
to pretend they'd been
responsible for the last bit of observed extreme weirdness) would hold sway.

Eric



Why no white talking rabbits?

2004-01-08 Thread Eric Hawthorne


Jesse Mazer wrote:

Why, out of all possible experiences compatible with my existence, do 
I only observe the ones that don't violate the assumption that the 
laws of physics work the same way in all places and at all times?
Because a universe whose space-time was subject to different physical 
laws in different regions would not have
been able to generate you and sustain you, or more precisely I suppose 
would only be able to generate
and sustain you with infinitesimal probability.

And it would be even more highly unlikely that should you have been 
magically conjured by this
inconsistent-or-inconstant-physical-laws universe, that you would 
observe any other people (or rabbits, white or otherwise)
because they themselves would  have only infinitesimal probability of 
being magically, coincidentally conjured into
that universe.

It's better to find all of the essential constraints (all the way 
back to 10^-43 seconds after the big bang) which made it highly probable
that you (or something like you) would exist in the universe, and then 
explain how those constraints are
all consistent with each other and with information theory,
and then to realize that a set of constraints HAS TO BE consistent with 
(all of) each other and with information theory
and with making your (or equivalent creature's) existence highly 
probable, in order for you to actually exist with any
high probability. By the argument de facto, I think it's safe to say 
that things in the universe are such that people
(or functional equivalents) are highly  probable to exist  on a small 
but significant set of planets
(those with the right temperature ranges and  proportions of different 
elements) in the galaxies in our observable
portion of the universe.

It is ONE HELL OF A DETAILED SET OF CONSTRAINTS that made all of this (us) highly probable. White talking rabbits with watches are inconsistent with those constraints, in ways perhaps too boring to get into.
Ok, since we're way down here in the post, I'll get into it. General 
intelligence of human-like level (involving
ability to hypothesize, abstract flexibly, construct a wide variety of 
functional, purposeful constructions out of
raw materials, and plan actions and consequences in detail), only 
evolves by natural selection
in critters that are physically equipped to DO SOMETHING with their 
intelligence. For a rabbit, it's pretty
much limited to hopping about in more complex patterns to avoid being eaten, based on some kind of vastly intelligent psyching out of where its predator is going to strike next, and to determining where to find the very best places to find the most nutritious and tasty grass. This is too limited a domain to require or select for a general, long-range constructing and planning mind-firmware to develop in a rabbit brain.

Another favorite of mine is why dolphins and whales are KIND OF 
intelligent (like a poodle or parrot is)
but not extremely... So what, we're going to develop more complex 
tricky ways to bump things with
our snouts? I don't think so. Group hunting (in a too-easy, too uniform, 
too
acceleration-constrained-because viscous fluid habitat)
is as complex as dolphin brains ever need to be.

Cheers, Eric



Re: Is the universe computable?

2004-01-06 Thread Eric Hawthorne
Frank wrote:

Indeed, I've always thought there was a dubious assumption there.
There isn't a universal time to pace the clock tics of a simulation.
Relativity forbids it.
Anyway, time is a subjective illusion.
Back to the question:
So what happens when the simulation diverges from regularity?
Some possibilities:
a) The universe ends
b) Pink elephants pop up everywhere
c) It's already happening
I like (c)


Ok. How about:

The multiverse is a very long qubit-string. (This is an informal 
statement to drive intuition.)

Being a qubit string it simultaneously exhibits all of its potential 
information-states.

If there is something like this qubitstring simultaneously exhibiting 
all possible information
states, then note that to do computation, within that qubit-string, no 
actual computational
process need take place. Any tour through any subset of the information 
states (i.e.
visiting one information-state after another after another...) can be
considered equivalent to a computation. Any tour through a subset of the 
information
states which is such that the direction of the tour is restricted to 
only those successor
information-states Si+1 (of the state Si we're currently at) which are 
different from Si
by only a single bit-flip in a single position in the bitstring, and 
where that bit-flip
would only happen based on some function of only the state of the bits 
in a local vicinity
of the flipping bit, can be considered equivalent to a computation which 
is comprised
solely of localized operations, similar to the kinds of computation we 
understand.
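That successor relation (one locally-decided bit-flip per step) can be sketched in a few lines of Python. This is strictly a toy illustration of the idea, not any real physics; the particular local rule here (flip a 0 sitting between two 1s) is invented for the example:

```python
# Toy "consistent tour": a walk through bit-string states where each
# step flips exactly one bit, chosen by a purely local rule.

def local_rule(state, i):
    """Decide from the immediate neighbours alone whether bit i may flip."""
    left = state[i - 1] if i > 0 else 0
    right = state[i + 1] if i < len(state) - 1 else 0
    return state[i] == 0 and left == 1 and right == 1

def tour_step(state):
    """Successor state: the first locally-permitted single bit-flip."""
    for i in range(len(state)):
        if local_rule(state, i):
            return state[:i] + [1 - state[i]] + state[i + 1:]
    return state  # fixed point: no locally-permitted flip remains

history = [[1, 0, 1, 0, 1]]
while True:
    nxt = tour_step(history[-1])
    if nxt == history[-1]:
        break
    history.append(nxt)

# Every consecutive pair of states in the tour differs by one bit-flip.
for a, b in zip(history, history[1:]):
    assert sum(x != y for x, y in zip(a, b)) == 1
print(history)
```

Each state in the printed history differs from its predecessor in exactly one position, and every flip was decided purely from its neighbours.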

So the universe (or any observable universe) could be a tour through a 
subset of the
information-states of the qubit-string multiverse, which is such that 
the tour
computes only self-consistent spaces and objects, perhaps using only local computational steps (this computational locality is perhaps part of the secret of ensuring the consistency, locality, metric, etc. properties of the space and the objects).

Observers which are self-aware substructures WITHIN the set of objects computed in a consistent tour can perhaps only observe other information states which are also within that tour.

TIME AND LIGHTSPEED
As Wolfram postulates, the concept of time and speed of light c within 
such an
informational universe may be related to how fast the informational 
changes (from one
state to another) can propagate (across the qubitstring) using only 
local computations
as the medium of state-change propagation. It is wrong to suppose that 
this implies
computational time outside of the qubitstring. How fast state-change 
propagates
is purely a question of how the metric spacetime that the consistent 
tour defines
can evolve in form within a consistent tour.
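Wolfram's propagation-speed point can be made concrete with a toy one-dimensional cellular automaton (my sketch; the XOR-of-neighbours rule, elementary CA rule 90, is an arbitrary choice). With purely local updates, a one-cell perturbation can influence at most one further cell per step: an informational light cone.

```python
# Toy light cone: with only local update rules, a one-cell difference
# can spread at most one cell per step.

def step(cells):
    """One synchronous update: each cell becomes the XOR of its neighbours."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

n, centre, t_max = 21, 10, 5
a = [0] * n
b = [0] * n
b[centre] = 1  # perturb a single cell

for t in range(1, t_max + 1):
    a, b = step(a), step(b)
    affected = [i for i in range(n) if a[i] != b[i]]
    # after t steps the difference has travelled at most t cells
    assert all(abs(i - centre) <= t for i in affected)
```

The "speed of light" here is one cell per update step, a property of the local rule, not of any clock outside the system.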

The tour itself could be imagined to be real if you like (with the
qubitstring really in some god-quantum-computer-thingy which has a 
god's-now-program-
pointer which moves from state to state in the consistent tour).
But it is better to think of the consistent tour as a virtual tour, an 
abstraction,
defined by nothing more nor less than its BEING a subset of information 
states, and an order
of traversal of those (very large) information states which is such that 
the ordered set
of information-states IS and CONJURES reality.

OBSERVERS, AND TOUR-TRAVERSAL AS THE TIME ARROW FOR OBSERVERS

An OBSERVER is a set of local subsets of some of the information-states in the
consistent tour which is the universe. The notion of locality there is 
information-distance.

OBSERVERS can observe any aspect (part) of the information states in the 
tour which has
the following properties:

1. The observable substates must be within a light-cone of the observer. 
Photons or waves of light are
information travelling through the set of information-states. They are 
closely related to the putative
local computations which are imagined as defining sensible localized 
change between sets of
information states. So the observable substates are those that are 
reachable from the observer
states by local computations. These observation computations are 
computations that can
affect the observer-part of the now information-state based on the 
prior-to-now configuration
of other adjacent-to-the-observer parts of the prior-to-now information 
states, with the information
moving at a speed of one local computation (or is that one bitshift) per 
information-state-distance
in the consistent tour. Confusing? Yes I'm confused too. This bit's 
hard. (Pun intended)

2. Argument 1 implies that only parts (in some informational locality to 
the observer within the
information-states) of PRIOR-IN-THE-TOUR information-states can be 
observed by the observer.
That's what being in the light-cone from the observer implies: 1. 
Informationally-local to the observer's
own states, and also 2. PRIOR in the consistent tour to the 
now-in-tour state of the observer.

In fact we will stand these arguments on their heads now, and say that 

Re: Why is there something instead of nothing?

2003-11-16 Thread Eric Hawthorne
In the spirit of this list, one might instead phrase the question as:

Why is there everything instead of nothing?

As soon as we have that there is everything, then we have that some aspects
of everything will mold themselves into observable universes.
It is unsatisfying though true to observe that there of course cannot be
a case in which the question itself can be asked, and there simultaneously
be nothing in that universe.
I'm with the last respondent though in thinking that the right answer is
that there is BOTH nothing and everything, but that the nothing is 
necessarily
inherently unobservable by curious questioners like ourselves.

Norman Samish wrote: Why is there something instead of nothing?

Does this question have an answer?  I think the question shows there is a
limit to our understanding of things and is unanswerable.  Does anybody
disagree?
Norman



 




Re: Why is there something instead of nothing?

2003-11-16 Thread Eric Hawthorne


Norman Samish wrote:

...
I don't understand how there can be both something and nothing.  Perhaps I
don't understand what you mean by nothing.  By nothing I mean  no thing,
not even empty space.
 

I think of it this way.

1. Information (a strange and inappropriately anthropocentric word - it 
should just be called differences) is the most
fundamental thing.

2. The plenitude, or multiverse (of possible worlds), can be conceived of as the potential for all possible information states: all possible sets of differences, or, in other words, an infinite-length qu-bitstring simultaneously exhibiting all of its possible states.

3. In that conception,  nothing is just the special state of the 
qu-bitstring in which all of the bits are 0 (or 1 - there are two
possible nothings, but they are equivalent, since 1 is defined only in 
its opposition to 0 and vice versa.)
That is, in that conception, nothing is a universe in which there is 
no difference, and thus no structure. i.e. That
state of the bitstring has zero entropy, or zero information. So it is 
truly nothing.

4. But that special state of the qu-bitstring is only one of the 2^(bitstring-length) simultaneously existing information-states of the qu-bitstring. And some of the other information-states are our universe (i.e. something) and similar universes (everything? or at least everything of note).
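Point 3 can be checked numerically, using ordinary Shannon entropy of the bit-string's symbol frequencies as a crude classical stand-in for "zero information" (the quantum subtleties of a qu-bitstring are ignored here):

```python
# "Nothing" as the zero-entropy state: a bit-string with no differences
# carries no information, and the all-0 and all-1 strings score the same.
from collections import Counter
from math import log2

def shannon_entropy(bits):
    """Shannon entropy in bits per symbol of a string of '0's and '1's."""
    counts = Counter(bits)
    n = len(bits)
    return -sum(c / n * log2(c / n) for c in counts.values())

assert shannon_entropy("0" * 64) == 0.0   # all-zeros "nothing"
assert shannon_entropy("1" * 64) == 0.0   # the other, equivalent "nothing"
assert shannon_entropy("01" * 32) == 1.0  # maximal per-bit difference
```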




Re: Quantum immortality - pragmatics again.

2003-11-13 Thread Eric Hawthorne
All this talk of quantum immortality seems like anthropocentric wishful 
thinking to me.

You are a process. All physical objects are best understood as slow 
processes.

A life process is a very complex physical pattern, which is an 
arrangement of matter and energy in space-time,
that has properties that allow it to cannibalize other matter and energy 
in its vicinity to retain its form for
a while, but only for a while...

The kind of process, or pattern, that you are has built-in time limits 
in it, which have to do with the
imperfect maintenance of order in your bodily subprocesses (cellular 
processes).

In other words, the kind of hoops that (Earthly) organic processes go 
through to be self-reproducing, and
form-differentiating, and non-destructively evolvable, and so forth, 
seem to have limitations in their
perfection of operation, as far as maintaining the existence of the 
individual organism. The individual
organism's form does not HAVE to persist immortally, to ensure 
persistence of the self-reproducing
pattern (species, ecosystems) as a whole. In fact it would be 
counterproductive to the persistence of the
species and ecosystem as a whole if the individual organism patterns 
persisted indefinitely. So the pattern
rules allow the intercession of disorder to eventually destroy each 
individual organism pattern. (When
that process has run its useful (i.e. reproductive, and possibly 
meme-contributing) course.)

I cannot imagine an alternate possible world in which a process would be 
constructed
essentially as you are (as your process is), and yet would somehow  
miraculously avoid
the cell replication errors and cell replication cessation that comes 
with age in our
organic bodies. It would seem that only ridiculously small-measure scenarios could permit this kind of implausible immortality of organic structures, at least for organic structures bearing any great similarity to ours.



Social issues with replicated people

2003-11-08 Thread Eric Hawthorne
Readers of this list interested in issues of personal identity in the 
face of replication
might enjoy the Sci-Fi novel Kiln People by David Brin.

In the novel, a technology
has been discovered that allows a person's soul standing wave (sic) to 
be copied into
a kind of bio-engineered clay substance (molded into a shape like you 
and animated
by some kind of enzyme-battery energy store that gives it about a day or 
two of life
before expiry). These ditto people come in different qualities (more 
expensive to
get a super-smart, super-sensitive version of yourself, cheap to get a 
worker-droid
rough copy with fuzzy thinking capabilities and dulled senses.)  The 
novel, apart from
being a hard-boiled detective yarn in this world, explores issues of 
identity,
and how social conventions and rights and responsibilities change with 
the presence
of replication of personalities.

Brin's one of the good sci-fi writers.





A random collection of questions about info-cosmology

2003-11-02 Thread Eric Hawthorne
Some of these questions may be profound, and some silly. (In fact, they
may be sorted in order of profound to silly.) My education is spotty
in these areas. I'm most interested in specific references that help 
answer (or destroy)
these questions.

1. What test could determine if a computational hypothesis holds?

2. Is it enough that a theory be elegant and explain all the known 
physics observations,
or does the test of the theory also have to rule out all competing 
theories, or at least force
all known competing theories to add ugly complex terms to themselves to 
continue to work?

3. Is it not true that the kind of computation that computes the 
universe or multiverse
must be an energy-free computation, because energy itself is INSIDE the 
computed
universe, and it would be paradoxical if it also had to be OUTSIDE?

4. What range of energy regimes and physical laws are required to 
produce spontaneous
order where the order retains the dynamism required for life? (e.g. as 
opposed to producing
one big, boring crystal.)

5. Do these special energy regimes and physical law sets NECESSARILY 
produce
spontaneous order with the required dynamism?

6. Why does spontaneous order emerge in these energy/law regimes?

7. If we were in a possible world where thermodynamics ran backwards 
(entropy decreased),
would the time-perception of observers within that world also run 
backwards? Would these
backwards worlds (as far as classical physical observations go, anyway) 
thus be equivalent
to and theoretically equatable with the corresponding possible world 
which was the same except
that thermodynamics runs forwards as we are used to?

8. What is the significance of the fact that observers like ourselves 
(possibly with some notion of
free will) are separated in space and can only communicate / cooperate 
with each other at the
speed of light. They cannot interfere with some decisions that the other 
makes, because the other
has already made the decision before a lightspeed communication can tell 
them or force them
to stop. Imagine Jane on Venus and Joe on Mars getting into an argument. 
Immediately after
receiving Joe's last communication (which he sent an hour ago), Jane 
decides to detonate her
solar-system bomb in frustration and spite. Nothing Joe can say or do 
can stop her, because
it will take two hours for him to know she's about to push the button, 
and communicate his
desperate and well-crafted plea for forgiveness. The idea of 
FUNDAMENTALLY independent
decision makers co-existing seems interesting. Open ended question. 
It's just as if Joe and
Jane lived at different times. (And yet they CAN communicate with each 
other, just slowly. Hmmm)
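The numbers in the Jane/Joe story can be sanity-checked. The figures below are assumed round values, not real ephemerides; real Venus-Mars separations give one-way light delays of minutes, so the story's "hour each way" is poetic licence (it would need a separation of roughly 7 AU):

```python
# Back-of-envelope light delays between separated decision-makers.
# All distances are assumed round figures for illustration.

C_KM_S = 299_792.458     # speed of light, km/s
AU_KM = 149_597_870.7    # one astronomical unit, km

def one_way_delay_min(distance_km):
    """One-way light travel time in minutes for a given separation."""
    return distance_km / C_KM_S / 60

# Rough range of Venus-Mars separations (about 0.3 AU to 2.2 AU):
for au in (0.3, 1.0, 2.2):
    print(f"{au:4.1f} AU: one-way delay {one_way_delay_min(au * AU_KM):5.1f} min")

# The round trip (hear the news, send the plea) is twice the one-way
# delay, so the fundamental reaction-time bound scales with distance.
```

Whatever the exact distance, the qualitative point stands: beyond some separation, each party is committed before the other can possibly intervene.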






Re: Unsolicited weirdness

2003-11-02 Thread Eric Hawthorne
Could someone please send to the list and/or this lunatic the instructions
for unsubscribing from the list. My old machine's disk crashed taking my 
email
archive with it so I don't have the removal instructions.
Thanks
Eric

Frank Flynn wrote:

the devil is watching you I put a curse on all of you that bad thing 
will happen to you
and your love ones you may die to bad keep on sending me these email 
and the
curse will get stronger so get fucked





Re: Quantum accident survivor

2003-10-31 Thread Eric Hawthorne

Yes, this is Quantum Immortality in a nutshell.  If the MWI is 
correct, it is impossible to die from a subjective point of view.

Hooray!

Yes but there can be no communication from one possible world to another 
(thus no cross-world awareness), because, think
about it, if I could communicate with another world, then the other 
world would by definition be in my world (where I define
my world as all parts of the universe that I can influence with a 
lightspeed communication), so it would just
be some other part of my world. Oops. The bottom line is that if there 
are other possible worlds existing, they can be of
nothing other than theoretical interest to us. Damn. So try to avoid 
running into any creatures wielding large scythes 
or other sharp implements tonight.

Eric





Re: multiverse paradox of a number of posts back

2003-10-30 Thread Eric Hawthorne
Someone wrote:

 The paradox consists of the fact that the theory of multiverses tells us
 that there must be infinite observers who experience other physical laws.
 There is not only the possibility of being wrong, it is the model itself
 which proves to be wrong. In fact it tells us that there are infinite  
 places and times in this multiverse where, if any people observe the world
 around them in the same way we are doing hic et nunc, they necessarily find
 another model to describe the universe. So the outcome of the model is
 that it must be wrong in infinite places and times, and the paradox is
 that we have proved that it is wrong, but we have been able to draw this
 conclusion because we have considered the hypothesis of applying the
 physical system itself. But if it was wrong, the conclusions would be
 wrong, too.

Apologies to long-time list members for re-iterating like a broken record...

I think when people speculate about other universes in the multiverse, 
they continually fail to
grasp the likely extremely constrained nature of OBSERVABLE universes. 
An observable
universe MUST be structured/defined so as to be capable of evolving 
self-aware substructures
(SAS's) such as ourselves, in order for it to be in-principle 
observable. I posit that these constraints
are EXTREMELY ONEROUS. No, this is not some naive anthropocentrism. I'm 
working from
intuitions about emergent systems theory, and notions of the highly 
constrained energy regimes
in which self-organization of systems can occur (At least, 
self-organization of systems that have
properties likely to lead to coherent observer-systems.)

IT COULD BE that all alternative people MUST be seeing a universe very 
similar to ours, or indeed
possibly EXACTLY ours, simply because otherwise their self-organization 
would NECESSARILY
break down in their universe, and they couldn't observe.

In other words, it COULD be that there is only one OBSERVABLE POSSIBLE 
world. Now that's
an extreme, I admit, but I think it's closer to the truth than imagining 
infinite numbers of really weird, unimaginable
observers in really weird, unimaginable alternative universes. The main 
point is that the constraints required
to produce EMERGENT SYSTEMS that can be classified as what we think of 
as OBSERVERS may
be, again EXTREMELY onerous, extremely possibility-constraining 
constraints.

There may be, in the imagination, other weirdo observers coming up with 
a weirdo model of the universe, but maybe
some inconsistency in the notion of their existence (as complex, stable 
systems in a complex yet stable habitat)
in their world means that they simply CAN'T exist.

Eric





Re: Ideal lamps

2003-10-25 Thread Eric Hawthorne
Perhaps you've heard of Thompson's Lamp.  This is an IDEAL lamp, capable of
INFINITE switching SPEED and using electricity that travels at INFINITE SPEED.
Is it pedantic of me to point out that this is an IDEAL lamp, i.e. one which only
exists as an idea, and one which, because of its transcendence of the speed of 
light, can never exist in our universe?

Therefore, there are probably many fanciful or mathematical answers which work within
one ideal, abstract, mathematical model of the situation or another. These models
must all be incorrect models of known reality however.
I'm with Hal. The question doesn't mean anything about the real world.

This just means I'm too lazy to try to figure it out, but sometimes that's the
right answer.
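For what it's worth, a short simulation shows where the lamp question goes wrong (my sketch): the switching times converge to t = 1, but the lamp's state just alternates and so has no limit there.

```python
# Thompson's Lamp as a finite simulation: switch at t = 1/2, then
# 1/4 later, then 1/8 later, ... The switching TIMES converge to 1,
# but the lamp STATE alternates forever and never settles.

state = 0            # 0 = off, 1 = on
t = 0.0
interval = 0.5
for _ in range(50):  # after 50 switches, t is within 1e-15 of 1
    t += interval
    interval /= 2
    state = 1 - state

# The state at any finite stage is just the parity of the number of
# switches so far; the sequence 0, 1, 0, 1, ... has no limit, so the
# "state at t = 1" is not defined by the setup at all.
print(f"t = {t}, state = {state}")
```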
Eric




 




Re: Ideal lamps

2003-10-25 Thread Eric Hawthorne
Like I said, in mathematics, there MAY be an answer, depending what 
mathematical theory
you choose. Even within mathematics, there may be questions that don't 
have an answer, and
are ill-formed, and only seem well-formed because they seem to read ok 
in informal English.
Without your extra axiom, for example, the question "is infinity odd or even" is not well-formed, because infinity is not a number, and only numbers (integers) can be odd or even. (CAVEAT: IANAM)

Eric





Re: Fw: Something for Platonists

2003-06-17 Thread Eric Hawthorne
Lennart Nilsson wrote:

But in fact, the only thing that privileges the set of all computational
operations that we see in nature, is that they are instantiated by
the laws of physics. It is only through our knowledge of the physical world
that we know of the difference between computable and not computable. So
it's only through our laws of physics that the nature of computation can
be understood. It can never be vice versa.

I don't agree. I think computability is a pure abstract property
describing the reachability of some states (or state descriptions)
from others via a set of incrementally different states (or
state descriptions). I think computability is tied to
notions of locality. But computability may define locality
and not the other way around.
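A toy version of that reachability picture, in Python (my illustration; both the single-bit-flip move and the constraint predicate are invented for the example):

```python
# "Computability as reachability": which bit-strings can be reached
# from a start state when each move is a single bit-flip and every
# intermediate state must satisfy a constraint.
from collections import deque

def reachable(start, allowed):
    """Breadth-first search over states; neighbours differ by one bit."""
    seen = {start}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for i in range(len(s)):
            nxt = s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:]
            if nxt not in seen and allowed(nxt):
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Constraint: never more than two 1-bits at once. Then every 3-bit
# state except '111' is reachable from '000'.
states = reachable('000', lambda s: s.count('1') <= 2)
print(sorted(states))
```

The incremental moves define which states are "nearby", so the notion of locality falls out of the move rule rather than being assumed beforehand.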
Eric

--
   We are all in the gutter,
but some of us are looking at the stars.
 - Oscar Wilde



Re: are we in a simulation?

2003-06-15 Thread Eric Hawthorne
Stephen Paul King wrote:

[SPK]

   Oh, ok. I have my own version of the anthropic principle:

   The content of a first person reality of an observer is the minimum
that is necessary and sufficient for the existence of that observer.
   I am trying to include observer selection ideas in my definition of
anthropy. ;-) I conjecture that the third-person aspect could be defined
in terms of a so-called communication principle:
   An arbitrary pair of observers can only communicate within the overlap
or set-theoretic intersection of their first person realities.
  

To me, that is too complicated a theory.

I think reality is a structure/system that is a
set of paths through the plenitude, where those paths exhibit 
properties like self-consistency, coherence, locality, 
stability, energy etc. 

That structure can contain observers that can observe the 
very structure they are part of, precisely because of those
properties of self-consistency, coherence, locality, stability
etc that the structure (i.e. those paths through a state-space
plenitude) exhibits.

Every observer will see the structure from their own limited
point of view (from their place and time within it) so there 
will be disagreements about it, but fundamentally, the 
observers (those who can observe and communicate with each 
other) are within the same structure
and are viewing parts of the same thing.

If that is physicalist I don't know. It still seems purely
mathematico-logical to me. But I'm just positing a larger
structure that is a commons that is observed by parts of itself.
I think this is Tegmarkian anthropy.
Look at it this way. The content of reality of an observer
is (their limited perspective on) the minimum (self-consistent
structure) that is necessary for themselves, and all the other 
observers they observe, and for the whole sustaining environment 
for them and the physics that produced it, to exist.

I wrote this much better just before, and my email client
flipped out and killed it. So sorry for the sleepy, angry, 
more muddled version you got.

Eric






Re: Are we in a simulation

2003-06-10 Thread Eric Hawthorne
My corollaries to: 
Any sufficiently advanced technology is indistinguishable from
magic.

1. Any sufficiently detailed and correct reality simulation is indistinguishable from reality.

2. Any artificial consciousness which communicates in all 
circumstances within the range of communication behaviours of 
conscious humans, is indistinguishable from a human consciousness.

Further to 1.
-
Because reality may be a set of programs selected
from the plenitude of all possible state changes, a 
programmed simulation of it, if it was really any good,
would essentially be reality. In fact, there is perhaps
a law that any completely precise simulation of reality
is identical to reality, by definition.

Further to 2.
-
The qualia of consciousness (i.e. the feeling or
experience of consciousness and how sense data seem
to us) are only explainable to other conscious beings
through communication and observable behaviour.
The only but compelling reason to assume that others
experience essentially the same kind of qualia that
you do (their red is like your red) etc. is that the
simplest theory would say that since our brains are similar,
and, since communication assures us that the behaviours
of our minds (yours and mine) are similar, then the 
qualia are also similar. A theory that postulated
substantial differences in qualia-experience for different
people would be hard pressed to explain why it is different.
You don't have to explain why qualia-experience is similar
from person to person. That's just the simplest (and thus the 
default) theory.

Since all qualia of consciousness, and all other results
of consciousness, are only explainable to or able to be
made evident to other conscious beings via communication
and other behaviours (i.e. through patterns in I/O), we might
be forced to say that it is impossible in principle to prove
the existence of anything in human consciousness that is different
than the consciousness of an artificial mind that communicated
and behaved indistinguishably from a conscious human (in
all kinds of circumstances, contexts.)
Consciousness's only manifestation outside itself is via
I/O. If the I/O patterns are indistinguishable, it is simplest
to say that the consciousness processes themselves are
essentially equivalent.
8-Count
---
I fall twisted.
I lie at a strange angle.
I stand corrected.
The punchline came out of nowhere.
 
Eric






Response to R.Hlywka's brain/mind comments

2003-06-05 Thread Eric Hawthorne
R Hlywka wrote:

There are so many things we need to take into consideration. Genetics. We are born with a specific preprogrammed set of organization and hardware: the way the neurons are preorganized, and the way they go about utilizing and organizing and transferring specific information. We are predisposed, if you will. However, there's also nurture. Even from starting in the womb, we receive the biorhythms of our mother, which our whole body sets to. What she ingests, the anxieties she feels, we feel. Not that it's a good or bad thing. 
Yes. Ok. I left out some details. One way of looking at it though is 
that the brain
evolved from hard-wired control-system hardware to become a more and 
more general
information processor. This must have something to do with the fact that
the terrestrial environment has lots of different opportunities for an 
ecological
generalist with opposable thumbs, if only that generalist can figure out 
what to
do (how to behave) in novel situation-types.

You could even say that the human brain (cortex?) has distinguished 
itself from
the brains of other animals by the evolution of this general computing 
capability
(and the consequent ability to do abstract thinking, situation-modelling 
with hypotheticals,
conceptualizing, precise but extensible linguistic communication, 
introspection, etc.)
even if (granted) the general computing ability is employed in habitual and
stereotypical ways most of the time, and is optimized to support those 
habitual
or instinctual patterns.

Many other animal species share with us the hard-wired or firmware-like
kinds of behaviours that you ascribe to our brains. We have gone further
than any of them in generalizing the information-processing and storage
capabilities of our brains so that they are turing-equivalent AND ALSO
still optimized for carrying out instinctual behaviours, albeit in 
creative ways.

Your brain is so much more than a computer... think of it like a galaxy 
or even its own universe.
I think that's going a little far in the Carl Sagan direction.

This all brings up more questions. What about memory transfer? We code 
our memory by the continuous rearrangement of pathways. Unless you 
could copy the coding and rearrangement, decode it by that person's 
CODING... 
I don't remember claiming (maybe someone else claimed) you could copy a 
mind.
I do believe we'll eventually be able to build them, (and not out of 
traditional
organic materials) but if we do build them, then once each A-mind
starts processing and assimilating information from its uniquely 
situated point-of-view
and its unique experiences/learning sessions, then it will become 
different from all
other A-minds and from all other human minds, in the same way that ours are
different from each other because of nurture. There would be ways to mimic
differences in built-in biases, preferences, 
cognition-optimization-directions etc
as well, if that was useful for groups of A-minds.

The task is not only to understand what a human brain does in the process of
its being/becoming a mind. The task is to figure out IN GENERAL what
being a mind is and to figure out how the human brain is doing THAT and
also what are all the things that something other than a human brain would
have to do to be also doing THAT.
Eric




Re: 2C Mary - Check your concepts at the door

2003-06-04 Thread Eric Hawthorne
My physics is decades-old first-year U level (I'm a computer science type).

But if I'm not mistaken, there's no such thing as a 2C speed, or a 2C 
closing
of separation between two objects. A speed can only be measured
from some reference frame, say one travelling with one of the objects 
(say A),
and from that frame no other object (say B) can be observed to be 
closing at faster than C.

Similarly, if we're measuring the approach speed of A from our reference 
frame
that is travelling with B, we can never observe A approaching at greater 
than C.
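For concreteness, here is a quick numerical sketch (my addition, not part
of the original post) of the standard relativistic velocity-addition
formula, which is the reason no observed approach speed ever reaches C:

```python
# Relativistic velocity addition, in units where c = 1.
# If, in a third frame, A and B each move toward the other at u and v,
# the approach speed that A itself measures for B is
# (u + v) / (1 + u*v/c^2), which stays below c for any u, v < c.

def observed_approach_speed(u, v, c=1.0):
    """Approach speed of B as measured from A's rest frame."""
    return (u + v) / (1.0 + (u * v) / c**2)

print(observed_approach_speed(0.9, 0.9))   # ~0.9945, not 1.8
print(observed_approach_speed(0.5, 0.5))   # 0.8, not 1.0
```

Even two objects each doing 0.9c relative to us close on each other, as
either of them measures it, at well under C.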

I'm not really sure how this relativistic stuff impinges on the rest of 
your argument.

I've always held out the weirdness of what happens to the concept of 
speed at
high speeds as an example of the limited domain of applicability of 
every concept
or idea. I.e. speed only makes sense at low speeds, paradoxically enough.

Similarly, color wouldn't make sense below the size of wavelengths of 
light,
etc.

What this tells us is that words (terms), e.g. "speed", "color", 
"right-wing zealot", make
sense only within delineated contexts. (E.g. the latter term is probably 
hard to apply to
slugs, but then again... ok, it is really hard to apply to rocks 
sensibly.) Words are
descriptions which arguably only make sense within a theory (in the 
formal-logic sense),
or at most within a closely related cluster of similar theories; 
theories being possibly
large but finite self-consistent logical descriptions of lots of 
things and the relationships between
those things.

Every theory has a domain of discourse that it can be said to be 
about. It may be
a very broad domain of discourse, but there will always be perfectly 
valid and coherent
other concepts and theories whose domains of discourse bear no relationship
(or no essential relationship)  whatsoever to the domain of discourse of 
the first theory.





Re: 2C Mary - How minds perceive things and not things

2003-06-04 Thread Eric Hawthorne
Colin Hales wrote:

The real question is the ontological status of the 'nothing' in that
last sentence. I am starting to believe that the true nature of the
'fundamental' beneath qualia is not only about the 'stuff', but is
actually about all of it. That is, the 'stuff' and the 'not stuff'.
So. Anyone care to comment on the ontological status of 'not thing'?
 

I believe our brains and minds are difference engines.

What they do is respond, in a feedback loop with perceptual signals, in 
such a way as to
continually sort things by the single rule of "this is more different 
from that than it is from that other thing",
and to represent that comparative level of difference in a compact way 
that can be stored and retrieved
quickly.

In other words, the brain organizes its internal representation of 
what's "out there" so
that the "more different, less different" relations between 
representational symbols in the brain
come as close as possible to mirroring the "more different, less 
different" relations among chunks
of reality. Objects in the world, for example, are individuated (their 
boundaries with other objects
determined, and thus the extent over which their identity applies) on the 
basis of a rigorously
mathematical, and simple, algorithm: "these are the best clusters of 
all kinds of similarities,
and their boundaries are where the most differences (of many kinds) occur."
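A toy illustration of that clustering rule (my own sketch, not the
poster's algorithm): individuate two "objects" in one-dimensional sense
data by cutting at the point of greatest difference.

```python
def individuate(readings):
    """Split 1-D readings into two clusters at the biggest gap --
    a toy version of 'boundaries are where the most differences occur'."""
    ordered = sorted(readings)
    gaps = [ordered[i + 1] - ordered[i] for i in range(len(ordered) - 1)]
    cut = gaps.index(max(gaps)) + 1          # boundary at the largest gap
    return ordered[:cut], ordered[cut:]

# Two obvious clusters of similar readings, bounded by the big gap:
left, right = individuate([1.2, 5.0, 1.0, 5.3, 1.1, 5.1])
print(left, right)   # [1.0, 1.1, 1.2] [5.0, 5.1, 5.3]
```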

This individuation by difference-measurement applies equally well when 
turned inward on itself
to create abstract theories of abstract domains (e.g. higher math and 
logic, language about thoughts).

I would contend that notions like abstraction into 
generalization-specialization hierarchies of
noun and verb (thing and relationship) concepts emerge 
spontaneously if you simply
mix a "represent the differences" principle with an "achieve the most 
compact representation" principle.

So what does all this musing about conceptualization of the world have 
to do with the world
(universe) itself, or what that universe really is ? That's a hard one.

The best I could come up with is that the multiverse or plenitude is 
the capacity for
all differences and configurations of differences to manifest 
themselves. Most parts of that
will be ungrokable by brains like ours. Only those parts whose organized
configurations of differences exhibit space-time-like locality, 
energy, matter etc.,
behaving within limits that allow the formation of emergent systems of 
bigger, observable,
simple configurations of differences, will be observable universes (to 
difference-engine brains
like ours that were lucky enough to emerge as one of those emergent 
systems in a
hospitable energy regime).

Or Whatever.





Re: Constraints on everything existing

2003-01-22 Thread Eric Hawthorne
My comment at the bottom of the message.
  Eric

Jean-Michel Veuillen wrote:


Eric Hawthorne wrote: 



Unless a world (i.e. a sequence of information state changes)
has produced intelligent observers though, there will be
no one around in it to argue whether it exists or not.



Then our universe did not exist before there were intelligent 
observers in it,
which is not true.

I think that it is better to say that all self-consistent mathematical 
structures exist.
To restrict existence to universes containing SASs (self-aware 
structures)
is not only very cumbersome but leads to contradictions.


Perhaps we're just quibbling about terminology.

My argument for a narrower definition of "exists" would be that
if everything (or even just everything self-consistent) exists, then
perhaps existence in that sense is not that interesting a concept.

So I posit that a better definition of "exists", or "classically exists",
is: self-consistent, and metric, and organized to the degree needed to be 
observable.

Notice that this does not require "is observed". It requires "would
be observable if observers happened to be around". So our Earth 3 billion
years ago was still observable in this sense, even though we weren't 
there yet.

So, in other words, I define "exists" as:
that which is an aspect of a structure which is of such a form/behaviour 
as to
be, in principle, observable.

I think we will be able to define a set of properties (stronger than just
self-consistency) that will define "in principle, observable" -- a 
difficult exercise.

All other self-consistent mathematical structures are, to me, just 
potentially or
partially existent, because there is something wrong with their properties
that would make them, in principle, unobservable.

Vague statement building up this intuition:
the operative question is whether a mathematical structure can only be
abstract (without observable instantiation) or whether it can also be 
"tract".

I would argue that these other less-than-existent
self-consistent mathematical structures may be part of quantum 
potentiality
but can never be part of  an existent world that exhibits classical physical
properties.

Eric


Re: Constraints on everything existing

2003-01-17 Thread Eric Hawthorne
John M wrote:


Eric:

do I detect in your 'circumstances' some 'anthropocentric/metric/logic' 
restrictions? Is the multiverse exclusively built according to the system 
we devised on this planet as 'our physical laws'? (Your 'factor' #1, 
although you included in factor #2 the (CLASSICAL existence) modifier.)

Brings to mind Mr Square's opponents in Abbott's Flatland, 
with the 2-D vs 3-D joke.
 

It may seem that way (anthropocentric), but when I say "intelligent
observer" I mean any kind of intelligent observer, or, couched
in somewhat more terminology, any emergent system or pattern
that functions as an intelligent observer.

So no, I'm not talking about a human-centric anthropic principle,
I'm talking about an arbitrary intelligent observer, generically
defined. As you would expect, I would guess that there are
some pretty tight constraints on how an intelligent observer
would have to function to be considered such, but human is
definitely too narrow a definition of it.

I see intelligent observer production as being 
a threshold level of organization achieved by certain
constraint regimes on all sequences of state changes.

Of course, as a thought experiment, you could set a lower 
threshold criterion for fully existing worlds, such as 
the ability to be organized enough to produce 
some interesting (non-trivial) stable emergent systems
that seem to exhibit some higher-level functions
including self-preserving functions.

Unless a world (i.e. a sequence of information state changes)
has produced intelligent observers though, there will be
no one around in it to argue whether it exists or not.

Which brings us around to the conclusion that after all,
the question of classical existence or not of some world
is only ever a concern of intelligent observers. It is
not really a concern for the non-thinking aspects of
worlds or potential worlds, precisely because those parts
are content to just be, or maybe be, as the case may be.
Those parts are just the potential for information.
Only when something comes along that cares to conceptualize
about the various possibilities borne of different states
of information, does there arise a question of existence,
and then, it is a question of existence from the perspective
of those that can observe and care about such things.


Counter to a simple SWI Fermi argument

2003-01-14 Thread Eric Hawthorne
On the likelihood of detecting alien intelligences:
(single-world case)

1. It is an enormously stupid conceit of us to assume that
aliens would be broadcasting, or tightbeaming, something like
analog radio signals for communication.
We ourselves have only been doing that for 100 years,
and will cease doing it before the next 100 are up,
having switched to a combination of closed fibre-optic and
massively spread-spectrum (i.e. noise-like) digital radio.


2. We have not built Dyson spheres, nor are we likely
to. There were a number of crazy megaproject engineering
fantasies that we had for the first few short years after
we discovered how to build with reinforced concrete, and
Dyson spheres were one of them. (As were those incredibly
ugly but functional 60s and 70s concrete skyscrapers: the
first crude phalluses erected using a new but not completely
mastered building technique.)

I'd like to think that we have a slightly more refined
sense of megaproject risk analysis now that will prevent
us doing quixotic projects like Dyson spheres.


3. We can barely detect planets the size of Jupiter around
nearby stars today. Why would we be able to detect non-radiating
Dyson spheres? Wouldn't we mistake them for black holes at the
least?

4. The life span of a higher mammal species (clade, actually, i.e.
tree of derived species, i.e. branch of evolution) 
like ours is estimated in biology to be 5 to 10 million years,
and we're a significant way through our tenure, so we'd
better hurry up sending out those self-replicating V-ger
robot probes all over the place for them to be detected a 
million years hence. We'll probably be gone (as a species
and clade) by the time the reply arrives.


Re: Possible Worlds, Logic, and MWI

2003-01-11 Thread Eric Hawthorne
Re: possible worlds in logic.

Logic (and its possible worlds semantics) 
says nothing (precise) about external reality.
Logic only says something about the relationship of 
symbols in a formal language.

Remember that the reason non-sloppy mathematicians
use non-meaningful variable-names (i.e. terms) is
to avoid names that connote something in the world
and would lead one astray in understanding the precise
formal semantics of the mathematical formulae.

e.g. of problematic meaningful variable names:

one = 2.
two = 2.
four = 4.
therefore, one + two = four.
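Made runnable, the example shows the point: the deduction is formally
valid even though every name lies about its content (a sketch of my own):

```python
# Misleading but formally consistent: the names suggest 1 + 2 = 4,
# yet the underlying values make the arithmetic perfectly sound.
one = 2
two = 2
four = 4
assert one + two == four   # really 2 + 2 == 4
print("valid, despite the names")
```

The formal semantics are inarguable; only the connotations of the names
lead the reader astray.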

This strict anonymous symbols interpretation
is how one must treat formal logic and propositions
expressed in formal logic too. Every time
I read someone bemoaning how logic has difficulty with
expressing what is going to happen in future, I think,
why would you expect a formal system of symbols to have
anything to do with future time in reality?

As far as I know, there is no good formulation of
a formal connection between a formal system and "reality
-- unbalanced quotes, the secret
cause of asymmetry in the universe. How's that for a
quining paragraph?

Is there? For example, truth is defined in formal 
logic with respect to, again, formal models with an infinite
number of formal symbols in them. It is not defined with respect
to some vague correspondence with external reality.

Someone was writing about correspondence theory
with this goal in mind many years back, and that sounded
interesting. I haven't read Tegmark et al. What do they say
about the formalities of how mathematics extends to 
correspond to, or to be? external reality? To me, there is
still a huge disconnect there. 

E.g. again, Godel's incompleteness
theorem is a theorem about the properties and limitations
of formal symbolic systems. The original theorem says nothing 
whatsoever about reality itself, whatever that may informally be,
nor about the limitations of human minds, unless we take minds
to be theorem provers working on formal symbolic systems.

 
Re: Possible Worlds, Logic, and MWI

2003-01-11 Thread Eric Hawthorne
Interleaving...

POINT 1


 For example, truth is defined in formal logic with respect to, 
again, formal models with an infinite
number of formal symbols in them. It is not defined with respect
to some vague correspondence with external reality.


Actually, science is just about such correspondences with external 
reality.

I haven't argued that logic alone is a substitute for science, 
measurement, experimentation, refutation, correction, adjustment, 
model-building


All I was saying is that the semantics that define the meaning with 
respect to each
other of symbols and symbol-relationships is formal and, within each given
well-formed framework, inarguable.

whereas the semantics of the mapping of formal models to their 
supposed subject is
not, itself, formal (yet anyway), and hence is suspect as to whether we 
understand it or
get it right all the time. With science, all we have is:

this formal symbol system (theory) A
seems to correspond better to our current observations than any competing
formal symbol system (theory) B (that we've conceived of so far), so we'll
consider A (as a whole) to be TRUE, i.e.
the best observation-corresponding theory (for now).
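That selection rule can be written down as a toy computation (my own
illustration; the observations and the candidate theories are invented):

```python
# Pick the 'theory' whose predictions best correspond to observation.
observations = [(1, 1.1), (2, 3.9), (3, 9.2)]   # invented (x, measured y)

def theory_a(x):          # hypothesis A: y = x^2
    return x ** 2

def theory_b(x):          # hypothesis B: y = 2x
    return 2 * x

def misfit(theory):
    """Total squared error of a theory against the observations."""
    return sum((y - theory(x)) ** 2 for x, y in observations)

best = min([theory_a, theory_b], key=misfit)
print(best.__name__)   # theory_a corresponds better -- 'TRUE' for now
```

Nothing in the computation says theory A is true of reality; it only says
A corresponds better than B to the observations we happen to have.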

This scientific process works pretty well
but is somehow loosey-goosey and unsatisfying. Do theories which replace
other older, now discredited theories, keep getting better and better? 
Probably yes.
But what is the limit of that? Is there one? Or a limit in each domain 
about which
we theorize? But hold on, most of the scientific revolutions tell us 
that we had a nice
theory, but were theorizing about a badly-scoped, badly conceptualized 
idea of what
the domain was. A better theory is usually a better set of formal, 
interacting concepts
which map to a slightly (or greatly) differently defined and scoped 
external domain than the
last theory mapped to. None of this is very straightforward at all.

For example, would you go out on a limb and say that Einstein's theories are
the best (and only true) way of modelling the aspects of physics he 
was concerned
with? If so, would you be equally confident that his theories cover 
essentially
all the important issues in that domain? Or might someone else, 
someday, re-conceptualize
a similar but not 100% overlapping domain, and create an even more 
explanatory
theory of fundamental physics than he came up with? Can we ever say for 
sure,
until that either happens or doesn't?

You can interpret the history of science in two ways: either we were 
just really
bad at it back then (in Newton's day) and wouldn't make those kind of 
mistakes
in our theory formation today, or you can say, no we're about as good at 
it as always,
maybe a little more refined in method but not much, and we'll continue 
to get
fundamental scientific revolutions even in areas we see as sacrosanct 
theory today.
And the new theory will not so much disprove the existing one (as 
Einstein
didn't really disprove Newton) but rather will just relegate the 
old theory
to be an approximate description of a partially occluded view of reality.
And then one day, will the same thing happen again to that new theory? Is
there an endpoint? What would the definition of that endpoint be? 


(SILLY) POINT 2

 


As far as I know, there is no good formulation of
a formal connection between a formal system and "reality
-- unbalanced quotes, the secret
cause of asymmetry in the universe. How's that for a
quining paragraph?


I don't understand your secret cause of asymmetry in the universe 
point. We understand some things about symmetry breaking in particle 
physics theories, via gauge theories and the like. If you want more 
than this, you'll have to expand on what you mean here.

It is a Koan (kind of). A self-referential, absurd example of a notion 
that an imbalance in a formal symbol system (the words I'm using, and 
the quotes) could possibly be the cause of
asymmetry in the physical universe. It is an attempt to highlight the 
problems we get into
when we confuse the properties of a model with the properties of the 
thing we are
TRYING to model with it.

Quining is the use of self-reference in sentences, often to achieve 
paradox. It is
a childish ploy. e.g. of a Quine:


"Is not a sentence" is not a sentence.





Re: The Mind (off topic, but then, is anything off topic on this list?)

2002-12-28 Thread Eric Hawthorne
John M wrote:


Eric,

your proposal sounds like: "here I am and here is my mind".
What gave you the idea that the two can be thought of as separate
entities?
The fact that we differentiate between a bowel movement and a thinking
process in philosophy ... does not MAKE them separate
entities. 

Eric's first law of abstraction (known variously as "the trivially 
profound law" or "the profoundly trivial law"):

Every two things are both the same and different.

Bowel movements and mental processes. They are both physical processes 
in the body, it's true.
The difference is that a mental process is
in its essence a process of representation (re-presentation) of 
reality and of similarly
structured potential realities. That is, it is a process of using some 
aspect of the brain as a stand-in
for some aspect of the external world. And it is a process of doing so 
in a way that is flexible and
general enough to allow the generation of representations (mental 
stand-ins) of new hypothetical
or counterfactual states of the external reality, as well as of actual 
states. Thus we can think of how
things we have not directly apprehended might be, and how things that 
haven't yet taken place could be,
if only this would happen, or, sadly but instructively, of how things 
might have been.

Models of the mind:

Back when 90% of the world and its behaviour was unknown and attributed 
to God, the mind and soul
was thought to be an earthbound, temporarily trapped part of the greater 
mind of God.

During the early industrial age, the mind was thought by some to be the 
process of operation of
a machine comprised of cogs and gears and things like steam power.

And today, we believe that the brain is a computer and the mind is 
software.
The conventional wisdom is that this theory is as naive as the earlier 
theories; that we are similarly
deluded by our present-day fetish with computers. But I think this 
dismissal of brain-as-computer, mind-as-software
is facile. I think our theories of mind have been improving over time. 
The brain IS a form of machine, as the 19th century
people thought. And much more specifically, the brain IS a form of 
universal computing machine, as we think today.

Let me ask this. What category of machine do we have that can hold in it 
symbolic representations
that have correspondences with aspects of the external world? What 
category of machine do we have whose
representations of the world are manipulable and malleable in ways that 
can correspond to changes in the
state of the external world? The computer of course. What part of the 
computer stores the representations
of the world? Well I guess we could say its disk and memory. But what 
part of the computer performs the
manipulations on the symbols which sometimes correspond to the formation 
of hypotheses about the state of the
external world? It's not really part of the computer at all. It is the 
software.

The historical thinker who came closest to understanding the nature of 
the mind was undoubtedly Plato,
who first understood a world of abstract concepts, his world of ideals. 
The only thing he didn't know is that
you could build a machine (the computer) capable of holding inside 
itself and manipulating those ideals, and that
in fact we already had a particularly sophisticated form of that type of 
machine on top of our shoulders.
Our brains. Where is Plato's world of ideals? He didn't know. We do. It 
is the representative and abstracted information about the world stored 
symbolically in our brains,
manipulated by the cognitive software running on our brains.

The brain-mind duality is solved (and now officially boring). If you can 
say that you truly understand:
  1) the distinction between computing software and computing hardware,
  2) issues such as what makes one piece of software different from or 
the same as another
  versus what makes one piece of hardware different from versus the 
same as another,
  3) What the relationship of computing software to computing hardware is,
  4) How the essential particulars of high-level software gain 
independence from the particulars of computing hardware through
the construction of hierarchies of levels or layers of software process, 
with emergent behaviours at each level,

and yet you claim not to know what a mind is with respect to a brain, 
then I would say you're just not thinking
hard enough about the issue.





Re: The Mind (off topic, but then, is anything off topic on this list?)

2002-12-27 Thread Eric Hawthorne
See response attached as text file:

Joao Leao wrote:


Both seem to me rather vacuous statements, since we don't
really yet have a theory, classical or quantum or what have you, of what a
mind is or does. I don't mean an empirical, or verifiable, or decidable,
or merely speculative theory! I mean ANY theory. Please show me I
am wrong if you think otherwise.



If you don't like my somewhat rambling ideas on the subject, below, perhaps try
a book by Steven Pinker called "How the Mind Works". It's supposed to be pretty
good. I've got it but haven't read it yet.

Eric

-



What does a mind do?

A mind in an intelligent animal, such as ourselves, does the following:

1. Interprets sense-data and symbolically represents the objects, relationships, 
processes,
and more generally, situations that occur in its environment.
  
  Extra buzzwords: segmentation, individuation,
   "cutting the world with a knife into this and not-this" 
   (paraphrased from Zen & the Art of Motorcycle Maintenance)
 
2. Creates both specific models of specific situations and their constituents,
and abstracted, generalized models of important classes of situations and situation
constituents, using techniques such as cluster analysis, logical induction and 
abduction,
bayesian inference (or effectively equivalent processes).
   
  Extra buzzwords: structure pump, concept formation, episodic memory 
  

3. Recognizes new situations, objects, relationships, processes as being instances
of already represented specific or generalized situations, objects, relationships, 
processes. 

The details of the recognition processes vary across sensory domains, but probably
commonly use things like: matching at multiple levels of abstraction with feedback
between levels, massively parallel matching processes, abstraction lattices.

Extra buzzwords: patterns, pattern-matching, neural net algorithms, 
 constraint-logic-programming, associative recall


4. Builds up, through sense-experience, representation, and recognition processes, 
over time, an associatively interrelated library of symbolic+probabilistic models or 
micro-theories about contexts in the environment.

5. Holds micro-theories in degrees of belief. That is, in degrees of being considered
a good simple, corresponding, explanatory, successfully predictive model of some
aspect of the environment.

6. Adjusts degrees of belief through a continual process of theory extension,
hypothesis testing against new observations, incremental theory revision, assessment of
competing extended theories etc. In short, performs a mini, personalized equivalent
of the history of science forming the evolving set of well-accepted scientific 
theories.

Degree of belief in each micro-theory is influenced by factors such as: 

a. Repeated success of theory at prediction under trial against new observations

b. Internal logical consistency of theory.

c. Lack of inconsistency with new observations and with other micro-theories of 
possibly
identical or constituent-sharing contexts.

d. Generation of large numbers of general and specific propositions which are
deductively derived from the assumptions of the theory, and which are independently
verified as corresponding to observations. 

e. Depth and longevity of embedding of the theory in the knowledge base. i.e.
the extent to which repeated successful reasoning from the theory has resulted in the 
theory becoming a basis theory or theory justifying other extended or analogous 
theories in the knowledge base. 


7. Creates alternative possible world models (counterfactuals or hypotheticals),
by combining abstracted models with episodic models but with variations generated
through the use of substitution of altered or alternative constituent entities,
sequences of events, etc.

 Extra buzzwords: Counterfactuals, possible worlds, modal logic, dreaming 

8. Generates, and ranks for likelihood, extensions of episodic models into the future,
using stereotyped abstract situation models with associated probabilities to predict
the next likely sequences of events, given the part of the situation that has
been observed to unfold so far.
 
9. Uses the extended and altered models, (hypotheticals, counterfactuals), as a context
in which to create and pre-evaluate through simulation the likely effectiveness of 
plans of action designed to alter the course of future events to the material 
advantage of the animal.

10. Chooses a plan. Acts on the world according to the plan, either indirectly, 
through communication with other motivated intelligent agents, or directly by 
controlling its own body and using tools.

10a. Communicates with other motivated intelligent agents to assist it in carrying
out plans to affect the environment:
Aspects of the communication process:
- Model (represent and simulate) the knowledge, motivations and reasoning processes of

Re: Quantum Probability and Decision Theory

2002-12-25 Thread Eric Hawthorne
Stephen Paul King wrote:


it seems to me that if
minds are purely classical when it would not be difficult for us to imagine,
i.e. compute, what it is like to be a bat or any other classical mind. I
see this as implied by the ideas involved in Turing Machines and other
Universal classical computational systems.


Ah, but human thinking is a resource-bounded, real-time computational activity.
Despite the massive parallelism of brain computation, we are of necessity
lazy evaluators of thoughts. If we weren't, we'd all go mad or become
successful zen practitioners. Sure, we do some free-form associative thought,
and ponder connections subconsciously in the background, but if there's one thing
my AI and philosophy studies have taught me, it is that prioritization
and pruning of reasoning are fundamental keys. There are an infinite
number of implications and probability updates that could be explored, given our
present knowledge. But clearly we're only going to do task-directed, motivationally
directed, sense-data-related subsets of those inferences, and a finite amount 
of related associative inference in the background to support those. 
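A minimal sketch of that kind of resource-bounded, prioritized inference
(my own illustration, not a claim about how brains implement it):
candidate thoughts are expanded lazily, best-first, up to a fixed budget,
and everything else is pruned simply by never being generated.

```python
import heapq

def bounded_reasoning(seeds, expand, budget):
    """Explore inferences best-first up to a fixed budget; lower-priority
    lines of thought are pruned by never being expanded."""
    heap = [(-p, fact) for p, fact in seeds]   # max-heap via negated priority
    heapq.heapify(heap)
    explored = []
    while heap and len(explored) < budget:
        _, fact = heapq.heappop(heap)
        explored.append(fact)
        for p, new_fact in expand(fact):
            heapq.heappush(heap, (-p, new_fact))
    return explored

# Each thought suggests two follow-ups, one more promising than the other.
def expand(fact):
    return [(0.5, fact + ".a"), (0.4, fact + ".b")]

print(bounded_reasoning([(1.0, "goal")], expand, budget=4))
# ['goal', 'goal.a', 'goal.a.a', 'goal.a.a.a']
```

The infinite tree of possible implications exists, but only a
task-directed sliver of it ever gets computed.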

Therefore, if nothing else, we can't imagine what it is like to be a bat
because we would have to have the reasoning time and resources to explore all of a bat's
experience to get there. And it would also be difficult and probably impossible, 
because the bat's mind at birth would be preloaded with different firmware instinctive 
behaviours than ours is. Also, the bat's mind would be connected to a different
though analogous set of nerves, sense organs, and motor control systems, 
and to a differently balanced neurochemical emotional (reasoning prioritization) system. 

Regarding emulating another person's experience. The trouble is, again, that you'd
have to emulate all of it from (before) birth, because clearly our minds are built 
up of our individual experiences and responses to our environment, and our own 
particularly skewed generalizations from those, as much as from anything else.
And again, you'd have to compensate for (emulate) the subtle but vast differences in the firmware
of each person's brain as it came out of the womb. It's an impossible project in
practical terms, even if the brains are Turing equivalent, which they are.

You don't need to resort to QM to explain the difficulty of emulating other minds.
It's simply a question of combinatorics and vast complexity and subtlety of firmware, 
experience and knowledge. 

Remember on the other hand that human linguistic communication only communicates
tips of icebergs of meaning explicitly in the words, and assumes that the utterer
and the reader/listener share a vast knowledge, belief and experience base, and
have similar tendencies toward conjuring up thinking contexts in response to the
prodding of words. (Words are to mentally stored concepts as URLs are to documents).

In order to communicate, we do have to emulate (imagine) our target audience's 
thought patterns and current thinking context and emotional state, so that we can know 
which sequence of words is likely to direct their thoughts and
feelings thus and so as we wish to direct them.

Eric

Quantum Omni-Presents

2002-12-24 Thread Eric Hawthorne
It is well known that a classical Santa Claus is not possible, because,
even with the best travelling salesperson algorithm at his disposal, 
Santa would have to travel faster than the speed of light to deliver presents
to every household on Earth on Christmas Eve or morning, even considering the
rotation of the planet and different time zones.

Isn't the answer obvious? Santa Claus, using the isolation of the North Pole
to get some peace and quiet eleven months a year to do serious research (the twelfth
month being unfortunately filled with squealy, violin-heavy Christmas music piped
through the North Pole mansion on tinny speakers), long ago cracked the mysteries
of a TOE and harnessed the power of quantum entanglement to achieve simultaneous
present delivery, and even, to the very observant (those likely to be disturbed
in their sleep by a stirring mouse), to appear in every house, if in shadowy form
and only for a moment.
The significance of the chimney in this theory is yet to be determined. Perhaps
it is the most massive object in the house on which a lock can be obtained, so
that the presents aren't accidentally deposited on top of sleeping residents or
out in the back yard. There are more mysteries to be solved here, clearly.




Which is more interesting? Complexity or Simplicity?

2002-11-30 Thread Eric Hawthorne
Wolfram is fascinated by the generation of complexity and randomness 
from simple
rules, and sees this as a fundamental and unexpected observation.

(As a long-time programmer, I'm puzzled by his surprise at this. My bugs 
often have
a complex and seemingly random nature, even in programs thought to be 
trivially simple. ;-)

But seriously, we were taught in 3rd or 4th year comp sci. that if your 
computing
system can do IFs, LOOPs, and SUBROUTINE CALLS (or equivalent), it can 
compute
anything that can be computed (anything that can be computed using a 
finite number of
computing steps operating on finite data, that is). It is a universal 
computer.

It is not really surprising at all to a programmer that some simple
combinations of IFs, LOOPs and SUBROUTINE calls can start to generate
asymmetric output, which, when fed back in as input data, can lead to
non-linear systems and complexity, and even randomness, in a hurry.

-
Wolfram criticizes current scientific theories, almost all based on 
simple mathematical
equations, as being able to model only the simple and regular aspects of 
systems.
These aspects, he seems to imply, might in many cases not be the most 
interesting
aspects of the systems. We are only describing those aspects, and those 
particular
systems, he says, because simple regularities are all that our 
pathetically limited
mathematical equation toolbox allow us to describe. And there is so much
more interesting complexity to the world, which cellular automata can 
better
emulate.
-
But what if, in general, irregular complexity is boring, and it is only 
really
fundamental simplicities, and emerged simplicities, that are interesting?
What if mathematical-equation-based science was right all along?

Alright. Overly simple arrangements might be a little dull (limited in
capacity for interesting properties or behaviours) too.

What if there is an interesting range of complexity for a system: a system
characterised by enough simplicity and order to ensure some regular
structures (identifiable system components, hierarchical organization of
components) and regular behaviours, but with enough constrained complexity of
interaction between components to make the system capable of a range
of non-trivial behaviour and interaction with other systems or components?

Is this a kind of system that is only of interest to us with our particular
human interests? Or is there anything more fundamentally important about
systems with particular levels or arrangements or mixes of order and 
complexity?

Are there, for example, any general rules about the mix of simplicity, 
order,
and complexity (arrangements of entropy) that can produce higher-level
emerged systems which may have properties of  being identifiable,
sustainable or recurring, instrumental in even higher level systems etc.

This is way out there stuff vaguely sketched. I know.

In any case, I tend to agree with Kurzweil's criticism of Wolfram: that Wolfram
doesn't focus enough on the issue of how we find rules that produce the
emergence of higher-level order (simplicities, but with enough mobility to be
interesting). Wolfram, he says, focusses purely on the generation of
arbitrary complexity, and that's only part of the picture.

Re: The universe consists of patterns of arrangement of 0's and 1's?

2002-11-27 Thread Eric Hawthorne
Stephen Paul King wrote:


Dear Russell,

   Neat! I have been thinking of this idea in terms of a very weak
anthropic principle and a communication principle. Roughly these are:
all observations by an observer are only those that do not contradict the
existence of the observer, and any communication is only that which is
mutually consistent with the existence of the communicators. I will read
your paper again. ;-)


Yes! That's exactly it.

Now how about this: observers are not constrained to observe a single path
through potential-state space, but rather are constrained to observe
(and communicate via) only one of the paths (or all of the paths) that remain
consistent with existence. So there is room for (a limited form of) free will
and limited observation of quantum uncertainty in these theories, if necessary.

Eric





Is emergence real or just in models?

2002-11-27 Thread Eric Hawthorne
I'm in the camp that thinks that emergent systems are real phenomena, and
that eventually, objective criteria could be established that would
allow us to say definitively whether an emerged system existed in some
time and place in the universe.

I think the criteria would have to do with factors such as:

1. There is something systematic in evidence

There is simplicity and regularity of structure and/or behaviour, when some
view (with some granularity level and some inclusion/exclusion of aspects
of the phenomenon) is taken of the system.

Aside: Note that although the taking of the abstracted view of the 
system is a
modelling/representation task, the fact that the real system conforms
to (yields experimental observations consistent with) some simple abstract
and systematic model is not a model-domain phenomenon; it is a real
phenomenon. If it were not, we could not expect the real system to behave
according to the model. If the real system does behave according to the model,
then clearly some of its real properties correspond exactly to the
abstract model's properties.

2.  The systematic aspects of the phenomenon emerged.

The system emerged from configurations of subsystems or constituent parts
which, having so configured themselves, give rise to the systematic form
or behaviour of the whole.

3. The simple, regular, and systematic form or behaviour of the whole,
having so emerged, is generic and self-defining, in terms of simplicities,
regularities, etc., and so could, theoretically at least, have emerged from
other constituents, by other means, in other contexts.
-

Some of the interesting related questions are:

Why do higher-level systems emerge in our universe? Is there something about
some systems that allows the system and its constituent parts to out-compete
alternative configurations of matter and energy?

For self-reproducing, evolving life systems, the answer seems clearly to be
yes. But is it really only the genes that are selected for by natural selection,
or is it also the organism as a whole system, or even whole groups of organisms
which function well together? The fact that humans have outcompeted other
animals of our size by population factors of hundreds or thousands on Earth
may suggest that our complex, co-operative, systematic group behaviours may
themselves be both a means and a subject of evolutionary success. Maybe it is
because humans are so well suited to creating emergent systems (technologies,
adaptive cultural behaviours) at will that we (the constituent parts of those
systems) have outcompeted
our competitors. (Yes, I know we're messing things up royally as we go too,
but that may come home to roost and is a topic for another day and 
another forum.)


Are high-level phenomena such as cultural memes and technological 
developments
also systems which are selected for in a competitive evolutionary process?


But does natural selection also apply to other, non-living emerged systems?

In some cases, the basic forces and properties of matter seem to produce
emergent systems (galaxies, solar systems) which don't so much out-compete
alternative configurations of matter and energy, but are the only possible
configurations of matter and energy, given the rules. Nevertheless, it 
is still
somewhat interesting that they form systems which have obvious simplicities
about their form and behaviour at macro levels. Why should that be so? Any
reason?

Just a random collection of thoughts on the topic.
---

In summary:

- Properties of being systematic, simple, regular are absolute, logical 
properties,
just as the property of being one thing versus being two things is an 
absolute logical
property. However, just because these are logical properties does not 
mean that the
things which manifest those properties (have isomorphic correspondences 
to those
properties, assuming some consistent, regular individuating rules and 
representation rules)
are not real.

- Emergence of high-level systems is real (and may then
be modelled and experimentally verified).

- Emergent systems may have evolutionary advantages
over non-systematic configurations of matter and energy,
which may be one explanation for their prevalence.


Re: The universe consists of patterns of arrangement of 0's and 1's?

2002-11-26 Thread Eric Hawthorne
As I mentioned in an earlier post, titled "Quantum Computational Cosmology",
why don't we assume/guess that the substrate (the fundamental concept of the
universe or multiverse) is simply a capacity for there to be difference, but
also a capacity for all possible differences (and thus necessarily all possible
configurations of differences) to potentially exist?

If we assume that all possible configurations of differences can 
potentially exist
and that that unexplained property (i.e. the capacity to manifest any 
configuration of
differences) is THE nature of the substrate, then
a computation can just be defined as a sequence of states selected from all
of the potential difference-configurations inherent in the substrate.

I don't even think that this notion of a computation requires energy to 
do the
information processing.

My main notion in the earlier post was that some selections of a sequence
of the substrate's potential states will correspond to order-producing
computations (computations which produce emergent structure, systems,
behaviour, etc.).

Such an order-producing sequence of substrate potential-states might be
considered to be the observable universe (because the order generation
in that sequence was adequate to produce complex systems good enough
to be sentient observers of the other parts of that state-sequence).

If we number the states in that selected order-producing sequence of 
substrate
states from the first-selected state to the last-selected state, we have 
a numbering
which corresponds to the direction of the time arrow in that observable 
universe.

My intuition is that the potential-states (i.e. potentially existing 
configurations of
differences) of the substrate may correspond to quantum states and 
configurations
of quantum entanglement, and that selection of meaningful or 
observable sequences
of potential states corresponds to decoherence of quantum states into 
classical
states.   

Eric

Stephen Paul King wrote:

It is the assumption that the 0's and 1's can exist without some substrate that bothers me. If we insist on making such an assumption, how can we even have a notion of distinguishability between a 0 and a 1?
   To me, it's analogous to claiming that Moby Dick exists but that no copies of it exist. If we are going to claim that all possible computations exist, then why is it problematic to imagine that all possible implementations of computations exist as well? Hardware is not an epiphenomenon of software, nor software an epiphenomenon of hardware; they are very different and yet interdependent entities.






Re: emergence (or is that re-emergence)

2002-11-26 Thread Eric Hawthorne
Let me first apologize for not yet having read the mentioned references on
the subject.

John Mikes wrote:

As long as we cannot qualify the steps in a 'process' leading to the
emerged new, we call it emergence, later we call it process.
Just look back into the cultural past: how many emergence-mystiques
(miracles included) changed into regular quotidian processes, simply by
developing more information about them.
I did not say: the information.  Some.


I don't think this is correct.

A fundamental concept when talking about emergence ought to be the
pattern, or more precisely, the interesting, coherent, or perhaps useful 
pattern; useful
perhaps in the sense of being a good building block for some other pattern.
Process is a subset of pattern, in the sense in which I'm using 
pattern. Also,
system is a subset of pattern.

Q:
How do you know when you have completely described a pattern?

Two examples, or analogies, for what I mean by this question:

e.g. 1: I used to wonder whether I had completely proved something in math, and
would go in circles trying to figure out how to know when something was
sufficiently proved or needed more reductionism, i.e. the old
"Wait a minute: how do we know that 1 + 1 = 2?" problem. The gifted
mathematicians teaching me seemed to have no trouble knowing when they were
finished proving something. It was "intuitively obvious" -- a load of
codswallop, of course. And I still wonder to this day if they were simply way
smarter than me or prisoners of an incredibly limited, rote-learned math
worldview. The point is, every theory (every description of states-of-affairs
and processes or systems (patterns) using concepts and relationships) has a
limited domain of discourse, and mixing descriptions of patterns in different
domains is unnecessary and obfuscates the essentials of the pattern under
analysis.

e.g. 2 Is the essence of human life in the domain of DNA chemistry, or 
in the domain
of sociobiology, psychology, cultural anthropology? Are we likely to 
have a future
DNA based theory of psychology or culture? Definitely not. Cellular 
processes and
psychology and culture are related, but not in any essential manner.

A:
Let's define a complete description of a pattern as a description which
describes the essential properties of the pattern. The essential 
properties of the
pattern are those which, taken together, are sufficient to yield the 
defining
interestingness, coherence, or usefulness of  the pattern.

Note that any other properties (of the medium in which the pattern 
lives) are
accidental properties of the incarnation of the pattern.

Note also that  the more detailed mechanisms or sub-patterns which may 
have generated
each particular essential property of the main pattern are irrelevant to 
the creation
of a minimal complete description of the main pattern being described. 
As long as
the property of the main pattern has whatever nature it has to have as 
far as the
pattern is concerned, it simply doesn't matter how the property got that 
way, or
what other humps on its back the property also has in the particular 
incarnation.

And that level-independence or spurious-detail independence or simply
abstractness of useful patterns is one of the reasons why it makes 
sense to talk
about emergence.

e.g.of level-independence of a pattern.

1.  Game of Pong

2a. Visual Basic          2b. Pascal program       2c. Ping-pong table, ball,
    program on a PC           on a Mac                 bats, players

3a. x86 ML program        3b. PowerPC ML program   3c. Newtonian physics of
                                                      everyday objects

4a. voltage patterns in   4b. voltage patterns in
    silicon NAND gates        Gallium Arsenide         (you get the idea)
                              NOR gates

Key:
-
1. The main pattern being described

2, 3, 4. Lower-level i.e. implementation-level or 
building-block-level patterns whose own
internal details are irrelevant to the emergence of the main pattern, 
which emerges
essentially identical from all three of very different lower level 
building-block patterns.
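The level-independence claim can be sketched in code (an illustrative toy of my own, not from the original post): the "main pattern" is a ball bouncing between two walls, described once, and it is indifferent to which lower-level substrate renders it.

```python
# The "main pattern": a ball bouncing between two walls.  The update
# logic never mentions how positions are displayed -- the substrate
# (levels 2a, 2b, 2c ... in the table above) is swappable.

def bounce(pos, vel, lo=0, hi=10):
    """One step of the pattern: advance the ball, reflecting off walls."""
    pos += vel
    if pos <= lo or pos >= hi:
        vel = -vel
        pos = max(lo, min(hi, pos))
    return pos, vel

# Two interchangeable "substrates" for the very same pattern:
def ascii_substrate(pos, hi=10):
    """Render the pattern as a row of characters."""
    return "|" + "." * pos + "o" + "." * (hi - pos) + "|"

def log_substrate(pos, hi=10):
    """Render the pattern as a log line instead."""
    return f"ball at {pos}/{hi}"

if __name__ == "__main__":
    pos, vel = 0, 1
    for _ in range(5):
        pos, vel = bounce(pos, vel)
        print(ascii_substrate(pos))   # or log_substrate(pos) -- same pattern
```

A theory of the bouncing pattern (period, turning points) needs no reference to either rendering substrate, which is the point of the table.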

So in summary, an emergent pattern is described as emergent because it 
emerges,
somehow, anyhow, doesn't matter how, as an abstract, useful, independently
describable pattern (process, system, state-of-affairs). A theory of the 
pattern's essential
form or behaviour need make no mention of the properties of the 
substrate in which the
pattern formed, except to confirm that, in some way, some collection of 
the substrate
properties could have generated or accidentally manifested each 
pattern-essential property.
A theory of form and function of the pattern can be perfectly adequate, 
complete, and
predictive (in the pattern-level-appropriate domain of discourse), 
without making any
reference to the substrate properties.

This is not to say that any substrate can generate any pattern. There 
are constraints,
but they are of many-to-many 

Riffing on Wolfram

2002-11-10 Thread Eric Hawthorne
Any comments? Can anyone point me to similar speculations?

Thanks, Eric



 

A collection of thoughts (very much a work in early progress) 
provoked by chapters 9 and 12 of A New Kind of Science 
by Stephen Wolfram.

---
Caveat: The following was written hastily and in somewhat sloppy,
informal terms, with casual or vague use of some arguably
pseudo-scientific terms, like "de-quantized" or "classicized", by which
I mean something like the process whereby a single state or average
of quantum probabilities seems to take on importance so as to be considered
the actual state of some particle etc. after it is observed.
---

Wolfram postulates that space-time is a network (of nodes and connections),
manipulated by simple programs which have the 
characteristics that:

1. the only thing they do is make local adjustments
to the configuration of the network (e.g. replace a node by 3 nodes joined
by connections, erase a connection etc.)
2. They are order-invariant (causally-invariant he calls it) 
in their global effect. It doesn't matter which time-order the local 
replacement rules fire in.
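Such a local replacement rule can be sketched as a toy network-rewriting step (my own illustrative Python sketch, not Wolfram's actual update rules): replace one chosen node by three mutually connected nodes that inherit its connections.

```python
# Toy network-rewriting step in the spirit of rule 1 above: replace one
# node by a triangle of three new nodes, redistributing the old node's
# connections among them.  An illustrative sketch only, not Wolfram's rules.

def replace_node_by_triangle(edges, node, fresh):
    """edges: set of frozenset node-pairs; fresh: iterator of unused names."""
    a, b, c = next(fresh), next(fresh), next(fresh)
    # keep every edge not touching the replaced node
    new_edges = {e for e in edges if node not in e}
    # the three replacement nodes are mutually connected
    new_edges |= {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}
    # reattach the old node's neighbours round-robin to the new nodes
    neighbours = sorted(n for e in edges if node in e for n in e if n != node)
    for i, nb in enumerate(neighbours):
        new_edges.add(frozenset(((a, b, c)[i % 3], nb)))
    return new_edges

if __name__ == "__main__":
    import itertools
    net = {frozenset(("x", "y")), frozenset(("y", "z"))}
    fresh = (f"n{i}" for i in itertools.count())
    print(replace_node_by_triangle(net, "y", fresh))
```

Because the rule only inspects the edges incident to one node, it is local in exactly the sense of characteristic 1.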

He goes on to begin to prove how relativity, gravity, matter, etc.
work out nicely in such a model.
But he doesn't say what the substrate of the universe network is,
and he cannot yet fit quantum theory into his model,
which got me to thinking:




---
Quantum Computational Cosmology??  - E.H. 2002
---

---
--- The universe is information. More specifically, it is emergent  
--- order within an infinite-bandwidth signal, or in other words, is just
--- a particular, privileged view of all-possible information, all at once.
---


On reading Wolfram's book, and in particular the part about physics as CAs operating on
a network to produce space-time, matter, energy, I was prompted to have the following
ideas. Please excuse the lack of rigour. I'm just trying to convey intuitions here
and get some feedback on whether anyone thinks there's promise in this direction
or if there are other references people can point me to.

These questions arise: 
1. What would the network of nodes and arcs between nodes, in Wolfram's 
   spacetime-as-network be made of? i.e. what is the substrate of Wolfram's 
   universe network?

2. How do we define the time arrow, and what makes the universe 
   appear as it does?  

My essential concepts are these:


Principle 1
-
The substrate is simply (all possible arrangements of differences)
- 
or perhaps put another way, the substrate of the universe is
the capacity for all possible information.


The fundament is the binary difference. Each direct difference is an arc,
and network nodes are created simply by virtue of being the things at either end of
a direct difference.

Let's posit that there is a multiverse, which we can think of as
all possible states of all possible universes, or as the information substrate
of the universe.

An information-theoretic interpretation of the multiverse might say that it is
defined as:

a universe with just one thing and no differences (boring) +
a universe with one difference (ergo, two things) +
all possible configurations of two differences +
all possible configurations of three differences + etc.
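Counted over a fixed, labeled set of individuals (a simplifying assumption of mine; the post does not fix the individuals in advance), the number of configurations of k differences is just the number of ways to choose k pairs, and the configurations themselves can be enumerated directly:

```python
from itertools import combinations
from math import comb

def difference_configurations(n_things, k_differences):
    """Number of ways to place k binary differences (edges) among
    n labeled things: C(C(n, 2), k)."""
    return comb(comb(n_things, 2), k_differences)

def enumerate_configurations(things, k):
    """Explicitly list each configuration of k differences as a set of pairs."""
    pairs = list(combinations(things, 2))
    return [set(cfg) for cfg in combinations(pairs, k)]

if __name__ == "__main__":
    # 3 things, 2 differences: 3 possible pairs, so C(3, 2) = 3 configurations
    for cfg in enumerate_configurations("ABC", 2):
        print(cfg)
    print(difference_configurations(5, 3))   # C(10, 3) = 120
```

The multiverse sum above is then the union of these enumerations over every k, from zero differences upward.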



    -----       A binary difference: the fundamental unit of information.

  A --- B       Two things, A and B, created just by virtue of being defined
                to be at the opposite poles of the binary difference.


To define a particular configuration of the universe, that is, a network
of binary direct-difference relationships between a certain number of
postulated individuals, you can use binary bits, as follows: 
The individual things are denoted A,B,C...
A 1 in the matrix (below left) denotes that a direct difference exists between the
column-labeling individual and the row-labeling individual. 

      E D C B A
   E    1 0 1 1                       B - C
   D      0 0 1                      / \
   C        1 0    equivalent to    A - E
   B          1                      \ /
                                      D
   
   
Every fundamental-level thing that exists is either at the end of a 
direct difference from another thing, or is reachable by some chain of
direct differences from the other thing. Things which are not reachable
by a chain of direct differences from some other thing do not exist.
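This existence-by-reachability criterion is easy to make concrete (my own illustrative sketch; the edges follow one consistent reading of the matrix example above):

```python
# Existence as reachability: a thing "exists" only if some chain of
# direct differences connects it to the rest of the network.

edges = {("A", "B"), ("A", "D"), ("A", "E"),
         ("B", "C"), ("B", "E"), ("D", "E")}

def reachable(start, edges):
    """All things reachable from `start` by chains of direct differences."""
    adjacency = {}
    for u, v in edges:
        adjacency.setdefault(u, set()).add(v)
        adjacency.setdefault(v, set()).add(u)
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nb in adjacency.get(node, ()):
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return seen

if __name__ == "__main__":
    print(sorted(reachable("A", edges)))
    # A node "Z" with no difference chain to the rest fails the
    # existence criterion:
    print("Z" in reachable("A", edges))
```

Here every one of A..E is in a single chain of differences, so all five exist by the criterion; an isolated node would not.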

So why don't we posit that the Wolfram network that describes the form
of spacetime at its smallest-grained (i.e. Planck-length) level is in fact 
comprised of nodes and arcs which have no other reality (no other material 
that they are made of) than binary differences. i.e.