Re: 2C Mary - How minds perceive things and not things

2003-06-08 Thread Bretton Vine
Hi
Been lurking a few weeks, feeling a little overwhelmed by how much
better educated everyone seems on these topics than what I've managed
to absorb through my own casual reading. Internet geek and psychonaut
from South Africa, no formal education. :-)
(Apologies for the length - there's a lot I want to cover)

R Hlywka wrote:
 Think of a brain more than just an intake valve, reacting
 to similar stuff, and not so similar stuff.

Yes, it may be more than an input valve in the sense that it's an
information-processing organ, but it's still largely an automation
organ governed by rules, filters and prior knowledge (genetic, or
learned through bumps on the head as we stumble). A key factor here is
the automation element. I don't have the reference at hand (though it's
available online), but at least one researcher (whose name escapes me)
theorises that as humans we are 99.999% automation, the only real
stream of conscious thought being the 0.001% clean slate we start with
once you take away the predisposed starting conditions of genetics and
nurture.

My point may not be directly relevant to the thread, but it's useful to
remember in the context of the simulation argument. It's much easier to
code an automated system (even a self-learning one) than it is to code
conscious thought. Given enough processing power and computational
cycles, it's possible to accept that the output may so closely resemble
conscious thought that it can be defined as such, even if it's only
self-evident to the program itself (i.e. as a starting condition: when
X = Y, consider self as Z(en) :-)
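
To make the "easier to code automation than consciousness" point
concrete, here's a toy Python sketch (my own illustration, every name
in it hypothetical): a handful of stimulus -> response rules plus one
starting condition that tags a particular input as 'self'. Nothing in
it is conscious, but from the outside its responses can look purposeful.

# A toy rule-driven "automation" agent. The self-condition is just
# another rule supplied as a starting condition ("when X = Y, consider
# self as Z").
class RuleAgent:
    def __init__(self, rules, self_condition):
        self.rules = rules                    # stimulus -> response map
        self.self_condition = self_condition  # starting condition for "self"

    def react(self, stimulus):
        if self.self_condition(stimulus):
            return "that's me"                # self-reference as a plain rule
        return self.rules.get(stimulus, "ignore")  # default for the unknown

agent = RuleAgent(
    rules={"food": "approach", "pain": "withdraw"},
    self_condition=lambda s: s == "mirror image",
)
print(agent.react("pain"))           # withdraw
print(agent.react("mirror image"))   # that's me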

 But that is merely predisposed grow pattern.

There is an interesting paradox here though. While we could write off
everything we think/do/feel as merely iterations of an existing
predisposed grow pattern, and treat all external input to that pattern
as the same process occurring in external things, we are still faced
with the perception (illusion?) that we are conscious and that we can
influence the direction in which our lives move forward (or consciously
adjust our pattern-matching techniques to better suit survival).

Then there is the added element of randomness or perhaps entropy ...
even in a 100% controlled simulation experiment, with every possible
starting condition being accounted for, something new can still enter
the equation. In turn this leads to more knowledge about starting
conditions, which affects the next simulation, which in turn gives rise
to a new random element.

What is this random element exactly? I don't know (other than calling
it unlearned knowledge). It seems the more we learn about the universe,
the more questions we don't have answers for. In this sense, an
infinite number of simulation programs serves only as a filter for
discovering a more complete set of starting conditions for each
iteration (assuming you have an existing 'reality' to compare the
simulations to).
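
As a toy illustration of that "simulations as a filter for missing
starting conditions" idea, here's a purely hypothetical Python sketch
(the numbers and names mean nothing in themselves): each run exposes a
gap against an assumed reference 'reality', and the gap is folded back
into the starting conditions for the next run.

import random

def run_simulation(conditions, noise=0.1):
    # output depends on the known conditions plus an irreducible random element
    return sum(conditions.values()) + random.gauss(0, noise)

reality = 10.0                  # the assumed 'reality' we compare against
conditions = {"gravity": 3.0}   # deliberately incomplete starting conditions

for iteration in range(5):
    result = run_simulation(conditions)
    gap = reality - result      # what the simulation failed to account for
    # fold part of the discovered gap back in as a new starting condition
    conditions[f"discovered_{iteration}"] = gap * 0.5
    print(iteration, round(result, 2), round(gap, 2))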

Anyway, my point is that any predisposed grow pattern is just a filter
technique for finding out what has been missed from the initial starting
conditions, even if the predisposed grow pattern is an entirely random
biological process. (life continues even if the organisms don't)

 but this is where you get a smart
 galaxy. it can learn to filter out what it feels it does not need to PAY
 ATTENTION TO.

It's much harder to consciously learn filtration patterns than it is
to learn them unconsciously. Just observe any child growing up - they
take on more of their parents' behavioral patterns through unconscious
duplication than anything else. (Kind of the direct opposite of "Do as
I say, not as I do".)

The fact that an organ can replicate the filtration processes of other
organs in its sphere of influence does not mean it's smart. Parrots can
fool people into thinking they're conscious of the meaning of the words
they replicate, but they're not much smarter than any animal that
realises it can have an easy ride to food and protection. I'd say
that's more instinct than 'smart'. Ditto for humans. Instinctively we
replicate the behavioral patterns and learnings of the other humans
within our sphere of influence, because if we don't we get rejected (or
in some cases killed) for not resembling our family/tribe/community
closely enough.

(You want cultural diversity - come to South Africa. No matter how PC
you may feel you are, integration is a massively difficult task for any
species, even within the species itself. Even a conscious attempt
demonstrates just how preconceived our thinking patterns really are.)

 We code our memory by the continuous rearrangement of pathways.

This is an entirely unconscious process, born of millions of years of
accidental combinations of chemicals and environment. Working just on a
sort of game-theory principle, even the simplest of single-celled
organisms would find it beneficial to colocate as a multicellular
organism, for the simple purpose of reducing the number of functions
each cell has to perform on its own.
Re: a prediction of the anthropic principle/MWT

2003-06-08 Thread Bretton Vine
John Collins wrote:
 For instance, our planet might have
 experienced an unusually high number of 'near misses' with other
 astronomical bodies.

I'm always amused by the sense of deja vu that occurs on mailing
lists. There I was looking at the moon, thinking how lucky we are that
it caught a number of astronomical bodies instead of us, only to come
to the computer and find the same or a similar topic being brought up.
Sit on enough mailing lists and it soon becomes apparent that the same
or similar things get thought of (and at times communicated) by enough
people for it to seem more than mere coincidence.

(Brings to mind the research being done as to whether a mass
concentration of thought can actually affect the outcome of a random
computational process - with some successes already being demonstrated)

See the Global Consciousness Project http://noosphere.princeton.edu for
more information on this.

 Now that we're here to watch, the universe will be
 forced to obey the law of averages, so there could be a significantly
 higher probability of a deadly asteroid collision than would be
 indicated by the historical frequency of said events. Perhaps we should
 carefully compare how often the other planets have been hit with how
 often we have: They certainly look more craterful

See above :-)
What if, on a purely unconscious level, we can manipulate reality itself?

If, in a set of prepared trials, a number of people concentrating on a
single number (all the same number) can cause a computational random
number generator to be statistically less than random for the duration
of that 'group' thought, then maybe the same process can apply beyond
merely influencing an electrical process. Maybe it can be extended to
matter itself?
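
For what it's worth, here is roughly what "statistically less than
random" would mean in practice - a crude Python sketch of my own, not
the Global Consciousness Project's actual analysis: take the block of
RNG bits gathered during the 'group thought' period and measure how far
their mean drifts from the 0.5 a fair source should give.

import math
import random

def z_score(bits):
    # deviation of the observed mean from 0.5, in standard errors
    n = len(bits)
    mean = sum(bits) / n
    std_err = 0.5 / math.sqrt(n)   # standard deviation of the mean for fair bits
    return (mean - 0.5) / std_err

trial = [random.randint(0, 1) for _ in range(10000)]
print(round(z_score(trial), 2))    # |z| > ~3 would be a notable deviation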

Perhaps there are two ways of looking at it:
a) in any universe which gives rise to complex organisms (perhaps
sentient), there is a statistically lower-than-average rate of
astronomical collisions compared to other bodies in the same region,
leading to the argument that the lower rate of collisions allowed
complex organisms to form
b) in any universe which gives rise to complex organisms, the number of
astronomical collisions will decline in proportion to the complexity of
the biological organisms present (even if that means sending organisms
into space to reroute potential collisions :-P)
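
If anyone wanted to actually run the comparison John suggests, a
back-of-the-envelope version might look like the Python sketch below
(the counts and areas are placeholders, not real crater data): treat
impacts as a Poisson process per unit area and ask whether two bodies'
cratering rates differ by more than chance.

import math

def poisson_rate_z(count_a, area_a, count_b, area_b):
    # normal approximation to the difference of two Poisson rates
    rate_a, rate_b = count_a / area_a, count_b / area_b
    variance = count_a / area_a**2 + count_b / area_b**2
    return (rate_a - rate_b) / math.sqrt(variance)

# hypothetical counts over the same exposure window
z = poisson_rate_z(count_a=180, area_a=3.8e7,   # a "Moon-like" body
                   count_b=120, area_b=5.1e8)   # an "Earth-like" body
print(round(z, 2))   # a large |z| would hint the rates really do differ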

Bretton
-- 
Cellular: +27.82.494.6902  Yahoo: bretton_cubed  ICQ: 175753755
GPG key : http://bretton.hivemind.net/bretton_public.key
 trends::nu-media::techno-philosophy::ai

I suppose the secret to happiness is learning to appreciate the
moment. -Calvin





Re: Response to R.Hlywka's brain/mind comments

2003-06-08 Thread Bretton Vine
Eric Hawthorne wrote:
 I don't remember claiming (maybe someone else claimed) you could copy a
 mind.

It's a dream for many transhumanists :-) and I reckon it will be
possible to the extent that external observers can no longer tell the
difference, even if the copied mind itself knew it wasn't the original.[1]

A more interesting question is:
Can you reconstruct one?

Assuming you had enough prior knowledge of (relevant) starting
conditions, would the simulation give rise to the same mind in each
instance?

Or, assuming you had collected all possible information at point A
and, based on the information at A + n, could work backwards and
determine A - n: in such a case, could you reconstruct the mind itself,
if you had the ability to simulate the matter, its location, and the
biological instances of matter combinations?
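
To make the "work backwards from A + n to A - n" idea concrete, here is
a toy Python sketch of my own (not a claim about how brains or physics
actually work): a second-order reversible update rule in which knowing
two consecutive states lets you step the system forward or backward
exactly, so the state at A - n is fully recoverable from the
information held at A + n.

MASK = 0xFF  # an 8-bit toy "universe"

def f(state):
    # any fixed deterministic rule will do; here, rotate the bits left by one
    return ((state << 1) | (state >> 7)) & MASK

def step_forward(prev, curr):
    return curr, f(curr) ^ prev    # next = f(curr) XOR prev

def step_backward(curr, nxt):
    return f(curr) ^ nxt, curr     # prev = f(curr) XOR next

pair = (0x3C, 0x5A)                # complete information at point A
original = pair
for _ in range(10):                # run n steps forward...
    pair = step_forward(*pair)
for _ in range(10):                # ...then recover A by stepping backward
    pair = step_backward(*pair)
print(pair == original)            # True: the earlier state is recoverable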

Bretton
[1] Over a period of time, replace each neuron with a non-biological
equivalent until all neurons are no longer biological. Snap in an RJ45
and #cp brain /dev/hda

-- 
Cellular: +27.82.494.6902  Yahoo: bretton_cubed  ICQ: 175753755
GPG key : http://bretton.hivemind.net/bretton_public.key
 trends::nu-media::techno-philosophy::ai

Reality is that which, when you stop believing in it, doesn't go away.
- Philip K. Dick, How to Build a Universe





Re: are we in a simulation?

2003-06-08 Thread George Levy
We exist in an infinite number of simulations. Any arbitrary number of
simulations less than infinity would require a reason. We are led to
this conclusion by assuming a TOE which by definition has no a priori
reason. (This is the philosophical rationale for postulating the
plenitude.)

Discreteness may be important in our world for the development of
consciousness, but it is certainly not necessary across worlds. I
believe therefore that the differences between the simulations are
infinitesimal - not discrete - and therefore that the number of
simulations is infinite, like the continuum.


Not only is the number of simulations infinite, but the number of
levels of simulation may also be infinite. The levels are discrete - I
cannot imagine how they could be otherwise.

Given the above, let's consider one particular conscious being. His
awareness of his own states is likely to be uncertain. Another way of
saying this is that several state transitions could generate the same
consciousness stream. Modeling the state transitions as an algorithm,
for example, there may be multiple algorithmic paths that could
generate the same output.
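
A trivial Python sketch of that last point (a toy example, nothing
more): two different state-transition algorithms, with entirely
different internal states, emitting the same observable output stream,
so the stream alone cannot tell you which algorithm produced it.

def machine_a(n):
    state = 0
    for _ in range(n):
        state += 2               # internal path: counting by twos
        yield state % 10         # observable output

def machine_b(n):
    state = 5
    for _ in range(n):
        state = (state + 1) % 5  # internal path: counting modulo five
        yield (state * 2) % 10   # observable output

print(list(machine_a(8)))
print(list(machine_b(8)))
print(list(machine_a(8)) == list(machine_b(8)))  # True: same stream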


Hence his consciousness will have "thickness across the multi-worlds,"
overlapping a set of multi-worlds each differing slightly from the
others. How many multi-worlds will it overlap? An infinite number,
since they differ as in a continuum. Every time a "measurement" is
made, the set of worlds spanned by this consciousness is defined more
narrowly, but the number in the set remains infinite. In addition, each
simulation in the set need not belong to the same "level."
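
A tiny numeric sketch of "narrower but still infinite" (a toy model
only, in Python): represent the set of worlds compatible with a
consciousness as a real interval; each "measurement" keeps one half, so
the width shrinks, yet any interval of positive width still contains a
continuum of points.

import random

low, high = 0.0, 1.0                  # the worlds spanned, as an interval
for measurement in range(20):
    mid = (low + high) / 2
    # each "measurement" keeps only the half consistent with its outcome
    low, high = (low, mid) if random.random() < 0.5 else (mid, high)
    print(measurement, high - low)    # width shrinks, never reaching zero here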

We're faced with the strange possibility that the consciousness spans
an infinite number of simulations distributed over widely different
levels. Each individual simulation implementation becomes infinitesimal
and unimportant in comparison with the whole infinite set of
implementations that the consciousness covers. A particular simulation
that stops operating (for example because the plug is pulled) will
hardly affect or be missed by the consciousness as a whole. In fact I
rather think of the "simulations" as static states in the plenitude,
and consciousness as a locus in the plenitude linking these states in a
causally and logically significant manner. We live in the plenitude,
not in any particular simulation. Each point in the conscious locus
perceives the world that gives it meaning.

George