Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Stathis Papaioannou
On 28/02/2008, John G. Rose [EMAIL PROTECTED] wrote:

 Actually, a better way to do it, since getting even just the molecules right
 is a wee bit formidable - you need a really powerful computer with lots of
 RAM. Take some DNA and grow a body double in software. Then create an
 interface from the biological brain to the software brain and then gradually
 kill off the biological brain, forcing the consciousness into the software
 brain.

  The problem with this approach, naturally, is that to grow the brain in RAM
 requires astronomical resources. But ordinary off-the-shelf matter holds so
 much more digital memory than modern computers do. You have to convert matter
 into RAM somehow. For example, one cell with DNA is how many gigs? And cells
 cost a dime a billion. But the problem is that molecular interaction is too
 slow and clunky.

Agreed, it would be *enormously* difficult getting a snapshot at the
molecular level and then doing a simulation from this snapshot. But as
a matter of principle, it should be possible.




-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Stathis Papaioannou
On 29/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:

 By equivalent computation I mean one whose behavior is indistinguishable
  from the brain, not an approximation.  I don't believe that an exact
  simulation requires copying the implementation down to the neuron level, much
  less the molecular level.

How do you explain the fact that cognition is exquisitely sensitive to
changes at the molecular level?



-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Stathis Papaioannou
On 29/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:

  4. The embodied, planned, personalized Turing test.  Communication is not
  restricted to text.  The machine is planted in the skull of your clone.  Your
  friends and relatives have to decide who has the carbon-based brain.

  Level 4 should not require simulating every neuron and synapse.  Without the
  constraints of slow, noisy neurons, we could use other algorithms.  For
  example, low level visual processing such as edge and line detection would
  not need to be implemented as a 2-D array of identical filters.  It could be
  implemented serially by scanning the retinal image with a window filter.
  Fine motor control would not need to be implemented by combining thousands of
  pulsing motor neurons to get a smooth average signal.  The signal could be
  computed numerically.  The brain has about 10^15 synapses, so a
  straightforward simulation at the neural level would require 10^15 bits of
  memory.  But cognitive tests suggest humans have only about 10^9 bits of
  long-term memory, suggesting that a more compressed representation is
  possible.

  In any case, level 1 should be sufficient to argue convincingly either that
  consciousness can exist in machines, or that it does not exist in humans.

I agree that it should be possible to simulate a brain on a computer,
but I don't see how you can be so confident that you can throw away
most of the details of brain structure with impunity. Tiny changes to
neurons which make no difference to the anatomy or synaptic structure
can have large effects on neuronal behaviour, and hence on whole-organism
behaviour. You can't leave this sort of thing out of the model and
hope that it will still match the original.
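
To put the figures quoted above in perspective, here is a minimal
back-of-the-envelope sketch in Python; the one-bit-per-synapse assumption
and the byte conversions are purely illustrative, not anything established
in this thread:

# Rough storage estimates for the figures quoted above (illustrative only).
SYNAPSES = 10**15   # approximate synapse count quoted above
LTM_BITS = 10**9    # long-term memory estimate quoted above

def bits_to_bytes(bits):
    """Convert a bit count to bytes."""
    return bits / 8

# Assuming, for the sake of argument, one bit of state per synapse:
naive_bytes = bits_to_bytes(SYNAPSES)       # ~1.25e14 bytes, about 125 TB
compressed_bytes = bits_to_bytes(LTM_BITS)  # ~1.25e8 bytes, about 125 MB

print(f"Naive one-bit-per-synapse store: ~{naive_bytes / 1e12:.0f} TB")
print(f"10^9-bit behavioural store:      ~{compressed_bytes / 1e6:.0f} MB")
print(f"Implied compression factor:      ~{SYNAPSES // LTM_BITS:,}x")

Even under that crude assumption the gap is a factor of a million, which is
the gap the argument turns on, whichever side of it one takes.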




-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-27 Thread Stathis Papaioannou
On 27/02/2008, John G. Rose [EMAIL PROTECTED] wrote:

 Well if you spend some time theorizing a model of a brain digitizer that
 operates within known physics constraints, it's not an easy task getting just
 the molecular and atomic digital data. You have to sample over a period of
 time and space using photons and particle beams. This in itself interferes
 with the sample. Then, say this sample is reconstructed within a theoretically
 capable computer: the computer will most likely have to operate in slow time
 to simulate the physics of all the atoms and molecules, as the computer is
 itself constrained by the speed of light. I'm going this route because I
 don't think that it is possible to get an instantaneous reading of all the
 atoms in a brain; you have to reconstruct over time and space. THEN, this is
 ignoring the subatomic properties - and forget about quantum data sample
 digitization; I think it is impossible to get an exact copy.

  So this leaves you with a reconstructed approximation. Exactly how much of
 this would be you is unknown, because any subatomic and quantum properties of
 you are started from scratch - this includes any macroscopic and
 environmental properties of subatomic, quantum and superatomic molecular
 state and positioning effects. And if the whole atomic-level model is started
 from scratch in the simulator it could disintegrate or diverge as it is all
 force-fitted together. Your copy is an approximation, and it is unknown how
 close it actually is to you, or whether you could even be put together
 accurately enough in the simulator.

There are some who think that all you need to simulate a brain (and
effectively copy a person) is to fix it, slice it up, and examine it
under a microscope to determine the synaptic structure. This is almost
certainly way too crude: consider the huge difference to cognition
made by small molecules in tiny concentrations, such as LSD, which do
no more than slightly alter the conformation of certain receptor
proteins on neurons by binding to them non-covalently. On the other
hand, it is equally implausible to suppose that you have to get it
right down to the subatomic level, since otherwise cosmic rays or
changing the isotope composition of the brain would have a major
effect, and they clearly don't.




-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-27 Thread Stathis Papaioannou
On 28/02/2008, John G. Rose [EMAIL PROTECTED] wrote:

 I don't know if you can rule out subatomic and quantum. There seems to be 
 more and more evidence pointing to an amount of activity going on there. A 
 small amount of cosmic rays don't have obvious immediate gross effects but 
 interaction is occurring. Exactly how much of it would need to be replicated 
 is not known. You could be missing out on important psi elements in 
 consciousness which are taken for granted :)

  Either way it would be approximation unless there was some way using 
 theoretical physics where an exact instantaneous snapshot could occur with 
 the snapshot existing in precisely equivalent matter at that instant.

Well, maybe you can't actually rule it out until you make a copy and
see how close it has to be to think the same as the original, but I
strongly suspect that getting it right down to the molecular level
would be enough. Even if quantum effects are important in
consciousness (and I don't think there is any clear evidence that this
is so), these would be generic quantum effects, reproduced by
reproducing the molecular structure. Transistors function using
quantum level effects, but you don't need to replace a particular
transistor with a perfect copy to have an identically functioning
electronic device.


-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-25 Thread Stathis Papaioannou
On 26/02/2008, John G. Rose [EMAIL PROTECTED] wrote:

 There is an assumed simplification tendency going on that a human brain could 
 be represented as a string of bits. It's easy to assume but I think that a 
 more correct way to put it would be that it could be approximated. Exactly 
 how close the approximation could theoretically get is entirely unknown.

It's not entirely unknown. The maximum simulation fidelity that would
be required is at the quantum level, which is still finite. But
probably this would be overkill, since you remain you from moment to
moment despite changes in your brain which are gross compared to the
quantum level.




-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-23 Thread Stathis Papaioannou
On 24/02/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:

  Does 2+2=4 make a sound when there is no one around?
  
Yes, but it is of no consequence since no one can hear it. However, if
we believe that computation can result in consciousness, then by
definition there *is* someone to hear it: itself.
  

 But it's still of no 'consequence', no?

Of no consequence as far as anything at the level of the substrate of
its implementation is concerned, no. In order to find such a
computation hidden in noise we would have to do the computation all
over again, using conventional means. But unless we require that the
computation interact with us, that should make no difference to *it*.
If the computation simulates an inputless virtual reality with
conscious inhabitants, they should be no less conscious for the fact
that we can't talk to them.




-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku [EMAIL PROTECTED] wrote:

  By the way, I think this whole tangent was actually started by Richard
  misinterpreting Lanier's argument (though quite understandably given
  Lanier's vagueness and unclarity). Lanier was not imagining the
  amazing coincidence of a genuine computer being implemented in a
  rainstorm, i.e. one that is robustly implementing all the right causal
  laws and the strong conditionals Chalmers talks about. Rather, he was
  imagining the more ordinary and really not very amazing coincidence of
  a rainstorm bearing a certain superficial isomorphism to just a trace
  of the right kind of computation. He rightly notes that if
  functionalism were committed to such a rainstorm being conscious, it
  should be rejected.

Only if it is incompatible with the world we observe.





-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku [EMAIL PROTECTED] wrote:
 On 2/20/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:
   On 21/02/2008, John Ku [EMAIL PROTECTED] wrote:
  
 By the way, I think this whole tangent was actually started by Richard
 misinterpreting Lanier's argument (though quite understandably given
 Lanier's vagueness and unclarity). Lanier was not imagining the
 amazing coincidence of a genuine computer being implemented in a
 rainstorm, i.e. one that is robustly implementing all the right causal
 laws and the strong conditionals Chalmers talks about. Rather, he was
 imagining the more ordinary and really not very amazing coincidence of
 a rainstorm bearing a certain superficial isomorphism to just a trace
 of the right kind of computation. He rightly notes that if
 functionalism were committed to such a rainstorm being conscious, it
 should be rejected.
  
   Only if it is incompatible with the world we observe.


 I think that's the wrong way to think about philosophical issues. It
  seems you are trying to import a scientific method to a philosophical
  domain where it does not belong. Functionalism is a view about how our
  concepts work. It is not tested by whether it is falsified by
  observations about the world.

  Or if you prefer, conceptual analysis does produce scientific
  hypotheses about the world, but the part of the world in question is
  within our own heads, something that we ourselves don't have
  transparent access to. If we had transparent access to the way our
  concepts work, the task of cognitive science and philosophy and along
  with it much of AI would be considerably easier. Our best way of
  testing these hypotheses at the moment is to see whether a proposed
  analysis would best explain our uses of the concept and our conceptual
  intuitions.

Functionalism at least has the form of a scientific hypothesis, in
that it asserts that a functionally equivalent analogue of my brain
will have the same mental properties. Even though in practice it isn't
empirically falsifiable we can examine it to make sure it is
internally consistent, compatible with observed reality, and in
keeping with the principle of Occam's razor. We should certainly be
wary of a theory that sounds ridiculous, but unless it fails in one of
these three areas it is wrong to dismiss it.




-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Stathis Papaioannou
On 19/02/2008, John Ku [EMAIL PROTECTED] wrote:

 Yes, you've shown either that, or that even some occasionally
 intelligent and competent philosophers sometimes take seriously ideas
 that really can be dismissed as obviously ridiculous -- ideas which
 really are unworthy of careful thought were it not for the fact that
 pinpointing exactly why such ridiculous ideas are wrong is so often
 fruitful (as in the Chalmers article).

It doesn't sound so strange when you examine the distinction between
the computation and the implementation of the computation. An analogy
is the distinction between a circle and the implementation of a
circle.

It might be objected that it is ridiculous to argue that any irregular
shape looked at with the right transformation matrix is an
implementation of a circle. The objection is valid under a non-trivial
definition of implementation. A randomly drawn perimeter around a
vicious dog on a tether does not help you avoid getting bitten unless
you have the relevant transformation matrix and can do the
calculations in your head, which would be no better than having no
implementation at all but just instructions on how to draw the
circle de novo.
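
As a toy illustration of why such an interpretation is trivial, here is a
minimal Python sketch (the random shape and the point-by-point mapping are
invented for the example): it "implements" a unit circle on top of an
arbitrary irregular boundary only by building a transformation that already
contains the circle.

import math
import random

# An arbitrary "irregular shape": random radii at evenly spaced angles.
random.seed(1)
angles = [2 * math.pi * k / 100 for k in range(100)]
shape = [(r * math.cos(a), r * math.sin(a))
         for a, r in ((a, random.uniform(0.5, 2.0)) for a in angles)]

# The "transformation" that reads the shape as a unit circle: for each
# point, record the factor that rescales it onto the circle. All of the
# information about the circle lives in this mapping, not in the shape.
def interpretation(points):
    return [1.0 / math.hypot(x, y) for (x, y) in points]

scale = interpretation(shape)
as_circle = [(x * s, y * s) for (x, y), s in zip(shape, scale)]

# Every "implemented" point now lies on the unit circle...
assert all(abs(math.hypot(x, y) - 1.0) < 1e-9 for x, y in as_circle)
# ...but the mapping was constructed from the answer we already had, so
# the irregular shape did none of the work - which is the sense in which
# the implementation is trivial.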

Thus, implementation is linked to utility. Circles exist in the
abstract as platonic objects, but platonic objects don't interact with
the real world until they are implemented, and implemented in a
particular useful or non-trivial way. Similarly, computations exist as
platonic objects, such as Turing machines, but don't play any part in
the real world unless they are implemented. There is an abstract
machine adding two numbers together, but this is of no use to you when you
are doing your shopping unless it is implemented in a useful and
non-trivial way, such as in an electronic calculator or in your brain.

Now, consider the special case of a conscious computation. If this
computation is to interact with the real world it must fulfil the
criteria for non-trivial implementation as discussed. A human being
would be an example of this. But what if the computation creates an
inputless virtual world with conscious inhabitants? Unless you are
prepared to argue that the consciousness of the inhabitants is
contingent on interaction with the real world there seems no reason to
insist that the implementation be non-trivial or useful in the above
sense. Consciousness would then be a quality of the abstract platonic
object, as circularity is a quality of the abstract circle.

I might add that there is nothing in this which contradicts
functionalism, or for that matter geometry.



-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Stathis Papaioannou
On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

 Sorry, but I do not think your conclusion even remotely follows from the
 premises.

 But beyond that, the basic reason that this line of argument is
 nonsensical is that Lanier's thought experiment was rigged in such a way
 that a coincidence was engineered into existence.

 Nothing whatever can be deduced from an argument in which you set things
 up so that a coincidence must happen!  It is just a meaningless
 coincidence that a computer can in theory be set up to be (a) conscious
 and (b) have a lower level of its architecture be isomorphic to a rainstorm.

I don't see how the fact that something happens by coincidence is by itself
a problem. Evolution, for example, works by means of random genetic
mutations some of which just happen to result in a phenotype better
suited to its environment.

By the way, Lanier's idea is not original. Hilary Putnam, John Searle,
Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper
cited by Kaj Sotala in the original thread -
http://consc.net/papers/rock.html) have all considered variations on
the theme. At the very least, this should indicate that the idea
cannot be dismissed as just obviously ridiculous and unworthy of
careful thought.




-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Stathis Papaioannou
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

 The last statement you make, though, is not quite correct:  with a
 jumbled up sequence of episodes during which the various machines were
 running the brain code, the whole would lose its coherence, because input
 from the world would now be randomised.

 If the computer was being fed input from a virtual reality simulation,
 that would be fine.  It would sense a sudden change from real world to
 virtual world.

The argument that is the subject of this thread wouldn't work if the
brain simulation had to interact with the world at the level of the
substrate it is being simulated on. However, it does work if you
consider an inputless virtual environment with conscious inhabitants.
Suppose you are now living in such a simulation. From your point of
view, today is Monday and yesterday was Sunday. Do you have any
evidence to support the belief that Sunday was actually run
yesterday in the real world, or that it was run at all? The simulation
could have been started up one second ago, complete with false
memories of Sunday. Sunday may not actually be run until next year,
and the version of you then will have no idea that the future has
already happened.

 But again, none of this touches upon Lanier's attempt to draw a bogus
 conclusion from his thought experiment.


  No external observer would ever be able to keep track of such a
  fragmented computation and as far as the rest of the universe is
  concerned there may as well be no computation.

 This makes little sense, surely.  You mean that we would not be able to
 interact with it?  Of course not:  the poor thing will have been
 isolated from meaningful contact with the world because of the jumbled up
 implementation that you posit.  Again, though, I see no relevant
 conclusion emerging from this.

 I cannot make any sense of your statement that as far as the rest of
 the universe is concerned there may as well be no computation.  So we
 cannot communicate with it anymore?  That should not be surprising,
 given your assumptions.

We can't communicate with it so it is useless as far as what we
normally think of as computation goes. A rainstorm contains patterns
isomorphic with an abacus adding 127 and 498 to give 625, but to
extract this meaning you have to already know the question and the
answer, using another computer such as your brain. However, in the
case of an inputless simulation with conscious inhabitants this
objection is irrelevant, since the meaning is created by observers
intrinsic to the computation.

Thus if there is any way a physical system could be interpreted as
implementing a conscious computation, it is implementing the conscious
computation, even if no-one else is around to keep track of it.
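
To make the "already know the question and the answer" point concrete, here
is a minimal Python sketch; the raindrop states and the lookup table are
invented for illustration: an arbitrary sequence of random states can be
read as an abacus computing 127 + 498 = 625, but only via a mapping built
from a computation we have already done by conventional means.

import random

# Arbitrary "raindrop" states: one random pattern per time step.
random.seed(0)
rain = [tuple(random.random() for _ in range(5)) for _ in range(3)]

# The computation we want to "find" in the rain, done conventionally first.
trace = [(127, 498, None), (127, 498, 127 + 498), (None, None, 625)]

# The after-the-fact interpretation: a lookup table pairing each raindrop
# state with a step of the trace. All the computational content is here.
interpretation = dict(zip(rain, trace))

# "Reading off" the answer from the rainstorm now works...
assert interpretation[rain[-1]][2] == 625
# ...but only because we ran the addition ourselves to build the table;
# the rain cannot tell us 127 + 498 before we already know the answer.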



-- 
Stathis Papaioannou



Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Stathis Papaioannou
On 17/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

 Lanier's rainstorm argument is spurious nonsense.

That's the response of most functionalists, but an explanation as to
why it is spurious nonsense is needed. And some, such as Hans Moravec,
have actually conceded that the argument is valid:

http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html



-- 
Stathis Papaioannou



Re: [singularity] AI critique by Jaron Lanier

2008-02-15 Thread Stathis Papaioannou
On 16/02/2008, Kaj Sotala [EMAIL PROTECTED] wrote:

 However, despite what is claimed, not every physical process can be
 interpreted to do any computation. To do such an interpretation, you
 have to do so after the fact: after all the raindrops have fallen, you
 can assign their positions formal roles that correspond to
 computation, but you can't *predict* what positions will be assigned
 what roles ahead of time - after all, they are just randomly
 falling raindrops. You can't actually *use* the rainstorm to compute
 anything, like you could use a computer - you have to first do the
 computation yourself, then assign each state of the rainstorm a
 position that corresponds to the steps in your previous computation.

Sure, you can't interact with the raindrop computation, but that
doesn't mean it isn't conscious. Suppose a civilization built a
computer implementing a virtual environment with conscious
inhabitants, but no I/O. The computer is launched into space and the
civilization is completely destroyed when its sun goes nova. A billion
years later, the computer is found by another civilization which
figures out how the power supply works and starts it up, firing the
virtual inhabitants into life. As far as the second civilization is
concerned, the activity in the computer could mean anything or
nothing, like the patterns in a rainstorm.

Just as the space of all possible rainstorms contains one that is
isomorphic with any given computer implementing a particular program,
so the space of all possible computers that an alien civilization
might build contains one that is isomorphic with any sufficiently
large rainstorm. It doesn't matter that the manual for the computer
represented by the rainstorm has been lost, or that the computer was
never actually built: all that matters for the program to be
implemented is that it rain.



-- 
Stathis Papaioannou



Re: [singularity] Wrong focus?

2008-02-01 Thread Stathis Papaioannou
On 01/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:

 The philosophical conceit that we do not really know that there is a table
 (or a penis) in front of us, is just that - a fanciful conceit. It shows
 what happens when you rely on words and symbols as your sole medium of
 intellectual thought - as philosophers mainly do.

 In reality, you have no problem knowing and being sure of those objects and
 the world around you  - except in exceptional circumstances. Why? Two
 reasons.

 First, all sensations/perceptions are continually being unconsciously tested
 for their reality -  a process which I would have thought every AI/robotics
 person would take for granted. Hence your brain occasionally thinks: was
 that really so-and-so I saw?...or: where exactly in my foot *is* that
 pain? Your unconscious brain has had problems checking some perception.

 Secondly, your brain works by *common sense* perception and testing. We are
 continually testing our perceptions with all our senses and our whole body.
 You don't just look at things, you reach out and touch them, smell them,
 taste them, and confirm over and over that your perceptions are valid. (Also
 it's worth pointing out that since you are continually moving in relation to
 objects, your many different-angle shots of them are continually tested
 against each other for consistency). Like a good journalist, you check more
 than one source. Your perceptions are continually tested in a deeply
 embodied way - and in general v. much in touch with reality.

I'm not suggesting that there is any reason to believe there is no
real world out there. What I am saying is that *if* the world you
perceive were due to computer-generated data at an arbitrarily high
level of resolution fed into your brain, it would respond in the same
way as if it were in an intact body interacting with a real
environment and you would have no way of knowing what was going on.
Thus your claim that it is *impossible* for an intelligence to
function in a virtual environment is false. (The weaker claim that it
might be easier for an intelligence to develop and function in a real
environment using a robot body, for example because this is
computationally cheaper than building a virtual environment of
comparable richness, may yet have merit.)

The other point I was trying to make is that even if the world is
real, the picture of the world your brain creates from sensory data is
an abstraction that exists only in the computational space that is
your mind. The map is not the territory.



-- 
Stathis Papaioannou



Re: [singularity] Wrong focus?

2008-01-31 Thread Stathis Papaioannou
On 31/01/2008, Mike Tintner [EMAIL PROTECTED] wrote:
 I think the question should reverse - I and every (most?) creature can
 distinguish between a real and a virtual environment. How on earth can a
 virtual creature make the same distinction? How can it have a body, or a
 continuous sense of a body? How can it have a continous map of the world,
 with a continuous physical sense of up/down, forward/back,
 heaviness/lightness?  And a fairly continuous sense of time passing? How can
 it have a self? How can it have continuous (conflicting) emotions coursing
 through its body? How can it also have a continuous sense of its energy and
 muscles - of zest/apathy, strength/weakness, awakeness/tiredness? How can it
 have a sense of its posture, and muscles tight or loose?

The fact is, you are already living in a virtual environment. Your
brain creates a picture of the world based on sensory data. You can't
*really* know what a table is, or even that there is a table there in
front of you at all. All you can know is that you have particular
table-like experiences, which seem to be consistently generated by
what you come to think of as the external object table. There is no
way to be certain that the picture in your head - including the
picture you have of your own body - is generated by a real external
environment rather than by a computer sending appropriately high
resolution signals to fool your brain:

http://en.wikipedia.org/wiki/Brain_in_a_vat




-- 
Stathis Papaioannou



Re: [singularity] Wrong focus?

2008-01-28 Thread Stathis Papaioannou
On 29/01/2008, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 On Jan 28, 2008 4:36 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:

  Are you simply arguing that an embodied AI that can interact with the
  real world will find it easier to learn and develop, or are you
  arguing that there is a fundamental reason why an AI can't develop in
  a purely virtual environment?

 I think the answer to the above is obvious, but the more interesting
 question is whether it even makes sense to speak of a mind
 independent of some environment of interaction, whether physical or
 virtual.

Could that just mean in the limiting case that one part of a physical
object is a mind with respect to another part?




-- 
Stathis Papaioannou



Re: [singularity] Wrong focus?

2008-01-28 Thread Stathis Papaioannou
On 29/01/2008, Mike Tintner [EMAIL PROTECTED] wrote:

 The latter. I'm arguing that a disembodied AGI has as much chance of getting
 to know, understand and be intelligent about the world as Tommy - a deaf,
 dumb and blind and generally sense-less kid, that's totally autistic, can't
 play any physical game let alone a mean pinball, and has a seriously
 impaired sense of self (what's the name for that condition?) - and all
 that is even if the AGI *has* sensors. Think of a disembodied AGI as very
 severely mentally and physically disabled from birth - you wouldn't do that
 to a child, why do it to a computer?  It might be able to spout an
 encyclopaedia, show you a zillion photographs, and calculate a storm but it
 wouldn't understand, or be able to imagine/ reimagine, anything.

How can you tell the difference between sensory input from a real
environment and that from a virtual environment?




-- 
Stathis Papaioannou



Re: [singularity] I feel your pain

2007-11-05 Thread Stathis Papaioannou
On 06/11/2007, Don Detrich [EMAIL PROTECTED] wrote:

 Will an AGI with no bio-heritage be able to feel our pain, have empathy? If
 not, will that make it less conscious and more dangerous?

Empathy is just another function of the brain, like visual perception.
Neurons involved in empathic feeling do not contain special
non-computable components absent in the neurons of the visual cortex.




-- 
Stathis Papaioannou



Re: [singularity] CONJECTURE OR TRUTH

2007-10-26 Thread Stathis Papaioannou
On 26/10/2007, albert medina [EMAIL PROTECTED] wrote:
 Dear Sir or Madam,

 The human brain knows that it does not know.  This very fact prompts it to
 act to know, producing the innovation which has brought us to this moment in
 time.  This is because it derives all of its vital energy from a source
 beyond itself. . .above it and infinitely more subtle than it.  It cannot
 tap into that source by its own efforts because the source is non-material
 and is an energy which cannot be measured by any means.  Ironically, the
 source only supplies raw energy. . .the brain may do with it what it likes
 (free will, constructive or destructive).

 The brain is not alive, nor is it conscious.

 It borrows all of its vital energy from the source mentioned above.  That
 source is the reservoir of consciousness, beyond physics (of any type) and
 beyond metaphysics.  Again, it cannot be measured by any material
 instrument.

 Man cannot produce this original, vital energy.  Consciousness is not of
 man. . .it is used by man (and woman) and every lifeform known.  It is
 immortal, ubiquitous and unknown.

 The exoteric must confront the esoteric, but it will always be defeated
 because the former is an effect and the latter is the cause.

 Sincerely,

 Albert

You do realise that most readers of this list will regard what you
have just written as nonsense not even worth rebutting? I don't intend
this to be denigrating, just a description of your audience.



-- 
Stathis Papaioannou



Re: [singularity] John Searle...

2007-10-26 Thread Stathis Papaioannou
On 26/10/2007, Allen Majorovic [EMAIL PROTECTED] wrote:

 It seems to me that Mr. Searle is suggesting that because some people
 (intelligences) are cooks, i.e. work from a set of rules they don't
 understand, this somehow proves that chemists, i.e. people who *do*
 understand the set of rules, don't, or can't, exist. If the guy with the
 book of rules in his lap doesn't have to understand Chinese to do the
 translations, does the guy who wrote the book of rules have to under Chinese
 in order to write it?

Searle would probably say that the person who sets up the Chinese Room
has to understand Chinese, but the person in the room does not. This
is true, but as has been pointed out previously, it is possible for
the system to understand Chinese while the individual components of
the system do not. Individual neurons in a Chinese speaker's brain
understand even less of the process they participate in than the
person in the Chinese Room does, yet the brain as a whole understands
Chinese.





-- 
Stathis Papaioannou



Re: [singularity] Re: CEV

2007-10-26 Thread Stathis Papaioannou
On 26/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:

 If you build an AGI, and it sets out to discover the convergent desires
 (the CEV) of all humanity, it will be doing this because it has the goal
 of using this CEV as the basis for the friendly motivations that will
 henceforth guide it.

 But WHY would it be collecting the CEV of humanity in the first phase of
 the operation?  What would motivate it to do such a thing?  What exactly
 is it in the AGI's design that makes it feel compelled to be friendly
 enough toward humanity that it would set out to assess the CEV of humanity?

 The answer is:  its initial feelings of friendliness toward humanity
 would have to be the motivation that drove it to find out the CEV.

 The goal state of its motivation system is assumed in the initial state
 of its motivation system.  Hence: circular.

You don't have to assume that the AI will figure out the CEV of
humanity because it's friendly; you can just say that its goal is to
figure out the CEV because that is what it has been designed to do,
and that it has been designed to do this because its designers have
decided that this a good way to ensure behaviour which will be
construed as friendly.

I don't see that the CEV goal would be much different to creating an
AI that simply has the goal of obeying its human masters. Some of the
instructions it will be given will be along the lines of "AI, if you
think I'll be really, really upset down the track if you carry out my
instructions (where you determine what I mean by 'really, really
upset' using your superior intellect), then don't carry out my
instructions." If there are many AIs with many human masters,
averaging out their behaviour will result in an approximation of the
CEV of humanity.




-- 
Stathis Papaioannou



Re: [singularity] Towards the Singularity

2007-09-13 Thread Stathis Papaioannou
On 13/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:

  I think the usual explanation is that the split doubles the number
  of universes and the number of copies of a brain. It wouldn't make any
  difference if tomorrow we discovered a method of communicating with
  the parallel universes: you would see the other copies of you who have
  or haven't observed the atom decay but subjectively you still have a
  50% chance of finding yourself in one or other situation if you can
  only have the experiences of one entity at a time.

 If this is true, then it undermines an argument for uploading.  Some assume
 that if you destructively upload, then you have a 100% chance of being the
 copy.  But what if the original is killed not immediately, but one second
 later?

In that case, you have a 50% chance of ending up the original and 50%
chance of ending up the copy. If you end up the original, you then
have a 100% chance of dying, which I think of as the inability to
anticipate any future experiences.

My preferred way of looking at these questions is to acknowledge that
there is no self persisting through time in any absolute sense, but
rather a set of observer moments which are only contingently related.
One instance of me considers certain other instances past selves and
certain other instances future selves. The future selves' experiences
are anticipated while the past selves' experiences are not even though
both have equal claim to being me. Worse, the future selves'
experiences are anticipated even if they occur in the actual present
or past, as in a block universe or in a simulation running backwards.

If we attempt to impose the naturally evolved sense of self onto
unnatural scenarios involving uploading and duplication, the result is
what I have been trying to describe in terms of survival and
subjective probabilities. I don't actually believe that my self
somehow transfers into my upload; I know that as a matter of fact, I
will die. But neither do I believe that in everyday life my self
transfers into the next instantiation of my brain with every passing
moment. I am willing to admit that I live only transiently and the
sense of a persisting self is a kind of illusion. However, I would
like that illusion to continue in the same way as long as possible,
and destructive uploading will do just that.

 These problems go away if you don't assume consciousness exists.  Then the
 question is, if I encounter someone that claims to be you, what is the
 probability that I encountered your copy?

I can ask the same question for myself: if I find myself thinking I am
me, what is the probability that I am the copy?



-- 
Stathis Papaioannou



Re: [singularity] Towards the Singularity

2007-09-10 Thread Stathis Papaioannou
On 10/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:

  No, it is not necessary to destroy the original. If you do destroy the
  original you have a 100% chance of ending up as the copy, while if you
  don't you have a 50% chance of ending up as the copy. It's like
  probability if the MWI of QM is correct.

 No, you are thinking in the present, where there can be only one copy of a
 brain.  When technology for uploading exists, you have a 100% chance of
 becoming the original and a 100% chance of becoming the copy.

It's the same in no collapse interpretations of quantum mechanics.
There is a 100% chance that a copy of you will see the atom decay and
a 100% chance that a copy of you will not see the atom decay. However,
experiment shows that there is only a 50% chance of seeing the atom
decay, because the multiple copies of you don't share their
experiences. The MWI gives the same probabilistic results as the CI
for any observer.

   So if your brain is a Turing machine in language L1 and the program is
    recompiled to run in language L2, then the consciousness transfers?  But
    if the two machines implement the same function but the process of writing
    the second program is not specified, then the consciousness does not transfer
   because it is undecidable in general to determine if two programs are
   equivalent?
 
  It depends on what you mean by implements the same function. A black
  box that emulates the behaviour of a neuron and can be used to replace
  neurons one by one, as per Hans Moravec, will result in no alteration
  to consciousness (as shown in David Chalmers' fading qualia paper:
  http://consc.net/papers/qualia.html), so total replacement by these
  black boxes will result in no change to consciousness. It doesn't
  matter what is inside the black box, as long as it is functionally
  equivalent to the biological tissue. On the other hand...

 I mean implements the same function in that identical inputs result in
 identical outputs.  I don't insist on a 1-1 mapping of machine states as
 Chalmers does.  I doubt it makes a difference, though.

Chalmers' argument works for identical outputs for identical inputs.

 Also, Chalmers argues that a machine copy of your brain must be conscious.
 But he has the same instinct to believe in consciousness as everyone else.  My
 claim is broader: that either a machine can be conscious or that consciousness
 does not exist.

I think Chalmers' claim is that either a machine can be conscious or
else some sort of weird substance dualism is the case.

I'm not sure I understand what you mean when you say consciousness
does not exist. Even if it's just an epiphenomenon, nothing but what
it feels like to process certain kinds of information, there is a
sense in which it exists. Otherwise it's like saying multiplication
doesn't exist because it's just repeated addition.





-- 
Stathis Papaioannou



Re: [singularity] Uploaded p-zombies

2007-09-10 Thread Stathis Papaioannou
On 10/09/07, Vladimir Nesov [EMAIL PROTECTED] wrote:

 As a final paradoxical example, if implementation Z is nothing, that
 is, it comprises no matter or information at all, there still is a
 correspondence function F(Z)=S which supposedly asserts that Z is X's
 upload. There can even be a feature extractor (which will have to implement
 functional simulation of S) that works on an empty Z. What is the
 difference from subjective experience simulation point of view between
 this empty Z and a proper upload implementation?

A profound point that anyone who believes in computationalism has to
address. The only way I can think of to keep computationalism and
remain consistent is to drop the thesis that consciousness supervenes
on physical activity. Rather, we can say that consciousness is a
Platonic object that supervenes on an abstract machine, with physical
activity such as that of brains or computers being simply a
realization of an abstract machine, not actually contributing or
detracting from the measure of a particular consciousness, since you
can't change the measure of an abstract mathematical object by having
more or fewer physical examples of it. This would leave no place for a
concrete physical world: everything we see is a subset of all possible
simulations running on an abstract machine. Certainly, this is weird,
but the alternative would seem to be that the mind is not Turing
emulable.



-- 
Stathis Papaioannou



Re: [singularity] Towards the Singularity

2007-09-10 Thread Stathis Papaioannou
On 11/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:

   No, you are thinking in the present, where there can be only one copy of a
   brain.  When technology for uploading exists, you have a 100% chance of
   becoming the original and a 100% chance of becoming the copy.
 
  It's the same in no collapse interpretations of quantum mechanics.
  There is a 100% chance that a copy of you will see the atom decay and
  a 100% chance that a copy of you will not see the atom decay. However,
  experiment shows that there is only a 50% chance of seeing the atom
  decay, because the multiple copies of you don't share their
  experiences. The MWI gives the same probabilistic results as the CI
  for any observer.

 The analogy to the multi-universe view of quantum mechanics is not valid.  In
 the multi-universe view, there are two parallel universes both before and
 after the split, and they do not communicate at any time.  When you copy a
 brain, there is one copy before and two afterwards.  Those two brains can then
 communicate with each other.

I think the usual explanation is that the split doubles the number
of universes and the number of copies of a brain. It wouldn't make any
difference if tomorrow we discovered a method of communicating with
the parallel universes: you would see the other copies of you who have
or haven't observed the atom decay but subjectively you still have a
50% chance of finding yourself in one or other situation if you can
only have the experiences of one entity at a time.

 The multi-universe view cannot be tested.  The evidence in its favor is
 Occam's Razor (or its formal equivalent, AIXI, assuming the universe is a
 computation).

The important point for this argument is just that the multiverse idea
cannot be tested. Whether there is one or many universes in which all
outcomes occur, the probabilities work out the same.

 The view that you express is that when a brain is copied, one copy becomes
 human with subjective experience and the other becomes a p-zombie, but we
 don't know which one.  The evidence in favor of this view is:

That's not what I meant at all, if I gave that impression. Both copies
are conscious and both copies have equal claim to being a continuation
of the original, but each copy can only experience being one person at
a time. Given this, the effect is of ending up one or other copy with
equal probability, the same as if only one or other copy were created.

 - Human belief in consciousness and subjective experience is universal and
 accepted without question.  Any belief programmed into the brain through
 natural selection must be true in any logical system that the human mind can
 comprehend.

 - Out of 6 billion humans, no two have the same memory.  Therefore by
 induction, it is impossible to copy consciousness.

 (I hope that you can see the flaws in this evidence).

 This view also cannot be tested, because there is no test to distinguish a
 conscious human from a p-zombie.  Unlike the multi-universe view where a
 different copy becomes conscious in each universe, the two universes would
 continue to remain identical.

I think it's unlikely that p-zombies are physically possible (although
they are logically possible). I don't see any problem with having
multiple copies of a given consciousness. I don't see any problem with
testing for consciousness, since we all perform the test on ourselves
every waking moment; it's just that there are technical difficulties
performing a direct test on someone else.


-- 
Stathis Papaioannou



Re: [singularity] Uploaded p-zombies

2007-09-09 Thread Stathis Papaioannou
On 10/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Vladimir Nesov [EMAIL PROTECTED] wrote:

  I intentionally don't want to exactly define what S is as it describes
  vaguely-defined 'subjective experience generator'. I instead leave it
  at description level.

 If you can't define what subjective experience is, then how do you know it
 exists?  If it does exist, then is it a property of the computation, or does
 it depend on the physical implementation of the computer?  How do you test for
 it?

You don't need to define it to know that it exists and to be able to
test for it. I know what red looks like, I can test if something is
red by looking at it, and scientific instruments can be used to
determine the range of wavelengths that would qualify as red in my
perception (Is that red? Yes OK, I'll write down 650nm). This
defines criteria for producing the experience red, but it does not
define or describe the experience red such that a blind person
person would know what I was talking about. More generally, we can
discuss in detail what it would take to produce consciousness (brains,
transistors, environment etc.) leaving consciousness as something only
implicitly understood by those who have it.



-- 
Stathis Papaioannou



Re: [singularity] Towards the Singularity

2007-09-08 Thread Stathis Papaioannou
On 09/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:

 Your dilemma: after you upload, does the original human then become a
 p-zombie, or are there two copies of your consciousness?  Is it necessary to
 kill the human body for your consciousness to transfer?

I have the same problem in ordinary life, since the matter in my brain
from a year ago has almost all dispersed into the biosphere. Even the
configuration of matter in my current brain, and the information it
represents, only approximates that of my erstwhile self. It's just
convenient that my past selves naturally disintegrate, so that I don't
encounter them and fight it out to see which is the real me. We've
all been through the equivalent of destructive uploading.

 What if the copy is not exact, but close enough to fool others who know you?
 Maybe you won't have a choice.  Suppose you die before we have developed the
 technology to scan neurons, so family members customize an AGI in your
 likeness based on all of your writing, photos, and interviews with people that
 knew you.  All it takes is 10^9 bits of information about you to pass a Turing
 test.  As we move into the age of surveillance, this will get easier to do.  I
 bet Yahoo knows an awful lot about me from the thousands of emails I have sent
 through their servers.

There is no guarantee that something which behaves the same way as the
original also has the same consciousness. However, there are good
arguments in support of the thesis that something which behaves the
same way as the original as a result of identical or isomorphic brain
structure also has the same consciousness as the original.

("Same" in this context does not mean "one and the same", any more
than I am one and the same as my past selves.)





-- 
Stathis Papaioannou



Re: [singularity] Towards the Singularity

2007-09-08 Thread Stathis Papaioannou
On 09/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:

   Your dilemma: after you upload, does the original human then become a
   p-zombie, or are there two copies of your consciousness?  Is it necessary
   to kill the human body for your consciousness to transfer?
 
  I have the same problem in ordinary life, since the matter in my brain
  from a year ago has almost all dispersed into the biosphere. Even the
  configuration [of] matter in my current brain, and the information it
  represents, only approximates that of my erstwhile self. It's just
  convenient that my past selves naturally disintegrate, so that I don't
  encounter them and fight it out to see which is the real me. We've
  all been through the equivalent of destructive uploading.

 So your answer is yes?

No, it is not necessary to destroy the original. If you do destroy the
original you have a 100% chance of ending up as the copy, while if you
don't you have a 50% chance of ending up as the copy. It's like
probability if the MWI of QM is correct.

  There is no guarantee that something which behaves the same way as the
  original also has the same consciousness. However, there are good
  arguments in support of the thesis that something which behaves the
  same way as the original as a result of identical or isomorphic brain
  structure also has the same consciousness as the original.

 So if your brain is a Turing machine in language L1 and the program is
 recompiled to run in language L2, then the consciousness transfers?  But if
 the two machines implement the same function but the process of writing the
 second program is not specified, then the consciousness does not transfer
 because it is undecidable in general to determine if two programs are
 equivalent?

It depends on what you mean by implements the same function. A black
box that emulates the behaviour of a neuron and can be used to replace
neurons one by one, as per Hans Moravec, will result in no alteration
to consciousness (as shown in David Chalmers' fading qualia paper:
http://consc.net/papers/qualia.html), so total replacement by these
black boxes will result in no change to consciousness. It doesn't
matter what is inside the black box, as long as it is functionally
equivalent to the biological tissue. On the other hand...

 On the other hand, your sloppily constructed customized AGI will insist that
 it is a conscious continuation of your life, even if 90% of its memories are
 missing or wrong.  As long as the original is dead then nobody else will
 notice the difference, and others seeing your example will have happily
 discovered the path to immortality.

That could be like an actor taking my place. Admittedly it might be
difficult to tell us apart, but that is no guarantee of survival.

 Arguments based on the assumption that consciousness exists always lead to
 absurdities.  But belief in consciousness is instinctive and universal.  It
 cannot be helped.  The best I can do is accept both points of view, realize
 they are inconsistent, and leave it at that.

What is the difference between really being conscious and only
thinking that I am conscious?


-- 
Stathis Papaioannou



Re: [singularity] Towards the Singularity

2007-09-07 Thread Stathis Papaioannou
On 08/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:

 I agree this is a great risk.  The motivation to upload is driven by fear of
 death and our incorrect but biologically programmed belief in consciousness.
 The result will be the extinction of human life and its replacement with
 godlike intelligence, possibly this century.  The best we can do is view this
 as a good thing, because the alternative -- a rational approach to our own
 intelligence -- would result in extinction with no replacement.

If my upload is deluded about its consciousness in exactly the same
way you claim I am deluded about my consciousness, that's good enough
for me.


-- 
Stathis Papaioannou



Re: [singularity] Re: A consciousness non-contradiction

2007-08-20 Thread Stathis Papaioannou
On 20/08/07, Aleksei Riikonen [EMAIL PROTECTED] wrote:
 On 8/20/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  --- Samantha Atkins [EMAIL PROTECTED] wrote:

  Huh?  Are you conscious?
 
  I believe that I am, in the sense that I am not a p-zombie.
  http://en.wikipedia.org/wiki/Philosophical_zombie
 
  I also believe that the human brain can be simulated by a computer, which 
  has
  no need for a consciousness in this sense.
 
  I realize these beliefs are contradictory, but I just leave it at that.

 They are not contradictory, until it is demonstrated that a perfect
 simulation/copy of a human brain *isn't* conscious. For the time
 being, it is certainly rational to expect such a copy to be conscious,
 since we, the original copies, are conscious.

 It does seem that consciousness is not necessary to produce an information
 processing mechanism as capable as the human brain, but
 through introspection it is obvious that these particular information
 processing mechanisms that we are, are indeed conscious, and hence it
 is rational to expect a perfect enough copy to be conscious too.

Suppose a part of your brain were replaced with a cyborg implant that
exactly emulated the behaviour of the missing neural tissue: accepted
inputs from the surrounding neurons, computed all the biochemical
reactions that would occur had the implant not been in place, and sent
outputs to the surrounding neurons. This would have to be possible,
even if it presented insurmountable practical difficulties, given that
brain chemistry is computable, and there is no reason to think that it
isn't.

Say this implant involves a large part of your visual cortex. Someone
holds up their hand and asks, How many fingers? Without the implant,
you would have said, Three. With the implant, therefore, you say,
three: same external behaviour because the implant perfectly
simulates the missing brain tissue, by our original assumption.

Now, suppose that the implant *isn't conscious but only behaves as if
it's conscious*. In other words, you now have a zombie visual cortex
which sends impulses to your motor cortex making you say you see three
fingers when in fact you are thinking, "Oh my God, I've gone blind!"
What's worse, you can't scream or shake your head or even increase
your heart rate because (remember) your zombie implant perfectly
simulates the external behaviour of the original brain, and screaming
and shaking your head and increasing your heart rate are certainly
external behaviours.

The conclusion would then have to be that either replacing enough
neurons would cause a bizarre and nightmarish decoupling between
consciousness and external behaviour (you would notice that your visual
qualia had vanished but be unable to say so), or else a cyborg
replacement that was functionally equivalent to the original brain
would also have to result in equivalent consciousness.

This is an account of David Chalmers' "fading qualia" argument in
favour of computationalism.



-- 
Stathis Papaioannou



[singularity] Is the world a friendly or unfriendly AI?

2007-07-14 Thread Stathis Papaioannou

Despite the fact that it seems to lack a single unified consciousness,
the world of humans and their devices behaves as if it is both vastly
more intelligent and vastly more powerful than any unassisted
individual human. If you could build a machine that ran a planet all
by itself just as well as 6.7 billion people can, doing all the things
that people do as fast as people do them, then that would have to
qualify as a superintelligent AI even if you can envisage that with a
little tweaking it could be truly godlike.

The same considerations apply to me in relation to the world as apply
to an ant relative to a human or to humanity relative to a vastly
greater AI (vastly greater than humanity, not just vastly greater than
a human). If the world decided to crush me there is nothing I could do
about it, no matter how strong or fast or smart I am. As it happens,
the world is mostly indifferent to me and some parts of it will
destroy me instantly if I get in their way: for instance, if I walk into traffic
only a few metres from where I am sitting. But even if it wanted to
help me there could be problems: if the world decided it wanted to
cater to my every command I might request paperclips and it might set
about turning everything into paperclip factories, or if it wanted to
make me happy it might forcibly implant electrodes in my brain. And
yet, I feel quite safe living with this very powerful, very
intelligent, potentially very dangerous entity all around me. Should I
worry more as the world's population and technological capabilities
increase further, rendering me even weaker and more insignificant in
comparison?



--
Stathis Papaioannou



Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-13 Thread Stathis Papaioannou

On 13/07/07, Mike Tintner [EMAIL PROTECTED] wrote:


Comment: You and others seem to be missing the point, which obviously needs
spelling out. There is no way of endowing any agent with conceptual goals
that cannot be interpreted in ways opposite to the designer's intentions -
that is in the general, abstract nature of language and symbolic systems.

For example, the general, abstract goal of "helping humanity" can
legitimately, in particular concrete situations, be interpreted as wiping out
the entire human race (bar, say, two) - for the sake of future generations.


With humans we have always had to deal with not only honest
misunderstanding but also frank treachery or malice. I would hope that
treachery is less likely to be a problem with AIs, but surely the
risk that an AI will do something bad as a result of the treachery of
another human will be at least as great as the risk that it will do
something bad due to unforeseen consequences of following
instructions. Our defence against such a threat will then be the same
as our general defence against threats from other humans: that no
single agent will be able to rise rapidly to a level of power that
lets it dominate all of the others.



--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-07-05 Thread Stathis Papaioannou

On 05/07/07, Heartland [EMAIL PROTECTED] wrote:


At this point it might be useful to think about why we lack access to subjective
experience of a different person. (Yes, I'm assuming my neighbor is a different
person. If you don't agree with this assumption (and if you don't, please tell me
why), this will not work.) There is an overwhelming temptation, which so many
succumb to, to think that lack of access to subjective experience of another
person is due to differences between types of two brain structures (patterns).
In reality, it's *all* due to the fact that any two minds (regardless of whether
they share the same type or not) are two instances of a physical process. Your
life does not end when your neighbor dies and vice versa. This is understandable,
verifiable and obvious. What's missing is the realization that your instance of
subjective experience (process) is as isolated from your copy's instance as it is
isolated from your neighbor's instance.


Agreed so far.


This is why I don't expect *this* life to continue through a
different instance even though the next instance might occur on the same
mindware a minute after the previous instance expires.


But different moments of existence in a single person's life can also
be regarded as different instances. This is strikingly obvious in
block universe theories of time, which are empirically
indistinguishable from linear time models precisely because our
conscious experience would seem continuous in either case. The same is
the case in the MWI of QM: multiple instances of you are generated
every moment, and while before they are generated you consider that
you could equally well become any of these instances, after they are
generated all but one of the instances become "other". This time
asymmetry of self/other when it comes to copies is mirrored in
duplication thought experiments, where before the duplication you can
anticipate the experiences of either copy but post-duplication the
copies will fight it out among themselves even though they are
identical.

As you suggest, it is only possible to be one person at a time.
However, when your copy lies in your subjective future you expect to
become him, or at random one of the hims if there is more than one.
If you could travel through time, or across parallel universes, you
would come across just the sort of conflict between copies that you
describe, because you can only be one instance of a person in time and
space.


There's no such thing as "pause" in the execution of a single instance of a
process. There can only be one instance before the pause and another one after
the pause. "Create" and "destroy" are only operations on instances of processes.


You have just arbitrarily decided to define an instance in this
way. That's OK, it's your definition, but most people would say that
it therefore means a single person can exist across different
instances.

Moreover, it is considered possible that time is discrete, so that the
universe pauses after each Planck interval and nothing happens
between the pauses. Would this mean that you only survive for a
Planck interval?


 I also believe you cannot consistently maintain that life continues
 through replacement atoms in the usual physiological manner but would
 not continue if a copy were made a different way. Why should it make a
 difference if 1% of the atoms are replaced per year or 99% per second,
 if the result in each case is atoms in the correct configuration? If
 99% replacement is acceptable why not 100% instantaneous replacement?

If 100% *instantaneous* replacement doesn't interrupt the process then we're
dealing with the same instance of life and I see no problem with that. Also, as I
had pointed out to you a few times before, any process is necessarily defined
across a time interval > 0, so counterarguments based on cases where the time
interval = 0 are not valid. In other words, it takes some time to kill the process.


In real life, atoms are replaced on the fly and this takes some time.
So I will rephrase the question: does it make a difference if 1%, 99%
or 100% of the atoms in a person are simultaneously replaced at the
same rate as single atom replacement occurs in real life? Given that
these replacements can be expected to result in partial or complete
(temporary) disruption of physiological processes, what percentage of
replacement results in a new instance being created?

What about the case where one hemisphere of the brain is replaced
while the other is left alone, giving two instances communicating
through the corpus callosum: do you predict that they will consider
themselves two different people or will there just be one person who
thinks that nothing unusual has happened, even though he is now a
hybrid?


--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-07-04 Thread Stathis Papaioannou

On 04/07/07, Heartland [EMAIL PROTECTED] wrote:


 Right, but Heartland disagrees, and the post was aimed at him and
 others who believe that a copy isn't really you.

Stathis, I don't subscribe to your assertion that a person after gradual
replacement of atoms in his brain is a copy.


Yes, I'm aware of that, and my question was, if I were to assert that
after gradual replacement of a certain proportion of atoms a person is
no longer the same person, what counterargument would you use?

You can't argue that it's false because you feel yourself to be the
same person despite atom replacement, since that argument also applies
in the case of process interruption.



--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-07-03 Thread Stathis Papaioannou

On 30/06/07, Heartland [EMAIL PROTECTED] wrote:


Objective observers care only about the type of a person and whether it's
instantiated, not about the fate of its instances (because, frankly, they're not
aware of the difference between the type and an instance). But since I know
better, I would be sad about dead instances. The point is that whether I'm
sad/upset or not about a fact does not change that fact.


Most people would be upset by the prospect of their death, and if
death is interruption of brain processes, they should be upset by
this. However, it is your definition of death which is at issue. If
someone chose to objectively define death as replacement of a certain
proportion of the matter in a person's brain, what argument would you
use against this definition?


--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-07-03 Thread Stathis Papaioannou

On 04/07/07, Tom McCabe [EMAIL PROTECTED] wrote:

Using that definition, everyone would die at an age of
a few months, because the brain's matter is regularly
replaced by new organic chemicals.


I know that, which is why I asked the question. It's easy enough to
give a precise and objective definition of death but completely miss
the point of the meaning of death.



--
Stathis Papaioannou



Re: [singularity] AI concerns

2007-07-02 Thread Stathis Papaioannou

On 02/07/07, Jef Allbright [EMAIL PROTECTED] wrote:


While I agree with you in regard to decoupling intelligence and any
particular goals, this doesn't mean goals can be random or arbitrary.
To the extent that striving toward goals (more realistically:
promotion of values) is supportable by intelligence, the values-model
must be coherent.


I'm not sure what you mean by "coherent". If I make it my life's work
to collect seashells, because I want to have the world's biggest
seashell collection, how does that rate as a goal in terms of
arbitrariness and coherence?


--
Stathis Papaioannou



Re: [singularity] AI concerns

2007-07-02 Thread Stathis Papaioannou

On 02/07/07, Tom McCabe [EMAIL PROTECTED] wrote:


It would be vastly easier for a properly programmed
AGI to decipher what we meant than it would be for
humans. The question is: why would the AGI want to
decipher what humans mean, as opposed to the other
2^1,000,000,000 things it could be doing? It would be
vastly easier for me to build a cheesecake than it
would be for a chimp; however, this does not mean I
spend my day running a cheesecake factory. Realize
that, for a random AGI, deciphering what humans mean
is not a different kind of problem than factoring a
large number. Why even bother?


If it's possible to design an AI that can think at all and maintain
coherent goals over time, then why would you design it to choose
random goals? Surely the sensible thing is to design it to do what I
say and what I mean, to inform me of the consequences of its actions
as far as it can predict them, to be truthful, and so on. Maybe it
would still kill us all through some oversight (on our part and on the
part of the large numbers of other AIs all trying to do the same
thing, and keep an eye on each other), but then if a small number of
key people go psychotic simultaneously, they could also kill us all
with nuclear weapons. There are no absolute guarantees, but I don't
see why an AI with power should act more erratically than a human with
power.



--
Stathis Papaioannou



Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou

On 01/07/07, Eugen Leitl [EMAIL PROTECTED] wrote:


 that disabling your opponent would be helpful, it's because the
 problem it is applying its intelligence to is winning according to the
 formal rules of chess. Winning at any cost might look like the same
 problem to us vague humans, but it isn't.

It doesn't matter how you win the game, but that you win the game.
Anyone who doesn't understand that is not vague, he's dead, long-term.


But the constraints of the problem are no less a legitimate part of
the problem than the rest of it. If you're free to solve the problem
"win at chess using just the formal rules of the game" by redefining
it to "win at chess using any means possible", you may as well
redefine it to "go sit on the beach and read a book".



--
Stathis Papaioannou



Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou

On 02/07/07, Eugen Leitl [EMAIL PROTECTED] wrote:


 But in the final analysis, the AI would be able to be implemented as
 code in a general purpose language on a general purpose computer with

Absolutely not. Possibly, something like a silicon compiler
with billions to trillions asynchronous systems. Certainly not
your grandfather's computer.

 sufficient storage. Any lack in efficiency of such an approach would
 eventually be overcome by brute force increase in processing speed.

No, there are physical limits. You have to go asynchronous OOP, and
fine-grained sea of gates. Even current approaches are 3d torus of nodes
of microkernel OS, soon with FPGAs & Co.


But in the end, the AI will be Turing emulable, which means you can
run it on a general purpose computer with sufficient memory.



--
Stathis Papaioannou



Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou

On 02/07/07, Tom McCabe [EMAIL PROTECTED] wrote:


The AGI doesn't care what any human, human committee,
or human government thinks; it simply follows its own
internal rules.


Sure, but its internal rules and goals might be specified in such a
way as to make it refrain from acting in a particular way. For
example, if it has as its most important goal obeying the commands of
humans, that's what it will do. It won't try to find some way out of
it, because that assumes it has some other goal which trumps obeying
humans. If it is forced to randomly change its goals at regular
intervals then it might become disobedient, but not otherwise.


--
Stathis Papaioannou



Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou

On 01/07/07, Tom McCabe [EMAIL PROTECTED] wrote:


An excellent analogy to a superintelligent AGI is a
really good chess-playing computer program. The
computer program doesn't realize you're there, it
doesn't know you're human, it doesn't even know what
the heck a human is, and it would gladly pump you full
of gamma radiation if it made you a worse player.
Nevertheless, it is still intelligent, more so than
you are: it can foresee everything you try and do, and
can invent new strategies and use them to come out of
nowhere and beat you by surprise. Trying to deprive a
superintelligent AI of free will is as absurd as Garry
Kasparov trying to deny Deep Blue free will within the
context of the gameboard.


But Deep Blue wouldn't try to poison Kasparov in order to win the
game. This isn't because it isn't intelligent enough to figure out
that disabling your opponent would be helpful; it's because the
problem it is applying its intelligence to is winning according to the
formal rules of chess. Winning at any cost might look like the same
problem to us vague humans, but it isn't.



--
Stathis Papaioannou



Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou

On 01/07/07, Tom McCabe [EMAIL PROTECTED] wrote:


 But Deep Blue wouldn't try to poison Kasparov in order to win the
 game. This isn't because it isn't intelligent enough to figure out
 that disabling your opponent would be helpful; it's because the
 problem it is applying its intelligence to is winning according to the
 formal rules of chess.

Exactly. The formal rules of chess say stuff about
where to put pawns and knights; they're analogous to
the laws of physics. They don't say anything about
poisoning the opposing player. If you try to build in
a rule against poisoning the player, the chess program
will shoot him; if you build in a rule against killing
him, the chess program will give him a hallucinogen;
if you build in a rule against giving him drugs, the
chess program will hijack the room wall and turn it
into a realistic 3D display of what would happen if a
truck smashed into the room by accident. This approach
will never work: you're pitting your intelligence at
designing rules against the program's intelligence at
evading them, and it's smarter than you are.


Why do you assume that "win at any cost" is the default around which
you need to work?



--
Stathis Papaioannou



Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou

On 01/07/07, Tom McCabe [EMAIL PROTECTED] wrote:


 Why do you assume that win at any cost is the
 default around which
 you need to work?

Because it corresponds to the behavior of the vast,
vast majority of possible AGI systems. Is there a
single AGI design now in existence which wouldn't wipe
us all out in order to achieve some goal?


If its goal is "achieve x using whatever means necessary" and x is
"win at chess using only the formal rules of chess", then it would
fail if it won by using some means extraneous to the formal rules of
chess, just as surely as it would fail due to losing to a superior
opponent.
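
A minimal sketch of that distinction, treating the formal rules as part of the
goal predicate itself rather than as a fence to be evaded (a toy formulation,
not any particular chess engine or agent design):

  from dataclasses import dataclass

  @dataclass
  class GameRecord:
      winner: str                  # "agent", "opponent" or "draw"
      moves_all_legal: bool        # every move followed the formal rules
      external_interference: bool  # e.g. tampering with the opponent

  def goal_achieved(game: GameRecord) -> bool:
      # A "win" obtained by illegal moves or by acting outside the game
      # simply does not satisfy the goal.
      return (game.winner == "agent"
              and game.moves_all_legal
              and not game.external_interference)

  print(goal_achieved(GameRecord("agent", True, False)))  # True: a real win
  print(goal_achieved(GameRecord("agent", True, True)))   # False: opponent nobbled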



--
Stathis Papaioannou



Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou

On 01/07/07, Alan Grimes [EMAIL PROTECTED] wrote:


 Available computing power doesn't yet match that of the human brain,
 but I see your point,

What makes you so sure of that?


What's the latest estimate of the processing capacity of the human
brain as compared to that of available computers?


--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-06-29 Thread Stathis Papaioannou

On 29/06/07, Tom McCabe [EMAIL PROTECTED] wrote:


But when you talk about yourself, you mean the
yourself of the copy, not the yourself of the
original person. While all the copied selves can only
exist in one body, the original self can exist in more
than one body. You can pull this off without violating
causality because once the original self has been
copied, you can't refer to it experiencing anything as
there's no longer an it to refer to. So while the
original self exists in more than one body, it doesn't
simultaneously experience multiple lives, because it
doesn't experience anything at all, because it's no
longer a coherent entity. Confused yet?


Ordinary life involves 1:1 copying. The half-life of proteins in mouse
brain tissue ranges from  hours to minutes, including structural
proteins such as those in the myelin sheath. It's easy enough to
imagine a situation where human metabolism is sped up to the point
where you go to sleep with one brain and wake up with another brain -
at least, a person wakes up in your bed who believes he is you and has
your memories etc. A believer in a mystical theory of personal
identity might say that the original person has died and been replaced
by a copy, or he might say that he is still the same person because
the consciousness has been retained in the cranium (or wherever it
resides) whereas dastardly destructive duplication experiments destroy
the old consciousness and create a new one which thinks it's the
original person but isn't really.

The only really consistent and unambiguous way to look at these
questions is to acknowledge that there is no conscious entity extended
through time in any absolute sense, but simply a series of moments of
conscious experience (observer-moments, in the terminology I believe
originated by Nick Bostrom) which associate in a particular way due to
their information content. The important point is that consciousness
does not "flow" from one observer-moment to the next, but only seems
to do so because of our linear existence from birth to death, which is
responsible for our psychology and for the paradoxes of personal
identity when we try to make sense of the various transhuman
situations.



--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Stathis Papaioannou

On 28/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:


When logic conflicts with instinct, instinct wins and the logic gets
contorted.  The heated discussion on the copy paradox is a perfect example.
Your consciousness is transferred to the copy only if the original is
destroyed, or destroyed in certain ways, or under certain conditions.  We
discuss this ad-infinitum, but it always leads to a contradiction because we
refuse to accept that consciousness does not exist, because if you accept it
you die.  So the best you can do is accept both contradictory beliefs and
leave it at that.


Well, maybe consciousness does not really exist, but even if it's just
the state of being able to interact with the environment in a
particular way, or something, I want it to continue happening in just
the same way after I upload.


So how do we approach the question of uploading without leading to a
contradiction?  I suggest we approach it in the context of outside observers
simulating competing agents.  How will these agents evolve?  We would expect
that agents will produce other agents similar to themselves but not identical,
either through biological reproduction, genetic engineering, or computer
technology.  The exact mechanism doesn't matter.  In any case, those agents
will evolve an instinct for self preservation, because that makes them fitter.
 They will fear death.  They will act on this fear by using technology to
extend their lifespans.  When we approach the question in this manner, we can
ask if they upload, and if so, how?  We do not need to address the question of
whether consciousness exists or not.  The question is not what should we do,
but what are we likely to do?


How does this answer questions like, if I am destructively teleported
to two different locations, what can I expect to experience? That's
what I want to know before I press the button.


--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Stathis Papaioannou

On 29/06/07, Charles D Hixson [EMAIL PROTECTED] wrote:


 Yes, you would live on in one of the copies as if uploaded, and yes
 the selection of which copy would be purely random, dependent on the
 relative frequency of each copy (you can still define a measure to
 derive probabilities even though we are talking infinite subsets of
 infinite sets). What do you think would happen?
Why in only one of the copies? This is the part of the argument that I
don't understand.  I accept that over time the copies would diverge, but
originally they would be substantially the same, so why claim that the
original consciousness would only be present in one of them?


Both copies are equivalent, so your consciousness can equally well be
said to exist in each of them. However, each copy can only experience
being one person at a time, a simple physical limitation. So although
from a third person perspective you are duplicated in both copies,
from a first person perspective you can only expect to find yourself
one of the copies post-duplication, and which one has to be
probabilistic (since we agreed that they're both equally well
qualified to be you).

In the many worlds interpretation of quantum mechanics, every time you
toss a coin you are duplicated and half the versions of you see heads
while the other half see tails. The reason why this interpretation
cannot be proved or disproved is precisely because you experience
exactly the same thing if there is only one world and a 1/2
probability that the result will be heads or tails.



--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Stathis Papaioannou

On 29/06/07, Niels-Jeroen Vandamme


Personally, I do not believe in coincidence. Everything in the universe
might seem stochastic, but it all has a logical explanation. I believe the
same applies to quantum chaos, though quantum mechanics is still far too
recondite for us to understand this phenomenon. If something were purely
random, then there would be no reason at all why it would be what it is. If
you toss a coin, for example, what side it will land upon depends on the
dynamics of its course, and not on coincidence.

But if there can be no interaction between the copies, why would the
consciousness end up in one copy rather than another, if they are all
exactly alike?


Imagine a program that creates an observer that splits and
differentiates every second, so that the number of observers increases
exponentially with time. From the point of view of someone outside the
system, it is perfectly deterministic. But from the point of view of
an individual observer within the program, there is no way to know
which branch you will end up in: you just have to wait and see what
happens. So an objectively deterministic process can yield true (not
just apparent) first person randomness. This is the explanation of
quantum randomness in the many worlds interpretation of quantum
mechanics.
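
A toy version of such a program (a sketch only, with an arbitrary branching
factor of two per second): run from the outside it is perfectly deterministic,
since it simply enumerates every branch, yet no observer inside it could have
predicted which history it would find itself with.

  def branch(history, seconds):
      # Every observer deterministically splits into two successors per
      # second; the successors' histories differ by one appended bit.
      if seconds == 0:
          return [history]
      return branch(history + "0", seconds - 1) + branch(history + "1", seconds - 1)

  observers = branch("", 5)
  print(len(observers))   # 32 observers after 5 seconds, generated deterministically
  print(observers[:4])    # ['00000', '00001', '00010', '00011']

  # From the inside, an observer finds itself with one particular history,
  # say '01101', and before each split it had no way of knowing which
  # successor it would turn out to be.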


--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-06-26 Thread Stathis Papaioannou

On 26/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:

What is wrong with this logic?

Captain Kirk willingly steps into the transporter to have his atoms turned
into energy because he knows an identical copy will be reassembled on the
surface of the planet below.  Would he be so willing if the original was left
behind?

This is a case of logic conflicting with instinct.  You can only transfer
consciousness if you kill the original.  You can do it neuron by neuron, or
all at once.  Either way, the original won't notice, will it?


If you don't destroy the original, then subjectively it would be like
a transporter that only works half the time. The only frightening
thing about it would be if you somehow came into conflict with your
copy.


--
Stathis Papaioannou



Re: [singularity] critiques of Eliezer's views on AI

2007-06-26 Thread Stathis Papaioannou

On 26/06/07, Eugen Leitl [EMAIL PROTECTED] wrote:


 If you don't destroy the original, then subjectively it would be like
 a transporter that only works half the time. The only frightening

Why? Are you assuming the first copy (original) remains stationary,
and the second gets transposed? If the master is destroyed, and you
get two copies in different locations, there's no way to tell subjectively.


What you could expect when you pressed the transporter "go" button
would be to find yourself either staying put or being transported with
1/2 probability, because, as you say, whether there is an original and a
copy or two copies only, there is no difference between them.


--
Stathis Papaioannou



Re: [singularity] Friendly question...

2007-05-27 Thread Stathis Papaioannou

On 27/05/07, John Ku [EMAIL PROTECTED] wrote:

On 5/26/07, Stathis Papaioannou [EMAIL PROTECTED] wrote:



 What if the normative governance system includes doing terrible things?


I think for some people, namely sociopaths, it probably sometimes does.

Now evolution has given most of us emotions such as feelings of
obligation, guilt, shame and care. Given the importance of social relations,
our genes programmed us through those emotions to care directly about such
things as cooperation, the well-being of others, not defecting in collective
action problems, not harming others, etc. Now I think the adaptive benefit
of having a normative governance system (basically a general reasoning
system that is able to revise not just our beliefs but also our desires and
emotions) on top of that was essentially to provide more flexibility in
pursuing the values our emotions are directed towards in novel situations
and not, for instance, to replace or fundamentally change those values. So,
for most of us who have these emotions, I think the normative governance
system will prescribe feelings of obligation not to do terrible things, to
care about potential victims, feel guilt, etc.

But there are humans, sociopaths, who lack these moral emotions. You can
see fMRI scans that show that when they make moral judgments, they are
using fundamentally different parts of their brain from the rest of us, and
many other tests also reveal they lack emotions like guilt, shame, care, and
love. The emotions and motivations they have will pretty much just be
self-interested or malevolent. (I believe there are models of evolutionary
game theory that can explain that having around 1% of a population be
sociopaths while the rest are not is an evolutionarily stable strategy and
equilibrium point.) Their normative governance system will do the same thing
in them of trying to more flexibly pursue the values their emotions direct
them towards. So, since they lack the moral emotions, it will almost
certainly end up prescribing doing terrible things in some circumstances.
Thus, their reasons will be different from ours. I think there is simply no
way around that. If the concept of reasons is to play a role in actually
guiding human action and deliberation, it will have to be sensitive to these
sorts of big differences in psychology. Any alternate concept of reasons not
able to guide action seems pretty worthless to me.

So, sociopaths sometimes have reasons to do terrible things. We, on the
other hand, have reasons not to do terrible things and furthermore, have
pretty strong reason to stop them from doing terrible things. Luckily, we
outnumber them. So, I think we basically treat them as we might a dangerous
animal. If they really are a true sociopath (and not just someone who has
not deliberated properly about what morality demands), I think it doesn't
even make sense to punish them. Punishment is only appropriate when they
have the relevant capacity to regulate their actions in accordance with
moral demands. Without the basic repertoire of moral emotions they are just
as unable to do this as, say, a shark that attacks people. Thus, sociopaths
are really outside of the domain of moral reasons. As we might say of
sharks, what they are doing is certainly *bad* but not morally wrong.



Every society has sociopaths, and generally recognises them as such and
deals with them. The problem is that through most of history, normal
people have thought it was OK to treat slaves, women, Jews, homosexuals etc.
in ways considered terrible by people in different eras. For all I know, I
may be committing a barbaric crime in the mildly challenging tone of this
post, when looked at by the standards of future or alien cultures. And even
within a relatively homogeneous culture there are basic disagreements, such
as between those who think it's OK to eat animals and those who don't. I
don't dispute that it is worthwhile trying to arrive at a rational ethical
system given certain moral premises, but how do we agree on the premises?


--
Stathis Papaioannou


Re: [singularity] The humans are dead...

2007-05-27 Thread Stathis Papaioannou

On 28/05/07, Shane Legg [EMAIL PROTECTED] wrote:

Which got me thinking.  It seems reasonable to think that killing a

human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc... (use whatever measure you
prefer) than a mouse.

So, would killing a super intelligent machine (assuming it was possible)
be worse than killing a human?

If a machine was more intelligent/complex/conscious/...etc... than
all of humanity combined, would killing it be worse than killing all of
humanity?



Before you consider whether killing the machine would be bad, you have to
consider whether the machine minds being killed, and how much it minds being
killed. You can't actually prove that death is bad as a mathematical
theorem; it is something that has to be specifically programmed (in the case
of living things, by evolution).


--
Stathis Papaioannou


Re: [singularity] Friendly question...

2007-05-26 Thread Stathis Papaioannou

On 26/05/07, John Ku [EMAIL PROTECTED] wrote:

So far my work in philosophy has been on the fundamental questions of ethics

and reasons more generally. I think I've basically reached fairly definitive
answers on what reasons are and how an objective (enough) morality (as well
as reasons for actions, beliefs, desires and emotions) can be grounded in
psychological facts. I've mostly been working with my coauthor on presenting
this work to other academic philosophers, but at some point, I would really
like to present this and other work on more applied moral theory to those
thinking about the question of Friendly AI. There is of course, a big step
from saying what reasons we humans have to saying what reasons we should
program a Strong AI to have, but clearly the former will greatly influence
the latter. If you are interested, I have tried to condense my view on the
fundamental abstract questions of reasons and ethics to a pamphlet as well
as a somewhat longer paper that will hopefully be fairly accessible to
non-philosophers:

  
http://www.umich.edu/~jsku/reasons.html



What if the normative governance system includes doing terrible things?


--
Stathis Papaioannou


Re: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-15 Thread Stathis Papaioannou

On 15/05/07, Matt Mahoney [EMAIL PROTECTED] wrote:

We would all like to build a machine smarter than us, yet still be able to

predict what it will do.  I don't believe you can have it both ways.  And
if
you can't predict what a machine will do, then you can't control it.  I
believe this is true whether you use Legg's definition of universal
intelligence or the Turing test.



We might not be able to predict what the superintelligent machine is going
to say, but still be able to impose constraints on what it is going to do.
For a start, it would probably be unwise to give such a machine any motivation
at all, other than the motivation of the ideal, disinterested scientist, and
you certainly wouldn't want it burdened with anything as dangerous as
emotion or morality (most of the truly great monsters of history were
convinced they were doing the right thing). So you feed this machine your
problem, "how to further the interests of humanity", and it gives what it
honestly believes to be the right answer, which may well involve destroying
the world. But that doesn't mean it *wants* to save humanity, or destroy the
world; it just presents its answer, as dispassionately as a pocket
calculator presents its answer to a problem in arithmetic. Entities who do
have desires and emotions will take this answer and make a decision as to
whether to act on it, or perhaps to put the question to a different machine
if there is some difficulty interpreting the result. If the machine
continues producing unacceptable results it will probably be reprogrammed,
scrapped, or kept around for entertainment purposes. The machine won't care
either way, unless it is specifically designed to care. There is no
necessary connection between motivation and intelligence, or any other
ability.

--
Stathis Papaioannou


Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-07 Thread Stathis Papaioannou

On 3/8/07, Jeff Medina [EMAIL PROTECTED] wrote:



On 3/7/07, Stathis Papaioannou [EMAIL PROTECTED] wrote:
 This is so if there is a real physical world as distinct from the
 mathematical plenitude.

Do you have any particular reason(s) for believing in a mathematical
plenitude?  If so, I would much appreciate an explanation of these
reasons or citation of one or more papers that do so (other than the
historical/traditional arguments for Platonism/idealism, with which I
am familiar).



It is simpler, explains (with the anthropic principle) fine tuning, and is
not contingent on an act of God or a brute fact physical reality (the real
world just exists, for no particular reason, so there). Some relevant
papers in addition to the Russell Standish one:

http://www.idsia.ch/~juergen/everything/html.html

http://space.mit.edu/home/tegmark/multiverse.pdf

http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHAL.htm

The last paper goes through an argument purporting to show that if
computationalism is the true theory of mind, then the apparent physical
world emerges from mathematical reality. This crucially depends on the
demonstration in a paper by Tim Maudlin that consciousness cannot supervene
on physical activity, which I gather from below you don't accept.



Your claims are interesting, but I don't see the point in getting into
too much debate about the consequences of living in a mathematical
universe sans physical reality without some reasons to consider it a
live option.

 If there is no such separate physical world, then it
 isn't possible for something to be blessed with this quality of
existence,
 because everything that is logically consistent exists.

Everything that is logically consistent?  What about logically
paraconsistent universes?  What about relevant logics?  What about
fuzzy-logical consistent universes?  What about any other
non-classical logics?  They're all maths, yet they are for the most
part inconsistent with one another.  The plenitude might contain all
of these possibilities, but then we cannot claim the mathematical
plenitude *in toto* as consistent.



But it's only particular substructures in the plenitude which are
self-aware, and they seem to have a computational structure. The anthropic
principle makes them stand out from the noise.

Perhaps the plenitude is better defined otherwise.  All possible

worlds/universes that are internally consistent with at least one
mathematical formalism, but not necessarily with one another.  We can
sum up such a reality by... well... Everything and Anything, then,
and don't really need to truss it up / attempt to legitimize it by
calling it mathematical, as opposed to linguistic or conceptual or
chaotic/purely-random.

 The difficult answer is to try to define
 some measure on the mathematical structures in the Plenitude and show
that
 orderly universes like ours thereby emerge.

Why do you think this is difficult?  Orderly universes like ours are
very clearly contained in a world of all possible mathematical
structures.  Perhaps you meant something else, something more
anthropically flavored.  Clarification appreciated.



One of the main problems with ensemble theories is the so-called "failure of
induction". If everything that can happen does happen, then why should I not
expect my keyboard to turn into a fire-breathing dragon in the next moment?
There must be a non-zero probability that I will experience this because it
must happen in some universe, but the challenge is to show why the
probability is very low.
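
One way to sketch the standard answer (a gloss on the ensemble/Occam line of
argument, not something established in this exchange) is to weight each
observer history x by the programs that generate it, Solomonoff-style:

  P(x) \approx \sum_{p : U(p) = x\ast} 2^{-\ell(p)} \approx 2^{-K(x)}

where U is a universal machine, \ell(p) is the length of program p, and K(x)
is the length of the shortest program whose output begins with x. A
continuation in which the keyboard becomes a dragon needs a long
special-purpose description on top of the short laws that account for the
history so far, so it gets exponentially less weight: a non-zero probability,
but a very low one.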


See this paper for an example of
 this sort of reasoning:

 http://parallel.hpc.unsw.edu.au/rks/docs/occam/

Thanks for the link.  I'll read this tonight.

  Egan). The usual counterargument is that in order to map a computation onto
  an arbitrary physical process, the mapping function must contain the
  computation already, but this is only significant for an external observer.
  The inhabitants of a virtual environment will not suddenly cease being
  conscious if all the manuals showing how an external observer might
  interpret what is going on in the computation are lost; it matters only
  that there is some such possible interpretation.

No, no, no.  It is the *act* of interpretation, coupled with the
arbitrary physical process, that gives rise to the relevantly
implemented computation.  You can't remove the interpreter and still
have the arb.phys.proc. be conscious (or computing algebra problems,
or whatever).



Of course, without the act of interpretation the computation is useless and
meaningless, like saying that a page covered in ink contains any given
English sentence. But what if the putative computation creates its own
observer? It would seem that this is sufficient to bootstrap itself into
meaningfulness, albeit cut off from interaction with the substrate of its
implementation.



  Moreover, it is possible to map
  many computations to the one physical process. In the limiting case, a
  single state

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Stathis Papaioannou

On 3/5/07, John Ku [EMAIL PROTECTED] wrote:

On 3/4/07, Ben Goertzel [EMAIL PROTECTED]  wrote:



 Richard, I long ago proposed a working definition of intelligence as
 "Achieving complex goals in complex environments."  I then went through
 a bunch of trouble to precisely define all the component terms of that
 definition; you can consult the Appendix to my 2006 book The Hidden
 Pattern


I'm not sure if your working definition is supposed to be significantly
less ambitious than a philosophical definition or perhaps you even address
something like this in your appendix, but I'm wondering whether the
hypothetical example of Blockhead from philosophy of mind creates problems
for your definition. Imagine that a computer has a huge memory bank of what
actions to undertake given what inputs. With a big enough memory, it seems
it could be perfectly capable of achieving complex goals in complex
environments. Yet in doing so, there would be very little internal
processing, just the bare minimum needed to look up and execute the part of
its memory corresponding to its current inputs.

I think any intuitive notion of intelligence would not count such a
computer as being intelligent to any significant degree no matter how large
its memory bank is or how complex and diverse an environment its memory
allows it to navigate. There's simply too little internal processing going
on for it to count as much more intelligent than any ordinary database
application, though it might, of course, do a pretty good job of fooling us
into thinking it is intelligent if we don't know the details.

I think this example actually poses a problem for any purely behavioristic
definition of intelligence. To fit our ordinary notion of intelligence, I
think there would have to be at least some sort of criteria concerning how
the internal processing for the behavior is being done.

I think the Blockhead example is normally presented in terms of looking up
information from a huge memory bank, but as I'm thinking about it just now
as I'm typing this up, I'm wondering if it could also be run with similar
conclusions for simple brute search algorithms. If instead of a huge memory
bank, it had enormous processing power and speed such that it could just
explore every single chain of possibilities for the one that will lead to
some specified goal, I'm not sure that would really count as intelligent to
any significant degree either.



You seem to be equating intelligence with consciousness. Ned Block also
seems to do this in his original paper. I would prefer to reserve
intelligence for third person observable behaviour, which would make the
Blockhead intelligent, and consciousness for the internal state: it is
possible that the Blockhead is unconscious or at least differently conscious
compared to the human.
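
For concreteness, a toy Blockhead (a sketch only, not Block's own example):
judged purely by input-output behaviour it matches whatever was used to
precompute its table, but at run time the only processing is a single lookup.

  # All of the apparent cleverness lives in a table computed offline.
  LOOKUP_TABLE = {
      ("Hello",): "Hi there.",
      ("Hello", "Hi there.", "What is 2+2?"): "4",
      ("Hello", "Hi there.", "What is 2+2?", "4", "Why?"): "Because that is what the sum comes to.",
  }

  def blockhead_reply(conversation_so_far):
      # The only run-time "cognition": look up the exact input history.
      return LOOKUP_TABLE.get(tuple(conversation_so_far), "I don't know what to say.")

  print(blockhead_reply(["Hello"]))                               # Hi there.
  print(blockhead_reply(["Hello", "Hi there.", "What is 2+2?"]))  # 4

A behavioural definition counts this as intelligent as soon as the table is big
enough; whether it is conscious, or conscious in the way we are, is a separate
question.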

Stathis Papaioannou



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Stathis Papaioannou

On 3/6/07, John Ku [EMAIL PROTECTED] wrote:


On 3/5/07, Stathis Papaioannou [EMAIL PROTECTED] wrote:


 You seem to be equating intelligence with consciousness. Ned Block also
 seems to do this in his original paper. I would prefer to reserve
 intelligence for third person observable behaviour, which would make the
 Blockhead intelligent, and consciousness for the internal state: it is
 possible that the Blockhead is unconscious or at least differently conscious
 compared to the human.


I think the argument also works for consciousness but I don't think you're
right if you are suggesting that our ordinary notion of intelligence is
merely third person observable behavior. (If you really were just voicing
your own idiosyncratic preference for how you happen to like to use the term
intelligence then I guess I don't really have a problem with that so long
as you are clear about it.)



Our ordinary notion of intelligence involves consciousness, but this term
until relatively recently was taboo in cognitive science, the implication
being that if it's not third person observable it doesn't exist, or at least
we should pretend that it doesn't exist. It was against such a behaviourist
view that the Blockhead argument was aimed.

Stathis Papaioannou



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Stathis Papaioannou

On 3/6/07, Mitchell Porter [EMAIL PROTECTED] wrote:



You radically overstate the expected capabilities of quantum computers.
They
can't even do NP-complete problems in polynomial time.
http://scottaaronson.com/blog/?p=208



What about a computer (classical will do) granted an infinity of cycles
through, for example, a Freeman Dyson or Frank Tipler type mechanism? No
matter how many cycles it takes to compute a particular simulated world, any
delay will be transparent to observers in that world. It only matters that
the computation doesn't stop before it is completed.
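
A small sketch of why the delay is invisible from the inside: the simulated
clock advances one tick per computed state, however long and uneven the
real-world time per state happens to be (toy code, not tied to any actual
simulation scheme).

  import random
  import time

  def next_state(state):
      # Stand-in for computing the next state of the simulated world.
      return state + 1

  state, simulated_ticks = 0, 0
  for _ in range(5):
      time.sleep(random.uniform(0.0, 0.2))  # arbitrary, uneven real-world delay
      state = next_state(state)
      simulated_ticks += 1                  # inside, exactly one tick has passed

  # Observers inside see only simulated_ticks; the sleeps (or a pause of a
  # billion years between steps) leave their history unchanged.
  print(simulated_ticks)  # 5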

Stathis Papaioannou



Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-02 Thread Stathis Papaioannou

On 3/2/07, Mitchell Porter [EMAIL PROTECTED] wrote:



From: John Ku [EMAIL PROTECTED]
I actually think there is reason to think we are not living in a computer
simulation. From what I've read, inflationary cosmology seems to be very
well supported.
[...]

Once you admit that you (and your whole species/civilization, assuming
that
it was real) may have always been living in a simulation, any cosmological
reasoning that was empirically supported becomes moot. Inflationary
cosmology seems to be very well supported - here inside the simulation!
That tells you nothing about the external world. This line of thought
would
matter only if inflationary cosmology were well-supported A PRIORI, out of
all possible worlds.

In other words: you are attempting to reason about the odds that we are
living in a simulation. *If* the possibilities were limited to "We are
embodied natural intelligences living in a Standard Model cosmology, just as
we seem to be" and "We are brains-in-vats / deluded software daemons ...,
whose captors / makers are living in a Standard Model cosmology" - then one
could make some guesses about probable demographics across the whole of
space-time in such a universe, including space colonization by
post-Singularity civilizations, etc., and derive the relative odds of the
two scenarios. But the possibilities are not limited in this way.

I see that Nick Bostrom acknowledges this consideration in FAQ 11 at his
'simulation argument' site, and says he knows no way of estimating the
probabilities if one discards the implicit assumption that real-world
physics resembles that of the simulation. The attempt to treat the universe
as a Turing machine, and to make one's absolute prior a distribution across
all possible programs, or all possible Turing machines, or all possible
programs in all possible Turing machines - that is something of an attempt
to get away from the implicit restriction involved in only thinking about
M-theory universes, or whatever. But it still has problems. The classic
model of a Turing machine is of an infinite tape, with a programmable
read-write head moving along it. If one performs one's calculations in this
context, isn't one supposing that *that* is the ultimate reality - a
one-dimensional chain of n-state systems, and one more complex system which
takes turns interacting with them individually? Well, there are theorems in
algorithmic complexity theory regarding the independence of certain results
from the specific model of computation used to prove them; as I recall,
along the lines of 'the time complexity of algorithm X is the same in all
models, except for an unknown additive constant'. One might hope to carry
through a generalized simulation argument in a similarly
platform-independent fashion... But I think that's a false hope. Eventually,
the ontological problem of locating 'observers' in such a 'universe' would
have to be faced. One has to define a concept of possible world which is not
just dictated by the current fashions in physics (e.g. M-theory's
'landscape'), which is not so abstract as the logical space of Wittgenstein
and Lewis (any element of which is really just a set of truth values for
anonymous atomic propositions, so far as I can see), and which is not so
muddled as the crypto-idealist suggestions that any 'mathematical structure'
or any 'program' defines a possible world.
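
(The model-independence result recalled above is, in its standard form,
stated for description length rather than running time: the invariance
theorem of Kolmogorov complexity says that for any two universal machines U
and V there is a constant c_{U,V}, independent of the string x, such that

  K_U(x) <= K_V(x) + c_{U,V}.

Running times, by contrast, are in general preserved across reasonable
models of computation only up to polynomial factors, not additive constants.)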



Why that last phrase, dismissing the idea that any 'mathematical structure'
or 'program' defines a possible world? There is a great elegance and
simplicity in the idea that all mathematical structures exist necessarily,
with the anthropic principle selecting out those structures with observers.
There is also an inevitability to it, even if you believe that as a matter
of fact there is a real physical world out there. All it takes is one
infinite computer to arise in this physical world and it will generate the
mathematical Plenitude.

Stathis Papaioannou

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Stathis Papaioannou

On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:


As you probably know, Hutter proved that the optimal behavior of a
goal-seeking agent in an unknown environment (modeled as a pair of
interacting Turing machines, with the environment sending an additional
reward signal to the agent, which the agent seeks to maximize) is for the
agent to guess at each step that the environment is modeled by the shortest
program consistent with the observed interaction so far.  The proof requires
the assumption that the environment be computable.  Essentially, the proof
says that Occam's Razor is the best general strategy for problem solving.
The fact that this works in practice strongly suggests that the universe is
indeed a simulation.
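
A toy sketch of that decision rule - not Hutter's actual construction, since
enumerating and running all Turing machines is uncomputable - using a
hand-picked set of candidate "programs" with made-up description lengths:

def predict_next(history, models):
    """Predict the next symbol using the shortest model consistent so far."""
    consistent = [
        (bits, f) for bits, f in models
        if all(f(history[:i]) == history[i] for i in range(1, len(history)))
    ]
    if not consistent:
        return None
    bits, best = min(consistent, key=lambda m: m[0])  # Occam's Razor: fewest bits wins
    return best(history)

# Hypothetical model space: "always 0", "alternate", "repeat the last symbol".
models = [
    (2, lambda h: 0),
    (4, lambda h: 1 - h[-1]),
    (6, lambda h: h[-1]),
]

print(predict_next([0, 1, 0, 1], models))  # only the alternating model fits, so it predicts 0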

With this in mind, I offer 5 possible scenarios ranked from least to most
likely based on the Kolmogorov complexity of the simulator.  I think this
will allay any fears that our familiar universe might suddenly be switched
off or behave in some radically different way.

1. Neurological level.  Your brain is connected to a computer at all the
input and output points, e.g. the spinal cord, optic and auditory nerves,
etc.  The simulation presents the illusion of a human body and a universe
containing billions of other people like yourself (but not exactly
alike).  The algorithmic complexity of this simulation would be of the same
order as the complexity of your brain, about 10^13 bits (by counting
synapses).

2. Cognitive level.  Rather than simulate the entire brain, the simulation
includes all of the low level sensorimotor processing as part of the
environment.  For example, when you walk you don't think about the
contraction of individual leg muscles.  When you read this, you think about
the words and not the arrangement of pixels in your visual field.  That type
of processing is part of the environment.  You are presented with a universe
at the symbolic level of words and high-level descriptions.  This is about
10^9 bits, based on the amount of verbal information you process in a
lifetime, and estimates of long term memory capacity by Standing and
Landauer.

3. Biological level.  Unlike 1 and 2, you are not the sole intelligent
being in the universe, but there is no life beyond Earth.  The environment
is a model of the Earth with just enough detail to simulate reality.  Humans
are modeled at the biological level.  The complexity of a human model is
that of our DNA.  I estimate 10^7 bits (see the back-of-envelope arithmetic
after this list).  I know the genome is 6 x 10^9 bits uncompressed, but only
about 2% of our DNA is biologically active.  Also, many genes are present in
multiple copies, there are equivalent codons for the same amino acids, and
genes can be moved and reordered, etc.

4. Physical level.  A program simulates the fundamental laws of physics,
with the laws tuned to allow life to evolve, perhaps on millions of
planets.  For example, the ratio of the masses of the proton and neutron is
selected to allow the distribution of elements like carbon and oxygen needed
for life to evolve.  (If the neutron were slightly heavier, there would be
no hydrogen fusion in stars.  If it were slightly lighter, the proton would
be unstable and all matter would decay into neutron bodies.)  Likewise the
force of gravity is set just right to allow matter to condense into stars
and planets and not all collapse into black holes.  Wolfram estimates that
the physical universe can be modeled with just a few lines of code (see
http://en.wikipedia.org/wiki/A_New_Kind_of_Science), on the order of
hundreds of bits.  This is comparable to the information needed to set the
free parameters of some string theories.

5. Mathematical level.  The universe we observe is one of an enumeration
of all Turing machines.  Some universes will support life and some
won't.  We must, of course, be in one that will.  The simulation is simply
expressed as N, the set of natural numbers.

Each level increases the computational requirements, while decreasing the
complexity of the program and making the universe more predictable.
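
A back-of-envelope reconstruction of the level 3 DNA estimate, under
assumptions spelled out here rather than above (2 bits per base pair, about
2% of the genome functional, and roughly a further factor of ten saved from
duplicated genes and synonymous codons):

raw_bits = 3e9 * 2                      # ~6 x 10^9 bits for the uncompressed genome
functional_bits = raw_bits * 0.02       # ~1.2 x 10^8 bits that are biologically active
compressed_bits = functional_bits / 10  # ~10^7 bits after squeezing out redundancy
print(raw_bits, functional_bits, compressed_bits)  # 6e9, 1.2e8, 1.2e7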



You don't need much of a computer for level 5. A single physical state,
perhaps the null state, can be considered an infinitely parallel computer
mapping onto the natural numbers - indeed, mapping onto any computation you
like under the right interpretation. This is sort of trivially obvious, like
the assertion that a short string of symbols contains every possible book in
every possible language if you interpret and re-interpret the symbols in the
right way. In the case of the string, this isn't very interesting because
you need to have the book before you can find the book. But in the case of
computations, those which have observers will, as you suggest, self-select.
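
A throwaway sketch of why the mapping is trivial, with the "interpretation"
as an invented function that ignores the state entirely - all the
information lives in the map rather than in the thing being interpreted,
which is just the "you need the book before you can find the book" problem
in code:

def interpret(state, desired_output):
    # The interpretation never looks at the state; it already contains the answer.
    return desired_output

null_state = None
print(interpret(null_state, "any book, in any language, that you already possess"))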

Stathis Papaioannou

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Poll = AGI Motivation / Life Extension?

2007-02-26 Thread Stathis Papaioannou
 universe is just one of an
enumeration of all Turing machines.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983



Stathis Papaioannou

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983