RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-11 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 That's true.  The visual perception process is altered after the
 experiment to favor recognition of objects seen in the photos.  A recall
 test doesn't measure this effect.  I don't know of a good way to measure
 the quantity of information learned.
 

When you learn something, is it stored as electrical state, or are molecules
created? Perhaps precise measurements of particular chemicals in certain
regions could correlate with the data differential. A problem, though, is that
the data may be spread over a wide region, making it difficult to measure. And
you'd have to be able to measure chemicals within the tissue structure, though
software could filter out the non-applicable signal.

You could also estimate by calculating average data intake and working out
what is thrown away: so many bits are consumed, so many are tossed, and the
rest is stored, independent of recall.

But a curious number, in addition to average long-term memory storage, is
MIPS. How many actual bit flips are occurring? This is where you have to be
precise, as even trace chemicals, light, and temperature affect this number.
A raw number alone won't tell you much, though, compared to, say,
spatiotemporal MIPS density graphs.
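
(A back-of-envelope sketch, in Python, of the two estimates above; every
constant is an assumed placeholder, not a measurement:)

SECONDS_AWAKE = 16 * 3600                 # one waking day, in seconds
SENSORY_INTAKE_BPS = 1e7                  # assumed raw sensory intake, bits/s
LTM_WRITE_BPS = 2.0                       # assumed long-term storage rate, bits/s
SYNAPSES = 1e15                           # rough synapse count
MEAN_EVENT_HZ = 1.0                       # assumed mean synaptic event rate

intake = SENSORY_INTAKE_BPS * SECONDS_AWAKE   # bits consumed per day
stored = LTM_WRITE_BPS * SECONDS_AWAKE        # bits kept per day
tossed = intake - stored                      # bits discarded per day
events_per_sec = SYNAPSES * MEAN_EVENT_HZ     # crude "bit flip" analogue

print(f"consumed/day: {intake:.1e} bits")
print(f"stored/day:   {stored:.1e} bits ({stored / intake:.1e} of intake)")
print(f"tossed/day:   {tossed:.1e} bits")
print(f"events/sec:   {events_per_sec:.1e}")

Whatever the true constants are, the shape of the result is the same: what is
stored is a vanishing fraction of what is consumed, and the raw event count
dwarfs both.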

John



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 --- John G. Rose [EMAIL PROTECTED] wrote:
  Is there really a bit per synapse? Is representing a synapse with a bit
  an accurate enough simulation? One synapse is a very complicated system.
 
 A typical neural network simulation uses several bits per synapse.  A
 Hopfield net implementation of an associative memory stores 0.15 bits per
 synapse.  But cognitive models suggest the human brain stores < .01 bits
 per synapse.  (There are 10^15 synapses but human long term memory
 capacity is 10^9 bits).

A cognitive model may only allocate so much data per synapse, but the REAL
amount of data stored in one biological synapse has got to be quite high. How
much of it is unique among a group of synapses, and how much of that grossly
affects the running biological cognitive entity, is particular to that brain.
Any simulation that throws x bits per synapse IS a simulation and not a copy.
A copied simulation could adapt itself to its new home if given enough
latitude to model itself as it was in its biological host. If you are trying
to copy a consciousness, it depends on what the consciousness actually is, and
on how much it can be simplified or molded to a digital, transistor-like
environment versus the rich, unique electro-chemical environment of a
biological brain. A simulation of a brain is a lossy compression, since you
can't get it all; each cell ultimately holds many gigs of data. You can try
to get a functionally isomorphic compressed copy, but due to the size you're
still going to have to average out much of it.

A computer software simulation is going to be WAY more flexible and
extensible. Biological electrochemical systems are, at least with current
technology, not very changeable. But looking at the sophistication of
natural molecular digital physics, there have to be a number of breakthroughs
down the road...

John



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Richard Loosemore

Matt Mahoney wrote:

--- John G. Rose [EMAIL PROTECTED] wrote:

Is there really a bit per synapse? Is representing a synapse with a bit an
accurate enough simulation? One synapse is a very complicated system.


A typical neural network simulation uses several bits per synapse.  A Hopfield
net implementation of an associative memory stores 0.15 bits per synapse.  But
cognitive models suggest the human brain stores < .01 bits per synapse. 
(There are 10^15 synapses but human long term memory capacity is 10^9 bits).


Sorry, I don't buy this at all.  This makes profound assumptions about 
how information is stored in memory, averaging out the net storage and 
ignoring the immediate storage capacity.  A typical synapse actually 
stores a great deal more than a fraction of a bit, as far as we can 
tell, but this information is stored in such a way that the system as a 
whole can actually use it in a meaningful way.


In that context, quoting 0.01 bits per synapse is a completely 
meaningless statement.


Also, typical neural network simulations use more than a few bits as 
well.  When I did a number of backprop NN studies in the early 90s, my 
networks had to use floating point numbers because the behavior of the 
net deteriorated badly if the numerical precision was reduced.  This was 
especially important on long training runs or large datasets.





Richard Loosemore



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- John G. Rose [EMAIL PROTECTED] wrote:

 Is there really a bit per synapse? Is representing a synapse with a bit
 an accurate enough simulation? One synapse is a very complicated system.
 
 A typical neural network simulation uses several bits per synapse.  A
 Hopfield net implementation of an associative memory stores 0.15 bits per
 synapse.  But cognitive models suggest the human brain stores < .01 bits
 per synapse.  (There are 10^15 synapses but human long term memory
 capacity is 10^9 bits).

Sorry, I don't buy this at all.  This makes profound assumptions about 
how information is stored in memory, averaging out the net storage and 
ignoring the immediate storage capacity.  A typical synapse actually 
stores a great deal more than a fraction of a bit, as far as we can 
tell, but this information is stored in such a way that the system as a 
whole can actually use it in a meaningful way.


In that context, quoting 0.01 bits per synapse is a completely 
meaningless statement.


I was referring to Landauer's estimate of long term memory learning rate of
about 2 bits per second (http://www.merkle.com/humanMemory.html).
This does not include procedural memory, things like visual perception and
knowing how to walk.  So 10^-6 bits per synapse is a low estimate.  But how
do we measure such things?


I think my general point is that "bits per second" or "bits per synapse" 
is a valid measure if you care about something like an electrical signal 
line, but is simply an incoherent way to talk about the memory 
capacity of the human brain.


Saying "0.01 bits per synapse" is no better than opening and closing 
one's mouth without saying anything.




Richard Loosemore.



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- John G. Rose [EMAIL PROTECTED] wrote:
  Is there really a bit per synapse? Is representing a synapse with a bit an
  accurate enough simulation? One synapse is a very complicated system.
  
  A typical neural network simulation uses several bits per synapse.  A
  Hopfield net implementation of an associative memory stores 0.15 bits per
  synapse.  But cognitive models suggest the human brain stores < .01 bits
  per synapse.  (There are 10^15 synapses but human long term memory
  capacity is 10^9 bits).
 
 Sorry, I don't buy this at all.  This makes profound assumptions about 
 how information is stored in memory, averaging out the net storage and 
 ignoring the immediate storage capacity.  A typical synapse actually 
 stores a great deal more than a fraction of a bit, as far as we can 
 tell, but this information is stored in such a way that the system as a 
 whole can actually use it in a meaningful way.
 
 In that context, quoting 0.01 bits per synapse is a completely 
 meaningless statement.

I was referring to Landauer's estimate of long term memory learning rate of
about 2 bits per second (http://www.merkle.com/humanMemory.html).
This does not include procedural memory, things like visual perception and
knowing how to walk.  So 10^-6 bits per synapse is a low estimate.  But how
do we measure such things?
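
(For concreteness, a sketch of the arithmetic behind these figures; the
lifespan and waking hours are assumed inputs, not part of Landauer's data:)

RATE_BPS = 2.0                            # Landauer learning rate, bits/s
WAKING_SECONDS = 70 * 365.25 * 16 * 3600  # ~70 years of waking life (assumed)
SYNAPSES = 1e15                           # rough synapse count

ltm_bits = RATE_BPS * WAKING_SECONDS      # ~3e9 bits: order 10^9
per_synapse = ltm_bits / SYNAPSES         # ~3e-6: order 10^-6 bits/synapse

print(f"LTM ~ {ltm_bits:.1e} bits, ~ {per_synapse:.1e} bits per synapse")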

 Also, typical neural network simulations use more than a few bits as 
 well.  When I did a number of backprop NN studies in the early 90s, my 
 networks had to use floating point numbers because the behavior of the 
 net deteriorated badly if the numerical precision was reduced.  This was 
 especially important on long training runs or large datasets.

That's what I meant by "a few".  In the PAQ8 compressors I have to use at
least 16 bits.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Matt Mahoney wrote:
  I was referring to Landauer's estimate of long term memory learning rate
  of about 2 bits per second (http://www.merkle.com/humanMemory.html).
  This does not include procedural memory, things like visual perception and
  knowing how to walk.  So 10^-6 bits per synapse is a low estimate.  But
  how do we measure such things?
 
 I think my general point is that "bits per second" or "bits per synapse" 
 is a valid measure if you care about something like an electrical signal 
 line, but is simply an incoherent way to talk about the memory 
 capacity of the human brain.
 
 Saying "0.01 bits per synapse" is no better than opening and closing 
 one's mouth without saying anything.

Bits is a perfectly sensible measure of information.  Memory can be measured
using human recall tests, just as Shannon used human prediction tests to
estimate the information capacity of natural language text.  The question is
important to anyone who needs to allocate a hardware budget for an AI design.
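
(As an illustration of the prediction-test idea, a Python sketch that
substitutes a crude order-2 character model for Shannon's human predictor;
the sample text and model order are arbitrary, and training and scoring on
the same text makes the estimate optimistic:)

from collections import Counter, defaultdict
import math

def bits_per_char(text, order=2):
    counts = defaultdict(Counter)
    for i in range(order, len(text)):             # train the context model
        counts[text[i - order:i]][text[i]] += 1
    total = 0.0
    for i in range(order, len(text)):             # score each character
        ctx = counts[text[i - order:i]]
        total += -math.log2(ctx[text[i]] / sum(ctx.values()))
    return total / (len(text) - order)

sample = "the quick brown fox jumps over the lazy dog " * 50
print(f"{bits_per_char(sample):.2f} bits/char")   # Shannon's human tests gave ~1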

[For those not familiar with Richard's style: once he disagrees with something
he will dispute it to the bitter end in long, drawn out arguments, because
nothing is more important than being right.]


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Eric B. Ramsay


Matt Mahoney [EMAIL PROTECTED] wrote:

[For those not familiar with Richard's style: once he disagrees with something
he will dispute it to the bitter end in long, drawn out arguments, because
nothing is more important than being right.]

What's the purpose of this comment? If the people here are intelligent enough 
to have meaningful discussions on a difficult topic, then they will be able to 
sort out for themselves the styles of others. 

Eric B. Ramsay



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney
--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 
 
 Matt Mahoney [EMAIL PROTECTED] wrote:
 
 [For those not familiar with Richard's style: once he disagrees with
 something he will dispute it to the bitter end in long, drawn out
 arguments, because nothing is more important than being right.]
 
 What's the purpose of this comment? If the people here are intelligent
 enough to have meaningful discussions on a difficult topic, then they will
 be able to sort out for themselves the styles of others. 

Sorry, he posted a similar comment about me on the AGI list.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-04 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
   From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  
   By "equivalent computation" I mean one whose behavior is indistinguishable
   from the brain, not an approximation.  I don't believe that an exact
   simulation requires copying the implementation down to the neuron level,
   much less the molecular level.
  
  So how would you approach constructing such a model? I suppose a superset
  intelligence structure could analyze properties and behaviors of a brain
  and simulate it within itself. If it absorbed enough data it could
  reconstruct and eventually come up with something close.
 
 Well, nobody has solved the AI problem, much less the uploading problem.
 Consider the problem in stages:
 
 1. The Turing test.
 
 2. The personalized Turing test.  The machine pretends to be you and the
 judges are people who know you well.
 
 3. The planned, personalized Turing test.  You are allowed to communicate
 with judges in advance, for example, to agree on a password.
 
 4. The embodied, planned, personalized Turing test.  Communication is not
 restricted to text.  The machine is planted in the skull of your clone.
 Your friends and relatives have to decide who has the carbon-based brain.
 
 Level 4 should not require simulating every neuron and synapse.  Without
 the constraints of slow, noisy neurons, we could use other algorithms.  For
 example, low level visual processing such as edge and line detection would
 not need to be implemented as a 2-D array of identical filters.  It could
 be implemented serially by scanning the retinal image with a window filter.
 Fine motor control would not need to be implemented by combining thousands
 of pulsing motor neurons to get a smooth average signal.  The signal could
 be computed numerically.  The brain has about 10^15 synapses, so a
 straightforward simulation at the neural level would require 10^15 bits of
 memory.  But cognitive tests suggest humans have only about 10^9 bits of
 long term memory, suggesting that a more compressed representation is
 possible.
 
 In any case, level 1 should be sufficient to argue convincingly that
 either consciousness can exist in machines, or that it doesn't in humans.



These tests, though, are still very subjective; nothing exact.

Is there really a bit per synapse? Is representing a synapse with a bit an
accurate enough simulation? One synapse is a very complicated system.

John









RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-04 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote:
 Is there really a bit per synapse? Is representing a synapse with a bit an
 accurate enough simulation? One synapse is a very complicated system.

A typical neural network simulation uses several bits per synapse.  A Hopfield
net implementation of an associative memory stores 0.15 bits per synapse.  But
cognitive models suggest the human brain stores < .01 bits per synapse. 
(There are 10^15 synapses but human long term memory capacity is 10^9 bits).
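
(For reference, a minimal sketch of the kind of Hopfield associative memory
behind the 0.15 figure, in numpy; the sizes and seed are arbitrary.  Capacity
is roughly 0.138*N patterns of N bits over N^2 weights, hence about 0.14 bits
per synapse:)

import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 10                        # neurons, patterns; load P/N = 0.1 < 0.138
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N       # Hebbian outer-product weights
np.fill_diagonal(W, 0)                # no self-connections

def recall(x, steps=20):
    for _ in range(steps):
        x = np.sign(W @ x)            # synchronous update, fine at this load
        x[x == 0] = 1                 # break ties
    return x

probe = patterns[0].copy()
probe[rng.choice(N, 10, replace=False)] *= -1   # corrupt 10% of the bits
print("recovered:", np.array_equal(recall(probe), patterns[0]))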

-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Stathis Papaioannou
On 28/02/2008, John G. Rose [EMAIL PROTECTED] wrote:

 Actually a better way to do it, as getting even just the molecules right is a 
 wee bit formidable - you need a really powerful computer with lots of RAM: 
 take some DNA and grow a body double in software. Then create an interface 
 from the biological brain to the software brain, and then gradually kill off 
 the biological brain, forcing the consciousness into the software brain.

  The problem with this approach naturally is that to grow the brain in RAM 
 requires astronomical resources. But ordinary off-the-shelf matter holds so 
 much digital memory compared to modern computers. You have to convert matter 
 into RAM somehow. For example, one cell with DNA is how many gigs? And cells 
 cost a dime a billion. But the problem is that molecular interaction is too 
 slow and clunky.

Agreed, it would be *enormously* difficult getting a snapshot at the
molecular level and then doing a simulation from this snapshot. But as
a matter of principle, it should be possible.




-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 28/02/2008, John G. Rose [EMAIL PROTECTED] wrote:
 
  Actually a better way to do it, as getting even just the molecules right is
 a wee bit formidable - you need a really powerful computer with lots of RAM:
 take some DNA and grow a body double in software. Then create an interface
 from the biological brain to the software brain, and then gradually kill off
 the biological brain, forcing the consciousness into the software brain.
  
   The problem with this approach naturally is that to grow the brain in RAM
 requires astronomical resources. But ordinary off-the-shelf matter holds so
 much digital memory compared to modern computers. You have to convert matter
 into RAM somehow. For example, one cell with DNA is how many gigs? And cells
 cost a dime a billion. But the problem is that molecular interaction is too
 slow and clunky.
 
 Agreed, it would be *enormously* difficult getting a snapshot at the
 molecular level and then doing a simulation from this snapshot. But as
 a matter of principle, it should be possible.

And that is the whole point.  You don't need to simulate the brain at the
molecular level or even at the level of neurons.  You just need to produce an
equivalent computation.  The whole point of such fine grained simulations is
to counter arguments (like Penrose's) that qualia and consciousness cannot be
explained by computation or even by physics.  Penrose (like all humans) is
reasoning with a brain that is a product of evolution, and therefore biased
toward beliefs that favor survival of the species.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 And that is the whole point.  You don't need to simulate the brain at the
 molecular level or even at the level of neurons.  You just need to produce
 an equivalent computation.  The whole point of such fine grained simulations
 is to counter arguments (like Penrose's) that qualia and consciousness
 cannot be explained by computation or even by physics.  Penrose (like all
 humans) is reasoning with a brain that is a product of evolution, and
 therefore biased toward beliefs that favor survival of the species.
 

An equivalent computation will be some percentage of the complexity of a
perfect molecular simulation. You can simplify the computation, but you have
to know what to simplify out and what to discard. Losing too much of the
richness may produce a simulation that is like a scratchy audio recording of
a philharmonic, or, probably even worse, the simulated system will not
function as a coherent entity; it'll just be contentious noise unless there
is ample abetting by external control. But a non-molecular and non-neural
simulation may require even more computational complexity than a direct
model. Reformatting the consciousness to operate within another substrate
without first understanding its natural substrate, yeah, still may be the
best choice due to technological limitations.

John



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Stathis Papaioannou
On 29/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:

 By "equivalent computation" I mean one whose behavior is indistinguishable
 from the brain, not an approximation.  I don't believe that an exact
 simulation requires copying the implementation down to the neuron level,
 much less the molecular level.

How do you explain the fact that cognition is exquisitely sensitive to
changes at the molecular level?



-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 29/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  By "equivalent computation" I mean one whose behavior is indistinguishable
  from the brain, not an approximation.  I don't believe that an exact
  simulation requires copying the implementation down to the neuron level,
  much less the molecular level.
 
 How do you explain the fact that cognition is exquisitely sensitive to
 changes at the molecular level?

In what way?  Why can't you replace neurons with equivalent software?


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  
  By "equivalent computation" I mean one whose behavior is indistinguishable
  from the brain, not an approximation.  I don't believe that an exact
  simulation requires copying the implementation down to the neuron level,
  much less the molecular level.
  
 
 So how would you approach constructing such a model? I suppose a superset
 intelligence structure could analyze properties and behaviors of a brain and
 simulate it within itself. If it absorbed enough data it could reconstruct
 and eventually come up with something close.

Well, nobody has solved the AI problem, much less the uploading problem. 
Consider the problem in stages:

1. The Turing test.

2. The personalized Turing test.  The machine pretends to be you and the
judges are people who know you well.

3. The planned, personalized Turing test.  You are allowed to communicate
with judges in advance, for example, to agree on a password.

4. The embodied, planned, personalized Turing test.  Communication is not
restricted to text.  The machine is planted in the skull of your clone.  Your
friends and relatives have to decide who has the carbon-based brain.

Level 4 should not require simulating every neuron and synapse.  Without the
constraints of slow, noisy neurons, we could use other algorithms.  For
example, low level visual processing such as edge and line detection would not
need to be implemented as a 2-D array of identical filters.  It could be
implemented serially by scanning the retinal image with a window filter.  Fine
motor control would not need to be implemented by combining thousands of
pulsing motor neurons to get a smooth average signal.  The signal could be
computed numerically.  The brain has about 10^15 synapses, so a
straightforward simulation at the neural level would require 10^15 bits of
memory.  But cognitive tests suggest humans have only about 10^9 bits of long
term memory, suggesting that a more compressed representation is possible.
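
(A sketch of the serial window-filter idea, assuming an arbitrary
Sobel-style kernel and a toy image; a real retina model would differ:)

import numpy as np

KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])                 # vertical-edge kernel

def scan_edges(img):
    # One window filter scanned serially over the image, instead of a
    # 2-D array of identical parallel filter units.
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = abs(np.sum(img[y:y + 3, x:x + 3] * KX))
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                            # step edge at column 4
print(scan_edges(img))                      # strong response along the edge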

In any case, level 1 should be sufficient to argue convincingly that either
consciousness can exist in machines, or that it doesn't in humans.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Stathis Papaioannou
On 29/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:

  4. The embodied, planned, personalized Turing test.  Communication is not
  restricted to text.  The machine is planted in the skull of your clone.  Your
  friends and relatives have to decide who has the carbon-based brain.

  Level 4 should not require simulating every neuron and synapse.  Without the
  constraints of slow, noisy neurons, we could use other algorithms.  For
  example, low level visual processing such as edge and line detection would
  not need to be implemented as a 2-D array of identical filters.  It could be
  implemented serially by scanning the retinal image with a window filter.
  Fine motor control would not need to be implemented by combining thousands
  of pulsing motor neurons to get a smooth average signal.  The signal could be
  computed numerically.  The brain has about 10^15 synapses, so a
  straightforward simulation at the neural level would require 10^15 bits of
  memory.  But cognitive tests suggest humans have only about 10^9 bits of
  long term memory, suggesting that a more compressed representation is
  possible.

  In any case, level 1 should be sufficient to argue convincingly that either
  consciousness can exist in machines, or that it doesn't in humans.

I agree that it should be possible to simulate a brain on a computer,
but I don't see how you can be so confident that you can throw away
most of the details of brain structure with impunity. Tiny changes to
neurons which make no difference to the anatomy or synaptic structure
can have large effects on neuronal behaviour, and hence whole organism
behaviour. You can't leave this sort of thing out of the model and
hope that it will still match the original.




-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
 I agree that it should be possible to simulate a brain on a computer,
 but I don't see how you can be so confident that you can throw away
 most of the details of brain structure with impunity. Tiny changes to
 neurons which make no difference to the anatomy or synaptic structure
 can have large effects on neuronal behaviour, and hence whole organism
 behaviour. You can't leave this sort of thing out of the model and
 hope that it will still match the original.

And people can lose millions of neurons without a noticeable effect.  And
removing a 0.1 micron chunk out of a CPU chip can cause it to fail, yet I can
run the same programs on a chip with half as many transistors.

Nobody knows how to make an artificial brain, but I am pretty confident that
it is not necessary to preserve its structure to preserve its function.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-27 Thread John G. Rose
 From: Stathis Papaioannou [mailto:[EMAIL PROTECTED]
 
 On 26/02/2008, John G. Rose [EMAIL PROTECTED] wrote:
 
  There is an assumed simplification tendency going on that a human
 brain could be represented as a string of bits. It's easy to assume but
 I think that a more correct way to put it would be that it could be
 approximated. Exactly how close the approximation could theoretically
 get is entirely unknown.
 
 It's not entirely unknown. The maximum simulation fidelity that would
 be required is at the quantum level, which is still finite. But
 probably this would be overkill, since you remain you from moment to
 moment despite changes in your brain which are gross compared to the
 quantum level.
 
 

Well, if you spend some time theorizing a model of a brain digitizer that 
operates within known physics constraints, it's not an easy task getting just 
the molecular and atomic digital data. You have to sample over a period of time 
and space using photons and particle beams, and this in itself interferes with 
the sample. Then, say this sample is reconstructed within a theoretically 
capable computer; the computer will most likely have to operate in slow time to 
simulate the physics of all the atoms and molecules, as the computer is itself 
constrained by the speed of light. I'm going this route because I don't think 
that it is possible to get an instantaneous reading of all the atoms in a 
brain; you have to reconstruct over time and space. THEN, this is ignoring the 
subatomic properties, and forget about quantum data sample digitization: I 
think it is impossible to get an exact copy.

So this leaves you with a reconstructed approximation. Exactly how much of this 
would be you is unknown, because any subatomic and quantum properties of you 
are started from scratch - this includes any macroscopic and environmental 
properties of subatomic, quantum, and superatomic molecular state and 
positioning effects. And if the whole atomic-level model is started from 
scratch in the simulator, it could disintegrate or diverge as it is all forced 
to fit together. Your copy is an approximation, and it is unknown how close it 
actually is to you, or whether you could even be put together accurately enough 
in the simulator.

John



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-27 Thread Stathis Papaioannou
On 27/02/2008, John G. Rose [EMAIL PROTECTED] wrote:

 Well, if you spend some time theorizing a model of a brain digitizer that 
 operates within known physics constraints, it's not an easy task getting just 
 the molecular and atomic digital data. You have to sample over a period of 
 time and space using photons and particle beams, and this in itself interferes 
 with the sample. Then, say this sample is reconstructed within a theoretically 
 capable computer; the computer will most likely have to operate in slow time 
 to simulate the physics of all the atoms and molecules, as the computer is 
 itself constrained by the speed of light. I'm going this route because I 
 don't think that it is possible to get an instantaneous reading of all the 
 atoms in a brain; you have to reconstruct over time and space. THEN, this is 
 ignoring the subatomic properties, and forget about quantum data sample 
 digitization: I think it is impossible to get an exact copy.

  So this leaves you with a reconstructed approximation. Exactly how much of 
 this would be you is unknown, because any subatomic and quantum properties of 
 you are started from scratch - this includes any macroscopic and environmental 
 properties of subatomic, quantum, and superatomic molecular state and 
 positioning effects. And if the whole atomic-level model is started from 
 scratch in the simulator, it could disintegrate or diverge as it is all forced 
 to fit together. Your copy is an approximation, and it is unknown how close it 
 actually is to you, or whether you could even be put together accurately 
 enough in the simulator.

There are some who think that all you need to simulate a brain (and
effectively copy a person) is to fix it, slice it up, and examine it
under a microscope to determine the synaptic structure. This is almost
certainly way too crude: consider the huge difference to cognition
made by small molecules in tiny concentrations, such as LSD, which do
no more than slightly alter the conformation of certain receptor
proteins on neurons by binding to them non-covalently. On the other
hand, it is equally implausible to suppose that you have to get it
right down to the subatomic level, since otherwise cosmic rays or
changing the isotope composition of the brain would have a major
effect, and they clearly don't.




-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-27 Thread Stathis Papaioannou
On 28/02/2008, John G. Rose [EMAIL PROTECTED] wrote:

 I don't know if you can rule out subatomic and quantum. There seems to be 
 more and more evidence pointing to an amount of activity going on there. A 
 small amount of cosmic rays don't have obvious immediate gross effects but 
 interaction is occurring. Exactly how much of it would need to be replicated 
 is not known. You could be missing out on important psi elements in 
 consciousness which are taken for granted :)

  Either way it would be an approximation, unless there were some way, using 
 theoretical physics, where an exact instantaneous snapshot could occur with 
 the snapshot existing in precisely equivalent matter at that instant.

Well, maybe you can't actually rule it out until you make a copy and
see how close it has to be to think the same as the original, but I
strongly suspect that getting it right down to the molecular level
would be enough. Even if quantum effects are important in
consciousness (and I don't think there is any clear evidence that this
is so), these would be generic quantum effects, reproduced by
reproducing the molecular structure. Transistors function using
quantum level effects, but you don't need to replace a particular
transistor with a perfect copy to have an identically functioning
electronic device.


-- 
Stathis Papaioannou



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-27 Thread John G. Rose
 From: Stathis Papaioannou [mailto:[EMAIL PROTECTED]
 Well, maybe you can't actually rule it out until you make a copy and
 see how close it has to be to think the same as the original, but I
 strongly suspect that getting it right down to the molecular level
 would be enough. Even if quantum effects are important in
 consciousness (and I don't think there is any clear evidence that this
 is so), these would be generic quantum effects, reproduced by
 reproducing the molecular structure. Transistors function using
 quantum level effects, but you don't need to replace a particular
 transistor with a perfect copy to have an identically functioning
 electronic device.
 

Actually a better way to do it, as getting even just the molecules right is a 
wee bit formidable - you need a really powerful computer with lots of RAM: take 
some DNA and grow a body double in software. Then create an interface from the 
biological brain to the software brain, and then gradually kill off the 
biological brain, forcing the consciousness into the software brain.

The problem with this approach naturally is that to grow the brain in RAM 
requires astronomical resources. But ordinary off-the-shelf matter holds so 
much digital memory compared to modern computers. You have to convert matter 
into RAM somehow. For example, one cell with DNA is how many gigs? And cells 
cost a dime a billion. But the problem is that molecular interaction is too 
slow and clunky. 

John




RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-25 Thread John G. Rose
There is an assumed simplification tendency going on, that a human brain
could be represented as a string of bits. It's easy to assume, but I think
that a more correct way to put it would be that it could be approximated.
Exactly how close the approximation could theoretically get is entirely
unknown. Though something could be achieved, and even different forms of
consciousness, even ones that may be superior, more efficient, and
structured better than biological ones, are there for discovery; I believe
that there are potentially many variations. There is a tendency to think of
levels of consciousness, but perhaps this is wrong: there are just variants,
some of which have stronger properties than others, but common denominators
are there. IOW, there are certain required properties for something to be
classified as conscious. Consciousness seems not to be a point but an
n-dimensional continuous function.

 

John

 

From: Panu Horsmalahti [mailto:[EMAIL PROTECTED] 
Sent: Sunday, February 24, 2008 12:08 PM
To: singularity@v2.listbox.com
Subject: Re: [singularity] Re: Revised version of Jaron Lanier's thought
experiment.

 

If we assume a 2x2x2 block of space floating somewhere, and assign each
element the value 1 if a single atom happens to be inside the subspace
defined by the grid, and 0 if not, how many ways would there be to read this
grid to create (2*2*2) = 8 bits? The answer is 8! = 40 320. Let's then assume
that a single atom can hold at least 100 bits[1]; there would be at least 9 *
10^157 ways to read a single atom. This is just by calculating the different
permutations, but we can also apply *any* mathematical calculation to our
information reading algorithm. One would be the 'NOT' argument, which simply
inverses our bits. This already doubles the amount of bits we can extract.
If you take this further, we can read *all* different permutations of 100
bits from a single atom. For any string of bits, there exists at least one
algorithm to calculate it from any input, since a single bit could be
calculated into '10' if the bit is 0, and '11' if the bit is 1, for example.

It must then be concluded that you can construct an algorithm/computer to
read a static string of bits that defines any human state of consciousness
(the string of bits could for example be calculated to match exactly those
bits that would be in the memory of a computer that simulates a human brain)
from pretty much any space or substrate.

One objection people have is that most of that complexity is actually in
the algorithm itself, but that is irrelevant if it still creates
consciousness.

If we assume that our universe has some kind of blind physical law, that has
as input the atoms/matter/energy in some space, and then searches through
all the possible algorithms, it is bound to find at least one that should
create consciousness. It would be quite a miracle if this physical law had
a human bias, and would think like humans to only create consciousness
when the computation is done in biological neurons. If you say that
computers cannot be truly conscious, you're saying that the universe has
some kind of magical human bias, which seems like religious thinking to me.

As I showed, some space can be interpreted as many different kinds of
computation (actually a *massive* number); only our human perspective forces
us to choose the interpretation that fits us. For example, if we create a
computer that calculates bullet trajectories, we interpret it to do just
that. But it can be interpreted 'in theory' to mean many other things; we
only care about the computation we designed it for. A small box of 3 atoms
bouncing around can be interpreted to mean a massive number of different
computations, in addition to simulating 3 atoms.

As it is trivial to read a static string of bits to match some state of
consciousness, some argue that it is not enough. They claim that a single
state is not enough to create consciousness. However, to imagine a computer
that not only creates the first string of bits in the consciousness
computation, but also the second one (and possibly more ad infinitum) just
makes the algorithm/computer more complex, but is not an argument against
the thought experiment.


1. The Singularity is Near, Ray Kurzweil




Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-25 Thread Stathis Papaioannou
On 26/02/2008, John G. Rose [EMAIL PROTECTED] wrote:

 There is an assumed simplification tendency going on that a human brain could 
 be represented as a string of bits. It's easy to assume but I think that a 
 more correct way to put it would be that it could be approximated. Exactly 
 how close the approximation could theoretically get is entirely unknown.

It's not entirely unknown. The maximum simulation fidelity that would
be required is at the quantum level, which is still finite. But
probably this would be overkill, since you remain you from moment to
moment despite changes in your brain which are gross compared to the
quantum level.




-- 
Stathis Papaioannou



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-24 Thread John G. Rose
The program that is isomorphically equivalent to raindrop positions, inputted
into the hypothetical computer, implements a brain. I have a blinky safety
light on the back of my bicycle that goes on and off at a 1-second frequency.
There exists a hypothetical computer that takes a 1-second on/off pulse as
program instructions and implements my brain. This doesn't say much, as the
hypothetical computer is almost 100% equivalent to my brain. Where is the
hypothetical computer? We still have to come up with it.

 

But Lanier does scrape the surface of something bigger with all this. He is
pointing to an intelligence in all things, or some structure in all things
that has some amount of potential intelligence, as with potential energy in
physics, or some effect on intelligence. IOW, the structure means something.


 

And I found it interesting that he said:

 

This means that software packaged as being non-intelligent is more likely
to improve, because the designers will receive better critical feedback from
users. The idea of intelligence removes some of the evolutionary pressure
from software, by subtly indicating to users it is they, rather than the
software, that should be changing.

 

As it happens, machine decision making is already running our household
finances to a scary degree, but it's doing so with a Wizard of Oz-like
remote authority that keeps us from questioning it. I'm referring to the
machines that calculate our credit ratings. Most of us have decided to
change our habits in order to appeal to these machines. We have simplified
ourselves in order to be comprehensible to simplistic data-bases, making
them look smart and authoritative. Our demonstrated willingness to
accommodate machines in this way is ample reason to adopt a standing bias
against the idea of artificial intelligence.

 

And it is true. There is a herding effect by AI and computers in general to
be aware of.

 

John

 

 

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 22, 2008 10:12 AM
To: singularity@v2.listbox.com
Subject: [singularity] Re: Revised version of Jaron Lanier's thought
experiment.

 

I came across an old Discover magazine this morning with yet another article
by Lanier on his rainstorm thought experiment. After reading the article it
occurred to me that what he is saying may be equivalent to:

Imagine a sufficiently large computer that works according to the
architecture of our ordinary PC's. In the space of Operating Systems (code
interpreters), we can find an operating system that will run the input from
the rainstorm such that it appears identical to a computer running a brain.

If this is true, then functionalism is not affected since we must not forget
to combine program + OS. Thus the rainstorm by itself has no emergent
properties.

Eric B. Ramsay






Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-24 Thread Panu Horsmalahti
If we assume a 2x2x2 block of space floating somewhere, and assign each
element the value 1 if a single atom happens to be inside the subspace
defined by the grid, and 0 if not, how many ways would there be to read this
grid to create (2*2*2) = 8 bits? The answer is 8! = 40 320. Let's then assume
that a single atom can hold at least 100 bits[1]; there would be at least 9 *
10^157 ways to read a single atom. This is just by calculating the different
permutations, but we can also apply *any* mathematical calculation to our
information reading algorithm. One would be the 'NOT' argument, which simply
inverses our bits. This already doubles the amount of bits we can extract.
If you take this further, we can read *all* different permutations of 100
bits from a single atom. For any string of bits, there exists at least one
algorithm to calculate it from any input, since a single bit could be
calculated into '10' if the bit is 0, and '11' if the bit is 1, for example.
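
(A sketch of the read-order argument on a toy 2x2x2 grid; the occupancy bits
are arbitrary.  Permutation alone only reaches strings with the same bit
counts; the post-processing step described above is what extends the range
to arbitrary strings:)

from itertools import permutations

grid = [1, 0, 1, 1, 0, 0, 1, 0]        # arbitrary 2x2x2 occupancy bits

orders = list(permutations(range(8)))  # every read order of the 8 cells
readings = {''.join(str(grid[i]) for i in order) for order in orders}

print(len(orders))                     # 40320 = 8! read orders
print(len(readings))                   # 70 distinct 8-bit strings from this grid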

It must then be concluded that you can construct an algorithm/computer to
read a static string of bits that defines any human state of consciousness
(the string of bits could for example be calculated to match exactly those
bits that would be in the memory of a computer that simulates a human brain)
from pretty much any space or substrate.

One objection people have is that most of that complexity is actually in
the algorithm itself, but that is irrelevant if it still creates
consciousness.

If we assume that our universe has some kind of blind physical law, that has
as input the atoms/matter/energy in some space, and then searches through
all the possible algorithms, it is bound to find at least one that should
create consciousness. It would be quite a miracle if this physical law had
a human bias, and would think like humans to only create consciousness
when the computation is done in biological neurons. If you say that
computers cannot be truly conscious, you're saying that the universe has
some kind of magical human bias, which seems like religious thinking to me.

As I showed, some space can be interpreted as many different kinds of
computation (actually a *massive* number); only our human perspective forces
us to choose the interpretation that fits us. For example, if we create a
computer that calculates bullet trajectories, we interpret it to do just
that. But it can be interpreted 'in theory' to mean many other things; we
only care about the computation we designed it for. A small box of 3 atoms
bouncing around can be interpreted to mean a massive number of different
computations, in addition to simulating 3 atoms.

As it is trivial to read a static string of bits to match some state of
consciousness, some argue that it is not enough. They claim that a single
state is not enough to create consciousness. However, to imagine a computer
that not only creates the first string of bits in the consciousness
computation, but also the second one (and possibly more ad infinitum) just
makes the algorithm/computer more complex, but is not an argument against
the thought experiment.


1. The Singularity is Near, Ray Kurzweil



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-23 Thread Joshua Fox
On 24/02/2008, Joshua Fox [EMAIL PROTECTED] wrote:
 Eric B. Ramsay wrote:
   Imagine a sufficiently large computer that works according to the
   architecture of our ordinary PC's. In the space of Operating Systems
   (code interpreters), we can find an operating system that will run the
   input from the rainstorm such that it appears identical to a computer
   running a brain.


  To find this operating system with reasonable resources would require
  intelligence -- the exact intelligence which Lanier is looking for but
  failing to identify.

Yes it would require intelligence to find it, but your mental state
is not contingent on someone else finding it. Nor is this an
argument against functionalism.

Consider Arithmetical Functionalism: the theory that a calculation is
multiply realisable, in any device that has the right functional
organisation. But this might mean that somewhere in the vastness of
the universe, a calculation such as 2 + 2 = 4 might be being
implemented purely by chance: in the causal relationship between atoms
in an interstellar gas cloud, for example. This is clearly ridiculous,
so *either* Arithmetical Functionalism is false *or* it is impossible
that a calculation will be implemented accidentally. Right?




-- 
Stathis Papaioannou



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-23 Thread Vladimir Nesov
On Sun, Feb 24, 2008 at 2:51 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:

  Consider Arithmetical Functionalism: the theory that a calculation is
  multiply realisable, in any device that has the right functional
  organisation. But this might mean that somewhere in the vastness of
  the universe, a calculation such as 2 + 2 = 4 might be being
  implemented purely by chance: in the causal relationship between atoms
  in an interstellar gas cloud, for example. This is clearly ridiculous,
  so *either* Arithmetical Functionalism is false *or* it is impossible
  that a calculation will be implemented accidentally. Right?


I feel a little uncomfortable when people say things like 'because
Occam's razor is true' or 'otherwise computationalism is false' or
'consciousness doesn't exist'. As these notions are usually quite
loaded and ambiguous, and main issues with them may revolve around the
question of what they actually mean, it's far from clear what is being
asserted when they are declared to be 'true' or 'false'.

Does 2+2=4 make a sound when there is no one around?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-23 Thread Vladimir Nesov
On Sun, Feb 24, 2008 at 4:06 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:
 On 24/02/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:

Does 2+2=4 make a sound when there is no one around?

  Yes, but it is of no consequence since no one can hear it. However, if
  we believe that computation can result in consciousness, then by
  definition there *is* someone to hear it: itself.


But it's still of no 'consequence', no?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-23 Thread Stathis Papaioannou
On 24/02/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:

  Does 2+2=4 make a sound when there is no one around?
  
Yes, but it is of no consequence since no one can hear it. However, if
we believe that computation can result in consciousness, then by
definition there *is* someone to hear it: itself.
  

 But it's still of no 'consequence', no?

Of no consequence as far as anything at the level of the substrate of
its implementation is concerned, no. In order to find such a
computation hidden in noise we would have to do the computation all
over again, using conventional means. But unless we require that the
computation interact with us, that should make no difference to *it*.
If the computation simulates an inputless virtual reality with
conscious inhabitants, they should be no less conscious for the fact
that we can't talk to them.




-- 
Stathis Papaioannou
