Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Stathis Papaioannou
On 19/02/2008, John Ku [EMAIL PROTECTED] wrote:

 Yes, you've shown either that, or that even some occasionally
 intelligent and competent philosophers sometimes take seriously ideas
 that really can be dismissed as obviously ridiculous -- ideas which
 really would be unworthy of careful thought were it not for the fact
 that pinpointing exactly why such ridiculous ideas are wrong is so
 often fruitful (as in the Chalmers article).

It doesn't sound so strange when you examine the distinction between
a computation and the implementation of that computation. An analogy
is the distinction between a circle and the implementation of a
circle.

It might be objected that it is ridiculous to argue that any irregular
shape looked at with the right transformation matrix is an
implementation of a circle. The objection is valid under a non-trivial
definition of implementation. A randomly drawn perimeter around a
vicious dog on a tether does not help you avoid getting bitten unless
you have the relevant transformation matrix and can do the
calculations in your head, which would be no better than having no
implementation at all but just instructions on how to draw the
circle de novo.
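
To make the point concrete, here is a toy Python sketch (my own
illustration; the shapes and numbers are made up):

import math
import random

# An arbitrary, irregular set of points: the "randomly drawn perimeter".
irregular = [(random.random(), random.random()) for _ in range(100)]

# A genuine unit circle, sampled at 100 points.
circle = [(math.cos(2 * math.pi * i / 100), math.sin(2 * math.pi * i / 100))
          for i in range(100)]

# The "transformation" that makes the irregular shape an implementation
# of the circle is just a lookup table pairing the two point sets.
transform = dict(zip(irregular, circle))

# The lookup works, but only because the circle was specified when the
# table was built; the irregular shape itself contributes nothing.
print(transform[irregular[0]])

The point of the sketch is that all the circle-information lives in the
transformation, not in the shape, which is exactly why this counts as a
trivial implementation.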

Thus, implementation is linked to utility. Circles exist in the
abstract as Platonic objects, but Platonic objects don't interact with
the real world until they are implemented, and implemented in a
particular useful or non-trivial way. Similarly, computations, such as
Turing machines, exist as Platonic objects, but don't play any part in
the real world unless they are implemented. There is an abstract
machine adding two numbers together, but it is no use to you when you
are doing your shopping unless it is implemented in a useful and
non-trivial way, such as in an electronic calculator or in your brain.
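
Here is the same point as a toy sketch, in the spirit of the rock
argument in the Chalmers article mentioned above (again my own
illustration, with made-up state names):

# The abstract computation: a machine adding 2 + 3, step by step.
computation_trace = ["start", "load 2", "load 3", "add", "halt with 5"]

# The physical system: arbitrary successive snapshots of a rainstorm.
rainstorm_trace = ["drops A", "drops B", "drops C", "drops D", "drops E"]

# A mapping built after the fact, pairing physical states with
# computational states. Under it, the rainstorm "implements" the adder.
mapping = dict(zip(rainstorm_trace, computation_trace))

for state in rainstorm_trace:
    print(state, "->", mapping[state])

Since the mapping is just a lookup table constructed from the answer,
the rainstorm adds 2 + 3 only in this trivial sense: you could not use
it while doing your shopping without already knowing the result.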

Now, consider the special case of a conscious computation. If this
computation is to interact with the real world, it must fulfil the
criteria for non-trivial implementation as discussed. A human being
would be an example of this. But what if the computation creates an
inputless virtual world with conscious inhabitants? Unless you are
prepared to argue that the consciousness of the inhabitants is
contingent on interaction with the real world, there seems no reason to
insist that the implementation be non-trivial or useful in the above
sense. Consciousness would then be a quality of the abstract Platonic
object, as circularity is a quality of the abstract circle.

I might add that there is nothing in this which contradicts
functionalism, or for that matter geometry.



-- 
Stathis Papaioannou



Re: [singularity] Definitions

2008-02-19 Thread Charles D Hixson

John K Clark wrote:

Matt Mahoney [EMAIL PROTECTED]


It seems to me the problem is
defining consciousness, not testing for it.


And it seems to me that beliefs of this sort are exactly the reason
philosophy is in such a muddle. A definition of consciousness is not
needed; in fact, unless you're a mathematician, for whom definitions
can be of some use, one can lead a full, rich, rewarding intellectual
life without having a good definition of anything. Compared with
examples, definitions are of trivial importance.

 John K Clark


But consciousness is easy to define, if not to implement:
Consciousness is the entity evaluating a portion of itself which
represents its position in its model of its environment.


If there's any aspect of consciousness which isn't included within this
definition, I would like to know about it.  (Proving the definition
correct would, however, be somewhere between difficult and impossible.
As normally used, "consciousness" is a term without an external
referent, so there's no way of determining that any two people are
using the same definition.  It *may* be possible to determine that they
are using different definitions.)
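
For what it's worth, here is a minimal Python rendering of that
definition (a toy of my own, not a serious proposal; every name in it
is made up):

class Entity:
    def __init__(self):
        # The entity's model of its environment, which includes a
        # token representing the entity itself.
        self.world_model = {"food": (3, 4), "danger": (0, 9), "self": (0, 0)}

    def evaluate_self(self):
        # "Evaluating a portion of itself which represents its
        # position in its model of its environment."
        me = self.world_model["self"]
        danger = self.world_model["danger"]
        distance = abs(me[0] - danger[0]) + abs(me[1] - danger[1])
        return "uneasy" if distance < 5 else "calm"

print(Entity().evaluate_self())  # -> calm

Whether evaluating a self-token in a world-model captures everything we
mean by consciousness is, of course, the very point in dispute.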





Re: [singularity] Definitions

2008-02-19 Thread Richard Loosemore

John K Clark wrote:

And I will define "consciousness" just as soon as you define "define".


Ah, but that is exactly my approach.

Thus, the subtitle I gave to my 2006 conference paper was "Explaining
Consciousness by Explaining That You Cannot Explain it, Because Your
Explanation Mechanism is Getting Zapped".



Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:


Sorry, but I do not think your conclusion even remotely follows from the
premises.

But beyond that, the basic reason that this line of argument is
nonsensical is that Lanier's thought experiment was rigged in such a way
that a coincidence was engineered into existence.

Nothing whatever can be deduced from an argument in which you set things
up so that a coincidence must happen!  It is just a meaningless
coincidence that a computer can in theory be set up to (a) be conscious
and (b) have a lower level of its architecture isomorphic to a rainstorm.


I don't see how the fact that something happens by coincidence is by
itself a problem. Evolution, for example, works by means of random
genetic mutations, some of which just happen to result in a phenotype
better suited to its environment.

By the way, Lanier's idea is not original. Hilary Putnam, John Searle,
Tim Maudlin, Greg Egan, Hans Moravec and David Chalmers (see the paper
cited by Kaj Sotala in the original thread -
http://consc.net/papers/rock.html) have all considered variations on
the theme. At the very least, this should indicate that the idea
cannot be dismissed as just obviously ridiculous and unworthy of
careful thought.


I am aware of some of those other sources for the idea:  nevertheless, 
they are all nonsense for the same reason.  I especially single out 
Searle:  his writings on this subject are virtually worthless.  I have 
argued with Searle to his face, and I have talked with others 
(Hofstadter, for example) who have also done so, and the consensus among 
these people is that his arguments are built on confusion.


(And besides, I don't stop thinking just because others have expressed 
their view of an idea:  I use my own mind, and if I can come up with an 
argument against the idea, I prefer to use that rather than defer to 
authority. ;-) )


But going back to the question at issue:  this coincidence is a 
coincidence that happens in a thought experiment. If someone constructs 
a thought experiment in which they allow such things as computers of 
quasi-infinite size, they can make anything happen, including ridiculous 
coincidences!


If you set the thought experiment up so that there is enough room for a 
meaningless coincidence to occur within the thought experiment, then 
what you have is *still* just a meaningless coincidence.


I don't think I can put it any plainer than that.



Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Eric B. Ramsay
During the late '70s, when I was at McGill, I attended a public talk given by
Feynman on quantum physics. After the talk, and in answer to a question posed
by a member of the audience, Feynman said something along the lines of: "I
have here in my pocket a prescription from my doctor that forbids me to answer
questions from or get into discussions with philosophers", or something like
that. After spending the last couple of days reading all the links on the
outrageous proposition that rocks, rainstorms or plates of spaghetti implement
the mind, I now understand Feynman's sentiment. What a waste of mental energy.
A line of discussion as fruitless as solipsism. I am in full agreement
with Richard Loosemore on this one.
Eric B. Ramsay

Stathis Papaioannou [EMAIL PROTECTED] wrote:

On 20/02/2008, Richard Loosemore wrote:

 I am aware of some of those other sources for the idea:  nevertheless,
 they are all nonsense for the same reason.  I especially single out
 Searle:  his writings on this subject are virtually worthless.  I have
 argued with Searle to his face, and I have talked with others
 (Hofstadter, for example) who have also done so, and the consensus among
 these people is that his arguments are built on confusion.

Just to be clear, this is *not* the same as Searle's Chinese Room
argument, which only he seems to find convincing.




-- 
Stathis Papaioannou



Re: [singularity] Definitions

2008-02-19 Thread Matt Mahoney

--- Charles D Hixson [EMAIL PROTECTED] wrote:

 John K Clark wrote:
  Matt Mahoney [EMAIL PROTECTED]
 
  It seems to me the problem is
  defining consciousness, not testing for it.
 
  And it seems to me that beliefs of this sort are exactly the reason
  philosophy is in such a muddle. A definition of consciousness is not
  needed; in fact, unless you're a mathematician, for whom definitions
  can be of some use, one can lead a full, rich, rewarding intellectual
  life without having a good definition of anything. Compared with
  examples, definitions are of trivial importance.
 
   John K Clark
 
 But consciousness is easy to define, if not to implement:
 Consciousness is the entity evaluating a portion of itself which
 represents its position in its model of its environment.

 If there's any aspect of consciousness which isn't included within this
 definition, I would like to know about it.  (Proving the definition
 correct would, however, be somewhere between difficult and impossible.
 As normally used, "consciousness" is a term without an external
 referent, so there's no way of determining that any two people are
 using the same definition.  It *may* be possible to determine that
 they are using different definitions.)

Or consciousness just means awareness...

in which case, it seems to be located in the hippocampus.
http://www.world-science.net/othernews/080219_conscious


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Definitions

2008-02-19 Thread Samantha Atkins

Richard Loosemore wrote:

John K Clark wrote:

And I will define consciousness just as soon as you define define.


Ah, but that is exactly my approach.

Thus, the subtitle I gave to my 2006 conference paper was "Explaining
Consciousness by Explaining That You Cannot Explain it, Because Your
Explanation Mechanism is Getting Zapped".


Great title.  Couldn't Google it, though.  Is it perchance available
online, or in conference proceedings I perhaps subscribe to?


- samantha
