Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Vladimir Nesov
On Feb 20, 2008 6:13 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 The possibility of mind uploading to computers strictly depends on
 functionalism being true; if it isn't then you may as well shoot
 yourself in the head as undergo a destructive upload. Functionalism
 (invented, and later repudiated, by Hilary Putnam) is philosophy of
 mind if anything is philosophy of mind, and the majority of cognitive
 scientists are functionalists. Are you still happy asserting that it's
 all bunk?


Philosophy is in most cases very inefficient, hence wasteful. It puts
a great deal into building its theoretical constructions, few of which
are useful for understanding reality. It might be fun for those who
like this kind of thing, but it is a bad tool.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stan Nilsen

Vladimir Nesov wrote:

On Feb 20, 2008 6:13 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:

The possibility of mind uploading to computers strictly depends on
functionalism being true; if it isn't then you may as well shoot
yourself in the head as undergo a destructive upload. Functionalism
(invented, and later repudiated, by Hilary Putnam) is philosophy of
mind if anything is philosophy of mind, and the majority of cognitive
scientists are functionalists. Are you still happy asserting that it's
all bunk?



Philosophy is in most cases very inefficient, hence wasteful. It puts
a great deal into building its theoretical constructions, few of which
are useful for understanding reality. It might be fun for those who
like this kind of thing, but it is a bad tool.


*** humor intended ***
Oddly enough, one Webster definition of philosopher is "one who seeks
wisdom or enlightenment."  Nothing wrong with that.


It seems that when philosophy is implemented it becomes like nuclear
physics: break down all the things we essentially understand until
we come up with pieces, which we give names to, and then admit we don't
know what the names identify - other than broken pieces of something we
used to understand when it was whole.  My limited experience with those
who practice philosophy is that they love to go to the absurd - I
suspect this is meant as a means of proof, but it often comes across as
macho philosophoso.  Kind of "I can prove anything you say is absurd."

I welcome the thoughts of Philosophers.





Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 20/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:


I am aware of some of those other sources for the idea:  nevertheless,
they are all nonsense for the same reason.  I especially single out
Searle:  his writings on this subject are virtually worthless.  I have
argued with Searle to his face, and I have talked with others
(Hofstadter, for example) who have also done so, and the consensus among
these people is that his arguments are built on confusion.


Just to be clear, this is *not* the same as Searle's Chinese Room
argument, which only he seems to find convincing.


Oh, my word:  if only it were just him!

He was at the Tucson Consciousness conference two years ago, and in his 
big talk he strutted about the stage saying "I invented the Chinese Room 
thought experiment, and the Computationalists tried to explain it away 
for twenty years, until finally the dust settled, and now they 
have given up and everyone agrees that I WON!"


This statement was followed by tumultuous applause and cheers from a 
large fraction of the 800+ audience.


You're right that it is not the same as the Chinese Room, but if I am 
not mistaken this was one of his attempts to demolish a reply to the 
Chinese Room.




Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread gifting

Quoting Vladimir Nesov [EMAIL PROTECTED]:


On Feb 20, 2008 6:13 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:


The possibility of mind uploading to computers strictly depends on
functionalism being true; if it isn't then you may as well shoot
yourself in the head as undergo a destructive upload. Functionalism
(invented, and later repudiated, by Hilary Putnam) is philosophy of
mind if anything is philosophy of mind, and the majority of cognitive
scientists are functionalists. Are you still happy asserting that it's
all bunk?



Philosophy is in most cases very inefficient, hence wasteful. It puts
a great deal into building its theoretical constructions, few of which
are useful for understanding reality. It might be fun for those who
like this kind of thing, but it is a bad tool.


I would beg to differ. Philosophy, science and society dance together.
Philosophy contributes to understanding reality or whatever reality might be.
Gudrun


--
Vladimir Nesov
[EMAIL PROTECTED]



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stan Nilsen [EMAIL PROTECTED] wrote:

 It seems that when philosophy is implemented it becomes like nuclear
 physics: break down all the things we essentially understand until
 we come up with pieces, which we give names to, and then admit we don't
 know what the names identify - other than broken pieces of something we
 used to understand when it was whole.  My limited experience with those
 who practice philosophy is that they love to go to the absurd - I
 suspect this is meant as a means of proof, but it often comes across as
 macho philosophoso.  Kind of "I can prove anything you say is absurd."
 I welcome the thoughts of Philosophers.

I think most or at least many philosophers, myself included, would
actually agree that most of what (usually other) philosophers produce
is garbage. Of course, they won't agree about *which* philosophical
views and methods are garbage. I would propose that the primary
explanation for this is simply that philosophy is really, really hard.
It is almost by definition those areas of intellectual inquiry in
which there is little established methodology. (I think that is a
little overstated since at least in analytic philosophy, there is
broad agreement on the logical structure of arguments and rather less
broad but growing agreement on the nature of conceptual analysis.)

Notice that it is not just philosophers who say stupid stuff in
philosophy. Evolutionary biologists, computer scientists, economists,
scientists, and just people in general can all be found saying stupid
things when they try to venture into ethics, philosophy of mind,
philosophy of science, etc. In fact, I would say that professional
philosophers have a significantly better track record in philosophy
than people in general or the scientific community when they venture
into philosophy (which may not say very much about their track record
on an absolute scale).

By the way, I think this whole tangent was actually started by Richard
misinterpreting Lanier's argument (though quite understandably given
Lanier's vagueness and unclarity). Lanier was not imagining the
amazing coincidence of a genuine computer being implemented in a
rainstorm, i.e. one that is robustly implementing all the right causal
laws and the strong conditionals Chalmers talks about. Rather, he was
imagining the more ordinary and really not very amazing coincidence of
a rainstorm bearing a certain superficial isomorphism to just a trace
of the right kind of computation. He rightly notes that if
functionalism were committed to such a rainstorm being conscious, it
should be rejected. I think this is true whether or not such
rainstorms actually exist or are likely since a correct theory of our
concepts should deliver the right results as the concept is applied to
any genuine possibility. For instance, if someone's ethical theory
delivers the result that it is perfectly permissible to press a button
that would cause all conscious beings to suffer for all eternity, then
it is no legitimate defense to claim that's okay because it's really
unlikely. As I tried to explain, I think Lanier's argument fails
because he doesn't establish that functionalism is committed to the
absurd result that the rainstorms he discusses are conscious or
genuinely implementing computation. If, on the other hand, Lanier were
imagining a rainstorm miraculously implementing real computation (in
the way Chalmers discusses) and somehow thought that was a problem for
functionalism, then of course Richard's reply would roughly be the
correct one.
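
To make that distinction concrete, here is a rough Python sketch (my own toy
example; the little counter machine and helper names are invented purely for
illustration): a superficial trace match only needs an after-the-fact mapping
onto one particular run, whereas a robust implementation, in roughly the sense
Chalmers requires, has to satisfy the counterfactual transition conditions for
every state.

# Illustrative sketch: a mere trace match versus a robust implementation.
import random

def successor(state):
    # The abstract computation: a 2-bit counter.
    return (state + 1) % 4

def robustly_implements(transition, mapping, physical_states):
    # Robust implementation (the 'strong conditional' reading): for EVERY
    # mapped physical state, the physical dynamics must track the abstract
    # transition (a counterfactual requirement, not just a record of what
    # happened to occur).
    return all(mapping[transition(p)] == successor(mapping[p])
               for p in physical_states)

def trace_matches(trace, mapping):
    # Mere trace match: one observed sequence lines up with one run of the
    # abstract machine under some after-the-fact mapping. No counterfactuals.
    return all(mapping[trace[i + 1]] == successor(mapping[trace[i]])
               for i in range(len(trace) - 1))

# A 'rainstorm': a few arbitrary physical states. A mapping chosen after the
# fact can always make a short trace line up with the counter...
rain = random.sample(range(1000), 5)
post_hoc_map = {rain[i]: i % 4 for i in range(len(rain))}
print(trace_matches(rain, post_hoc_map))          # True, by construction

# ...but a genuine implementation has dynamics that keep tracking the counter
# for any state, not just the ones that happened to occur.
physical_dynamics = {10: 11, 11: 12, 12: 13, 13: 10}
genuine_map = {10: 0, 11: 1, 12: 2, 13: 3}
print(robustly_implements(lambda p: physical_dynamics[p], genuine_map,
                          [10, 11, 12, 13]))      # True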



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore

John Ku wrote:


By the way, I think this whole tangent was actually started by Richard
misinterpreting Lanier's argument (though quite understandably given
Lanier's vagueness and unclarity). Lanier was not imagining the
amazing coincidence of a genuine computer being implemented in a
rainstorm, i.e. one that is robustly implementing all the right causal
laws and the strong conditionals Chalmers talks about. Rather, he was
imagining the more ordinary and really not very amazing coincidence of
a rainstorm bearing a certain superficial isomorphism to just a trace
of the right kind of computation. He rightly notes that if
functionalism were committed to such a rainstorm being conscious, it
should be rejected. I think this is true whether or not such
rainstorms actually exist or are likely since a correct theory of our
concepts should deliver the right results as the concept is applied to
any genuine possibility. For instance, if someone's ethical theory
delivers the result that it is perfectly permissible to press a button
that would cause all conscious beings to suffer for all eternity, then
it is no legitimate defense to claim that's okay because it's really
unlikely. As I tried to explain, I think Lanier's argument fails
because he doesn't establish that functionalism is committed to the
absurd result that the rainstorms he discusses are conscious or
genuinely implementing computation. If, on the other hand, Lanier were
imagining a rainstorm miraculously implementing real computation (in
the way Chalmers discusses) and somehow thought that was a problem for
functionalism, then of course Richard's reply would roughly be the
correct one.


Oh, I really don't think I made that kind of mistake in interpreting 
Lanier's argument.


If Lanier was attacking a very *particular* brand of functionalism (the 
kind that would say isomorphism is everything, so any isomorphism 
between a rainstorm and a conscious computer, even for just a 
millisecond, would leave you no option but to say that the rainstorm is 
conscious), then perhaps I agree with Lanier.  That kind of simplistic 
functionalism is just not going to work.


But I don't think he was narrowing his scope that much, was he?  If so, 
he was attacking a straw man.  I just assumed he wasn't doing anything 
so trivial, but I stand to be corrected if he was.  I certainly thought 
that many of the people who cited Lanier's argument were citing it as a 
demolition of functionalism in the large.


There are many functionalists who would say that what matters is a 
functional isomorphism, and that even though we have difficulty at this 
time saying exactly what we mean by a functional isomorphism, 
nevertheless it is not good enough to simply find any old isomorphism 
(especially one which holds for only a moment).


I would also point out one other weakness in his argument:  in order to 
get his isomorphism to work, he almost certainly has to allow the 
hypothetical computer to implement the rainstorm at a different level 
of representation from the consciousness.  It is only if you allow 
this difference of levels between the two things that the hypothetical 
machine is guaranteed to be possible.  If the two things are supposed to 
be present at exactly the same level of representation in the machine, 
then I am fairly sure that the machine is over-constrained, and thus we 
cannot say that such a machine is, in general, possible.


But if they happen at different levels, then the argument falls apart 
for a different reason:  you can always make two systems coexist in this 
way, but that does not mean that they are the same system.  There is 
no actual isomorphism in this case.  This, of course, was Searle's main 
mistake:  understanding of English and Chinese were happening at two 
different levels, and therefore in two different systems, and nobody claims 
that what one system understands, the other must also be 
understanding.  (Searle's main folly, of course, is that he has never 
shown any sign of being able to understand this point.)




Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku [EMAIL PROTECTED] wrote:

  By the way, I think this whole tangent was actually started by Richard
  misinterpreting Lanier's argument (though quite understandably given
  Lanier's vagueness and unclarity). Lanier was not imagining the
  amazing coincidence of a genuine computer being implemented in a
  rainstorm, i.e. one that is robustly implementing all the right causal
  laws and the strong conditionals Chalmers talks about. Rather, he was
  imagining the more ordinary and really not very amazing coincidence of
  a rainstorm bearing a certain superficial isomorphism to just a trace
  of the right kind of computation. He rightly notes that if
  functionalism were committed to such a rainstorm being conscious, it
  should be rejected.

Only if it is incompatible with the world we observe.





-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:
 On 21/02/2008, John Ku [EMAIL PROTECTED] wrote:

   By the way, I think this whole tangent was actually started by Richard
   misinterpreting Lanier's argument (though quite understandably given
   Lanier's vagueness and unclarity). Lanier was not imagining the
   amazing coincidence of a genuine computer being implemented in a
   rainstorm, i.e. one that is robustly implementing all the right causal
   laws and the strong conditionals Chalmers talks about. Rather, he was
   imagining the more ordinary and really not very amazing coincidence of
   a rainstorm bearing a certain superficial isomorphism to just a trace
   of the right kind of computation. He rightly notes that if
   functionalism were committed to such a rainstorm being conscious, it
   should be rejected.

 Only if it is incompatible with the world we observe.

I think that's the wrong way to think about philosophical issues. It
seems you are trying to import a scientific method into a philosophical
domain where it does not belong. Functionalism is a view about how our
concepts work. It is not tested by whether it is falsified by
observations about the world.

Or if you prefer, conceptual analysis does produce scientific
hypotheses about the world, but the part of the world in question is
within our own heads, something that we ourselves don't have
transparent access to. If we had transparent access to the way our
concepts work, the task of cognitive science and philosophy and along
with it much of AI would be considerably easier. Our best way of
testing these hypotheses at the moment is to see whether a proposed
analysis would best explain our uses of the concept and our conceptual
intuitions.

Sometimes, especially with people who have been in the grip of a
theory, people can (often only partially) switch what concept is
linked to a lexical item and not realize they are (sometimes) using
the word differently from others (including their past selves). Then
the debate gets much more complicated and may, among other things, have
to get into the normative issue of which concept(s) we ought to use.
Chances are, though, unless the revision was carefully thought out and
defended rather than accidentally slipped into, it will not serve the
presumably important functions for which we had the original concept.



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku [EMAIL PROTECTED] wrote:
 On 2/20/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:
   On 21/02/2008, John Ku [EMAIL PROTECTED] wrote:
  
 By the way, I think this whole tangent was actually started by Richard
 misinterpreting Lanier's argument (though quite understandably given
 Lanier's vagueness and unclarity). Lanier was not imagining the
 amazing coincidence of a genuine computer being implemented in a
 rainstorm, i.e. one that is robustly implementing all the right causal
 laws and the strong conditionals Chalmers talks about. Rather, he was
 imagining the more ordinary and really not very amazing coincidence of
 a rainstorm bearing a certain superficial isomorphism to just a trace
 of the right kind of computation. He rightly notes that if
 functionalism were committed to such a rainstorm being conscious, it
 should be rejected.
  
   Only if it is incompatible with the world we observe.


 I think that's the wrong way to think about philosophical issues. It
  seems you are trying to import a scientific method into a philosophical
  domain where it does not belong. Functionalism is a view about how our
  concepts work. It is not tested by whether it is falsified by
  observations about the world.

  Or if you prefer, conceptual analysis does produce scientific
  hypotheses about the world, but the part of the world in question is
  within our own heads, something that we ourselves don't have
  transparent access to. If we had transparent access to the way our
  concepts work, the task of cognitive science and philosophy and along
  with it much of AI would be considerably easier. Our best way of
  testing these hypotheses at the moment is to see whether a proposed
  analysis would best explain our uses of the concept and our conceptual
  intuitions.

Functionalism at least has the form of a scientific hypothesis, in
that it asserts that a functionally equivalent analogue of my brain
will have the same mental properties. Even though in practice it isn't
empirically falsifiable, we can examine it to make sure it is
internally consistent, compatible with observed reality, and in
keeping with the principle of Occam's razor. We should certainly be
wary of a theory that sounds ridiculous, but unless it fails in one of
these three areas it is wrong to dismiss it.




-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Stathis Papaioannou
On 19/02/2008, John Ku [EMAIL PROTECTED] wrote:

 Yes, you've shown either that, or that even some occasionally
 intelligent and competent philosophers sometimes take seriously ideas
 that really can be dismissed as obviously ridiculous -- ideas which
 really would be unworthy of careful thought were it not for the fact that
 pinpointing exactly why such ridiculous ideas are wrong is so often
 fruitful (as in the Chalmers article).

It doesn't sound so strange when you examine the distinction between
the computation and the implementation of the computation. An analogy
is the distinction between a circle and the implementation of a
circle.

It might be objected that it is ridiculous to argue that any irregular
shape looked at with the right transformation matrix is an
implementation of a circle. The objection is valid under a non-trivial
definition of implementation. A randomly drawn perimeter around a
vicious dog on a tether does not help you avoid getting bitten unless
you have the relevant transformation matrix and can do the
calculations in your head, which would be no better than having no
implementation at all but just instructions on how to draw the
circle de novo.

Thus, implementation is linked to utility. Circles exist in the
abstract as platonic objects, but platonic objects don't interact with
the real world until they are implemented, and implemented in a
particular useful or non-trivial way. Similarly, computations exist as
platonic objects, such as Turing machines, but don't play any part in
the real world unless they are implemented. There is an abstract
machine adding two numbers together, but this is no use to you when you
are doing your shopping unless it is implemented in a useful and
non-trivial way, such as in an electronic calculator or in your brain.
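
To put the same point in code, here is a rough Python sketch (my own
illustration, reusing the 127 + 498 abacus numbers from earlier in this
thread): a non-trivial implementation of addition does the work for inputs you
did not anticipate, whereas a trivial one is just an arbitrary state plus an
interpretation that already contains the answer.

# Illustrative sketch only: a useful adder versus a 'rainstorm adder'
# whose interpretation mapping already encodes the answer.

def useful_adder(a, b):
    # Non-trivial implementation: it computes sums for inputs you did not
    # anticipate, which is why a calculator helps you at the shops.
    return a + b

# A 'rainstorm adder': an arbitrary physical state...
rain_state = "droplet-pattern-42"

# ...plus an interpretation mapping it onto '127 + 498 = 625'. To build the
# mapping you had to know the question and work out the answer yourself, so
# the physical state contributes nothing to the computation.
interpretation = {rain_state: (127, 498, 127 + 498)}

print(useful_adder(127, 498))       # 625, computed for you
print(interpretation[rain_state])   # (127, 498, 625), but *you* did the adding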

Now, consider the special case of a conscious computation. If this
computation is to interact with the real world it must fulfil the
criteria for non-trivial implementation as discussed. A human being
would be an example of this. But what if the computation creates an
inputless virtual world with conscious inhabitants? Unless you are
prepared to argue that the consciousness of the inhabitants is
contingent on interaction with the real world there seems no reason to
insist that the implementation be non-trivial or useful in the above
sense. Consciousness would then be a quality of the abstract platonic
object, as circularity is a quality of the abstract circle.

I might add that there is nothing in this which contradicts
functionalism, or for that matter geometry.



-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:


Sorry, but I do not think your conclusion even remotely follows from the
premises.

But beyond that, the basic reason that this line of argument is
nonsensical is that Lanier's thought experiment was rigged in such a way
that a coincidence was engineered into existence.

Nothing whatever can be deduced from an argument in which you set things
up so that a coincidence must happen!  It is just a meaningless
coincidence that a computer can in theory be set up to be (a) conscious
and (b) have a lower level of its architecture be isomorphic to a rainstorm.


I don't see how the fact that something happens by coincidence is by itself
a problem. Evolution, for example, works by means of random genetic
mutations some of which just happen to result in a phenotype better
suited to its environment.

By the way, Lanier's idea is not original. Hilary Putnam, John Searle,
Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper
cited by Kaj Sotola in the original thread -
http://consc.net/papers/rock.html) have all considered variations on
the theme. At the very least, this should indicate that the idea
cannot be dismissed as just obviously ridiculous and unworthy of
careful thought.


I am aware of some of those other sources for the idea:  nevertheless, 
they are all nonsense for the same reason.  I especially single out 
Searle:  his writings on this subject are virtually worthless.  I have 
argued with Searle to his face, and I have talked with others 
(Hofstadter, for example) who have also done so, and the consensus among 
these people is that his arguments are built on confusion.


(And besides, I don't stop thinking just because others have expressed 
their view of an idea:  I use my own mind, and if I can come up with an 
argument against the idea, I prefer to use that rather than defer to 
authority. ;-) )


But going back to the question at issue:  this coincidence is a 
coincidence that happens in a thought experiment. If someone constructs 
a thought experiment in which they allow such things as computers of 
quasi-infinite size, they can make anything happen, including ridiculous 
coincidences!


If you set the thought experiment up so that there is enough room for a 
meaningless coincidence to occur within the thought experiment, then 
what you have is *still* just a meaningless coincidence.


I don't think I can put it any plainer than that.



Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Eric B. Ramsay
During the late '70s when I was at McGill, I attended a public talk given by 
Feynman on quantum physics. After the talk, and in answer to a question posed 
by a member of the audience, Feynman said something along the lines of: "I 
have here in my pocket a prescription from my doctor that forbids me to answer 
questions from or get into discussions with philosophers", or something like 
that. After spending the last couple of days reading all the links on the 
outrageous proposition that rocks, rainstorms or plates of spaghetti implement 
the mind, I now understand Feynman's sentiment. What a waste of mental energy. 
A line of discussion as fruitless as solipsism. I am in full agreement 
with Richard Loosemore on this one. 
Eric B. Ramsay

Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 20/02/2008, Richard Loosemore wrote:

 I am aware of some of those other sources for the idea:  nevertheless,
 they are all nonsense for the same reason.  I especially single out
 Searle:  his writings on this subject are virtually worthless.  I have
 argued with Searle to his face, and I have talked with others
 (Hofstadter, for example) who have also done so, and the consensus among
 these people is that his arguments are built on confusion.

Just to be clear, this is *not* the same as Searle's Chinese Room
argument, which only he seems to find convincing.




-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
[snip]

But again, none of this touches upon Lanier's attempt to draw a bogus
conclusion from his thought experiment.



No external observer would ever be able to keep track of such a
fragmented computation and as far as the rest of the universe is
concerned there may as well be no computation.

This makes little sense, surely.  You mean that we would not be able to
interact with it?  Of course not:  the poor thing will have been
isolated from meaningful contact with the world because of the jumbled-up
implementation that you posit.  Again, though, I see no relevant
conclusion emerging from this.

I cannot make any sense of your statement that "as far as the rest of
the universe is concerned there may as well be no computation".  So we
cannot communicate with it any more; that should not be surprising,
given your assumptions.


We can't communicate with it so it is useless as far as what we
normally think of as computation goes. A rainstorm contains patterns
isomorphic with an abacus adding 127 and 498 to give 625, but to
extract this meaning you have to already know the question and the
answer, using another computer such as your brain. However, in the
case of an inputless simulation with conscious inhabitants this
objection is irrelevant, since the meaning is created by observers
intrinsic to the computation.

Thus if there is any way a physical system could be interpreted as
implementing a conscious computation, it is implementing the conscious
computation, even if no-one else is around to keep track of it.



Sorry, but I do not think your conclusion even remotely follows from the 
premises.


But beyond that, the basic reason that this line of argument is 
nonsensical is that Lanier's thought experiment was rigged in such a way 
that a coincidence was engineered into existence.


Nothing whatever can be deduced from an argument in which you set things 
up so that a coincidence must happen!  It is just a meaningless 
coincidence that a computer can in theory be set up to be (a) conscious 
and (b) have a lower level of its architecture be isomorphic to a rainstorm.


It is as simple as that.



Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Stathis Papaioannou
On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

 Sorry, but I do not think your conclusion even remotely follows from the
 premises.

 But beyond that, the basic reason that this line of argument is
 nonsensical is that Lanier's thought experiment was rigged in such a way
 that a coincidence was engineered into existence.

 Nothing whatever can be deduced from an argument in which you set things
 up so that a coincidence must happen!  It is just a meaningless
 coincidence that a computer can in theory be set up to be (a) conscious
 and (b) have a lower level of its architecture be isomorphic to a rainstorm.

I don't see how the fact that something happens by coincidence is by itself
a problem. Evolution, for example, works by means of random genetic
mutations some of which just happen to result in a phenotype better
suited to its environment.

By the way, Lanier's idea is not original. Hilary Putnam, John Searle,
Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper
cited by Kaj Sotola in the original thread -
http://consc.net/papers/rock.html) have all considered variations on
the theme. At the very least, this should indicate that the idea
cannot be dismissed as just obviously ridiculous and unworthy of
careful thought.




-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread John Ku
On 2/18/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 By the way, Lanier's idea is not original. Hilary Putnam, John Searle,
 Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper
 cited by Kaj Sotola in the original thread -
 http://consc.net/papers/rock.html) have all considered variations on
 the theme. At the very least, this should indicate that the idea
 cannot be dismissed as just obviously ridiculous and unworthy of
 careful thought.

Yes, you've shown either that, or that even some occasionally
intelligent and competent philosophers sometimes take seriously ideas
that really can be dismissed as obviously ridiculous -- ideas which
really would be unworthy of careful thought were it not for the fact that
pinpointing exactly why such ridiculous ideas are wrong is so often
fruitful (as in the Chalmers article).



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread John Ku
On 2/17/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 If computation is multiply realizable, it could be seen as being
 implemented by an endless variety of physical systems, with the right
 mapping or interpretation, since anything at all could be arbitrarily
 chosen to represent a tape, a one, a zero, or whatever.

Sure, pretty much anything could be used as a symbol to represent
anything else, but the representing would consist in the network of
causal interactions that constitute the symbol manipulation, not in
the symbols themselves. (And certainly not in anyone having to be
around to understand the machinery of symbol manipulation going on.)
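
A quick Python sketch of that point (a toy example of my own, not anything
from the thread): the same computation, a parity checker, realized over two
completely different symbol vocabularies. What does the representing is the
structure of the transitions, not the tokens themselves.

# Illustrative sketch: one transition structure, two arbitrary symbol sets.

def make_parity_machine(zero, one, even, odd):
    # Build a parity checker over arbitrary tokens; only the shape of the
    # transition table is fixed, the symbols themselves are interchangeable.
    transitions = {
        (even, zero): even, (even, one): odd,
        (odd, zero): odd,   (odd, one): even,
    }
    def run(inputs, state=even):
        for symbol in inputs:
            state = transitions[(state, symbol)]
        return state
    return run

# One realization uses 0/1, another uses raindrops and thunderclaps.
m1 = make_parity_machine(0, 1, "even", "odd")
m2 = make_parity_machine("drop", "clap", "calm", "storm")

print(m1([1, 0, 1, 1]))                          # 'odd'
print(m2(["clap", "drop", "clap", "clap"]))      # 'storm' (same structure)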



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 17/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:


The first problem arises from Lanier's trick of claiming that there is a
computer, in the universe of all possible computers, that has a machine
architecture and a machine state that is isomorphic to BOTH the neural
state of a brain at a given moment, and also isomorphic to the state of
a particular rainstorm at a particular moment.


In the universe of all possible computers and programs, yes.


This is starting to be rather silly because the rainstorm and computer
then diverge in their behavior in the next tick of the clock. Lanier
then tries to persuade us, with some casually well chosen words, that he
can find a computer that will match up with the rainstorm AND the brain
for a few seconds, or a few minutes ... or ... how long?  Well, if he
posits a large enough computer, maybe the whole lifetime of that brain?

The problem with this is that what his argument really tells us is that
he can imagine a quasi-infinitely large, hypothetical computer that just
happens to be structured to look like (a) the functional equivalent of a
particular human brain for an indefinitely long period of time (at least
the normal lifetime of that human brain), and, coincidentally, (b) a
particular rainstorm, for just a few seconds or minutes of the life of
that rainstorm.

The key word is "coincidentally".


There is no reason why it has to be *the same* computer from moment to
moment. If your mind were uploaded to a computer and your physical
brain died, you would experience continuity of consciousness (or if
you prefer, the illusion of continuity of consciousness, which is just
as good) despite the fact that there is a gross physical discontinuity
between your brain and the computer. You would experience continuity
of consciousness even if every moment were implemented on a completely
different machine, in a completely different part of the universe,
running in a completely jumbled up order.


Some of this I agree with, though it does not touch on the point that I 
was making, which was that Lanier's argument was valueless.


The last statement you make, though, is not quite correct:  with a 
jumbled-up sequence of episodes during which the various machines were 
running the brain code, the whole would lose its coherence, because input 
from the world would now be randomised.


If the computer was being fed input from a virtual reality simulation, 
that would be fine.  It would sense a sudden change from real world to 
virtual world.


But again, none of this touches upon Lanier's attempt to draw a bogus 
conclusion from his thought experiment.




No external observer would ever be able to keep track of such a
fragmented computation and as far as the rest of the universe is
concerned there may as well be no computation.


This makes little sense, surely.  You mean that we would not be able to 
interact with it?  Of course not:  the poor thing will have been 
isolated from meaningful contact with the world because of the jumbled-up 
implementation that you posit.  Again, though, I see no relevant 
conclusion emerging from this.


I cannot make any sense of your statement that "as far as the rest of 
the universe is concerned there may as well be no computation".  So we 
cannot communicate with it any more; that should not be surprising, 
given your assumptions.



But if the computation
involves conscious observers in a virtual reality, why should they be
any less conscious due to being unable to observe and interact with
the substrate of their implementation?


No reason at all!  They would be conscious.  Isaac Newton could not 
observe and interact with the substrate of his implementation without 
making a hole in his skull that would have killed his brain ... but that 
did not have any bearing on his consciousness.



In the final extrapolation of this idea it becomes clear that if any
computation can be mapped onto any physical system, the physical
system is superfluous and the computation resides in the mapping, an
abstract mathematical object.


This is functionalism, no?  I am not sure if you are disagreeing with 
functionalism or supporting it.  ;-)


Well, the computation is not the implementation, for sure, but is it 
appropriate to call it an abstract mathematical mapping?



This leads to the idea that all
computations are actually implemented in a Platonic reality, and the
universe we observe emerges from that Platonic reality, as per e.g. Max
Tegmark and in the article linked to by Matt Mahoney:


I don't see how this big jump follows.  I have a different 
interpretation that does not need Platonic realities, so it looks like 
a non sequitur to me.




http://www.mattmahoney.net/singularity.html


I find most of what Matt says in this article to be incoherent. 
Assertions pulled out of thin air, and citing of unjustifiable claims 
made by others as if they were god-sent truth.



Richard Loosemore


Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 When people like Lanier allow themselves the luxury of positing 
 infinitely large computers (who else do we know who does this?  Ah, yes, 
 the AIXI folks), they can make infinitely unlikely coincidences happen.

It is a commonly accepted practice to use Turing machines in proofs, even
though we can't actually build one.  Hutter is not proposing a universal
solution to AI.  He is proving that it is not computable.  Lanier is not
suggesting implementing consciousness as a rainstorm.  He is refuting its
existence.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney

--- John Ku [EMAIL PROTECTED] wrote:

 On 2/16/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 
   I would prefer to leave behind these counterfactuals altogether and
   try to use information theory and control theory to achieve a precise
   understanding of what it is for something to be the standard(s) in
   terms of which we are able to deliberate. Since our normative concepts
   (e.g. should, reason, ought, etc) are fundamentally about guiding our
   attitudes through deliberation, I think they can then be analyzed in
   terms of what those deliberative standards prescribe.
 
  I agree.  I prefer the approach of predicting what we *will* do as opposed
  to what we *ought* to do.  It makes no sense to talk about a right or wrong
  approach when our concepts of right and wrong are programmable.
 
 I don't quite follow. I was arguing for a particular way of analyzing
 our talk of right and wrong, not abandoning such talk. Although our
 concepts are programmable, what matters is what follows from our
 current concepts as they are.
 
 There are two main ways in which my analysis would differ from simply
 predicting what we will do. First, we might make an error in applying
 our deliberative standards or tracking what actually follows from
 them. Second, even once we reach some conclusion about what is
 prescribed by our deliberative standards, we may not act in accordance
 with that conclusion out of weakness of will.

It is the second part where my approach differs.  A decision to act in a
certain way implies right or wrong according to our views, not the views of a
posthuman intelligence.  Rather, I prefer to analyze the path that AI will
take, given human motivations, but without judgment.  For example, CEV favors
granting future wishes over present wishes (when it is possible to predict
future wishes reliably).  But human psychology suggests that we would prefer
machines that grant our immediate wishes, implying that we will not implement
CEV (even if we knew how).  Any suggestion that CEV should or should not be
implemented is just a distraction from an analysis of what will actually
happen.

As a second example, a singularity might result in the extinction of DNA-based
life and its replacement with a much faster evolutionary process.  It makes no
sense to judge this outcome as good or bad.  The important question is the
likelihood of this occurring, and when.  In this context, it is more important
to analyze the motives of people who would try to accelerate or delay the
progression of technology.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
When people like Lanier allow themselves the luxury of positing 
infinitely large computers (who else do we know who does this?  Ah, yes, 
the AIXI folks), they can make infinitely unlikely coincidences happen.


It is a commonly accepted practice to use Turing machines in proofs, even
though we can't actually build one.


So?  That was not the practice that I condemned.

My problem is with people like Hutter or Lanier using thought 
experiments in which the behavior of quasi-infinite computers is treated 
as if it were a meaningful thing in the real universe.


There is a world of difference between that and using Turing machines in 
proofs.




Hutter is not proposing a universal
solution to AI.  He is proving that it is not computable.


He is doing nothing of the sort.  As I stated in the quote above, he is 
drawing a meaningless conclusion by introducing a quasi-infinite 
computation into his proof:  when people try to make claims about the 
real world (i.e. claims about what artificial intelligence is) by 
postulating machines with quasi-infinite amounts of computation going on 
inside them, they can get anything to happen.



Lanier is not
suggesting implementing consciousness as a rainstorm.  He is refuting its
existence.


And you missed what I said about Lanier, apparently.

He refuted nothing.  He showed that with a quasi-infinite computer in 
his thought experiment, he can make a coincidence happen.


Big deal.



Richard Loosemore






Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread John Ku
On 2/17/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Nevertheless we can make similar reductions to absurdity with respect to
 qualia, that which distinguishes you from a philosophical zombie.  There is no
 experiment to distinguish whether you actually experience redness when you see
 a red object, or simply behave as if you do.  Nor is there any aspect of this
 behavior that could not (at least in theory) be simulated by a machine.

You are relying on a partial conceptual analysis of qualia or
consciousness by Chalmers that maintains that there could be an exact
physical duplicate of you that is not conscious (a philosophical
zombie). While he is in general a great philosopher, I suspect his
arguments here ultimately rely too much on moving from "I can create
a mental image of a physical duplicate and subtract my image of
consciousness from it" to "therefore, such things are possible".

At any rate, a functionalist would not accept that analysis. On a
functionalist account, consciousness would reduce to something like
certain representational activities which could be understood in
information processing terms. A physical duplicate of you would have
the same information processing properties, hence the same
consciousness properties. Once we understand the relevant properties
it would be possible to test whether something is conscious or not by
seeing what information it is or is not capable of processing. It is
hard to test right now because we have at the moment only very
incomplete conceptual analyses.



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Stathis Papaioannou
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

 The last statement you make, though, is not quite correct:  with a
 jumbled-up sequence of episodes during which the various machines were
 running the brain code, the whole would lose its coherence, because input
 from the world would now be randomised.

 If the computer was being fed input from a virtual reality simulation,
 that would be fine.  It would sense a sudden change from real world to
 virtual world.

The argument that is the subject of this thread wouldn't work if the
brain simulation had to interact with the world at the level of the
substrate it is being simulated on. However, it does work if you
consider an inputless virtual environment with conscious inhabitants.
Suppose you are now living in such a simulation. From your point of
view, today is Monday and yesterday was Sunday. Do you have any
evidence to support the belief that Sunday was actually run
yesterday in the real world, or that it was run at all? The simulation
could have been started up one second ago, complete with false
memories of Sunday. Sunday may not actually be run until next year,
and the version of you then will have no idea that the future has
already happened.

 But again, none of this touches upon Lanier's attempt to draw a bogus
 conclusion from his thought experiment.


  No external observer would ever be able to keep track of such a
  fragmented computation and as far as the rest of the universe is
  concerned there may as well be no computation.

 This makes little sense, surely.  You mean that we would not be able to
 interact with it?  Of course not:  the poor thing will have been
 isolated from meaningful contact with the world because of the jumbled-up
 implementation that you posit.  Again, though, I see no relevant
 conclusion emerging from this.

 I cannot make any sense of your statement that "as far as the rest of
 the universe is concerned there may as well be no computation".  So we
 cannot communicate with it any more; that should not be surprising,
 given your assumptions.

We can't communicate with it so it is useless as far as what we
normally think of as computation goes. A rainstorm contains patterns
isomorphic with an abacus adding 127 and 498 to give 625, but to
extract this meaning you have to already know the question and the
answer, using another computer such as your brain. However, in the
case of an inputless simulation with conscious inhabitants this
objection is irrelevant, since the meaning is created by observers
intrinsic to the computation.

Thus if there is any way a physical system could be interpreted as
implementing a conscious computation, it is implementing the conscious
computation, even if no-one else is around to keep track of it.



-- 
Stathis Papaioannou



Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney

--- John Ku [EMAIL PROTECTED] wrote:

 On 2/17/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  Nevertheless we can make similar reductions to absurdity with respect to
  qualia, that which distinguishes you from a philosophical zombie.  There is no
  experiment to distinguish whether you actually experience redness when you see
  a red object, or simply behave as if you do.  Nor is there any aspect of this
  behavior that could not (at least in theory) be simulated by a machine.
 
 You are relying on a partial conceptual analysis of qualia or
 consciousness by Chalmers that maintains that there could be an exact
 physical duplicate of you that is not conscious (a philosophical
 zombie). While he is in general a great philosopher, I suspect his
 arguments here ultimately rely too much on moving from "I can create
 a mental image of a physical duplicate and subtract my image of
 consciousness from it" to "therefore, such things are possible".

My interpretation of Chalmers is the opposite.  He seems to say that either
machine consciousness is possible or human consciousness is not.

 At any rate, a functionalist would not accept that analysis. On a
 functionalist account, consciousness would reduce to something like
 certain representational activities which could be understood in
 information processing terms. A physical duplicate of you would have
 the same information processing properties, hence the same
 consciousness properties. Once we understand the relevant properties
 it would be possible to test whether something is conscious or not by
 seeing what information it is or is not capable of processing. It is
 hard to test right now because we have at the moment only very
 incomplete conceptual analyses.

It seems to me the problem is defining consciousness, not testing for it. 
What computational property would you use?  For example, one might ascribe
consciousness to the presence of episodic memory.  (If you don't remember
something happening to you, then you must have been unconscious).  But in this
case, any machine that records a time sequence of events (for example, a chart
recorder) could be said to be conscious.  Or you might ascribe consciousness
to entities that learn, seek pleasure, and avoid pain.  But then I could write
a simple program like http://www.mattmahoney.net/autobliss.txt with these
properties.  It seems to me that any other testable property would have the
same problem.
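
To show how cheaply those criteria can be met, here is a tiny Python sketch (a
toy of my own, emphatically not the autobliss program linked above): it keeps
an 'episodic memory' and 'learns' to seek 'pleasure' and avoid 'pain' by simple
reinforcement, yet it obviously settles nothing about consciousness.

# Illustrative sketch only; NOT the autobliss.txt program referenced above.
import random

weights = {"A": 1.0, "B": 1.0}     # learned preferences over two actions
reward  = {"A": +1.0, "B": -1.0}   # environment: A is 'pleasant', B is 'painful'
history = []                       # an 'episodic memory', chart-recorder style

for step in range(100):
    # choose an action with probability proportional to its learned weight
    total = weights["A"] + weights["B"]
    action = "A" if random.random() < weights["A"] / total else "B"
    history.append((step, action))
    # reinforcement: strengthen rewarded actions, weaken punished ones
    weights[action] = max(0.01, weights[action] + 0.1 * reward[action])

print(weights)                           # ends up strongly 'preferring' A
print(len(history), "remembered episodes")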


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Matt Mahoney
--- John Ku [EMAIL PROTECTED] wrote:

 On 2/15/08, Eric B. Ramsay [EMAIL PROTECTED] wrote:
 
   http://www.jaronlanier.com/aichapter.html
 
 
 I take it the target of his rainstorm argument is the idea that the
 essential features of consciousness are its information-processing
 properties.

I believe his target is the existence of consciousness.  There are many proofs
showing that the assumption of consciousness leads to absurdities, which I
have summarized at http://www.mattmahoney.net/singularity.html
In mathematics, it should not be necessary to prove a theorem more than once. 
But proof and belief are different things, especially when the belief is
hard-coded into the brain.

For now, these apparent paradoxes are just philosophical arguments because
they depend on technologies that have not yet been developed, such as AGI,
uploading, copying people, and programming the brain.  But we will eventually
have to confront them.

The result will not be pretty.  The best definition (not solution) of
friendliness is probably CEV ( http://www.singinst.org/upload/CEV.html ), which
can be summarized as "our wish if we knew more, thought faster, were more the
people we wished we were, had grown up farther together."  What would you wish
for if your brain was not constrained by the hardwired beliefs and goals that
you were born with and you knew that your consciousness did not exist?  What
would you wish for if you could reprogram your own goals?  The logical answer
is that it doesn't matter.  The pleasure of a thousand permanent orgasms is
just a matter of changing a few lines of code, and you go into a degenerate
state where learning ceases.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Richard Loosemore

Eric B. Ramsay wrote:
I don't know when Lanier wrote the following but I would be interested 
to know what the AI folks here think about his critique (or direct me to 
a thread where this was already discussed). Also would someone be able 
to re-state his rainstorm thought experiment more clearly -- I am not 
sure I get it:


 http://www.jaronlanier.com/aichapter.html


Lanier's rainstorm argument is spurious nonsense.

It relies on a sleight of hand, and preys on the inability of most 
people to notice the point at which he slips from a valid analogy to a 
nonsense analogy.


He also then goes on to use a debating trick that John Searle is fond 
of:  he claims that the people who disagree with his argument always 
choose a different type of counter-argument.  His implication is that, 
because they follow different paths, they don't agree about 
what is wrong, therefore ALL of them are fools, and therefore NONE of 
their counter-arguments are valid.


Really.  I like Jaron Lanier as a musician, but this is drivel.



Richard Loosemore



Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Stathis Papaioannou
On 17/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

 Lanier's rainstorm argument is spurious nonsense.

That's the response of most functionalists, but an explanation as to
why it is spurious nonsense is needed. And some, such as Hans Moravec,
have actually conceded that the argument is valid:

http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html



-- 
Stathis Papaioannou



Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread John Ku
On 2/16/08, Matt Mahoney [EMAIL PROTECTED] wrote:

  I would prefer to leave behind these counterfactuals altogether and
  try to use information theory and control theory to achieve a precise
  understanding of what it is for something to be the standard(s) in
  terms of which we are able to deliberate. Since our normative concepts
  (e.g. should, reason, ought, etc) are fundamentally about guiding our
  attitudes through deliberation, I think they can then be analyzed in
  terms of what those deliberative standards prescribe.

 I agree.  I prefer the approach of predicting what we *will* do as opposed to
 what we *ought* to do.  It makes no sense to talk about a right or wrong
 approach when our concepts of right and wrong are programmable.

I don't quite follow. I was arguing for a particular way of analyzing
our talk of right and wrong, not abandoning such talk. Although our
concepts are programmable, what matters is what follows from our
current concepts as they are.

There are two main ways in which my analysis would differ from simply
predicting what we will do. First, we might make an error in applying
our deliberative standards or tracking what actually follows from
them. Second, even once we reach some conclusion about what is
prescribed by our deliberative standards, we may not act in accordance
with that conclusion out of weakness of will.

Allowing for the possibility of genuine error is one of the big tasks
to be accomplished by a theory of intentionality. Take an example from
our more ordinary concepts, though the same types of problems will
arise for our deliberative standards. If I see a cow in the night and
my concept of horse fires, what makes it the case that this particular
firing of 'horse' is an error? Why does my concept 'horse' really only
correctly refer to horses rather than to the disjunction
horses-or-cows-in-the-night? (Although I earlier mentioned that I
think Dretske's information theoretic semantics is probably the most
promising theory of intentionality, it is at the moment unable to
deliver the right semantics in the face of these types of errors.)
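As a toy illustration of the disjunction problem (my own sketch in Python, not
Dretske's formalism; the feature test is invented for the example), consider a
crude detector whose firings do not by themselves settle whether it refers to
horses or to horses-or-cows-in-the-night:

  def horse_concept_fires(animal, light_level):
      # In dim light a cow presents the same gross features
      # (a big, four-legged silhouette) as a horse.
      if light_level < 0.3:
          return animal in ("horse", "cow")
      return animal == "horse"

  print(horse_concept_fires("horse", 0.9))   # True: a horse in daylight
  print(horse_concept_fires("cow", 0.1))     # True: an error about a horse, or a
                                             # correct firing of the disjunctive
                                             # concept? The firings alone don't say.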

I actually think the second difference poses a very similar type of
problem. What makes it the case that we sometimes really do act out of
weakness of will rather than it being the case that our will really
endorsed that apparent exception in this particular case while
presumably endorsing something different the rest of the time?



Re: [singularity] AI critique by Jaron Lanier

2008-02-15 Thread Matt Mahoney

--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 I don't know when Lanier wrote the following but I would be interested to
 know what the AI folks here think about his critique (or direct me to a
 thread where this was already discussed). Also would someone be able to
 re-state his rainstorm thought experiment more clearly -- I am not sure I
 get it:
 
  http://www.jaronlanier.com/aichapter.html

This is a nice proof of the non-existence of consciousness (or qualia).  Here
is another (which I came across on sl4):

  http://youtube.com/watch?v=nx6v30NMFV8

Such reductions to absurdity are possible because the brain is programmed to
not accept the logical result.

Consciousness is hard to define but you know what it is.  It is what makes you
aware, the little person inside your head that observes the world through
your perceptions, that which distinguishes you from a philosophical zombie. 
We normally associate consciousness with human traits such as episodic memory,
response to pleasure and pain, fear of death, language, and a goal of seeking
knowledge through experimentation.  (Imagine a person without any of these
qualities).

These traits are programmed into our DNA because they increase our fitness. 
You cannot change them, which is what these proofs would do if you could
accept them.

Unfortunately, this question will have a profound effect on the outcome of a
singularity.  Assuming recursive self-improvement in a competitive
environment, we should expect agents (possibly including our uploads) to
believe in their own consciousness, but there is no evolutionary pressure to
also believe in human consciousness.  Even if we successfully constrain the
process so that agents have the goal of satisfying our extrapolated volition,
then logically we should expect those agents (knowing what we cannot know) to
conclude that human brains are just computers and our existence doesn't
matter.  It is ironic that our programmed beliefs lead us to advance
technology to the point where the question can no longer be ignored.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] AI critique by Jaron Lanier

2008-02-15 Thread Stathis Papaioannou
On 16/02/2008, Kaj Sotala [EMAIL PROTECTED] wrote:

 However, despite what is claimed, not every physical process can be
 interpreted to do any computation. To do such an interpretation, you
 have to do so after the fact: after all the raindrops have fallen, you
 can assign their positions formal roles that correspond to
 computation, but you can't *predict* what positions will be assigned
 what roles ahead of the time - after all, they are just randomly
 falling raindrops. You can't actually *use* the rainstorm to compute
 anything, like you could use a computer - you have to first do the
 computation yourself, then assign each state of the rainstorm a
 position that corresponds to the steps in your previous computation.

Sure, you can't interact with the raindrop computation, but that
doesn't mean it isn't conscious. Suppose a civilization built a
computer implementing a virtual environment with conscious
inhabitants, but no I/O. The computer is launched into space and the
civilization is completely destroyed when its sun goes nova. A billion
years later, the computer is found by another civilization which
figures out how the power supply works and starts it up, firing the
virtual inhabitants into life. As far as the second civilization is
concerned, the activity in the computer could mean anything or
nothing, like the patterns in a rainstorm.

Just as the space of all possible rainstorms contains one that is
isomorphic with any given computer implementing a particular program,
so the space of all possible computers that an alien civilization
might build contains one that is isomorphic with any sufficiently
large rainstorm. It doesn't matter that the manual for the computer
represented by the rainstorm has been lost, or that the computer was
never actually built: all that matters for the program to be
implemented is that it rain.
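To spell out what such an isomorphism amounts to, here is a toy sketch (in
Python, my own illustration rather than anything from Lanier's chapter): any
sufficiently long sequence of raindrop states can be paired off, state by
state, with the trace of a computation, though the pairing shown here is
built only after the computation has already been done.

  import random

  # Step 1: do the computation ourselves and record its trace of states.
  trace = []
  x = 0
  for i in range(10):
      x += i
      trace.append(x)

  # Step 2: take arbitrary "raindrop" states (here, random positions).
  raindrops = [(random.random(), random.random()) for _ in range(10)]

  # Step 3: the "interpretation" is just a table built afterwards, pairing
  # each raindrop state with a step of the already-finished computation.
  interpretation = dict(zip(raindrops, trace))

  # Under this mapping the rainstorm "implements" the computation.
  print([interpretation[drop] for drop in raindrops])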



-- 
Stathis Papaioannou
